Sunday, February 6, 2011

Android gaining as enterprises ramp up mobile app development across platforms and business models

It's a post-PC world, and mobile development is the name of the game. According to a report from Appcelerator and IDC, businesses and developers are racing to define a winning mobile strategy, while keeping an eye on platforms and business models.

The first-quarter report, garnered from a survey of 2,235 Appcelerator Titanium developers, shows that while the iPhone and iPad are still the leaders of the pack, Android smartphones and tablets are gaining large amounts of developer interest. Google has nearly caught up to Apple in smartphones and is closing the gap on tablets.

The report also shows that for enterprises the days of mobile app exploration are drawing to a close and companies are moving, or have moved, into an acceleration phase, with an eye toward greater innovation. This year, developers and businesses expect to triple their app development efforts, and the average developer is now building for four different devices.

In addition, there is a dramatic increase in the integration of geo-location, social, and cloud connectivity services, along with increased plans to integrate advertising and in-app purchase business models.

With the growth in the market, Appcelerator and IDC have developed a "Mobile Maturity Model" to describe the three phases of mobility adoption -- exploration, acceleration, and innovation.

Last year, a plurality of respondents (44 percent) said they were in the exploration phase of their mobile strategy. A simple app or two -- typically on iPhone -- and a focus on free brand-affinity apps was standard practice. This year, 55 percent of respondents said they are now shifting into the ‘acceleration’ phase.

Summary of findings

Other findings from the report:
  • On average, each respondent said they plan to develop 6.5 apps this year, up 183 percent over last year.

  • Businesses are increasingly taking a multi-platform approach. On average, respondents said they plan to deploy apps on at least four different devices (iPhone, iPad, Android Phone, Android Tablet) this year, up two-fold over 2010.

  • Ubiquitous cloud-connectivity: 87 percent of developers said their apps will connect to a public or private cloud this year, up from only 64 percent deploying cloud-connected apps last year.

  • Always connected, personal, and contextual: in addition to cloud services, integration of social and location services will explode in 2011 and will define the majority of mobile experiences this year. Interest in commerce apps is also on the rise, with PayPal beating Apple as the preferred method for payments.

  • Business models are evolving along with these more engaging mobile app experiences. Developers are shifting away from free brand affinity apps and becoming less reliant on 99-cent app sales. Increasingly, the focus is on user engagement models such as in-app purchasing and advertising, with mobile commerce on the horizon.

  • Outsource goes in-house: the enterprise takes control of its mobile destiny. 81 percent of respondents said they insource their development, with the majority saying they have an integrated in-house web and mobile team.

A mobile strategy

What do Appcelerator and IDC recommend for businesses trying to develop a mobile strategy? It's a four-pronged approach:
  • Platforms: Cross-platform is mandatory, as is deploying to multiple form factors like tablets. In the third innovation phase, a business is thinking about possibilities across all major platforms and devices.

  • Customer: This perspective considers the shift away from simple content-based apps that inform or entertain, toward more complex and engaging applications that use location, social, and cloud services, and on to transactional applications such as mobile commerce. As the customer experience evolves, so do application sophistication, customer expectations, business transformation opportunities, and the underlying business models. Free branded apps and a reliance on pure app store sales give way to advertising, in-application purchasing, and mobile commerce.

  • People: There is an increasing shift from outsourcing to in-house development. What starts as a tactical outsourcing of development “to get an app done fast” quickly turns into a more strategic discussion around competitive advantage, control over a sustainable long-term mobile strategy, and rapid time-to-market considerations.

  • Technology: In order to meet the demand for more apps, new devices, frequent updates, and deeper customer engagement, a business needs to drive down costs, time-to-market, and complexity by developing and leveraging reusable components. Ultimately, this results in the need for a cross-platform, fully integrated mobile architecture that spans a company’s entire app portfolio.
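The reusable-components point above can be sketched in a few lines. This is purely illustrative, not any vendor's actual framework: the business logic lives in one shared, platform-neutral component, and each device form factor gets only a thin presentation adapter over it.

```python
# Illustrative only: shared business logic reused across per-platform front ends.
class OrderService:
    """Platform-neutral component shared by every app in the portfolio."""
    def __init__(self):
        self._orders = []

    def place_order(self, sku: str, qty: int) -> dict:
        order = {"id": len(self._orders) + 1, "sku": sku, "qty": qty}
        self._orders.append(order)
        return order

class PhoneUi:
    """Thin adapter for one form factor; only the presentation differs per device."""
    def __init__(self, service: OrderService):
        self.service = service

    def buy_button_tapped(self, sku: str) -> str:
        order = self.service.place_order(sku, qty=1)
        return f"Order #{order['id']} placed"
```

Adding an iPad or Android tablet front end then means writing another small adapter, not re-implementing the order logic, which is where the cost and time-to-market savings come from.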

A copy of the full report is available from the Appcelerator site.

Wednesday, February 2, 2011

Open Group conference next week focuses on role and impact of enterprise architecture amid shifting sands for IT and business

Next week's Open Group conference in San Diego comes at an important time in the evolution of IT and business. And it's not too late to attend, especially if you're looking for an escape from the snow and ice.

From Feb. 7 through 9 at the Marriott San Diego Mission Valley, the 2011 conference is organized around three key themes: architecting cyber security, enterprise architecture (EA) and business transformation, and the business and financial impact of cloud computing. CloudCamp San Diego will be held in conjunction with the conference on Wednesday, Feb. 9. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Registration is open to both members and non-members of The Open Group; for more information, or to register for the conference in San Diego, visit The Open Group's conference site. Registration is free for members of the press and industry analysts.

The Open Group is a vendor- and technology-neutral consortium, whose vision of Boundaryless Information Flow™ will enable access to integrated information within and between enterprises based on open standards and global interoperability.

I've found these conferences over the past five years an invaluable venue for meeting and collaborating with CIOs, enterprise architects, standards stewards and thought leaders on enterprise issues. It's one of the few times when the mix of technology, governance and business interests mingle well for mutual benefit.

The Security Practitioners Conference, being held on Feb. 7, provides guidelines on how to build trusted solutions, take into account government and legal considerations, and connect architecture and information security management. Confirmed speakers include James Stikeleather, chief innovation officer, Dell Services; Bruce McConnell, cybersecurity counselor, National Protection and Programs Directorate, U.S. Department of Homeland Security; and Ben Calloni, Lockheed Martin Fellow, Software Security, Lockheed Martin Corp.

Change management processes requiring an advanced, dynamic and resilient EA structure will be discussed in detail during The Enterprise Architecture Practitioners Conference on Feb. 8. The Cloud Computing track, on Feb. 9, includes sessions on the business and financial impact of cloud computing; cloud security; and how to architect for the cloud -- with confirmed speakers Steve Else, CEO, EA Principals; Pete Joodi, distinguished engineer, IBM; and Paul Simmonds, security consultant, the Jericho Forum.

General conference keynote presentation speakers include Dawn Meyerriecks, assistant director of National Intelligence for Acquisition, Technology and Facilities, Office of the Director of National Intelligence; David Mihelcic, CTO, the U.S. Defense Information Systems Agency; and Jeff Scott, senior analyst, Forrester Research.

I'll be moderating an on-stage panel on Wednesday on the considerations that must be made when choosing a cloud solution -- custom or "shrink-wrapped" -- and whether different forms of cloud computing are appropriate for different industry sectors. The tension between plain cloud offerings and enterprise demands for customization is bound to build, and we'll work to find a better path to resolution.

I'll also be hosting and producing a set of BriefingsDirect podcasts at the conference, on such topics as the future of EA groups, EA maturity and future roles, security risk management, and on the new Trusted Technology Forum (TTF) established in December. Look for those podcasts, blog summaries and transcripts here over the next few days and weeks.

For the first time, The Open Group Photo Contest will encourage the members and attendees to socialize, collaborate and share during Open Group conferences, as well as document and share their favorite experiences. Categories include best photo on the conference floor, best photo of San Diego, and best photo of the conference outing (dinner aboard the USS Midway in San Diego Harbor). The winner of each category will receive a $25 Amazon gift card. The winners will be announced on Monday, Feb. 14 via social media communities.

It's not too late to join in, or to plan to look for the events and presentations online.

Tuesday, January 25, 2011

HP enters public cloud market, puts muscle behind hybrid computing value and management for enterprises

HP today fully threw its hat into the public cloud-computing ring, joining the likes of Amazon Web Services (AWS) and IBM, to provide a full range of infrastructure as a service (IaaS) offerings hosted on HP data centers.

Targeting enterprises, independent software vendors (ISVs), service providers, and the global HP channel and partner ecosystem, the new HP Enterprise Cloud Services-Compute (ECS-Compute) bundles server, storage, network and security resources for consumption as pure services.

ECS-Compute is an HP-hosted compute fabric that's governed via policies for service, performance, security, and privacy requirements. The fabric, available next month, supports bursting through elasticity provisioning that rapidly adjusts infrastructure capacity as enterprise demands shift and change, said HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP CloudSystem, a new private-hybrid cloud enablement offering that automates private cloud provisioning, uses HP Cloud Service Automation (CSA) solutions and HP Converged Infrastructure physical assets so that enterprises, governments, and service providers can better build, manage, and consume hybrid cloud services, said HP.

HP CloudSystem supports a broad spectrum of applications while speeding and simplifying the buying, deployment and support of cloud environments, said HP. CloudSystem brings "cloud maps" to play so that more applications can be quick-start "ported" to a cloud or hybrid environment.

The ECS-Compute and CloudSystem announcements significantly deepen HP's cloud strategy, building on earlier announcements around CSA and Cloud Assure offerings. HP, however, is coming to the public cloud space from a hosting and multi-tenancy heritage, in large part through its EDS acquisition. That, HP expects, will make its cloud models more appealing to large businesses, governments and applications providers. HP is also emphasizing the security and management capabilities of these offerings.

As a new public cloud provider, HP is competing more directly with IBM, Rackspace, AWS, and Microsoft, and very likely over time with private and hybrid cloud products from EMC/VMware, Oracle, Cisco, Red Hat, TIBCO and Google. There will also be more overlap with burgeoning software-as-a-service (SaaS) providers such as Salesforce.com as they seek to provide more cloud-based infrastructure services.

Yet even among that wide field, HP is seeking to differentiate itself with a strong emphasis on hybrid computing over assemblages of plain vanilla public cloud services. HP sees its long-term strategic value in a governance path for computing resources and services from a variety of sources and models (including legacy IT) that add up to IT as a service.

"This is a hybrid services delivery capability, and you can manage it all as a service," said Rebecca Lawson, director of cloud initiatives at HP. The services are designed to help organizations "grow and manage the applications," regardless of the applications' heritage, production model, or technology, said Lawson.

"We're now saying, 'welcome to our data center' ... but we're ecumenical and agnostic on platform and applications," she said.

Also part of the Jan. 25 news, HP Hybrid Delivery will help businesses and governments build, manage, and consume services using a combination of traditional, outsourced and cloud services best suited to them. It consists of HP Hybrid Delivery Strategy Service, to provide a structured understanding of the programs, projects, and main activities required to move to a hybrid delivery model; and HP Hybrid Delivery Workload Analysis Service, to analyze enterprise workloads to determine the best fits for hybrid environments.

Professional services

HP sees these as enabling a "journey" to cloud and hybrid computing, with a strong emphasis on the professional services component of learning how to efficiently leverage cloud models.

HP's vision for the cloud -- part of its solution set for the demands of the "Instant-On Enterprise" -- clearly emphasizes openness and neutrality when it comes to operating systems, platforms, middleware, virtual machines, cloud stacks, SaaS providers, and applications, said Lawson. HP will support all major workloads and platforms from its new cloud hosting services, and help to govern and manage across them via HP's hybrid computing and private cloud capabilities as well, said Lawson.

The achievement of the instant-on enterprise, said Sandeep Johri, vice president of strategy and industry solutions at HP, comes from an increasing ability to automate, orchestrate, secure and broker services -- regardless of their origins: traditional IT, or public or private clouds.

In other words, hybrid computing (perhaps even more than cloud itself) will become a key enabling core competency for enterprises for the foreseeable future. HP is banking on that, expecting that the platform and lock-in wars will push customers to an alternative lower-risk partner that emphasizes inclusion and open standards over singular cloud stacks.

HP therefore has a rare opportunity to appeal to many organizations and governments that fear cloud lock-in, as well as the costs and complexity of following a SaaS or software platform vendor's isolated path to cloud -- a path that often stems from a heritage of on-premises platform or proprietary stack lock-in, rather than from support for heterogeneity and a heritage of myriad hosted services.

Whereas some vendors such as VMware, Oracle, Microsoft, Cisco, Red Hat and Citrix are cobbling together so-called integrated cloud stacks -- and then building hosting services that will most likely favor their stacks and installed bases -- HP is working to focus at the higher abstraction of management and governance across many stacks and models. Hence the emphasis on hybrid capabilities. And, where some SaaS and business applications vendors are working to bring cloud infrastructure services and/or SaaS delivery to their applications, HP is working to help its users provide an open cloud home and/or hybrid support for all their applications, inclusive of those hosted anywhere.

HP's cloud strategy, then, closely follows (for now) its on-premises data center infrastructure strategy, with many options on software and stack, and an emphasis on overall and holistic management and cost-efficiency.

Less complex path

Some analysts I've heard recently say that HP is coming late to public cloud. But coming from a heritage of hosting and of single- and multi-tenancy applications support services may very well mean that HP already has a lot of cloud and hosted-services DNA, and that the transition from global hosting for Fortune 500 enterprises to full cloud offerings is a less tortured and complex path than the ones facing other vendors, such as traditional on-premises OS, platform, middleware, and infrastructure license providers, as well as SaaS-to-cloud providers.

HP may be able to effectively position itself as more IT transformation-capable and mission-critical support-ready -- and stack-neutral and applications-inclusive -- to provide a spectrum of hybrid cloud services at global scale with enterprise-calibre response, security and reliability. And because HP does not have a proprietary middleware stack of its own to protect, it can support the requirements of more of its customers across more global regions.

Enterprise-mature from the get-go, rather than late to the cloud-hype party, might be a better way to describe HP's timing on cloud sourcing and support services. The value HP seems to be eyeing comes from agility and total cost reduction for IT -- not from a technology, license or skills lock-in basis.

By allowing a large spectrum of applications support -- and the ability to pick and choose (and change) the sourcing for those applications over time -- the risk of lock-in, and of unwillingly paying high IT prices, goes down. Hybrid, says HP, offers the best long-term IT value and overall cost-efficiencies, and can save 30-40 percent of the cost of traditional IT, though the company offered few specifics on how long such savings would take to materialize.

"You can now run mission-critical applications with the economics of cloud," said Patrick Harr, vice president of cloud strategy and solutions at HP. "It's a hybrid world."

HP is also thinking hybrid when it comes to go-to-market strategies. It expects to appeal to ISVs, resellers, and system integrators/outsourcers with the newest cloud offerings. Because HP is hybrid-focused, open, and agnostic about underlying platforms, more channel partners can look to it with less strategic angst over potential later direct competition than they might feel with an Oracle or Microsoft.

And, HP is putting a lot of consulting and professional services around the hybrid push, including HP Cloud Discovery Workshops that help enterprises develop a holistic cloud strategy, with a focus on cloud economics, applications and cloud security.

HP ECS-Compute will be available in the US and EMEA countries in February, and in Asia-Pacific countries in June.

“To create an Instant-On Enterprise, organizations need to close the gap between what customers and citizens expect and what the enterprise can deliver,” said Ann Livermore, executive vice president, HP Enterprise Business. “With HP’s cloud solutions, clients can determine the right service delivery models to deliver the right results, in the right time frame, at the right price.”

These new offerings will not be the last chapter in HP's cloud and IT transformation drive. Looking back to last month's ALM 11 announcements, and HP's long heritage of SaaS test and development services, one can easily envision a more end-to-end set of application lifecycle and hybrid cloud operations capabilities. Think of it as a coordinated, hybrid services approach to application definition, build, test, deployment and brokering -- all as an open, managed lifecycle.

That means joining PaaS and hybrid computing on an automated and managed continuum, for ISVs, service providers, governments and enterprises. I can easily see how a choice of tools and frameworks -- plus openness in workload and operations environments, joined to a coordinated spectrum of managed services and hybrid hosting -- would be very appealing.

Such a flexible cloud support horizon -- from cradle to grave of applications and data -- could really impact the total cost of IT downward, while reducing complexity, and allowing businesses to focus on their core processes, innovation and customer value, rather than on an ongoing litany of never-ceasing IT headaches.

Platform ISF 2.1 improves use, breadth of private cloud management, sets stage for use of public cloud efficiencies

Platform Computing on Tuesday released Platform ISF 2.1, which improves ease of use and automation for building and managing enterprise private clouds.

Platform's cloud management software helps enterprises transition from internal IT to more productive and efficient private cloud infrastructure services that support multi-tier applications.

New in Platform ISF 2.1 is a dynamic “single cloud pane” for cloud administration; expanded definitions for support of multi-tier application environments such as Hadoop, JBoss, Tomcat and WebSphere; and enhanced business policy-driven automation that spans multiple data centers. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

By automating delivery of complex enterprise infrastructure and production applications across heterogeneous virtual, physical and public cloud resources, Platform ISF also helps reduce electricity and cooling requirements while freeing up capacity in data centers. The management layer provides improved monitoring, policy management, and workload management across multiple, heterogeneous cloud and traditional IT stacks. By capturing corporate standards and business policies within the automation engine, companies can improve both compliance and security, said Platform Computing.

Via the single-pane administration capabilities, what Toronto-based Platform calls a "cloud cockpit," users can self-select approved services to support a wide variety of applications. Enhanced end-user portals are also new, including drag-and-drop portlet-based dashboards and customizable application instantiation pages.

What's more, applications can be monitored from both private and public clouds, such as Amazon Web Services (AWS). This degree of management allows for future planning and capacity management, to help exploit hybrid computing benefits and cut the overall costs of supporting applications.

Enhancing agility

“Enterprises looking to take advantage of the cloud do so for many reasons but one of the key ones is to enhance their agility in response to changing business dynamics,” said Cameron Haight, Research Vice President, Gartner, in a release. “This means that the technology used to manage cloud environments should be similarly agile and act to facilitate and not impede this industry movement. IT organizations should look for tools that can address the various cloud usage scenarios without demanding excessive investments in management infrastructure or staff support.”

Key capabilities in Platform ISF 2.1 include self-service and chargeback, policy-based automated provisioning of applications, dynamic scaling of applications to meet service level agreements (SLAs), and unification of distributed and mixed-vendor resource pools for sharing. A unique “Active-Active” multiple-data-center capability supports higher availability and scalability by leveraging Oracle GoldenGate.
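As a rough illustration of what SLA-driven dynamic scaling means in practice, a scaling policy compares an observed metric against the SLA and adjusts instance counts within the shared resource pool's limits. This is a generic sketch; the names and thresholds are hypothetical, not Platform ISF's actual API.

```python
from dataclasses import dataclass

@dataclass
class Sla:
    max_latency_ms: float     # response-time ceiling promised to the business
    min_instances: int = 1    # floor reserved for the application tier
    max_instances: int = 10   # cap imposed by the shared resource pool

def scale_decision(current_instances: int, observed_latency_ms: float, sla: Sla) -> int:
    """Return a new instance count for an application tier (hypothetical policy)."""
    if observed_latency_ms > sla.max_latency_ms:
        # SLA breached: add capacity, up to the shared pool's limit
        return min(current_instances + 1, sla.max_instances)
    if observed_latency_ms < 0.5 * sla.max_latency_ms:
        # Comfortably within the SLA: release capacity back to the pool
        return max(current_instances - 1, sla.min_instances)
    return current_instances
```

The same loop, driven by business policies rather than by administrators, is what lets a shared pool serve many applications while each still meets its own SLA.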

Ease of use benefits in the new release, which is available now, include account management and delegation based on applications or business processes. Such delegation can occur for such cloud-supported functions as platform as a service (PaaS), infrastructure as a service (IaaS), and hierarchical applications and their supporting components and services. Also included is self-service hierarchical account and resource management (including Active Directory for 10,000+ users) supporting an unlimited number of organizational tiers.

Business benefits include less downtime for applications, even as they are supported by hybrid resources, SLA-driven shared services, less need for specialized administrators, higher availability and creation of richer applications services catalogs. Use of Platform ISF 2.1 for private cloud activities clearly puts the users in a better position to use, exploit and manage public clouds, and to move quickly to the hybrid computing model. The goal is to manage the heterogeneous applications lifecycle, not just multiple cloud instances, said Jay Muelhoefer, VP Enterprise Marketing, Platform Computing.

A free 30-day trial of Platform ISF 2.1 can be downloaded from the Platform Computing site. Platform Computing is also hosting a webinar, “Building A Private Cloud Strategy – Best Practices,” on Feb. 16; more information about Platform ISF and the webinar is available from the company.

Tuesday, January 18, 2011

Mobile access and social collaboration form a cloud catalyst for enterprises

Back in the mid-1990s, then Microsoft CEO Bill Gates offered a prophetic observation. The impact of the web, he wrote, would be greater than most people thought, but would take longer to happen than was commonly supposed.

Turns out, happily for Microsoft, that he was right.

Yet now, perhaps not so pleasantly for Redmond, the confluence of mobile computing, social online interactions and cloud computing is supporting a wave of change that will be more impactful than many think -- and will also happen a lot quicker than is expected.

More evidence of this appeared this week, building on momentum that capped a very dynamic 2010. Start-up Bitzer Mobile Inc. this week announced its Enterprise Virtualized Mobility solution (EVM), which makes a strong case for an ecumenical yet native apps approach to mobile computing for enterprises.

Bitzer Mobile is banking on the urgency that enterprise IT departments are feeling to deliver apps and data to mobile devices -- from Blackberries to iOS, Android, and WebOS. But knowing the enterprise, they also know that adoption of such sweeping change needs to be future-proofed and architected for enterprise requirements. More on EVM later.

Another hastening development in the market is Salesforce.com's pending Spring '11 release of its flagship CRM SaaS applications, due the first week of February. The upgrade includes deeper integrations with Chatter collaboration and analytics services, so that sales, marketing and service employees can be far more powerful and productive in how they innovate, learn and teach in their roles. The collaborative business processes that mobile-delivered web apps like Salesforce.com's CRM suite now offer are changing the culture of workers practically overnight.

Advancing cloud services

Last month, at its Dreamforce conference, Salesforce also debuted Database.com, a database-in-the-cloud service that combines attractive heterogeneous features for a virtual data tier for developers of all commercial, technical and open source persuasions. Salesforce also bought Heroku and teamed with BMC Software on its RemedyForce cloud configuration management offering.

Salesforce's developments and offerings provide a prime example of how social collaboration, mobile and cloud reinforce each other, spurring adoption that fosters serious productivity improvements, which then invite yet more use and an accelerating overall adoption effect. This is happening not at what we quaintly referred to as Internet time, but at the far swifter pace of viral explosion.

As I traveled at the end of 2010, to both Europe and the U.S. coasts, I was struck by the pervasive use of Apple iPads by the very people who know a productivity boon when they see it and will do whatever they can to adopt it. Turns out they didn't have to do too much nor spend too much. Bam.

I also recently fielded calls from nearly frantic IT architects asking how they can hope to satisfy the demand to quickly move key apps and data to iPads and the most popular smartphones for their employees. My advice was and is: the mobile web. It's not a seamless segue, but it delivers the most mobile-extension benefits the soonest, does not burn any deployment bridges, and allows a sane and thoughtful approach to adopting native apps if and when that becomes desired.

Clearly, the decision now for apps providers is no longer Mac or PC, Java or .NET -- but rather native or web for mobile? The architecture discussion for supporting cloud is also shifting toward lightweight middleware.

I still think that leveraging HTML5 and extending current web, portal, and RIA app sets to the mobile tier (any of the major device types) is the near-term best enterprise strategy, but Bitzer Mobile and its EVM have gotten me thinking. Its approach is architected to support the major mobile platforms' native apps AND their web complements.

IT wants to leverage and exploit all the remote-access investments it has made. It wants to extend business processes to anyone, anywhere, with control and authenticity. And it does not necessarily want to buy, support and maintain an arsenal of new mobile devices -- not when its power users already possess a PC equivalent in their shirt pockets, and not when its CFOs won't support the support costs.

A piece of mobile real estate

So Bitzer Mobile places a container on the user's personal mobile device and allows the IT department to control it. It's a virtual walled garden on the tablet or smartphone that, I'm told, does not degrade performance. The device does need a fair amount of memory, and RIM devices will need an SD card flash supplement (for now).

The Bitzer Mobile model also places a virtualization layer for presentation-layer delivery at the app server tier for the apps and data to be delivered to the mobile containers. And there's a control panel (either SaaS or on-premises) that manages the deployment, access and operations of the mobile-tier enablement arrangement. Native app APIs and SDKs can be exploited, ISV apps can be made secure and tightly provisioned, and data can be delivered across the mobile networks and to the containers safely, Bitzer Mobile says.
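To make the control flow concrete, here is a hypothetical sketch of the arrangement described above: a central control panel pushing policy to a managed container on an employee's personal device. None of these names come from Bitzer Mobile's actual SDK; they only illustrate the pattern.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ContainerPolicy:
    allowed_apps: List[str]   # enterprise apps provisioned into the walled garden
    require_auth: bool = True # user must authenticate before the container opens

class ManagedContainer:
    """The IT-controlled 'walled garden' living on a personal tablet or phone."""
    def __init__(self, device_id: str):
        self.device_id = device_id
        self.policy: Optional[ContainerPolicy] = None

    def apply_policy(self, policy: ContainerPolicy) -> None:
        # Pushed from the SaaS or on-premises control panel.
        self.policy = policy

    def revoke(self) -> None:
        # IT wipes only the container; the user's personal apps are untouched.
        self.policy = None

    def open_app(self, app: str, authenticated: bool) -> bool:
        if self.policy is None or app not in self.policy.allowed_apps:
            return False
        if self.policy.require_auth and not authenticated:
            return False
        return True
```

The design choice worth noting is that revocation clears only the enterprise container, which is why the owner-managed device and the centrally controlled enterprise content can coexist.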

It's this kind of architected solution, I believe, that will ultimately appeal most to IT and service providers: the best of the thin client, the virtualized client, the owner-managed client, and a centrally controlled presentation layer for existing apps and data. It lets enterprise IT drive, but users get somewhere new fast.

Architecture is destiny in IT, but we're now seeing a shift to IT architecture as opposed to only enterprise architecture. You're going to need both. That's what happens when SaaS providers fulfill their potential, when data and analytics can come from many places, and when an individual's iPhone is a safe enterprise end-point.

And so as cloud providers like Salesforce.com provide the new models, and the likes of Bitzer Mobile extend the older models, we will see the benefits of cloud, mobile and social arrive bigger and faster than any of us would have guessed.

Wednesday, January 12, 2011

Move to cloud increasingly requires adoption of modern middleware to support PaaS and dynamic workloads

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: WSO2.

Learn more about WSO2 and cloud management
Download "Effective Cloud Management with WSO2 Strategies"
More information on WSO2 Stratos
Attend a WSO2 SOA Workshop to Energize your Business with SOA and Cloud

The role and importance of private cloud infrastructure models has now emerged as a stepping-stone to much needed new general operational models for IT.

Even a lot of the early interest in cloud computing was as much about a wish to escape the complex and wasteful ways of the old as an outright embrace of something new and well understood. Cloud computing may then well prove a catalyst for much-needed general IT transformation.

This cloud effect should force even the largest enterprises to remake themselves into business service factories. It's a change that mimics the maturation of other important aspects of business over the decades. Modernizing IT -- via Internet-enabled sourcing that better supports business processes -- comes in the same vein that industrial engineering, lean manufacturing, efficiency measurement, just-in-time inventory, and various maturity models revolutionized bricks and mortar businesses.

So the burning question now is how to attain IT transformation from current moves to cloud computing. What are the practical steps that can help an organization begin now? How can enterprises learn to adopt new services support and sourcing models that work for them in the short and long terms?

By recognizing the transformative role of private cloud infrastructures, IT leaders can identify and justify improved dynamic workloads and agile middleware that swiftly advance the process of IT maturity and efficiency.

To discuss how modern workload assembly in the private cloud provides a big step in the right direction for IT’s future, BriefingsDirect joined Paul Fremantle, the UK-based Chief Technology Officer and co-founder of WSO2, and Paul O’Connor, Chief Technology Officer at ANATAS International in Sydney, Australia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
O'Connor: It’s unfortunate, but it’s fair to say that all of the past initiatives that we tried in large, complex enterprises have been a failure. In some cases, we’ve actually made things worse.

Large enterprises, at the same time, still have to focus on efficiency, agility, and delivery to their end users, so as to achieve market competitiveness. We still have that maniacal focus on delivery and efficiency, and now some new thinking has come in.

We serve the Asia-Pacific region and have focused for a number of years on next-gen architecture -- technical architecture, enterprise architecture and service oriented architecture (SOA). In the last couple of years, we’ve been focusing as well on cloud, and on how these things come together to give us a shot at being more efficient in large complex enterprises.

Specifically, we [as an industry now] have cloud or the everything-as-a-service operating model coupled with a series of other trends in the industry that are being bolted together for a final assault on meaningful efficiency. You hit the nail on the head when you mentioned industrial engineering, because industrial engineering is the organizing principle for weaving all of these facets together.

When we focus on industrial engineering, we already have an established pattern. The techniques are lean manufacturing, process improvement and measurement of efficiency, just-in-time inventory, and maturity models. Ultimately, large enterprises are now approaching the problem effectively -- including cloud, including moving to new operating models. They're really focusing on building out that factory.

Fremantle: We've discovered that you cannot just build an IT system or an IT infrastructure, put your feet up, sit back, and say, "Well, that will do the business," because the business has learned that IT itself is transformative and you have to be pushing the boundaries in order to compete in the modern world.

Effectively, it’s no longer good enough to just put in a new system every 5 or 10 years and sit back and run it. People are constantly pushing to create new value, to build new processes, to find better ways of using what they have -- linking it together, composing it, and doing new things.

So the speed of delivery and the agility of organizations have become absolutely key to their competitiveness and fundamentally to their stock price. A huge move in agility came first with web, with portals, and with SOA. People discovered that, rather than writing things from scratch, they could reuse, they could reconfigure, and they could attach things together in new ways to build function. As they did that, the speed of development and the speed of creating these new processes has skyrocketed.

I'm a firm believer that the real success in cloud is going to come from designing systems that are inherently built to run in the cloud, whether that's about scale, elasticity, security, or things like multi-tenancy and self-service.

The first and most important thing is to use middleware and models that are designed around federated security. This is just a simple thing. If you look back at middleware -- for example, message queuing products from 10 years ago -- there was no inherent security in them.

If you look at the SOA stack and the SOAP models or even REST models, there are inherent security models such as WS-Trust, WS-SecureConversation, or in the REST model things like SAML2, OAuth and OpenID. These models allow you to build highly secure systems.

But, however much I think it's possible to build secure cloud systems, the reality is that today 90 percent of my customers are not willing or interested in hosting things in a public cloud. It’s driving a huge demand for private cloud. That’s going to change, as people gain confidence and as they start to protect and rebuild their systems with federated security in mind from day one, but that's going to take some time.

Those concepts of building things that run in the cloud and making the software inherently cloud aware, comes back to what Paul O'Connor was talking about with regard to having the right architecture for the future and for the cloud.

O'Connor: When we say better architecture, I think what we are talking about is the facets of architecture that are about process, that are about that how you actually design and build and deliver. At the end of the day, architecture is about change, and it must be agile. I can architect a fantastic Sydney Opera House, but if I can't organize the construction materials to show up in a structured way, then I can’t construct it. Effectively, we’ve embraced that concept now in large enterprises.

Specifically in IT, we find coming into play around this concept a lot of the same capabilities that we’ve already developed, some of which Paul alluded to, plus things like policy-based, model-driven configuration and governance, management and monitoring and asset metadata, asset lifecycle management types of things relative to services and the underlying assets that are needed to actually provision and manage them.

We're seeing those brought to bear against the difficult problem of how to create a very agile architecture that requires an order of magnitude fewer people to deliver and manage.

It helps with problems like this: How can I keep configured a thousand end-points in my enterprise, some of which might be everything from existing servers and web farms all the way up to instances of lean middleware like WSO2 that I might spin up in the cloud to process large workloads and all of the data associated with it?
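To make that thousand-endpoint configuration problem concrete, here is a toy sketch of desired-state reconciliation -- the general technique behind policy-based, model-driven configuration management that O'Connor describes. The data shapes and endpoint names are invented for illustration:

```python
def config_drift(desired: dict, actual: dict) -> dict:
    """Return, per endpoint, the settings that differ from the desired state.

    desired/actual map endpoint name -> {setting: value}. An endpoint
    missing from `actual` reports all of its desired settings as drift.
    """
    drift = {}
    for endpoint, want in desired.items():
        have = actual.get(endpoint, {})
        delta = {k: v for k, v in want.items() if have.get(k) != v}
        if delta:
            drift[endpoint] = delta
    return drift
```

A real tool would then apply the deltas automatically; the sketch only shows why a declarative model scales where manual configuration of a thousand endpoints cannot.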

Also, you're not allowed to do anything in large enterprises architecturally without getting past security. When I say get past security, I'm talking about the people who have magnifying glasses on your architectural content documents. It's important enough to say again what Paul brought out about location not being the way to secure your customer data anymore.

The motivation for a new security model is not just in terms of movement all the way to the other end of the agility rainbow, where in a public cloud you’re mashing up some of your data with everybody else's, potentially, and concerned about it going astray.

It’s really about that internal factory configuration and design that says, even internally in large enterprises, I can't rely on having zones of network security that I pin my security architecture to. I have to do it at the message level. I have to take some of the standards and technologies that have evolved over the past five, six, seven years -- the ones Paul Fremantle was referencing -- and bring them to bear to keep me secure.
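As a concrete illustration of message-level security -- trust carried by the message itself rather than by the network zone it traverses -- here is a minimal sketch using an HMAC integrity tag. Real deployments would use the standards named above (WS-Security, SAML2, OAuth) rather than a hand-rolled shared secret; this only shows the principle:

```python
import hashlib
import hmac
import json

def sign_message(payload: dict, key: bytes) -> dict:
    """Attach an integrity tag to the message, so any hop can verify it
    regardless of which network zone the message passed through."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(key, body, hashlib.sha256).hexdigest()
    return {"body": payload, "sig": tag}

def verify_message(envelope: dict, key: bytes) -> bool:
    """Recompute the tag over the body and compare in constant time."""
    body = json.dumps(envelope["body"], sort_keys=True).encode()
    expected = hmac.new(key, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["sig"])
```

Once security travels with the message like this, moving a workload between private, virtual private, and public zones no longer changes the trust model.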

Once I do that, then it's not that far of a leap to conceive of an environment where those same security structures, technologies, and processes can be used in a more hybrid architecture, where maybe it's not just secure internal private cloud, but maybe it's virtual private cloud running outside of the enterprise.

That brings in other facets that we really have to sort out. They have to do with how we source that capacity, even if it's virtual private cloud or even if it's tenanted. We have to work on our zone security model that talks about what's allowed to be where. We have to profile our data and understand how our data relates to workloads.

As Paul mentioned, we have to focus on federated identity and trust -- identity as a service. We have to assemble the way that processing environments, be they internal or external, get their identities, so that they can enforce security. And PKI -- this is a big one -- we have to get our certificates and private keys into the right spot.

Policy-driven governance

Once we build all those foundations for this, we then have to focus on policy-driven governance of how workloads are assembled with respect to all of those different security facets and all of the other facets, including quality of service, capacity, cost, and everything else. But, ultimately yes, we can solve this and we will solve this over the next few years. All this makes for good, effective security architecture in general. It's just a matter of helping people, through forums like this, to think about it in a slightly different way.

Fremantle: I believe that the world has slightly gone backward, and that isn't actually that surprising. When people move forward into such a big jump as to move from a fixed infrastructure to a cloud infrastructure, sometimes it's kind of easy to move back in another area. I think what's happened to some extent is that, as people have moved forward into cloud infrastructure, they have tended to build very straightforward monolithic applications.

The way that they have done that is to focus on, "I'm going to take something standalone and simple that I can cloud-enable, and that's going to be my first cloud project." What's happened is that people have avoided the complexity of saying, "What I really need to be doing is building composite applications with federated identity, with business process management (BPM), ESB flows, and so forth."

And, that's not that surprising, when they're taking on something new. But, very rapidly, people are going to realize that a cloud app on its own is just as isolated as an enterprise app that can't talk to anything.

The result is that people are going to need to move up the stack. At the moment, everyone is very focused on virtual machines (VMs) and IaaS. That doesn't help you with all the things that Paul O'Connor has been talking about with architecture, scalability, and building systems that are going to really be transformative and change the way you do things.

From my perspective, the way that you do that is that you stop focusing on VMs and you try and move up a layer, and start thinking about PaaS instead of IaaS.

You try to build things that use inherent cloud capabilities offered by a platform that give you scalability, federated security, identity, billing, all the things that you are going to need in that cloud environment that you don't want to have to write and build yourself. You want a platform to provide that. That's really where the world is going to have to move in order to take the full advantage of cloud -- PaaS.

The name of the game

O'Connor: I totally agree with everything Paul Fremantle just said. PaaS is the name of the game. If you go to 10 large enterprises, you're going to find them by and large focusing on IaaS. That's fine. It's a much lower barrier of entry relative to where most shops are currently in terms of virtualization.

But, when you get up into delivering new value, you're really creating that factory. Just to draw an analogy, you don't go to an auto factory and find the workers programming robots; they build cars. It's the same with business service delivery in IT -- it's really important to plug your reference model and your reference architectures for cloud into that factory approach.

You want your PaaS to be a one-stop-shop for business service production and that means from the very beginning to the very end. You have to tenant and support your customers all along the way. So it really takes the vertical stack, which is the way we currently think about cloud in terms of IaaS, and fans it out horizontally, so that we have a place to plug different customers in the enterprise into that.

And what we find is, just as in any good factory or any good process design, we really focus on what it is those customers need and when. For example, take one of many things that's typically broken in large enterprises: testing and test environments. Sometimes it takes weeks in a large organization to get test environments. We see customers who literally forgo key parts of testing and do a big-bang test at the end, because it is so difficult to get environments and to manage their configuration.

One of the ways we can fix that is by organizing that part of the PaaS story and wrapping in some of the attendant next-generation configuration-management capabilities. That would include things like service test virtualization, agile operations, asset metadata management, some of the application lifecycle management (ALM) stuff -- and a focus on systematically killing the biggest impediments, in order of most pain, in the enterprise. You can do that without worrying about, or going anywhere near, public cloud for data processing.

So that's the here and now, and I'd say that that's also supportive of a longer term, grand unified field theory of cloud, which is about consuming IT entirely as a service. To do that, we have to get our house in order in the same way and focus on organizing and re-organizing in terms of transformation in the enterprise to support first the internal customers, followed by using the same presets and tenets to focus on getting outside of the organization in a very structured way.

But eventually, as workloads move out of the organization and the focus shifts to direct interaction with the business, I think we will see larger appetites by the business for more applications and a need to put them into a place where they are more easily managed. Eventually -- it may take 20 years -- you'll see organizations turn off their internal IT departments and focus on business: focus on being an insurance company, a bank, or a logistics company. But we start in the here and now with PaaS.

New means to workload assembly

Next is workload assembly. What I mean by that is that we need a profile of what it is we do in terms of work. If I plug a job into the wall that is my next-gen IT architecture, what is it actually doing, and how will I know? The answer varies widely between phases of my development cycle.

Obviously, if I do load and performance testing, I've got a large workload. If I do production, I've got a large workload. If I move to big data and start doing massively scalable analytics -- because the business realizes it can go after such an application, thanks to where IT is taking the enterprise -- then that's a whole other ball of wax again.

What I have to do is understand those workloads. I have to understand them in terms of the data that they operate on, especially in terms of its confidentiality. I have to understand what requirements I need to assemble in terms of the workload processing.

If I have identity showing up, or a private key, or I have to do integration or wire into different systems and data sources, all of that has to be understood and assembled with that workload. I have to characterize the workload in a very specific way, because ultimately I want to use something like WSO2 Stratos to assemble what that workload needs to run. Once I can assemble it, then it becomes even easier for me to work my way through the dev, test, stage, release, operate cycle.
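A hypothetical sketch of what "characterizing a workload" might look like in code -- the attributes and the placement rule are invented for illustration, not drawn from Stratos:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    data_confidentiality: str   # e.g. "public", "internal", "restricted"
    needs_identity: bool        # must integrate with identity as a service
    needs_private_key: bool     # requires PKI material delivered to it
    peak_cores: int

def place(workload: Workload) -> str:
    """Toy placement rule: restricted data or key material stays in the
    private cloud; everything else may burst to a virtual private cloud."""
    if workload.data_confidentiality == "restricted" or workload.needs_private_key:
        return "private-cloud"
    return "virtual-private-cloud"
```

The value of an explicit profile like this is that the same descriptor drives assembly through every phase -- dev, test, stage, release, operate -- instead of being rediscovered at each step.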

Fremantle: What we have done is build our Carbon middleware on OSGi. About two years ago, we started thinking how we're going to make that really effective in a cloud environment. We came up with this concept of cloud-native software. We were lucky, because, having modularized Carbon, we had also kernelized it. We put everything around a single kernel. So, we were able to make that kernel operate in a cloud environment.

That’s the engineering viewpoint, but from the architecture viewpoint, what we're providing to architects like Paul O’Connor is a complete platform that gives you what you need to build out all of the great things he has been talking about.

That starts with some very simple things, like identity as a service, so that there is a consistent multi-tenant concept of identity, authorization, and entitlement available wherever you are in the private cloud, or the public cloud, or hybrid.

The next thing, which we think is absolutely vital, is governance monitoring, metering, and billing -- all available as a service -- so that you can see what's happening in this cloud. You can monitor and meter it, and you can allocate cost to the right people, whether that’s a public bill or an internal report within a private cloud.
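As a toy illustration of metering and billing as a service, here is a minimal per-tenant usage meter; the rate model and names are invented, and a real platform service would of course persist and secure this data:

```python
from collections import defaultdict

class Meter:
    """Toy metering service: records usage per tenant so cost can be
    allocated, whether as a public bill or an internal chargeback report."""

    def __init__(self, rate_per_call: float):
        self.rate = rate_per_call
        self.calls = defaultdict(int)

    def record(self, tenant: str, n: int = 1) -> None:
        self.calls[tenant] += n

    def bill(self, tenant: str) -> float:
        return self.calls[tenant] * self.rate
```

The design point is that metering lives in the platform, not in each application, so every service deployed on the PaaS gets chargeback for free.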

Then, we're saying that as you build out this cloud, you need the right infrastructure to be able to build these assemblies and to be able to scale. You need to have a cloud native app server that can be deployed in the cloud and elastically scale up and down. You need to have an ESB as a service that can be used to link together different cloud applications, whether they're public cloud, private cloud, or a combination of the two.

Pulling together

And, you need to have things like business process in the cloud, portal in the cloud, and so on, to pull these things together. Of course, on the way, you're going to need things like queues or databases. So, what we're doing with Stratos is pulling together the combination of those components that you need to have a good architecture, and making them available as a service, whether it's in a private cloud or a public cloud.

That is absolutely vital. It's about providing people with the right building blocks. If you look at what the IaaS providers are doing, they're providing people with VMs as the building blocks.

Twenty years ago, if someone asked me to build an app, I would have started with the machine and the OS and I would start writing code. But, in the last 20 years we've moved up the stack. If someone asked me to build an app now, I would start with an app server, a message queuing infrastructure, an ESB, a business process server, and a portal. All these components help me be much more effective and much quicker. In a cloud, those are the cloud components that you need to have lying around ready to assemble, and that to me is the answer.
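That "components ready to assemble" idea can be illustrated with a toy mediation pipeline, loosely in the spirit of ESB-style composition; the steps and names are invented for the example:

```python
def make_pipeline(*steps):
    """Compose mediation steps (transform, enrich, route) the way an ESB
    links applications: each step takes a message dict and returns one."""
    def run(message: dict) -> dict:
        for step in steps:
            message = step(message)
        return message
    return run

# Hypothetical mediation steps
def add_tenant(m: dict) -> dict:
    """Enrich the message with tenant context (multi-tenancy)."""
    return {**m, "tenant": "acme"}

def normalize_payload(m: dict) -> dict:
    """Transform the payload into the canonical form downstream apps expect."""
    return {**m, "payload": m["payload"].upper()}

pipeline = make_pipeline(add_tenant, normalize_payload)
```

Assembling from such ready-made blocks, rather than raw VMs, is exactly the move up the stack from IaaS to PaaS that Fremantle describes.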
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: WSO2.

Thursday, January 6, 2011

Case study: How McKesson develops software faster and better with innovative use of new HP ALM 11 suite

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series in conjunction with the recent HP Software Universe 2010 Conference in Barcelona.

At the conference we explored some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

Now, this customer case-study from the conference focuses on McKesson and how their business has benefited from advanced application lifecycle management (ALM). To learn more about McKesson's innovative use of ALM and its early experience with HP's new ALM 11 release, I interviewed Todd Eaton, Director of ALM Tools and Services at McKesson. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Eaton: In our business at McKesson, we have various groups that develop software, not only for internal use, but also external use by our customers and software that we sell. We have various groups within McKesson that use the centralized tools, and the ALM tools are pretty much their lifeblood. As they go through the process to develop the software, they rely heavily on our centralized tools to help them make better software faster.

The ALM suite that HP came out with is definitely giving us a bigger view. We've got QA managers that are in the development groups for multiple products, and as they test their software and go through that whole process, they're able to see holistically across their product lines with this.

We've set up projects with the same templates. With that, they have some cohesion and they can see how their different applications are going in an apples-to-apples comparison, instead of like the old days, when they had to manually adjust the data to try to figure out what their world was all about.

Better status

When HP came up with ALM 11, they took Quality Center and Performance Center and brought them together. That's the very first thing, because it was difficult for us and for the QA managers to see all of the testing activities. With ALM, they're able to see all of it and better gauge where they are in the process. So, they can give their management or their teams a better status of where we are in the testing process and where we are in the delivery process.

The other really cool thing that we found was the Sprinter function. We haven't used it as much within McKesson, because we have very specific testing procedures and processes. Sprinter is used more when you're doing ad hoc testing. It will record that testing so you can go back and repeat it.

How we see that being used is by extending that to our customers. When our customers are installing our products and are doing their exploratory testing, which is what they normally do, we can give them a mechanism to record what they are doing. Then, we can go back and repeat that. Those are a couple of pretty powerful things in the new release that we plan to leverage.

When we're meeting at various conferences and such, there's a common theme that we hear. One is workflow. That's a big piece. ALM goes a long way toward conquering the various workflows. Within an organization, there will be various workflows being used, but you're still able to bring up those measurements and have a fairly decent comparison.

With the various workflows in the past, there used to be a real disparate way of looking at how software is being developed. But with ALM 11, they're starting to bring that together more.

The other piece of it is the communication, and having the testers communicate directly to those development groups. There is a bit of "defect ping-pong," if you will, where QA will find a defect and development will say that it's not a defect. It will go back and forth, until they get an agreement on it.

ALM is starting to close that gap. We're able to push out the use of ALM to the development groups, and so they can see that. They use a lot of the functions within ALM 11 in their development process. So, they can find those defects earlier, verify that those are defects, and there is less of that communication disconnect between the groups.

We have several groups within our organization that use agile development practices. What we're finding is that the way they're doing work can integrate with ALM 11. The testing groups still want to have an area where they can put their test cases, do their test labs, run through their automation, and see that holistic approach, but they need it within the other agile tools that are out there.

ALM 11 is integrating well with those tools so far, and we're finding that it lends itself to that story of how those things are being done, even in the agile development process.

Company profile

McKesson is a Fortune 15 company. It is the largest health-care services company in the U.S. We have quite a few R&D organizations, spanning our two major divisions, McKesson Distribution and McKesson Technology Solutions.

In our quality center, we have about 200 projects with a couple of thousand registered users. We're averaging probably about 500 concurrent users every minute of the day, following the sun, as we develop. We have development teams not only in the U.S., but nearshore and offshore as well.

We're a fairly large organization, very mature in our development processes. In some groups, we have new development, legacy, maintenance, and such. So, we span the gamut on all the different types of development that you could find.

That's what we strive for. In my group, we provide the centralized R&D tools. ALM 11 is just one of the various tools that we use, and we always look for tools that will fit multiple development processes.

We also make sure that it covers the various technology stacks. You could have Microsoft, Java, Flex, Google Web Toolkit, that type of thing, and they have to fit that. You also talked about maturity and the various maturity models, be it CMMI or ITIL -- and, when you get into our world, we also have to take FDA regulations into consideration.

When we look at tools, we look at those three and at deployment. Is this going to be internally used, is this going to be hosted and used through an external customer, or are we going to package this up and send it out for sale?

We need tools that span across those four different deployment types -- four different levels -- and that can adapt to each one of them. If I'm a Microsoft shop that's doing agile for internally developed software, and I'm CMMI, that's one combination. But I may have a group right next door that's doing waterfall development on Java, is more ITIL-based, and deploys to a hosted environment.

They have to adapt to all that, and we needed to have tools that do that, and ALM 11 fits that bill.

ALM 11 had a good foundation. The test cases, the test set, the automated testing, whether functional or performance, the source of truth for that is in the ALM 11 product suite. And, it's fairly well-known and recognized throughout the company. So, that is a good point. You have to have a source of truth for certain aspects of your development cycle.

Partner tools

There are partner tools that go along with ALM 11 that help us meet various regulations. Something that we're always mindful of, as we develop software, is not only watching out for the benefit of our customers and our shareholders, but also understanding the regulations. New ones seem to come out practically every day. We try to keep that in mind, and the ALM 11 tool is able to adapt to that fairly easily.

When I talk to other groups about ALM 11 and what they should be watching out for, I tell them to have an idea of how their world looks. Whether you're a real small shop or a large organization like us, there are characteristics that you have to understand. Identify the different stacks you need to watch out for, and keep in mind the pieces of your organization that the tool has to adapt to. As long as they understand that, they should be able to adapt the tool to their processes and to their stacks.

Most of the time, when I see people struggling, it's because they couldn’t easily identify, "This is what we are, and this is what we are dealing with." They usually make midstream corrections that are pretty painful.

Something that we've done at McKesson that appears to work out really well [is to devote a team to managing the ALM tools themselves]. When I deal with various R&D vice presidents and directors, and testing managers and directors as well, the thing that they always come back to is that they have a job to do. And one of the things they don't want to have to deal with is trying to manage a tool.

They've got things that they want to accomplish and that they're driven by: performance reviews, revenue, and that type of thing. So, they look to us to be able to offload that, and to have a team to do that.

McKesson, as I said, is fairly large, thousands of developers and testers throughout the company. So, it makes sense to have a fairly robust team like us managing those tools. But, even in a smaller shop, having a group that does that -- that manages the tools -- can offload that responsibility from the groups that need to concentrate on creating code and products.
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: HP.
