Wednesday, January 12, 2011

Move to cloud increasingly requires adoption of modern middleware to support PaaS and dynamic workloads

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WSO2.

Learn more about WSO2 and cloud management
Download "Effective Cloud Management with WSO2 Strategies"
More information on WSO2 Stratos
Attend a WSO2 SOA Workshop to Energize your Business with SOA and Cloud

The role and importance of private cloud infrastructure models has now emerged as a stepping-stone to much needed new general operational models for IT.

Even a lot of the early interest in cloud computing was as much about a wish to escape the complex and wasteful ways of the old as about an outright embrace of something well understood and new. Cloud computing may then well prove a catalyst to much-needed general IT transformation.

This cloud effect should force even the largest enterprises to remake themselves into business service factories. It's a change that mimics the maturation of other important aspects of business over the decades. Modernizing IT -- via Internet-enabled sourcing that better supports business processes -- comes in the same vein that industrial engineering, lean manufacturing, efficiency measurement, just-in-time inventory, and various maturity models revolutionized bricks and mortar businesses.

So the burning question now is: how do we attain IT transformation from current moves to leverage and exploit cloud computing? What are the practical steps that can help an organization begin now? How can enterprises learn to adopt new services support and sourcing models that work for them in the short and long terms?

By recognizing the transformative role of private cloud infrastructures, IT leaders can identify and justify improved dynamic workloads and agile middleware that swiftly advance the process of IT maturity and efficiency.

To discuss how modern workload assembly in the private cloud provides a big step in the right direction for IT’s future, BriefingsDirect joined Paul Fremantle, the UK-based Chief Technology Officer and co-founder of WSO2, and Paul O’Connor, Chief Technology Officer at ANATAS International in Sydney, Australia. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
O'Connor: It’s unfortunate, but it’s fair to say that all of the past initiatives that we tried in large, complex enterprises have been a failure. In some cases, we’ve actually made things worse.

Large enterprises, at the same time, still have to focus on efficiency, agility, and delivery to their end users, so as to achieve market competitiveness. We still have that maniacal focus on delivery and efficiency, and now some new thinking has come in.

We serve the Asia-Pacific region and have focused for a number of years on next-gen architecture -- technical architecture, enterprise architecture and service oriented architecture (SOA). In the last couple of years, we’ve been focusing as well on cloud, and on how these things come together to give us a shot at being more efficient in large complex enterprises.

Specifically, we [as an industry now] have cloud or the everything-as-a-service operating model coupled with a series of other trends in the industry that are being bolted together for a final assault on meaningful efficiency. You hit the nail on the head when you mentioned industrial engineering, because industrial engineering is the organizing principle for weaving all of these facets together.

When we focus on industrial engineering, we already have an established pattern. The techniques are lean manufacturing, process improvement, measurement of efficiency, just-in-time inventory, and maturity models. Ultimately, large enterprises are now approaching the problem effectively, including cloud, including moving to new operating models. They're really focusing on building out that factory.

Fremantle: We've discovered that you cannot just build an IT system or an IT infrastructure, put your feet up, sit back, and say, "Well, that will do the business," because the business has learned that IT itself is transformative and you have to be pushing the boundaries in order to compete in the modern world.

Effectively, it's no longer good enough to just put in a new system every 5 or 10 years and sit back and run it. People are constantly pushing to create new value, to build new processes, to find better ways of using what they have, linking it together, composing it, and doing new things.

So the speed of delivery and the agility of organizations have become absolutely key to their competitiveness and fundamentally to their stock price. A huge move in agility came first with the web, with portals, and with SOA. People discovered that, rather than writing things from scratch, they could reuse, they could reconfigure, and they could attach things together in new ways to build function. As they did that, the speed of development and the speed of creating these new processes have skyrocketed.

I'm a firm believer that the real success in cloud is going to come from designing systems that are inherently built to run in the cloud, whether that's about scale, elasticity, security, or things like multi-tenancy and self-service.

The first and most important thing is to use middleware and models that are designed around federated security. This is just a simple thing. If you look back at middleware -- for example, message queuing products from 10 years ago -- there was no inherent security in them.

If you look at the SOA stack and the SOAP models or even REST models, there are inherent security models such as WS-Trust, WS-SecureConversation, or in the REST model things like SAML2, OAuth and OpenID. These models allow you to build highly secure systems.
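To make the idea concrete, here is a minimal Python sketch of message-level security in the spirit of the token-based models Fremantle mentions: identity travels with the message and is verified cryptographically, rather than being inferred from network location. It is illustrative only and stands in for no specific WS-* or OAuth implementation; the shared secret, subject names, and token format are all invented for the example.

import base64
import hashlib
import hmac
from typing import Optional

# In practice this key material would come from an identity service; a fixed
# shared secret is used here only to keep the sketch self-contained.
SHARED_SECRET = b"demo-secret"

def issue_token(subject: str) -> str:
    """Issue a signed token that binds identity to the message itself."""
    sig = hmac.new(SHARED_SECRET, subject.encode(), hashlib.sha256).digest()
    return subject + "." + base64.urlsafe_b64encode(sig).decode()

def verify_token(token: str) -> Optional[str]:
    """Return the subject if the signature checks out, else None."""
    subject, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_SECRET, subject.encode(), hashlib.sha256).digest()
    expected_b64 = base64.urlsafe_b64encode(expected).decode()
    return subject if hmac.compare_digest(expected_b64, sig) else None

token = issue_token("alice@tenant-a")
assert verify_token(token) == "alice@tenant-a"  # genuine caller accepted
assert verify_token(token[:-2] + "xx") is None  # tampered message rejected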

But, however much I think it's possible to build secure cloud systems, the reality is that today 90 percent of my customers are not willing or interested in hosting things in a public cloud. It’s driving a huge demand for private cloud. That’s going to change, as people gain confidence and as they start to protect and rebuild their systems with federated security in mind from day one, but that's going to take some time.

Those concepts of building things that run in the cloud and making the software inherently cloud-aware come back to what Paul O'Connor was talking about with regard to having the right architecture for the future and for the cloud.

O'Connor: When we say better architecture, I think what we're talking about is the facets of architecture that are about process -- about how you actually design, build, and deliver. At the end of the day, architecture is about change, and it must be agile. I can architect a fantastic Sydney Opera House, but if I can't organize the construction materials to show up in a structured way, then I can't construct it. Effectively, we've embraced that concept now in large enterprises.

Specifically in IT, we find coming into play around this concept a lot of the same capabilities that we've already developed, some of which Paul alluded to, plus things like policy-based, model-driven configuration and governance; management and monitoring; and asset metadata and asset lifecycle management relative to services and the underlying assets that are needed to actually provision and manage them.

We're seeing those brought to bear against the difficult problem of how I might create a very agile architecture that requires an order of magnitude fewer people to deliver and manage.

It helps with problems like this: How can I keep a thousand endpoints in my enterprise configured, when they range from existing servers and web farms all the way up to instances of lean middleware like WSO2 that I might spin up in the cloud to process large workloads and all of the data associated with them?

Also, you're not allowed to do anything in large enterprises architecturally without getting past security. When I say get past security, I'm talking about the people who have magnifying glasses on your architectural content documents. It's important enough to say again what Paul brought out about location not being the way to secure your customer data anymore.

The motivation for a new security model is not just about movement all the way to the other end of the agility rainbow, where, in a public cloud, you're potentially mashing up some of your data with everybody else's and are concerned about it going astray.

It's really about that internal factory configuration and design that says, even internally in large enterprises, I can't rely on having zones of network security that I pin my security architecture to. I have to do it at the message level. I have to take some of the standards and technologies that have evolved over the past five, six, seven years, the ones Paul Fremantle was referencing, and really bring them to bear to keep me secure.

Once I do that, then it's not that far of a leap to conceive of an environment where those same security structures, technologies, and processes can be used in a more hybrid architecture, where maybe it's not just secure internal private cloud, but maybe it's virtual private cloud running outside of the enterprise.

That brings in other facets that we really have to sort out. They have to do with how we source that capacity, even if it's virtual private cloud or even if it's tenanted. We have to work on our zone security model that talks about what's allowed to be where. We have to profile our data and understand how our data relates to workloads.

As Paul mentioned, we have to focus on federated identity and trust, so identity as a service. We have to assemble the way that processing environments, be they internal or external, get their identities, so that they can enforce security. And PKI -- this is a big one -- we have to get our certificates and private keys into the right spot.
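As a concrete illustration of that last point, the sketch below uses Python with the third-party cryptography package (an assumption for the example, not anything WSO2- or ANATAS-specific) to show the mechanics O'Connor describes: an identity service holds a private key, a processing environment holds the matching public key, and a signed assertion lets that environment enforce security wherever it runs.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Key pair generated by the "identity as a service" layer; getting the private
# key and certificates "into the right spot" is exactly the hard part noted above.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
# The assertion fields here are invented for illustration.
assertion = b"subject=workload-42;zone=internal;role=batch-processor"

# The identity service signs the assertion...
signature = private_key.sign(assertion, pss, hashes.SHA256())

# ...and any processing environment holding the public key can verify it,
# whether it runs in a private cloud, a virtual private cloud, or a hybrid.
public_key.verify(signature, assertion, pss, hashes.SHA256())  # raises InvalidSignature on tampering
print("assertion verified")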

Policy-driven governance

Once we build all those foundations for this, we then have to focus on policy-driven governance of how workloads are assembled with respect to all of those different security facets and all of the other facets, including quality of service, capacity, cost, and everything else. But, ultimately yes, we can solve this and we will solve this over the next few years. All this makes for good, effective security architecture in general. It's just a matter of helping people, through forums like this, to think about it in a slightly different way.

Fremantle: I believe that the world has gone slightly backward, and that isn't actually that surprising. When people make such a big jump as moving from a fixed infrastructure to a cloud infrastructure, sometimes it's easy to move back in another area. I think what's happened, to some extent, is that as people have moved forward into cloud infrastructure, they have tended to build very straightforward monolithic applications.

The way that they have done that is to focus on, "I'm going to take something standalone and simple that I can cloud-enable, and that's going to be my first cloud project." What's happened is that people have avoided the complexity of saying, "What I really need to be doing is building composite applications with federated identity, with business process management (BPM), ESB flows, and so forth."

Learn more about WSO2 and cloud management
Download "Effective Cloud Management with WSO2 Strategies"
More information on WSO2 Stratos
Attend a WSO2 SOA Workshop to Energize your Business with SOA and Cloud

And, that's not that surprising, when they're taking on something new. But, very rapidly, people are going to realize that a cloud app on its own is just as isolated as an enterprise app that can't talk to anything.

The result is that people are going to need to move up the stack. At the moment, everyone is very focused on virtual machines (VMs) and IaaS. That doesn't help you with all the things that Paul O'Connor has been talking about with architecture, scalability, and building systems that are going to really be transformative and change the way you do things.

From my perspective, the way that you do that is that you stop focusing on VMs and try to move up a layer, and start thinking about PaaS instead of IaaS.

You try to build things that use inherent cloud capabilities offered by a platform that give you scalability, federated security, identity, billing, all the things that you are going to need in that cloud environment that you don't want to have to write and build yourself. You want a platform to provide that. That's really where the world is going to have to move in order to take the full advantage of cloud -- PaaS.

The name of the game

O'Connor: I totally agree with everything Paul Fremantle just said. PaaS is the name of the game. If you go to 10 large enterprises, you're going to find them by and large focusing on IaaS. That's fine. It's a much lower barrier to entry relative to where most shops currently are in terms of virtualization.

But, when you get up into delivering new value, you're really creating that factory. Just to draw an analogy, you don't go to an auto factory and find the workers programming robots. They build cars. It's the same thing with business service delivery in IT -- it's really important to plug your reference model and your reference architectures for cloud into that factory approach.

You want your PaaS to be a one-stop shop for business service production, and that means from the very beginning to the very end. You have to tenant and support your customers all along the way. So it really takes the vertical stack, which is the way we currently think about cloud in terms of IaaS, and fans it out horizontally, so that we have a place to plug the different customers in the enterprise into it.

And what we find is, just as in any good factory or any good process design, we really focus on what it is those customers need and when. For example, take one of many things that's typically broken in large enterprises: testing and test environments. Sometimes it takes weeks in a large organization to get test environments. We see customers who literally forgo key parts of testing and do a big-bang test approach at the end, because it is so difficult to get environments and to manage the configuration of those environments.

One of the ways we can fix that is by organizing that part of the PaaS story and wrapping around it some of the attendant next-generation configuration management capabilities. That would include things like service test virtualization, agile operations, asset metadata management, and some of the application lifecycle management (ALM) stuff, while focusing on systemically killing the biggest impedances, in order of most pain, in the enterprise. You can do that without worrying about, or going anywhere near, public cloud for data processing.
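A sketch of what that could look like in practice, assuming a PaaS that exposes environment provisioning as an API. The Python below is purely hypothetical -- the FakePaaS class and its methods are invented for illustration and do not correspond to Stratos or any real product -- but it shows the shift from waiting weeks in a ticket queue to treating test environments as disposable, template-built resources.

import time
import uuid

class FakePaaS:
    """Stand-in for a PaaS provisioning API (hypothetical, for illustration)."""

    def create_environment(self, template: str, ttl_hours: int) -> dict:
        # A real platform would build the environment from the named template;
        # here we just return a record describing what was requested.
        return {"id": str(uuid.uuid4()), "template": template,
                "expires_at": time.time() + ttl_hours * 3600, "status": "ready"}

    def destroy_environment(self, env_id: str) -> None:
        print(f"reclaimed environment {env_id}")

paas = FakePaaS()
env = paas.create_environment(template="perf-test-medium", ttl_hours=8)
try:
    print(f"running test suite in {env['id']} (configuration comes from the template, not hand assembly)")
finally:
    # Environments are disposable, so configuration drift never accumulates.
    paas.destroy_environment(env["id"])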

So that's the here and now, and I'd say that it's also supportive of a longer-term, grand unified field theory of cloud, which is about consuming IT entirely as a service. To do that, we have to get our house in order in the same way and focus on organizing and reorganizing, in terms of transformation in the enterprise, to support the internal customers first, followed by using the same presets and tenets to get outside of the organization in a very structured way.

But eventually we'll be moving workloads out of the organization and focusing on direct interaction with the business. I think we will see larger appetites by the business for more applications and a need to put them into a place where they are more easily managed. Eventually, and it may take 20 years, I think you'll see organizations move to turn off their internal IT departments and focus on business: on being an insurance company, a bank, or a logistics company. But, we start in the here and now with PaaS.

New means to workload assembly

Next is workload assembly. What I mean by that is that we need a profile of what it is we do in terms of work. If I plug a job into the wall that is my next-gen IT architecture, what is it actually doing, and how will I know? The types of things vary widely between the phases of my development cycle.

Obviously, if I do load and performance testing, I've got a large workload. If I'm in production, I've got a large workload. If I move to big data, and I'm starting to do massively scalable analytics because the business realizes it can go after such an application, thanks to where IT is taking the enterprise, then that's a whole other ball of wax again.

What I have to do is understand those workloads. I have to understand them in terms of the data that they operate on, especially in terms of its confidentiality. I have to understand what requirements I need to assemble in terms of the workload processing.

If I have identity showing up, or private keys, if I have to do integration, or I have to wire into different systems and data sources, all of that has to be understood and assembled with that workload. I have to characterize the workload in a very specific way, because ultimately I want to use something like WSO2 Stratos to assemble what that workload needs to run. Once I can assemble it, then it becomes even easier for me to work my way through the dev, test, stage, release, operate cycle.
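One way to read "characterizing workload in a very specific way" is as structured metadata that an assembly engine can act on. The Python sketch below is a guess at the shape of such a profile, not the actual Stratos model; the field names and placement rules are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class WorkloadProfile:
    name: str
    data_confidentiality: str                          # "public", "internal", or "restricted"
    needs_identity: bool = True
    integrations: list = field(default_factory=list)   # systems and data sources to wire into
    expected_load: str = "medium"                      # "low", "medium", or "large"

def assemble(profile: WorkloadProfile) -> dict:
    """Pick a placement and the platform services to attach, from the profile."""
    placement = ("private-cloud" if profile.data_confidentiality != "public"
                 else "virtual-private-cloud")
    services = ["monitoring"]
    if profile.needs_identity:
        services.append("identity-as-a-service")
    if profile.integrations:
        services.append("esb-as-a-service")
    if profile.expected_load == "large":
        services.append("elastic-scaling")
    return {"workload": profile.name, "placement": placement, "attach": services}

print(assemble(WorkloadProfile("claims-analytics", "restricted",
                               integrations=["policy-db", "crm"],
                               expected_load="large")))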

Fremantle: What we have done is build our Carbon middleware on OSGi. About two years ago, we started thinking about how we were going to make that really effective in a cloud environment. We came up with this concept of cloud-native software. We were lucky, because, having modularized Carbon, we had also kernelized it. We put everything around a single kernel. So, we were able to make that kernel operate in a cloud environment.

That’s the engineering viewpoint, but from the architecture viewpoint, what we're providing to architects like Paul O’Connor is a complete platform that gives you what you need to build out all of the great things that Paul O’Connor has been talking about.

That starts with some very simple things, like identity as a service, so that there is a consistent multi-tenant concept of identity, authorization, and entitlement available wherever you are in the private cloud, or the public cloud, or hybrid.

The next thing, which we think is absolutely vital, is governance monitoring, metering, and billing -- all available as a service -- so that you can see what's happening in this cloud. You can monitor and meter it, and you can allocate cost to the right people, whether that's a public bill or an internal report within a private cloud.
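For readers who want the flavor of metering as a service, here is a deliberately tiny Python sketch of the accounting side: per-tenant usage is recorded transparently so it can later feed a public bill or an internal chargeback report. It is a toy, not how Stratos implements metering; the tenant name and the decorator are illustrative.

import time
from collections import defaultdict

usage = defaultdict(lambda: {"calls": 0, "seconds": 0.0})

def metered(tenant: str):
    """Decorator: record per-tenant call counts and elapsed time."""
    def wrap(fn):
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                record = usage[tenant]
                record["calls"] += 1
                record["seconds"] += time.perf_counter() - start
        return inner
    return wrap

@metered("tenant-a")
def handle_request():
    time.sleep(0.01)  # stand-in for real work

for _ in range(3):
    handle_request()
print(dict(usage))  # feeds the bill or the internal report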

Then, we're saying that as you build out this cloud, you need the right infrastructure to be able to build these assemblies and to be able to scale. You need to have a cloud native app server that can be deployed in the cloud and elastically scale up and down. You need to have an ESB as a service that can be used to link together different cloud applications, whether they're public cloud, private cloud, or a combination of the two.
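Elastic scaling ultimately reduces to a policy the platform evaluates continuously. As a minimal sketch in Python (a plain threshold rule, assumed for illustration rather than taken from any WSO2 component):

def desired_instances(current: int, cpu_utilization: float,
                      low: float = 0.3, high: float = 0.7,
                      min_n: int = 1, max_n: int = 20) -> int:
    """Threshold policy: add capacity when hot, release it when idle."""
    if cpu_utilization > high:
        return min(max_n, current + 1)
    if cpu_utilization < low:
        return max(min_n, current - 1)
    return current

assert desired_instances(4, 0.9) == 5  # scale out under load
assert desired_instances(4, 0.1) == 3  # scale in when idle
assert desired_instances(4, 0.5) == 4  # steady state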

Pulling together


And, you need to have things like business process in the cloud, portal in the cloud, and so on, to pull these things together. Of course, on the way, you're going to need things like queues or databases. So, what we're doing with Stratos is pulling together the combination of those components that you need to have a good architecture, and making them available as a service, whether it's in a private cloud or a public cloud.

That is absolutely vital. It's about providing people with the right building blocks. If you look at what the IaaS providers are doing, they're providing people with VMs as the building blocks.

Twenty years ago, if someone asked me to build an app, I would have started with the machine and the OS and I would start writing code. But, in the last 20 years we've moved up the stack. If someone asked me to build an app now, I would start with an app server, a message queuing infrastructure, an ESB, a business process server, and a portal. All these components help me be much more effective and much quicker. In a cloud, those are the cloud components that you need to have lying around ready to assemble, and that to me is the answer.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WSO2.

Learn more about WSO2 and cloud management
Download "Effective Cloud Management with WSO2 Strategies"
More information on WSO2 Stratos
Attend a WSO2 SOA Workshop to Energize your Business with SOA and Cloud


Thursday, January 6, 2011

Case study: How McKesson develops software faster and better with innovative use of new HP ALM 11 suite

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series in conjunction with the recent HP Software Universe 2010 Conference in Barcelona.

At the conference we explored some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

Now, this customer case-study from the conference focuses on McKesson and how their business has benefited from advanced application lifecycle management (ALM). To learn more about McKesson's innovative use of ALM and its early experience with HP's new ALM 11 release, I interviewed Todd Eaton, Director of ALM Tools and Services at McKesson. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Eaton: In our business at McKesson, we have various groups that develop software, not only for internal use, but also external use by our customers and software that we sell. We have various groups within McKesson that use the centralized tools, and the ALM tools are pretty much their lifeblood. As they go through the process to develop the software, they rely heavily on our centralized tools to help them make better software faster.

The ALM suite that HP came out with is definitely giving us a bigger view. We've got QA managers that are in the development groups for multiple products, and as they test their software and go through that whole process, they're able to see holistically across their product lines with this.

We've set up projects with the same templates. With that, they have some cohesion, and they can see how their different applications are doing in an apples-to-apples comparison, instead of, like in the old days, having to manually adjust the data to try to figure out what their world was all about.

Better status

When HP came up with ALM 11, they took Quality Center and Performance Center and brought them together. That's the very first thing, because it was difficult for us and for the QA managers to see all of the testing activities. With ALM, they're able to see all of it and better gauge where they are in the process. So, they can give their management or their teams a better status of where we are in the testing process and where we are in the delivery process.

The other really cool thing that we found was the Sprinter function. We haven't used it as much within McKesson, because we have very specific testing procedures and processes. Sprinter is used more as you're doing ad hoc testing. It will record that so you can go back and repeat those.

How we see that being used is by extending that to our customers. When our customers are installing our products and are doing their exploratory testing, which is what they normally do, we can give them a mechanism to record what they are doing. Then, we can go back and repeat that. Those are a couple of pretty powerful things in the new release that we plan to leverage.

When we're meeting at various conferences and such, there's a common theme that we hear. One is workflow. That's a big piece. ALM goes a long way toward being able to conquer the various workflows. Within an organization, there will be various workflows being done, but you're still able to bring up those measurements and have a fairly decent comparison.

With the various workflows in the past, there used to be a real disparate way of looking at how software is being developed. But with ALM 11, they're starting to bring that together more.

The other piece of it is the communication, and having the testers communicate directly to those development groups. There is a bit of "defect ping-pong," if you will, where QA will find a defect and development will say that it's not a defect. It will go back and forth, until they get an agreement on it.

ALM is starting to close that gap. We're able to push out the use of ALM to the development groups, and so they can see that. They use a lot of the functions within ALM 11 in their development process. So, they can find those defects earlier, verify that those are defects, and there is less of that communication disconnect between the groups.

We have several groups within our organization that use agile development practices. What we're finding is that the way they're doing work can integrate with ALM 11. The testing groups still want to have an area where they can put their test cases, do their test labs, run through their automation, and see that holistic approach, but they need it within the other agile tools that are out there.

It's integrating well with it so far, and we're finding that it lends itself to that story of how those things are being done, even in the agile development process.

Company profile

McKesson is a Fortune 15 company. It is the largest health-care services company in the U.S. We have quite a few R&D organizations, and they span across our two major divisions, McKesson Distribution and McKesson Technology Solutions.

In our Quality Center, we have about 200 projects with a couple of thousand registered users. We're averaging probably about 500 concurrent users every minute of the day, following the sun, as we develop. We have development teams not only in the U.S., but nearshore and offshore as well.

We're a fairly large organization, very mature in our development processes. In some groups, we have new development, legacy, maintenance, and such. So, we span the gamut on all the different types of development that you could find.

That's what we strive for. In my group, we provide the centralized R&D tools. ALM 11 is just one of the various tools that we use, and we always look for tools that will fit multiple development processes.

We also make sure that it covers the various technology stacks. You could have Microsoft, Java, Flex, Google Web Toolkit, that type of thing, and it has to fit all of those. You also talked about maturity and the various maturity models, be it CMMI or ITIL; and when you get into our world, we also have to take the FDA into consideration.

When we look at tools, we look at those three and at deployment. Is this going to be internally used, is this going to be hosted and used through an external customer, or are we going to package this up and send it out for sale?

We need tools that span those four different types, four different levels, and that can adapt to each one of them. If I'm a Microsoft shop that's doing agile for internally developed software, and I'm CMMI, that's one. But I may have a group right next door that's doing waterfall development on Java, is more ITIL-based, and gets deployed to a hosted environment.

They have to adapt to all that, and we needed to have tools that do that, and ALM 11 fits that bill.

ALM 11 had a good foundation. The test cases, the test set, the automated testing, whether functional or performance, the source of truth for that is in the ALM 11 product suite. And, it's fairly well-known and recognized throughout the company. So, that is a good point. You have to have a source of truth for certain aspects of your development cycle.

Partner tools

There are partner tools that go along with ALM 11 that help us meet various regulations. Something that we're always mindful of, as we develop software, is not only watching out for the benefit of our customers and our shareholders, but also understanding the regulations. New ones are coming out practically every day, it seems. We try to keep that in mind, and the ALM 11 tool is able to adapt to that fairly easily.

When I talk to other groups about ALM 11 and what they should be watching out for, I tell them to have an idea of how their world is. Whether you're a real small shop or a large organization like us, there are characteristics that you have to understand. They need to identify the different stacks of things to watch out for and keep in mind the pieces of their organization that they have to adapt to. As long as they understand that, they should be able to adapt the tool to their processes and to their stacks.

Most of the time, when I see people struggling, it's because they couldn’t easily identify, "This is what we are, and this is what we are dealing with." They usually make midstream corrections that are pretty painful.

Something that we've done at McKesson that appears to work out real well [is devote a team to managing the ALM tools themselves]. When I deal with various R&D vice presidents and directors, and testing managers and directors as well, the thing that they always come back to is that they have a job to do. And one of the things they don't want to have to deal with is trying to manage a tool.

They've got things that they want to accomplish and that they're driven by: performance reviews, revenue, and that type of thing. So, they look to us to be able to offload that, and to have a team to do that.

McKesson, as I said, is fairly large, thousands of developers and testers throughout the company. So, it makes sense to have a fairly robust team like us managing those tools. But, even in a smaller shop, having a group that does that -- that manages the tools -- can offload that responsibility from the groups that need to concentrate on creating code and products.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, December 21, 2010

HP's Kevin Bury on how cloud and SaaS help pave the way to increased IT efficiency in 2011, and beyond

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast from the HP Software Universe 2010 Conference in Barcelona, an interview with Kevin Bury, Vice President and General Manager, and Neil Ashizawa, Manager of Products, both with HP Software as a Service.

We were at Software Universe in early December to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

This discussion, with two executives from HP, focuses on the software as a service (SaaS) market and how it and cloud computing are reshaping the future of IT.

Dana Gardner, Principal Analyst at Interarbor Solutions, moderated the discussion just after the roll-out of HP’s big application lifecycle management (ALM) news, the release of ALM 11. [See more on HP's new ALM 11 offerings.]

Here are some excerpts on the future of SaaS discussion:
Bury: We are seeing a lot of interest in the market today for SaaS and cloud. I think it's an extension of what we've seen over the last decade of companies looking at ways that they can drive the most efficiency from their IT budgets. And as they're challenged, especially in these trying economic times, to do as much as they can, they're looking for ways to optimize their investment.

When you look at what they are doing with SaaS, it gives them the ability to outsource applications, take advantage of the cloud, take advantage of web technologies to be able to deliver those software solutions to their customers or constituents inside of the business, and do it in a way where they can drive speed to value very, very quickly.

They can take advantage of getting more bang for their buck, because they don’t have to have their people focused on those initiatives internally and they're able to do it in a financial model that gives them tremendous value, because they can treat it as an operating expense as opposed to a capital expense. So, as we look to the interest of our customers, we're seeing a lot more interest in, "HP, help us understand what is available as a service."

Various components then include SaaS, infrastructure as a service (IaaS), certainly platform as a service (PaaS), with the ultimate goal of moving more and more into the cloud. SaaS is a stepping stone to get there, and today about half of all of the cloud types of solutions start with SaaS.

Where is this thing going? When is it going to end? Is it going to end? I don’t believe it is. I think it’s an ongoing continuum. It’s really an evolution of what services their constituents are trying to consume, and the business is responding by looking for different alternatives to provide those solutions.

For example, if you look at where SaaS got started, it got started because business departments were frustrated that IT wasn't responsive enough. They went off and made decisions to start consuming application service provider (ASP) sourced solutions, and they implemented them very, very quickly. At first, IT was unaware of this.

Now, as IT has become more aware of this, they recognize that their business users are expecting more. So, they're saying, "Okay, we need to not only embrace it, but we need to bring it in-house, figure out how we can work with them to ensure that we are still driving standardization, and we're still addressing all of the compliance and security issues."

Corporate data is absolutely the most valuable asset that most companies have, and so they have seen now that they have to embrace it. But, as they look down the road, it moves from just SaaS into now looking at a hybrid model, where they're going to embrace IaaS and Platform as a Service, which really formed the foundation of what the cloud is and what we can see of it today. But, it will continue to evolve, mature, and offer new things that we don’t even know about yet.

Somewhere in between

Ashizawa: About a year, year-and-a-half ago, people were still trying to get their minds wrapped around this idea of cloud. We're at a stage now where a lot of organizations are actually adopting the cloud as a sourcing strategy or they are building other strategies to adopt it. We're probably past early adopter and more into mainstream. I anticipate it will continue to grow and gain momentum.

Now, IT is becoming much more involved. I would say that they are actually becoming more of a broker. Before, when it came to providing services to drive the business, they were more focused on build. Now, with the cloud, they're acting in the role of a broker, as Kevin said, so that they can deliver the business benefits of the cloud.

One of the key differentiators, as it's evolved, the way I see it, is really in the economic principles behind cloud versus managed service and ASP. With cloud, as Kevin mentioned earlier, you basically leverage your operating expense budgets and reduce the capitalization that you would typically still need with a historic ASP or managed service.

Cloud brings to the table a very compelling economic business model that is very important to large organizations.

But if they're going to adopt a SaaS solution, they should vet out the integration possibilities -- get out in front of that. And integration doesn't stop at the technical level. There are business aspects of integration as well. You need to make sure that the service levels are going to be what your business users desire and that you can enforce them, and also consider integration from the support model.

If the user needs help, what's the escalation? What's the communication point? Who is the person who is actually going to help them, given the fact that now there is a cloud vendor in the mix, as well as the cloud consumer?

Bury: Organizations can become overwhelmed by the promise and the hype of cloud and what it can offer. My recommendation is usually to start with something small. I go out and spend a lot of time talking to our customers and prospective customers. There are a couple of very common bits of feedback that I hear that CXOs are looking at, when they view where to start with a cloud or as a service type of initiative.

The first of these is: is it core to my business? If a business process is absolutely core to what they are doing, it's probably not a great place to start. However, if it's not core, if it's something that is ancillary or complementary to that, it may make some sense to look at outsourcing it, or moving it to the cloud.

The second is whether it's mission-critical or not. If it's mission-critical and it's core, that's something you want your scarce, very highly valued IT resources working on, because that's what ultimately drives the business value of IT. Going back to what Neil said earlier, IT is becoming a broker. They only have so much bandwidth that they can devote to those solutions and offerings for their customers. So, if it's not core and it's not critical, those are good candidates.
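Bury's two screening questions reduce to a simple decision rule. Here is a hedged Python distillation -- the process names are invented examples, and real decisions obviously weigh more factors:

def cloud_candidate(process: str, is_core: bool, is_mission_critical: bool) -> str:
    """Apply the 'not core, not critical first' rule of thumb described above."""
    if not is_core and not is_mission_critical:
        return f"{process}: good first candidate for SaaS or cloud"
    if is_core and is_mission_critical:
        return f"{process}: keep with your best internal IT resources"
    return f"{process}: evaluate case by case"

print(cloud_candidate("expense reporting", is_core=False, is_mission_critical=False))
print(cloud_candidate("claims adjudication", is_core=True, is_mission_critical=True))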

We recommend starting small. Certainly, IT needs to be very involved with that. Then, as you get more and more comfortable and you're seeing more value, you can continue to expand. In addition, we see projects that make a lot of sense, things like testing as a service, where the IT organizations can leverage technology that's available through their partners, but delivered via a cloud or a SaaS solution, as opposed to bringing it in-house.

Key opportunity for HP

We see SaaS as one of the key drivers, one of the strategic initiatives for HP to embrace. As I talk with my peers on the leadership team, we recognize SaaS as one of only two consumption models customers have for obtaining software from HP. In the traditional license play, they can consume the license and pay maintenance or, if they want to treat it as an operating expense, it will be via the SaaS model.

As we look to what we need to do, we're investing very heavily in making all of our applications SaaS-ready, so that customers can stand them up in their own data center, in our data center, or via a hybrid, where we may involve a combination of those or even include a third party.

For example, they may have a managed service provider that is providing some of the testing services. To your point earlier about integration, HP, because of the breadth and depth of our applications, can provide the ability to integrate that three-way type of solution, whereas other companies don't have that type of depth to pull it off.

As SaaS now becomes much more mainstream and much more mature, big customers are now looking to companies like HP, because of the fact that we have the size, the depth, and the breadth of the solutions.

Looking for a relationship

T
hey're looking for that relationship that is going to transcend this solution and is going to be part of the overall relationship between HP and their organization over the long haul. So, size definitely matters when it comes to cloud and SaaS.

The thing that's important to note here is that this is an evolution or a maturation. It's interesting, having been in this space for so long, to see what customers are now looking at. And as I start to look out to the future and speculate about where they want to go next, I'm seeing a lot of indications toward a model where customers will want to consume this idea of everything as a service. We've even seen customers recently say, "You're already doing this for us," whatever that as-a-service solution might be.

"Can you also take some of our people, put them back into that, and then just charge us that monthly or annual fee?" Neil and I spent a lot of time contemplating this idea of business process as a service. That’s what we're speculating could be a next generation of SaaS or cloud. It’s the idea of customers who wanted to consume business processes as service, which is just another step toward consuming everything as a service.

How will companies in the future really be able to deliver on the promise of what is and what we are recognizing as the Instant-On type of enterprise? It’s the ability to take in data very, very quickly and then be able to analyze it, make assessments on it, make decisions and to be, in the term you use, very agile in the way that they are reacting to these inputs.

Developing patterns

I mentioned earlier this movement of those applications or areas of the business that are not core and not critical outside of the data center. So that's certainly something, when we look at things like complementing what IT does around testing as a service. Security as a service is a big area where we're seeing growth. And project portfolio management (PPM), helping those IT organizations manage their day-to-day business, is another area where we're seeing a lot of growth.

In the past, companies generally have been very siloed. Information would come in and they didn’t have the access nor the visibility into it from another division or another department. When you look at what the instant-on enterprise is going to, it’s the ability to consume information very, very quickly, analyze it, then make decisions, and make directional changes to what’s going on inside of their environment.

As-a-service and cloud are very much enablers of that, because they give you the ability to take advantage of technology as an enabler, as opposed to the past, when technology served just one solution or one business process. Now, they're able to have that stratify the entire organization. So, they have the insight and the agility to make real-time types of decisions.

The key is to get engaged early, learn as much as you can about the cloud and about as a service, and then look to companies like HP that have the experience in doing this. We’ve been doing it for more than 10 years. We’ve got a lot of success stories that we can point to on how we can help companies take advantage of the cloud and also what to avoid when you move into the cloud.

The single most important thing is not to go into it with preconceived expectations that it's going to be nirvana, or that it's going to be easy. Moving a difficult business process from in house to out of house, right into the cloud, doesn't mean that the problem or the challenges go away. You still need to approach it with discipline, rigor, and formal types of processes and methodologies, which is what IT is really good at.

Ashizawa: You really want to look for trust. If you're going to be outsourcing business processes to a vendor, you really want to have that trust. What we're seeing is that there is a strong linkage between the compliance levels that you have in your organization and the trust that your cloud vendor can provide a solution that helps you maintain your compliance and standards.

So, at the end of the day, you really want to just make sure that you go into this with a trusted vendor that has a proven experience, that can really make sure that they understand your need and your requirements, and they have a SaaS solution that can really fit your organization.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.


Tuesday, December 14, 2010

Case study: Automated client management from HP helps Vodafone standardize in 30 countries

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Barcelona.

We were here earlier this month to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

This customer case study from the conference focuses on Vodafone and how they worked toward improved and automated PC client management. I interviewed two executives from their IT organization: Michael Janssen, Manager of Deployment Automation with Vodafone, and Michael Schroeder, also Manager of Deployment Automation, both based at Vodafone Group in Düsseldorf, Germany.

Here are some excerpts:
Janssen: Vodafone had independent countries operating their [IT] environments by themselves. So, we had 30 countries worldwide with all the solutions in place. That meant 30 times software deployment, 30 times application packaging, 30 times Active Directory, and so on.

Vodafone decided in 2006 to go for a global IT project and centralization in terms of PC client automation. It came down to us to evaluate the current solutions in place in all these countries and then come up with a solution which would be the best solution for the new global environment. That was our main problem.

Standardization and reducing cost

If you're starting a centralization process, then it's all about standardization and reducing cost. That meant reducing cost by reducing the effort of the solutions and making as much as possible automated and self-service. That was the main reason we started this exercise.

Schroeder: The most important thing was that administration should be very easy. It shouldn’t be too complex in the end and it should fit every need in every country. At that time, we had a whole zoo of hardware and software products. We had about 8,000 different software applications in place at that time. We tried to reduce that as much as we could.

The overall number of clients in Vodafone is 65,000, and at the moment, we've finished the transition for 52,000 clients. Nearly 80 percent is done after four years. Of course, there is a long wait with the smaller countries, and we need to migrate 15 other countries that are still in the loop.

In the past, in each of these 30 countries, we had one to four people working within the client automation environments. Today, we have five people left doing that globally. You can imagine 30 times a minimum of two persons. That was 60 people working for client deployment, and that's now reduced to five for the global solution.

Always pros and cons

There are always pros and cons with standardization and with centralization. Getting consensus takes a little bit longer, because there are now strict processes for bringing in new applications. But the main advantage is that many of the applications are already there for any country. We test once and can deploy to many, instead of doing this 30 times, as we did in the past, and we avoid any double spend of money.

Then, of course, with the global environment, the main advantage is that now we are all connected, which was not possible in the past, because all the networks were independent and all the applications were independent. There was no unified messaging or anything like that. This is the major benefit of the global environment.

Security is one big thing we're now dealing with. For example, if we are talking about client automation, we're talking about patch management as well. We're able to bring out patches -- for example, security patches from Microsoft -- within two days, if it’s a real hot-fix, or even within 24 hours, if it’s a major issue.

Janssen: First, there was the evaluation phase, where we studied all the countries. What were the products that they used in the past? Then we decided what was the best way forward. For us, that was a major split between countries that already used the HP Client Automation solution and the other countries that used other deployment suites.

That was also one of the major criteria for the final decision. Countries that used HP Client Automation had a much higher success rate, 90 percent or higher, in deploying applications and patches than the others, which were on average at 70 percent. So, this was the first big decision point.

The second was countries using HP Client Automation had less operational staff than the others. It was mainly one to two full-time employees fewer than in countries that operated with other tools.

Policy-based technology

Schroeder: If we're talking about the Client Automation Suite from HP, we're talking about policy-based or a desired state technology. That is one of the criteria. Everything is done every day. For example, if you're trying to deploy applications to clients, this is done every day. It's controlled every day, managed every day, and without any admin or user interaction. That’s a great point for us.
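The essence of the desired-state approach Schroeder describes is a reconciliation loop: compare what a client should have with what it actually has, and compute the corrective actions. The Python sketch below illustrates the idea only -- the application names are invented, and this is not how HP Client Automation is implemented.

def reconcile(desired: dict, actual: dict) -> list:
    """Compute the actions that bring a client back to its desired state."""
    actions = []
    for app, version in desired.items():
        if actual.get(app) != version:
            actions.append(("install", app, version))
    for app in actual:
        if app not in desired:
            actions.append(("remove", app))
    return actions

desired = {"office-suite": "14.2", "vpn-client": "3.1"}
actual = {"office-suite": "13.0", "games-pack": "1.0"}
print(reconcile(desired, actual))
# [('install', 'office-suite', '14.2'), ('install', 'vpn-client', '3.1'), ('remove', 'games-pack')]

Run against every client every day, without admin or user interaction, a loop like this is what "controlled every day, managed every day" looks like in practice.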

Janssen: What I can recommend is that there are two main issues that you need to overcome. First, you can only deploy what you receive from the business. We already had experience in the Vodafone Germany organization, where we did the same exercise five years ago. You need to have a strict software standardization process in place. There is one main rule for that.

Also, in the global environment, that means that if there is a business application, then the business needs to have an application owner for that. Otherwise, the application does not exist in the whole company.

The application owner is responsible for the whole application lifecycle: describing the application installation documents, doing the final testing and approval after packaging, looking after security issues of the application, and looking after upgrades or version and release changes, and so on.

It's not the packaging team, the client team, or the central IT team that is responsible for all the applications and their functionality. We gave that function and responsibility back to the business, and now they're all responsible, and they give the final approval before an application goes live.

Schroeder: We have in place self-service, which is a web application. You can go to a store and choose different applications to install on your machine, depending on your needs. You can choose an application, just click a box, and the application request goes to your line manager who has to approve the license costs, if there are any. Then, the policy will go back to your machine and the installation of this specific application goes straight to your machine. The user experience with it is very good.

Janssen: The self-service web shop is not only for software. We use that also for other user needs, like access rights, permissions on some projects, mobile device management and so on. This is a global web shop solution, but very effective. It avoids any help desk calls for new applications, paperwork to approve licenses, and so on. It’s very efficient and, of course, one of our main parts of this new global solution.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.


Thursday, December 9, 2010

WAN governance and network unification make or break successful cloud and hybrid computing implementations

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Ipanema Technologies.

Get a free white paper on WAN Governance for Cloud Computing.

Get the free Cloud Networking Report.

The latest BriefingsDirect podcast discussion focuses on the rapidly escalating complexity and consequent need for network management innovation in the age of hybrid computing.

And that's because the emphasis nowadays is on "networks," not "network." Long gone are the days when a common and controlled local area network (LAN) served as the workhorse for critical applications and data delivery.

With the increased interest in cloud, software as a service (SaaS), and mobile computing, applications are jockeying across multiple networks, both in terms of how services are assembled, as well in how users in different environments access and respond to these critical applications.

Indeed, cloud computing forces a collapse in the gaps between the former silos of private, public, and personal networking domains. Since the network management and governance tasks have changed and continue to evolve rapidly, so too must the ways in which solutions and technologies address the tangled networks environment we all now live and work in.

Automated network unification and pervasive wide area network (WAN) governance are proving essential to ensure quality, scalability, and security across all forms of today's application use. Join us as we explore the new and future path to WAN governance, and to better understand how Ipanema Technologies is working to help its customers make clear headway, so that the next few years bring about a hybrid cloud computing opportunity and not a hastening downward complexity spiral.

We're here now to discuss the new reality of networks and applications delivery performance with Peter Schmidt, Chief Technology Officer, North America, for Ipanema Technologies, and David White, Vice President of Global Business Development at Ipanema. The panel is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Schmidt: As soon as you start using multiple networks, you're in the cloud, because now you're making use of resources that are outside the control of your own IT organization and your service provider. Whether people think about it or not, just by adding a second network, they're taking their first steps into the cloud.

Anybody who carries a smartphone is experiencing the personal, private, public boundary of operations themselves. But what seems natural to somebody carrying an iPhone or Blackberry is a tremendous challenge to the traditional models of IT.

Even as little as three years ago, the focus was on how to get the most performance for your applications out of your single MPLS network. I'm talking about enterprises where all of their applications are hosted on their own premises. They've got a single MPLS network from one service provider, and they're still struggling to deliver reliable application performance across the infrastructure.

Now, we throw in multiple places to host applications. You have SaaS, Salesforce, and Google Docs. You have platform as a service (PaaS) and infrastructure as a service (IaaS). People’s critical applications can be hosted in numerous locations, many of which are beyond their control. Then, as I mentioned, these are being accessed via multiple networks, and you have the legacy MPLS plus the Internet.

There is an increasing number and diversity of models for those networks -- whether the Internet connection reaches a service provider POP and then travels via MPLS to the enterprise's own data center, or a content delivery network sits in the path. So we've got a situation where enterprises that were struggling to master the complexity of one data center and one network are now using multiple data centers and multiple networks. Something is going to have to give.

White: This is also all focused once again on the branch office. We've had server consolidation, where we tried to remove any source of problems from the branch and take intelligence out of the branch. As cloud computing has come in, we're now putting more stress on the branch.

We're not necessarily putting intelligence out there, but we have 2, 3, 4, 5, or more networks all coming into the branch at the same time, and that traffic has to be managed. It's something a lot of people haven't thought about.

When you look at the announcements that have been coming out and the hype on cloud in the industry, it's all focused on the data center. That’s because most of the vendors say, "That’s where the big bucks are being made. We are going to make money out of the data center."

Ipanema, on the other hand, is focused on application acceleration, and in order to do that, you have to take care of what goes on in the branch and manage it.

At a high level, the first thing you have to do is provide some type of WAN governance, which simply means making sure you have taken care of the management of your business. That's what WAN governance means -- providing the kind of control over your business that allows it to continue to be productive, even as you're making changes to your WAN.

Simply put, you first of all have to find out what's going on in the network. You have to understand what's happening on those 4, 5, or 6 different flows that are all going in from different sources to your branch. You have to be able to control those flows and manage them, so that you don't have your edge device or edge router getting congested.
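
To make that visibility step concrete, here is a minimal sketch, in Python, of per-application, per-link flow accounting of the kind an edge device might perform. The application signatures, field names, and sample flows are hypothetical illustrations, not Ipanema's actual classifier.

```python
from collections import defaultdict

# Hypothetical application signatures: (protocol, destination port) -> app.
# Real traffic classifiers are far richer; this is illustrative only.
APP_SIGNATURES = {
    ("tcp", 443): "saas-https",
    ("udp", 5060): "voip",
}

def classify(flow):
    """Map a flow record to an application name, defaulting to 'other'."""
    return APP_SIGNATURES.get((flow["proto"], flow["dport"]), "other")

def account(flows):
    """Aggregate observed bytes per application and per incoming link."""
    usage = defaultdict(int)
    for flow in flows:
        usage[(classify(flow), flow["link"])] += flow["bytes"]
    return dict(usage)

# Example: traffic arriving at one branch over two different networks.
flows = [
    {"proto": "tcp", "dport": 443, "bytes": 1_200_000, "link": "internet"},
    {"proto": "udp", "dport": 5060, "bytes": 80_000, "link": "mpls"},
]
print(account(flows))
# {('saas-https', 'internet'): 1200000, ('voip', 'mpls'): 80000}
```

Only once every flow is attributed to an application and a link can the per-flow control White describes be applied without congesting the edge device.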

You have to be able to guarantee performance and, very importantly, you also have to then unify, balance, and optimize the performance over those multiple network points that are coming into your branch.

If you're doing it the right way, at least what we would say is the right way, it needs to be dynamic, automatic and, in Ipanema terminology, autonomic, meaning that not only does it happen automatically, but the network learns and manages itself. It doesn’t require extra human intervention.
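
The word "autonomic" suggests a closed measure-decide-act loop with no operator inside it. The following is a minimal sketch of such a loop, under the assumption that each cycle scores every link and steers traffic to the best one; the metrics, weights, and function names are invented for illustration and are not Ipanema's implementation.

```python
import time

# Hypothetical measured state for two links into a branch (illustrative).
LINKS = {
    "mpls":     {"loss": 0.001, "delay_ms": 30},
    "internet": {"loss": 0.020, "delay_ms": 80},
}

def score(metrics):
    """Lower is better; the weights on loss and delay are assumptions."""
    return metrics["loss"] * 1000 + metrics["delay_ms"] * 0.1

def autonomic_loop(measure, steer, cycles=3, interval_s=1):
    """Measure, decide, act -- repeatedly, with no human in the loop.
    Each cycle re-scores every link and steers traffic to the best one,
    so the system keeps adapting by itself as conditions change."""
    for _ in range(cycles):
        metrics = {name: measure(name) for name in LINKS}
        best = min(metrics, key=lambda name: score(metrics[name]))
        steer(best)
        time.sleep(interval_s)

# A trivial run with stubbed measurement and steering:
autonomic_loop(lambda name: LINKS[name], lambda best: print("steer ->", best))
```

A learning system would also feed the observed outcome of each steering decision back into the scoring, which is what distinguishes autonomic from merely automatic behavior.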

Schmidt: The way the enterprise is going to get its arms around this increasingly complex environment is not through throwing people at it. Throwing people at network management has never worked and, now that the environment is more complex, it's going to work even less.

The whole point of cloud is that you're going to use virtualization and automation to bring up instances of servers quickly and automatically, and that's where this order-of-magnitude improvement potential comes from. But, if you don't want the multiple networks to be the bottleneck, then you have to apply automation in that domain as well. That's what we've done. We've made it possible for the network to run itself to meet the business's objectives.

The effect that has in a branch office with multiple network connections is really to hide all the complexity that that multiplicity brings, because the system is managing them all in a unified way. That's what we're getting at when we're talking about network unification. The details that bedeviled traditional management just kind of disappear.

WAN governance is what the CIO wants to buy. CIOs don’t want to buy a WAN, and they certainly don't want to buy WAN optimization controllers. What they want to buy is reliable application performance across their infrastructure with the best possible performance and lowest possible cost. My high-level definition of WAN governance is that it's the technology and techniques that allow the CIO to buy that.

iPhone app

We're about to release our first iPhone app to provide an interface into our central management system, and it's terrific. It's exactly the kind of thing the CIO would want to have in hand, and it shows the value of democratizing IT and putting it into the hands of all of the people tied to the enterprise.

The Ipanema system is designed to provide full control, giving the enterprise IT organization not just visibility into, and reporting on, every user's access to the IT infrastructure, but also the ability to automatically control all of that traffic in accordance with various policies.


We don't see any other way around it. You're not going to do this manually. You've got to build smarter systems. We happen to think that we are a huge piece of that puzzle in terms of how we control things at the network level.

White: And we look at WAN governance as really a piece of IT governance, for which there is an official ISO standard. In a way, WAN governance describes what you have to do to manage your wide area network.

Ipanema strongly believes that WAN governance is a standard that should be on the books, but isn't yet. If you're really going to have governance over your IT, and the network is a strategic asset for the enterprise, then you need to have governance over the wide area as well.

Schmidt: Ipanema has pioneered a unique approach that stems from the idea that all that matters is that end users are able to get good performance from their applications, because that’s when they are most productive. When application performance slows down, end users start surfing the web. So, ensuring the performance of the application is critical. That’s what the enterprise needs to reorient itself toward.

The fundamental input into our system is a list of applications and their performance objectives. The system itself is intelligent enough to monitor and dynamically control all of the traffic to achieve those objectives on behalf of the business. So, it's imposing the business's will on the network.
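
As a rough illustration, that fundamental input might look like the sketch below: a list of applications, each with a priority and performance targets, plus a check of measured performance against those targets. The field names and thresholds are assumptions for illustration, not Ipanema's schema.

```python
# Hypothetical application objectives: priority plus performance targets.
OBJECTIVES = [
    {"app": "voip", "priority": 1, "max_delay_ms": 150, "max_loss": 0.01},
    {"app": "crm",  "priority": 2, "min_kbps": 512},
    {"app": "web",  "priority": 3},  # best effort, no explicit targets
]

def meets_objective(obj, measured):
    """Check one application's measured performance against its objective.
    Missing targets default to 'always satisfied'."""
    if measured["delay_ms"] > obj.get("max_delay_ms", float("inf")):
        return False
    if measured["loss"] > obj.get("max_loss", 1.0):
        return False
    if measured["kbps"] < obj.get("min_kbps", 0):
        return False
    return True

print(meets_objective(OBJECTIVES[0],
                      {"delay_ms": 120, "loss": 0.002, "kbps": 64}))  # True
```

The point of such a declarative input is that the business states what it needs per application, and the system, not an operator, works out how to deliver it.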

White: It starts with our three founders who got together and took a look at what the needs were from an application perspective. Their goal was to find a way to ensure that, as users, we all had the performance we needed and that enterprises could deliver performance from an application perspective to their users.

That's what they started out with. Then they took a look at how you would deliver that service and recognized that the best way to provide the right kind of consistent application performance is to do it over the wide area and to look at what happens over the WAN.

They were very visionary in recognizing that application performance over the wide area would be the single most critical piece of the puzzle in how enterprises deliver service to their users, working in conjunction with the major service providers and network providers, because they are the ones that deliver the wide-area connections.

When they started out, they were told that they were wrong and weren't looking at it the right way. When you see what’s happened to the network and how it’s evolved, particularly now that we are moving into the cloud generation, they were focused exactly in the right area. Although we have a lot of new features, the basic architecture has been there for years and it’s been proven in major service provider networks and is installed on a global basis.

Schmidt: There are a couple of things that are the secret sauce, but the easiest one to explain probably is the fact that our appliances actually cooperate with each other, and this is unique. Our appliances know about not just the traffic that’s impinging on their network interfaces, but they actually know about the flows that are active everywhere on the network.

It's actually not quite that simple. They really only need to know about the flows that might conflict with the flows they are managing. But conceptually, every device on the network knows about all the other flows it needs to know about. They are constantly communicating with each other -- which flows are active and what performance those flows are getting from the infrastructure, which includes the whole WAN, but also the data center and the service. So what does that enable?
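
Before turning to what that enables, here is a minimal sketch of that cooperative flow-state sharing, reduced to its essence. The class, the push-style exchange, and the flow records are hypothetical; the point is only that every appliance ends up holding the same global flow table.

```python
# Hypothetical sketch: appliances that advertise locally observed flows to
# every peer, so each one can decide from a global view of the network.
class Appliance:
    def __init__(self, name):
        self.name = name
        self.peers = []   # other appliances on the network
        self.flows = {}   # flow_id -> performance; the shared global view

    def observe(self, flow_id, perf):
        """Record a locally seen flow and advertise it to all peers."""
        self.flows[flow_id] = perf
        for peer in self.peers:
            peer.flows[flow_id] = perf  # every device learns every flow

# Two branch appliances end up with the identical global flow table.
a, b = Appliance("branch-a"), Appliance("branch-b")
a.peers, b.peers = [b], [a]
a.observe("voip-123", {"loss": 0.0, "delay_ms": 25})
b.observe("crm-456", {"loss": 0.01, "delay_ms": 90})
assert a.flows == b.flows
```

In practice a system would share only the potentially conflicting flows Schmidt mentions, but the shared-table idea is the same.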

Global perspective

Sharing this information means that all of the decisions made by an individual device are made from a global perspective. They're no longer making a local optimization decision. They each run the same algorithm and can come to the same result. And that result is a globally optimum traffic mix on the network.

When I say globally optimum, that’s a valid technical term as opposed to a marketing term, because the information has been collected globally from the entire system. In terms of optimum, what I mean is the best possible performance from the most applications using the given network infrastructure and its status at that point in time.
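
One way to see why identical algorithms over identical shared data yield a consistent global result is a deterministic allocation function like the sketch below. Every appliance running this same function over the same flow table computes the same traffic mix; the priority-ordered allocation is an invented stand-in for Ipanema's actual optimization.

```python
def allocate(flows, capacity_kbps):
    """Deterministically grant link capacity to flows in priority order.
    Sorting by (priority, flow_id) makes the result identical on every
    device that holds the same shared flow table."""
    mix, remaining = {}, capacity_kbps
    ordered = sorted(flows.items(), key=lambda kv: (kv[1]["priority"], kv[0]))
    for flow_id, f in ordered:
        grant = min(f["demand_kbps"], remaining)
        mix[flow_id] = grant
        remaining -= grant
    return mix

flows = {
    "voip-123": {"priority": 1, "demand_kbps": 100},
    "crm-456":  {"priority": 2, "demand_kbps": 2000},
}
print(allocate(flows, 1500))  # {'voip-123': 100, 'crm-456': 1400}
```

A real optimizer would also weigh measured loss and delay per link, but determinism over shared global state is what lets independent devices agree without a central controller.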

White: The point I'd like to make is that it's absolutely impossible for an enterprise network manager to measure this in a cloud environment, because you only see a piece of the network. If you're looking at your network the way you have for the last 10 or 20 years, there is no way you can see everything -- unless you've done something different, which is what we provide.

The closing point here is that the first step is visibility into the network, and the next step is providing the control. You need to do that in the cloud environment, and that's what Ipanema does.

Get a free white paper on WAN Governance for Cloud Computing.

Get the free Cloud Networking Report.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Ipanema Technologies.
