Wednesday, December 14, 2011

Case study: How SEGA Europe uses VMware to standardize cloud environment for globally distributed game development

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Our next VMworld case study interview focuses on how a major game developer in Europe has successfully leveraged the hybrid cloud model.

We’ll learn how SEGA Europe is standardizing its cloud infrastructure across its on-premises operations, as well as with a public cloud provider. The result is a managed and orchestrated hybrid environment for testing and developing multimedia games, one that scales dynamically to meet the many performance requirements at hand.

This story comes as part of a special BriefingsDirect podcast series from the recent VMworld 2011 Conference in Copenhagen. The series explores the latest in cloud computing and virtualization infrastructure developments. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here to tell us more about how the hybrid approach to multiple, complementary cloud instances is meeting SEGA’s critical development requirements in a new way is Francis Hart, Systems Architect at SEGA Europe, in London. The case study interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Clearly one of the requirements in game development is the need to ramp up a lot of servers to do the builds, but then they sit there essentially unproductive between the builds. How did you flatten that out or manage the requirements around the workload support?


Hart: Typically, in the early stages of development, there is a fair amount of testing going on, and it tends to be quite small -- the number of staff involved in it and the number of build iterations.

Later on, when the game reaches the end of its product life-cycle, we’re talking multiple game iterations a day, and the game size has gotten very large at that point. The number of people involved in the testing to meet the deadlines and get the game shipped on date runs into the hundreds and hundreds of staff.

Gardner: How has virtualization and moving your workloads into different locations evolved over the years?

Hart: We work on the idea of having a central platform for a lot of these systems. Using virtualization to do that allowed us to scale out at certain times. Historically, we always had an on-premises VMware platform to do this. Very recently, we’ve been looking at ways to use that resource within a cloud to cut down on some of the capex loading, but also to remain a little bit more agile with some of the larger titles, especially the online games that are coming around.

Gardner: We’re all very familiar with the amazing video games that are being created nowadays. And SEGA of course is particularly well-known for the Sonic the Hedgehog franchise going back a number of years. What are some of the other critical requirements that you have from a systems architecture perspective when developing these games?

Hart: We have a lot of development studios across the world. We're working on multiple projects. We need to ensure that we supply them with a highly scalable and reliable solution in order to test, develop, and produce the game and the code in time. ... We’re probably looking at thousands of individual developers across the world.

... The first part was dealing with the end of the process, and that was the testing and the game release process. Now, we’re going to be working back from that. The next big area that we’re actively involved in is getting our developers to develop online games within the hybrid environment.

So they’re designing the game and the game’s back-end servers to be optimal within the VMware environment. And then, also pushing from staging to live is a very simple process using the Cloud Connector.

We're restructuring and redesigning IT within SEGA to be more of a development operations team that provides a service to the developers and to the company.

Gardner: How did you start approaching that from your IT environment, to build the right infrastructure?

Targeting testing

Hart: One of the first areas we targeted very early on was the last process in those steps, the testing, arguably one of the most time-consuming processes within the development cycle. It happens pretty much all the way through as well, to ensure that the game behaves as it should, that it’s tested, and that the customer gets the end-user experience they require.

The biggest technical goal that we had for this was being able to move large amounts of data, uncompiled code, between the different testing offices around the world and the staff. Historically we had some major issues in securely moving that data around, and that is why we started looking into cloud solutions.

For very, very large game builds, and we're talking builds above 10 gigabytes, they ended up being couriered within the country and sent by overnight file transfer outside of the country. So, very old-school methods.

We needed both to secure that, to make sure we understood where the game builds were, and also to understand exactly which version each of the testing offices was using. So it’s about gaining control, but also providing more security.

Gardner: So we’re seeing a lot more role-playing game (RPG) titles, and more games themselves running in the cloud. That must influence what you're doing in terms of thinking about your future direction.

Hart: Absolutely. We’ve been looking at things like the hybrid cloud model with VMware as a development platform for our developers. That's really what we're working on now. We've got a number of games in the pipeline that have been developed on the hybrid cloud platform. It gives the developers a platform that is exactly the same and mirrored to what it would eventually be in the online space through ISPs like Colt, which should be hosting the virtual cloud platform.

Gaining cost benefits

And one of the benefits we're seeing in the VMware offering is that, regardless of which data center in the world it runs in, it's the same standard platform. It also allows us to leverage multiple ISPs, and hopefully gain some cost benefits from that.

Very early on we were in discussions with Colt and also VMware to understand what technology stack they were bringing into the cloud. We started doing a proof of concept with VMware and a professional services company, and together we were able to come up with a proof of concept to distribute our game testing code, which previously relied on a very old-school distribution system. So anything better would improve the process.

There wasn't too much risk to the company. So we saw the opportunity to have a hybrid cloud setup that allows us to have an internal cloud system to distribute the code to the majority of UK game testers and to leverage high bandwidth between all of our sites.

For the game testing studios around Europe and the world, we could use a hosted version of the same service, running on the Colt vCloud Director (vCD) platform, to supply this to trusted testing studios.


Gardner: When you approach this hybrid cloud model, what about managing that? What about having a view into what’s going on so that you know what aspects of the activity and requirements are being met and where?

Hart: The virtual cloud environment of vCloud Director has a web portal that allows you to manage a lot of this configuration in a central way. We’re also using VMware vCloud Connector, a product that allows you to move apps between different cloud data centers. And doing this allows us to manage it at one location and simply clone the same system to another cloud data center.
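For readers who want a concrete picture, the "manage in one place, clone everywhere" workflow Hart describes can be sketched roughly as follows. This is only an illustrative sketch: CloudEndpoint, export_template, and import_template are hypothetical stand-ins for whatever the vCloud Director portal and vCloud Connector actually do under the covers, and the names and URLs are invented.

# Hypothetical sketch of cloning a managed system from an internal cloud
# to a hosted one. Not the real vCloud Director / vCloud Connector API.

class CloudEndpoint:
    """Represents one cloud organization, on-premises or hosted (illustrative only)."""

    def __init__(self, url, org):
        self.url, self.org = url, org

    def export_template(self, catalog, name):
        # Capture the named template from this endpoint's catalog and
        # return it as a transportable package (e.g., an OVF archive).
        print(f"Exporting '{name}' from {self.org} at {self.url}/{catalog}")
        return {"name": name, "ovf": f"{name}.ovf"}

    def import_template(self, catalog, package):
        # Upload the package into this endpoint's catalog so testers there
        # can deploy an identical build-delivery system.
        print(f"Importing '{package['name']}' into {self.org} at {self.url}/{catalog}")


def clone_build_delivery_system(source, targets):
    package = source.export_template("BuildDelivery", "build-delivery-v1")
    for target in targets:
        target.import_template("BuildDelivery", package)


if __name__ == "__main__":
    internal = CloudEndpoint("https://vcd.internal.example", "SEGA-UK")
    hosted = CloudEndpoint("https://vcd.hosted.example", "SEGA-EU")
    clone_build_delivery_system(internal, [hosted])

The point of the sketch is simply that the master system lives in one catalog and every other data center receives a copy of the same template, which is what keeps the configuration in a single place.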

In that regard, the configuration very much was in a single place for us in the way that we designed the proof of concept. It actually helped things, and the previous process wasn’t ideal anyway. So it was a dramatic improvement.

One of the immediate benefits was around the design process. It's very obvious that we were tightening up security within our build delivery to the testing studios. Nothing traveled with a courier on a bike anymore; everything moved within a secured transaction between the two offices.

Risk greatly reduced

Also from a security perspective, we understood exactly what game assets and builds were in each location. So it really helped the product development teams to understand what was where and who was using what, and so from a risk point of view it’s greatly reduced.

In terms of stats and the amount of data throughput, it’s pretty large, and we’ve been moving terabytes pretty much weekly nowadays. Now we’re going completely live with the distribution network.

So it’s been a massive success. All of the UK testing studios are using the build delivery system day to day, and for the European ones we’ve got about half the testing studios on board that build delivery system now, and it’s transparent to them.

VMware was very good at allowing us to understand the technology and that's one of the benefits of working with a professional services reseller. In terms of gotchas, there weren't too many. There were a lot of good surprises that came up and allowed us to open the door to a lot of other VMware technologies.


Now, we're also looking at automating a lot of processes within vCenter Orchestrator and other VMware products. They really gave us a good stepping stone into the VMware catalogue, rather than just vSphere, which we were using previously. That was very handy for us.

Gardner: I’d like to just pause here for a second. Your use of vSphere 4.1 must have been an important stepping stone, not only for the dynamic ability to ramp your environments and support infrastructure up and down, but also for skills.

Hart: Absolutely. We already have a fair footprint in Amazon Web Services (AWS), and it was a massive skill jump: we needed to train members of the staff in order to use that environment. With the VMware environment, as you said, we already have a large skill set using vSphere. We have a large team that supports our corporate infrastructure and we've actually got VMware in our co-located public environment as well. So it was very, very reassuring that the skills were immediately transferable.

Gardner: Now that you've done this, any words of wisdom, 20/20 hindsight, that you might share with others who are considering moving more aggressively into private cloud, hybrid cloud, and ultimately perhaps the full PaaS value?

Hart: Just get some hands-on experience and play with the cloud stack from VMware. It’s inexpensive to have a go and just get to know the technology stack.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Monday, December 12, 2011

Efficient data center transformation requires tracking and proving improvements incrementally across critical IT tasks

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.

You don’t need to go very far in IT nowadays to find people who are diligently working to do more with less, even as they're working to transform and modernize their environments.

One way to keep the interest high -- and those operating and investment budgets in place -- is to show fast results, and then use that to prime the pump for even more improvement -- and even more funding -- with perhaps even growing budgets.

The latest BriefingsDirect discussion then explores how to build quick data center project wins, by leveraging project tracking and scorecards, as well as by developing a common roadmap for both facilities and IT infrastructure.

We'll hear from a panel of HP experts on some of their most effective methods for fostering consolidation and standardization across critical IT tasks and management. This is the second in a series of podcasts on data center transformation (DCT) best practices and is presented in conjunction with a complementary video series.

With us now to explain how these solutions can drive successful data center transformation is our panel, Duncan Campbell, Vice President of Marketing for HP Converged Infrastructure and small to medium-sized businesses (SMBs); Randy Lawton, Practice Principal for Americas West Data Center Transformation & Cloud Infrastructure Consulting at HP, and Larry Hinman, Critical Facilities Consulting Director and Worldwide Practice Leader for HP Critical Facility Services and HP Technology Services. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Campbell: We've seen that when a customer is successful in breaking down a large project into a set of quick wins, there are some very positive outcomes from that.

Breeds confidence

Number one, it breeds confidence, and this is a confidence that is actually felt within the organization, within the IT team, and into the business as well. So it builds confidence both inside and outside the organization.

The other key benefit is that when you can manifest these quick wins in terms of some specific return on investment (ROI) business outcome, that also translates very nicely as well and gets a lot of key attention, which I think has some downstream benefits that actually help out the team in multiple ways.

It's not just about attracting the best talent and executing well, but it's about marketing the team’s results as well.

One of the benefits in that is that you can actually break down these projects just in terms of some specific types of wins. That might be around standardization, and you can see a lot of wins there. You can quickly consolidate to blades. You can look at virtualization types of quick wins, as well as some automation quick wins.

We would advocate that customers think about this in terms of almost a step-by-step approach, knocking that down, getting those quick wins, and then marketing this in some very tangible ways that resonate very strongly.

Gardner: When you start to develop a cycle of recognition, incentives, and buy-in, we could also start to see some sort of a virtuous adoption cycle, whereby that sets you up for more interest, an easier time evangelizing, and so on.

Campbell: A virtuous cycle is well put. That really allows the team to get the additional green light to go to the next step in terms of the blueprint that they are trying to execute on. It also gets a green light in terms of additional dollars and, in some cases, additional headcount to add to their team as well.

What this does, and I like this term, the virtuous cycle, is not only allow you to attract key talent, but really allow you to retain folks. That means you're getting the best team possible to duplicate that, to get those additional wins, and it really does indeed become a virtuous cycle.

TCO savings

A good example is where we have been able to see a significant total cost of ownership (TCO) type of savings with one of our customers, McKesson, that in fact was taking one of these consolidated approaches with all their development tools. They saw a considerable savings, both in terms of dollars, over $12.9 million, as well as a percentage of TCO savings that was upwards of 50 percent.

When you see tangible exciting numbers like that, that does grab people’s attention and, you bet, it becomes part of the whole social-media fabric and people want to go to a winner. Success breeds success here.

Lawton: Many of the transformation programs we engage in with our customers are substantially complex and span many facets of the IT organization. They often involve other vendors and service providers in the customer organization.

So there’s a tremendous amount of detail to pull together and organize in these complex engagements and initiatives. We find that there’s really no way to do that, unless you have a good way of capturing the data that’s necessary for a baseline.

It’s important to note that we manage these programs through a series of phases in our methodology. The first phase is strategy and analysis. During that phase, we typically run a discovery on all IT assets that would include the data center, servers, storage, the network environment, and the applications that run on those environments.

From that, we bridge into the second phase, which is architect and validate, where we begin to solution out and develop the strategies for a future-state design that includes the standardization and consolidation approaches, and from that we begin to assemble the business case. In detailed design, we build out those specifications and begin to create the data that determines what the future-state transformation is.

Then, through the implementation phase, we have detailed scorecards that are required to be tracked to show progress of the application teams and infrastructure teams that contribute to the program in order to guarantee success and provide visibility to all the stakeholders as part of the program, before we turn everything over to operations.

During the course of the last few years, our services unit has made investments in a number of tools that help with the capture and management of the data, the scorecarding, and the analytics through each of the phases of these programs. We believe that helps offer a competitive advantage for us and helps enable more rapid achievement of the programs from our customer perspective.

Complex engagements

In these complex engagements, it’s normally some time before there are quick-win types of achievements that are really notable.

For example, in the HP IT transformation program we undertook over several years, running through 2008, we were building six new data centers so that we could consolidate 185 worldwide. So it was some period of time from the beginning of the program until the point where we moved the first application into production.

All along the way we were scorecarding the progress on the build-out of the data centers. Then, it was the build-out of the compute infrastructure within the data centers. And then it was a matter of being able to show the scorecarding against the applications, as we could get them into the next generation data centers.

If we didn't have the ability to show and demonstrate the progress along the way, I think our stakeholders would have lost patience or would not have felt that the momentum of the program was going on the kind of track that was required. With some of these tools and approaches and the scorecarding, we were able to demonstrate the progress and keep very visible to management the movements and momentum of the program.

Cloning applications

A very notable example is one of our telecom customers we worked with during the last year and finished a program earlier this year. The company was purchasing the assets of another organization and needed to be able to clone the applications and infrastructure that supported business processes from the acquired company.

Within the mix of delivery for stakeholders in the program, there were nine different companies represented. There were some outsourced vendors from the application support side in the acquiree’s company, outsourcers in the application side for the acquiring company, and outsourcers in the data centers that operated data center infrastructure and operations for the target data centers we were moving into.

What was really critical in pulling all this together was to be able to map out, at a very detailed level, the tasks that needed to be executed, and in what time frame, across all of these teams.

The final cutover migration required over 2,500 tasks across these nine different companies, all of which needed to be executed in less than 96 hours in order to meet the downtime window required by the acquiring company’s executive management.

It was the detailed scorecarding, and operating war rooms to keep those scorecards up to date in real time, that allowed us to accomplish that. There’s just no possible way we would have been able to do that ahead of time.
For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.
Gardner: Has there usually been a completely separate direction for facilities planning in IT infrastructure? Why was that the case, and why is it so important to end that practice?

Hinman: If you look over time and over the last several years, everybody has data centers and everybody has IT. The things that we've seen over the last 10 or 15 years are things like the Internet and criticality of IT and high density and all this stuff that people are talking about these days. If you look at the ways companies organized themselves several years ago, IT was a separate organization, facilities was a separate organization, and that actually still exists today.

One of the things that we're still seeing today is that, even though there is this push to get IT groups and facilities organizations to talk and work with each other, there is still this gap in truly understanding how to glue all of this together.

If you look at the way people do this traditionally -- and when I say people, I'm talking about IT organizations and facilities organizations -- they typically model IT and data centers separately, and even when they attempt to glue them together, they tend to look only at power requirements.
What we figured out was that you have to stop and back up a few notches to really start to get all this glued together.


So we took this whole complex framework and data center program and broke it into four key areas. It looks simplistic in the way we've done this, and we have done this over many, many years of analysis and trying to figure out exactly what direction we should take. We've actually spun this off in many directions a few times, trying to continually make it better, but we always keep coming back to these four key profiles.

Business and risk is the first profile. IT architecture, which is really the application suite, is the second profile. IT infrastructure is the third. Data center facilities is the fourth.

One of the things that you will start to hear from us, if you haven’t heard it already via the data center transformation story that you guys were just recently talking about, is this nomenclature of IT plus facilities equals the data center.

Getting synchronized

Look at that, look at these four profiles, and look at what we call a top-down approach, where I start to get everybody synchronized on what risk profiles are and tolerances for risk are from an IT perspective and how to run the business, gluing that together with an IT infrastructure strategy, and then gluing all that into a data center facility strategy.

What we found over time is that we were able to take this complex program of trying to have something predictable, scalable, all of the groovy stuff that people talk about these days, and have something that I could really manage. If you're called into the boss’s office, as I and others have been over the many years in my career, to be asked what the data center is going to look like over the next five years, at least I would have some hope of trying to answer that question.

One of the big lessons learned for us over the years has been that it's not enough to provide this kind of modeling and predictability for clients and customers only once. We had to get out of the mode of doing this once and putting it on a shelf: deploy a future-state data center framework and keep the client pointed in the right direction.

Otherwise the data gets archived, and they pick it up every few years and do it again and again, finding out that a lot of the time the "aha" moments come during those gaps between one exercise and the next.

We've taken all of our modeling tools and integrated them into common databases, where now we can start to glue together even the operational piece -- data center infrastructure management (DCIM), architecture and infrastructure management, facilities management, etc. -- so now the client can have this real-time, long-term, what we call a 10-year view of the overall operation.

So now, you can do this. You get it pointing in the right direction, collect the data, complete the modeling, put it in the toolset, and now you have something very dynamic that you can manage over time. That's what we've done, and that's where we have been heading with all of our tools and processes over the last two to three years.

EcoPOD concept

Gardner: I also remember with great interest the news from HP Discover in Las Vegas last summer about your EcoPOD and the whole POD concept toward facilities and infrastructure. Does that also play a part in this and perhaps make it easier when your modularity is ratcheted up to almost a mini data center level, rather than at the server or rack level?

Hinman: With the various what we call facility sourcing options, which PODs are certainly one of those these days, we've also been very careful to make sure that our framework is completely unbiased when it comes to a specific sourcing option.

What that means is that, over the last 10-plus years, most people were really targeted at building new green-field data centers. It was all about space, then it became all about power, then about cooling, but we were still in this brick-and-mortar age, even as modularity and scalability began driving everything.

With PODs coming on the scene alongside some of the other design technologies, like the multi-tiered or flexible data center, what we've been able to do is make sure that our framework is almost generic, so we can complete all the growth modeling and analysis regardless of what the client is going to do from a facilities perspective.

It lays the groundwork for the customer to get their arms around all of this and tie together IT and facilities with risk and business, and then start to map out an appropriate facility sourcing option.

We find these days that POD is actually a very nice fit with all of our clients, because it provides high density server farms, it provides things that they can implement very quickly, and gets the power usage effectiveness (PUE) and power and operational cost down. We're starting to see that take a stronghold in a lot of customers.

Gardner: As we begin to wrap up, I should think that these trends are going to be even more important, these methods even more productive, when we start to factor in movement toward private cloud. Any thoughts about how scorecards and tracking will be even more important in the future, as we move, as we expect we will, to a more cloud-, mobile-, and eco-friendly world?

Lawton: In a lot of ways, there is added complexity these days with more customers operating in a hybrid delivery model, where there may be multiple suppliers in addition to their internal IT organizations.

Greater complexity

Just like the example case I gave earlier, where you spread some of these activities not only across multiple teams and stakeholders, but also into separate companies and suppliers who are working under various contract mechanisms, the complexity is even greater. If that complexity is not pulled into a simplified model that is data-driven and supported by plans and contracts, then there are big gaps in the programs.

The scorecarding and data gathering methods and approaches that we take on our programs are going to be even more critical as we go forward in these more complex environments.

Operating the cloud environments simplifies things from a customer perspective, but it does add some additional complexities in the infrastructure and operations of the organization as well. All of those complexities add up, meaning that even more attention needs to be brought to the details of the program and to where those responsibilities lie among stakeholders.

Gardner: Larry Hinman, we're seeing this drive toward cloud. We're also seeing consolidation and standardization around data center infrastructure. So perhaps more large data centers to support more types of applications to even more endpoints, users, and geographic locations or business units. Getting that facilities and IT equation just right becomes even more important as we have fewer, yet more massive and critical, data centers involved.

Hinman: Dana, that's exactly correct. If you look at this, you have to look at the data center facilities piece, not only from a framework or model or topology perspective, but all the way down to the specific environment.

It could be that, based on a specific client’s business requirements and IT strategy, it will require a couple of large-scale core data centers and multiple remote sites, or it could just be a bunch of smaller facilities.

It really depends on how the business is being run and supported by IT and the application suite, what the tolerances for risk are, whether it’s high availability, synchronous, all the groovy stuff, and then coming up with a framework that matches all those requirements that it’s integrating.

We tell clients constantly that you have to have your act together with respect to your profile, and start to align all of this, before you can even think about cloud and all the wonderful technologies that are coming down the pike. You have to be able to have something that you can at least manage to control cost and control this whole framework and manage to a future-state business requirement, before you can even start to really deploy some of these other things.

So it all glues together. It's extremely important that customers understand that this really is a process they have to do.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.
For more information on The HUB, HP's video series on data center transformation, go to www.hp.com/go/thehub.
You may also be interested in:

Wednesday, December 7, 2011

Embarcadero brings self-service app store model to enterprises to target PCs and their universe of software

Embarcadero Technologies, a provider of database and application development software, recently announced AppWave, a free platform that provides self-service, one-click access to PC software within organizations for business PCs and even personal employee laptops.

Available via a free download, the AppWave platform gives users access to more than 250 free PC productivity apps for general business, marketing, design, data management, and development including OpenOffice, Adobe Acrobat Reader, 7Zip, FileZilla, and more.

AppWave users also can add internally developed and commercial software titles, such as Adobe Creative Suite products and Microsoft Visio, for on-demand access, control, and visibility into software titles they already own. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

The so-called app store model, pioneered by Apple, is rapidly gaining admiring adopters thanks to its promise of reducing cost of distribution and of updates -- and also of creating whole new revenue streams and even deeper user relationships.

As mobile uses rapidly change the way the world accesses applications, data and services, the app store model is changing expectations and behaviors. And this is a good lesson for enterprises.

App stores work well for both users and providers, internal or external. The users are really quite happy with ordering what they need on the spot, as long as that process is quick, seamless, and convenient.

As with SOA registries, it now makes sense to explore how such "stores" can be created quickly and efficiently to distribute, manage, and govern how PC software is distributed inside of corporations.

The AppWave platform provides business users with ways to quickly gain productivity and speed-to-value benefits from PC-based apps. Such approaches form an important advance as organizations pursue more efficient ways to track, manage, and deliver their worker applications, and bill for them based on actual usage.

Easily consumed

The AppWave platform converts valued, but often cumbersome, business software into easily consumed and acquired "apps," so business users don't have to wait in line for IT to order, install, and approve the work tools that they really need.

With AppWave, companies have a consumer-like app experience with the software they commonly use. With rapid, self-service access to apps, and real-time tracking and reporting of software utilization, the end result is a boost in productivity and lowering of software costs. Pricing to enable commercial and custom software applications to run as AppWave apps starts at $10 to $400 per app.


Increasing demand for consumer-like technology experiences at work has forced enterprises to face some inconvenient truths about traditional application delivery models. Rather than wait many months for dated applications that take too long to install manually on request, business managers and end users alike are seeking self-provisioning alternatives akin to the consumer models they know from their mobile activities.

You may also be interested in:

Monday, December 5, 2011

HP hybrid cloud news shows emphasis on enabling the telcos and service providers first

HP at the Discover 2011 Conference in Vienna last week announced a wide range of new Cloud Solutions designed to advance deployment of private, public and hybrid clouds for enterprises, service providers, and governments. Based on HP Converged Infrastructure, the new and updated HP Cloud Solutions provide the hardware, software, services and programs to rapidly and securely deliver IT as a service.

I found these announcements a clearer indicator of HP's latest cloud strategy, with an emphasis on enabling a global, verticalized and marketplace-driven tier of cloud providers. I've been asked plenty about HP's public cloud roadmap, which has been murky. This now tells me that HP is going first to its key service provider customers for data center and infrastructure enablement for their clouds.

This makes a lot of sense. The next generation of clouds -- and I'd venture the larger opportunity once the market settles -- will be specialized clouds. Not that Amazon Web Services, Google, and Rackspace are going away. But one-size-fits-all approaches will inevitably give way to specialization and localization. Telcos are in a great position to step up and offer these value-add clouds and services to their business customers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

And HP is better off providing the picks and shovels to them in spades than coming to market in catch-up mode with plain vanilla public cloud services under its own brand. It's the classic clone strategy that worked for PCs, right? Partnerships and ecosystem alliances are the better way. A good example is the partnership announced last week with Savvis.

HP’s new offerings address the key areas of client needs – building differentiated cloud offerings, consuming cloud services from the public domain, and managing, governing and securing the entire environment. This again makes sense. No need for channel conflict on cloud services between this class of nascent cloud providers and the infrastructure providers themselves.

Expanding the ecosystem

Among the announcements was an expansion of the cloud ecosystem with new partners, offerings and programs:
  • New HP CloudSystem integrations with Alcatel-Lucent will enable communications services providers to deliver high-value cloud services using carrier-class network and IT by automating the provisioning and management of cloud resources.

  • HP CloudAgile Service Provider Program offers service providers expanded sales reach, an enhanced services portfolio and an accelerated sales cycle through direct access to HP’s global sales force. HP has expanded the program with its first European partners and with new certified hosting options that enable service providers to deliver reliable, secure private hosted clouds based on HP CloudSystem.

    Clients want to understand, plan, build and source for cloud computing in a way that allows them to gain agility, reduce risk, maintain control and ensure security.



  • HP CloudSystem Matrix 7.0, the core operating environment that powers HP CloudSystem, enables clients to build hybrid clouds with push-button access to externally sourced cloud-based IT resources with out-of-the-box “bursting capability.” This solution also includes automatic, on-demand provisioning of HP 3PAR storage to reduce errors and speed deployment of new services to just minutes.


  • The HP Cloud Protection Program spans people, process, policies and technologies to deliver a comparable level of security for a hybrid cloud as a private internet-enabled IT environment would receive. The program is supported by a Cloud Protection Center of Excellence that enables clients to test HP solutions as well as partner and third-party products that support cloud and virtualization protection.
Enterprise-class services

New and enhanced HP services that provide a cloud infrastructure as a service to address rapid and secure sourcing of compute services include:
Guidance and training

HP has also announced guidance and training to transform legacy data centers for cloud computing:
  • Three HP ExpertONE certifications – HP ASE Cloud Architect, HP ASE Cloud Integrator and HP ASE Master Cloud Integrator, which encompass business and technical content.

  • Expanded HP ExpertONE program that includes five of the industry’s largest independent commercial training organizations that deliver HP learning solutions anywhere in the world. The HP Institute delivers an academic program for developing HP certified experts through traditional two- and four-year institutions, while HP Press has expanded self-directed learning options for clients.

  • HP Cloud Curriculum from HP Education Services offers course materials in multiple languages covering cloud strategies. Learning is flexible, with online virtual labs, self study, classroom, virtual classroom and onsite training options offered through more than 90 HP education centers worldwide.

    The new offerings are the culmination of HP’s experience in delivering innovative technology solutions, as well as providing the services and skills needed to drive this evolution.



  • Driven by HP Financial Services, HP Chief Financial Officer (CFO) Cloud Roundtables help CFOs understand the benefits and risks associated with the cloud, while aligning their organizations’ technology and financial roadmaps.

  • HP Storage Consulting Services for Cloud, encompassing modernization and design, enable clients to understand their storage requirements for private cloud computing as well as develop an architecture that meets their needs.

  • HP Cloud Applications Services for Windows Azure accelerate the development or migration of applications to the Microsoft Windows Azure platform-as-a-service offering.
A recording of the HP Discover Vienna press conference and additional information about HP’s announcements at its premier client event is available at www.hp.com/go/optimization2011.

You may also be interested in:

Wednesday, November 30, 2011

Big Data meets Complex Event Processing: AccelOps delivers a better architecture to attack the data center monitoring and analytics problem

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: AccelOps. Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.

The latest BriefingsDirect podcast discussion centers on how new data and analysis approaches are significantly improving IT operations monitoring, as well as providing stronger security.

The conversation examines how AccelOps has developed technology that correlates events with relevant data across IT systems, so that operators can gain much better insights faster, and then learn as they go to better predict future problems before they emerge. That's because advances in big data analytics and complex event processing (CEP) can come together to provide deep and real-time, pattern-based insights into large-scale IT operations.

Here to explain how these new solutions can drive better IT monitoring and remediation response -- and keep those critical systems performing at their best -- is Mahesh Kumar, Vice President of Marketing at AccelOps. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: AccelOps is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Is there a fundamental change in how we approach the data that’s coming from IT systems in order to get a better monitoring and analysis capability?

Kumar: The data has to be analyzed in real-time. By real-time I mean in streaming mode before the data hits the disk. You need to be able to analyze it and make decisions. That's actually a very efficient way of analyzing information. Because you avoid a lot of data sync issues and duplicate data, you can react immediately in real time to remediate systems or provide very early warnings in terms of what is going wrong.

The challenges in doing this streaming-mode analysis are scale and speed. The traditional approaches with pure relational databases alone are not equipped to analyze data in this manner. You need new thinking and new approaches to tackle this analysis problem.
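To make the streaming idea concrete, here is a minimal sketch, in Python, of evaluating each event against rules before it is ever written to disk. It is illustrative only, not AccelOps code; the rule names, event fields, and thresholds are invented for the example.

# Minimal sketch of streaming-mode analysis: each event is evaluated against
# rules as it arrives, and only afterwards archived for historical forensics.

import json
import time

RULES = [
    # (name, predicate) -- trivial illustrative rules
    ("cpu_saturation", lambda e: e.get("metric") == "cpu" and e.get("value", 0) > 95),
    ("auth_failure_burst", lambda e: e.get("type") == "auth_failure" and e.get("count", 0) > 20),
]

def alert(rule_name, event):
    print(f"[{time.strftime('%H:%M:%S')}] ALERT {rule_name}: {event}")

def analyze_stream(events, archive_path="events.log"):
    """Evaluate events in streaming mode, then persist them for later analysis."""
    with open(archive_path, "a") as archive:
        for event in events:
            # 1. Real-time evaluation happens before the event touches the disk.
            for name, predicate in RULES:
                if predicate(event):
                    alert(name, event)
            # 2. Only afterwards is the raw event archived.
            archive.write(json.dumps(event) + "\n")

if __name__ == "__main__":
    sample = [
        {"metric": "cpu", "value": 97, "host": "web01"},
        {"type": "auth_failure", "count": 35, "user": "dba1"},
    ]
    analyze_stream(sample)

The design point is the ordering: the alerting decision is made while the event is still in memory, which is what lets remediation or early warning happen without waiting for a batch load.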

Gardner: Also, for issues of security, offenders are trying different types of attacks. So this needs to be in real-time as well?

Kumar: You might be familiar with advanced persistent threats (APTs). These are attacks where the attacker tries their best to be invisible. These are not the brute-force attacks that we have witnessed in the past. Attackers may hijack an account or gain access to a server, and then over time, stealthily, be able to collect or capture the information that they are after.


These kinds of threats cannot be effectively handled only by looking at data historically, because these are activities that are happening in real-time, and there are very, very weak signals that need to be interpreted, and there is a time element of what else is happening at that time. This too calls for streaming-mode analysis.

If you notice, for example, someone accessing a server, a database administrator accessing a server for which they have an admin account, it gives you a certain amount of feedback around that activity. But if on the other hand, you learn that a user is accessing a database server for which they don’t have the right level of privileges, it may be a red flag.

You need to be able to connect this red flag that you identify in one instance with the same user trying to do other activity in different kinds of systems. And you need to do that over long periods of time in order to defend yourself against APTs.
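A rough sketch of that kind of correlation, connecting weak per-user signals across systems over a time window, might look like the following. This is an assumption-laden illustration, not the product's detection logic; the window length and threshold are arbitrary values chosen for the example.

# Illustrative sketch of connecting weak signals over time: a single
# under-privileged access is only a red flag, but the same user tripping
# flags on several different systems within a window escalates to an incident.

from collections import defaultdict, deque
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)      # how long weak signals stay relevant (assumed)
THRESHOLD_SYSTEMS = 3           # distinct systems before we escalate (assumed)

flags = defaultdict(deque)      # user -> deque of (timestamp, system)

def escalate(user, systems):
    print(f"Possible APT activity: {user} flagged on {', '.join(systems)}")

def record_red_flag(user, system, when=None):
    when = when or datetime.utcnow()
    history = flags[user]
    history.append((when, system))
    # Drop signals that have aged out of the correlation window.
    while history and when - history[0][0] > WINDOW:
        history.popleft()
    distinct_systems = {s for _, s in history}
    if len(distinct_systems) >= THRESHOLD_SYSTEMS:
        escalate(user, sorted(distinct_systems))

if __name__ == "__main__":
    record_red_flag("dba1", "db-server-07")     # access without matching privilege
    record_red_flag("dba1", "file-share-02")    # unusual bulk read
    record_red_flag("dba1", "vpn-gateway")      # login from a new location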

Gardner: It's always been difficult to gain accurate analysis of large-scale IT operations, but it seems that this is getting more difficult. Why?

Kumar: If you look at trends, there are on average about 10 virtual machines (VMs) to a physical server. Predictions are that this is going to increase to about 50 to 1, maybe higher, with advances in hardware and virtualization technologies. The increase in density of VMs is a complicating factor for capacity planning, capacity management, performance management, and security.

In a very short period of time, you have in effect seen a doubling of the size of the IT management problem. So there are a huge number of VMs to manage and that introduces complexity and a lot of data that is created.

Cloud computing

Cloud computing is another big trend. All analyst research and customer feedback suggests that we're moving to a hybrid model, where you have some workloads on a public cloud, some in a private cloud, and some running in a traditional data center. For this, monitoring has to work in a distributed environment, across multiple controlling parties.

Last but certainly not the least, in a hybrid environment, there is absolutely no clear perimeter that you need to defend from a security perspective. Security has to be pervasive.

Given these new realities, it's no longer possible to separate performance monitoring aspects from security monitoring aspects, because of the distributed nature of the problem. ... So change is happening much more quickly and rapidly than ever before. At the very least, you need monitoring and management that can keep pace with today’s rate of change.


The basic problem you need to address is one of analysis. Why is that? As we discussed earlier, the scale of systems is really high. The pace of change is very high. The sheer number of configurations that need to be managed is very large. So there's a data explosion here.

Since you have a plethora of information coming at you, the challenge is no longer collection of that information. It's how you analyze that information in a holistic manner and provide consumable and actionable data to your business, so that you're able to actually then prevent problems in the future or respond to any issues in real-time or in near real-time.

You need to nail the real-time analytics problem and this has to be the centerpiece of any monitoring or management platform going forward.

Advances in IT

Gardner: So we have the modern data center, we have issues of complexity and virtualization, we have scale, we have data as a deluge, and we need to do something fast in real-time and consistently to learn and relearn and derive correlations.

It turns out that there are some advances in IT over the past several years that have been applied to solve other problems that can be brought to bear here. You've looked at what's being done with big data and in-memory architectures, and you've also looked at some of the great work that’s been done in services-oriented architecture (SOA) and CEP, and you've put these together in an interesting way.


Kumar: Clearly there is a big-data angle to this.

Doug Laney, a META and a Gartner analyst, probably put it best when he highlighted that big data is about volume, the velocity or the speed with which the data comes in and out, and the variety or the number of different data types and sources that are being indexed and managed.

For example, in an IT management paradigm, a single configuration setting can have a security implication, a performance implication, an availability implication, and even a capacity implication in some cases. Just a small change in data has multiple decision points that are affected by it. From our angle, all these different types of criteria affect the big data problem.

Couple of approaches

There are a couple of approaches. Some companies are doing some really interesting work around big-data analysis for IT operations.

They primarily focus on gathering the data, heavily indexing it, and making it available for search, thereby deriving analytical results. It allows you to do forensic analysis that you were not easily able to do with traditional monitoring systems.

The challenge with that approach is that it swings the pendulum all the way to the other end. Previously we had very rigid, well-defined relational data models or data structures, and the index-and-search approach is much more free-form. So the pure index-and-search type of approach sits at the other end of the spectrum.

What you really need is something that incorporates the best of both worlds and puts that together, and I can explain to you how that can be accomplished with a more modern architecture. To start with, we can't do away with this whole concept of a model or a relationship diagram or entity relationship map. It's really critical for us to maintain that.


I’ll give you an example. When you say that a server is part of a network segment, and a server is connected to a switch in a particular way, it conveys certain meaning. And because of that meaning, you can now automatically apply policies, rules, patterns, and automatically exploit the meaning that you capture purely from that relationship. You can automate a lot of things just by knowing that.

If you stick to a pure index-and-search approach, you basically zero out a lot of this meaning and you lose information in the process. Then it's the operators who have to handcraft queries to reestablish the meaning that's already out there. That can get very, very expensive pretty quickly.

Our approach to this big-data analytics problem is to take a hybrid approach. You need a flexible and extensible model that you start with as a foundation, that allows you to then apply meaning on top of that model to all the extended data that you capture and that can be kept in flat files and searched and indexed. You need that hybrid approach in order to get a handle on this problem.
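One way to picture that hybrid of a relationship model plus free-form indexed data is the toy example below. The topology map and event index are invented for illustration; the point is that the model supplies meaning (which hosts sit in which segment) that a pure index-and-search store would force operators to re-encode in every query.

# Sketch of the hybrid approach: a small relationship model supplies meaning,
# while raw events stay in a free-form, indexed store and are interpreted
# through that model at query time.

# Structured model: relationships carry meaning that rules can exploit automatically.
TOPOLOGY = {
    "web01": {"type": "server", "segment": "dmz", "uplink": "switch-03"},
    "db07":  {"type": "server", "segment": "internal", "uplink": "switch-05"},
}

# Free-form store: events are just indexed dictionaries, schema-on-read.
EVENT_INDEX = [
    {"host": "web01", "msg": "link flap detected"},
    {"host": "db07", "msg": "slow query latency 900ms"},
]

def events_for_segment(segment):
    """Use the model to give raw indexed events meaning they don't carry themselves."""
    hosts = {h for h, meta in TOPOLOGY.items() if meta["segment"] == segment}
    return [e for e in EVENT_INDEX if e["host"] in hosts]

if __name__ == "__main__":
    # "What is happening in the DMZ?" is answerable only because the model maps
    # hosts to segments; a pure search system would need the operator to
    # hand-craft that mapping into every query.
    for event in events_for_segment("dmz"):
        print(event)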

Gardner: Why do you need to think about the architecture that supports this big data capability in order for it to actually work in practical terms?

Kumar: You start with a fully virtualized architecture, because it allows you not only to scale easily, ... but you're able to reach into these multiple disparate environments and capture and analyze and bring that information in. So virtualized architecture is absolutely essential.

Auto correlate

Maybe more important is the ability for you to auto-correlate and analyze data, and that analysis has to be distributed analysis. Because whenever you have a big data problem, especially in something like IT management, you're not really sure of the scale of data that you need to analyze and you can never plan for it.

Think of it as applying a MapReduce type of algorithm to IT management problems, so that you can do distributed analysis, and the analysis is highly granular or specific. In IT management problems, it's always about the specificity with which you analyze and detect a problem that makes all the difference between whether that product or the solution is useful for a customer or not.
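The MapReduce analogy can be made concrete with a toy map/reduce pass over partitioned event data, as sketched below. This is not the product's implementation, just an illustration of why distributed analysis scales by adding workers or VMs rather than by redesigning the analysis.

# A toy map/reduce pass over partitioned event logs: each partition is analyzed
# independently (map), and the partial results are merged (reduce).

from collections import Counter
from multiprocessing import Pool

PARTITIONS = [  # in practice: shards of event data spread across analysis VMs
    [{"host": "web01", "status": 500}, {"host": "web01", "status": 200}],
    [{"host": "db07", "status": 500}, {"host": "web01", "status": 500}],
]

def map_partition(events):
    """Count errors per host within one partition (the 'map' step)."""
    return Counter(e["host"] for e in events if e["status"] >= 500)

def reduce_counts(partials):
    """Merge per-partition counts into a global view (the 'reduce' step)."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    with Pool(processes=2) as pool:          # scale out by raising the worker count
        partials = pool.map(map_partition, PARTITIONS)
    print(reduce_counts(partials))           # e.g. Counter({'web01': 2, 'db07': 1})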


A major advantage of distributed analytics is that you're freed from the scale-versus-richness trade-off, from the limits on the type of events you can process. If I want to process more complex events, it's a lot easier to add compute capacity by simply adding VMs and scaling horizontally. That's a big aspect of automating deep forensic analysis of the data that you're receiving.

I want to add a little bit more about the richness of CEP. It's not just around capturing data and massaging it or looking at it from different angles and events. When we say CEP, we mean it is advanced to the point where it starts to capture how people would actually rationalize and analyze a problem.

The only way you can automate your monitoring systems end-to-end and get more of the human element out of it is when your CEP system is able to capture those nuances that people in the network operations center (NOC) and security operations center (SOC) would normally use to rationalize when they look at events. You not only look at a stream of events; you ask further questions and then determine the remedy.
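A schematic example of that "ask further questions" step, hypothetical and much simplified, is shown below: a rule that matches a service-down event first checks maintenance and recent-change context, the way an operator would, before choosing a remedy. The context tables and remedies are invented for illustration.

# Sketch of a rule that gathers context before acting, rather than alerting
# directly on the raw event pattern.

MAINTENANCE = {"web02"}                       # hosts in a planned maintenance window
RECENT_CHANGES = {"web01": "nginx.conf edited 10 min ago"}

def handle_service_down(event):
    host = event["host"]
    # Follow-up question 1: is this expected downtime?
    if host in MAINTENANCE:
        return f"{host}: suppress alert (maintenance window)"
    # Follow-up question 2: did a configuration change immediately precede it?
    if host in RECENT_CHANGES:
        return f"{host}: open change-related incident ({RECENT_CHANGES[host]})"
    # Otherwise fall back to a generic remediation path.
    return f"{host}: restart service and page on-call"

if __name__ == "__main__":
    for e in [{"host": "web01", "type": "service_down"},
              {"host": "web02", "type": "service_down"},
              {"host": "db07", "type": "service_down"}]:
        print(handle_service_down(e))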

No hard limits

To do this, you should have a rich data set to analyze; i.e., there shouldn't be any hard limits placed on what data can participate in the analysis, and you should have the flexibility to easily add new data sources or types of data. So it's very important for the architecture to be able to not only event on data that is stored in traditional, well-defined relational models, but also event against data that's typically serialized and indexed in flat-file databases.

Gardner: What's the payoff if you do this properly?

Kumar: It is no surprise that our customers don’t come to us saying we have a big data problem, help us solve a big data problem, or we have a complex event problem.


Their needs are really around managing security, performance and configurations. These are three interconnected metrics in a virtualized cloud environment. You can't separate one from the other. And customers say they are so interconnected that they want these managed on a common platform. So they're really coming at it from a business-level or outcome-focused perspective.

What AccelOps does under the covers is apply techniques such as big-data analysis, complex event processing, and so on, to then solve those problems for the customer. That is the key payoff: the customer's key concerns that I just mentioned are addressed in a unified and scalable manner.

An important factor for customer productivity and adoption is the product user interface. It is not of much use if a product leverages these advanced techniques but makes the user interface complicated -- you end up with the same result as before. So we've designed a UI that's very easy to use and requires one or two clicks to get the information you need, with a UI-driven ability to compose rich events and event patterns. Our customers find this very valuable, as they do not need super-specialized skills to work with our product.

Key metrics

What we've built is a platform that monitors data center performance, security, and configurations: the three key interconnected metrics in virtualized cloud environments. Most of our customers really want that combined and integrated platform. Some of them might choose to start with addressing security, but they soon bring the performance management aspects into it also. And vice versa.

And we take a holistic cross-domain perspective -- we span server, storage, network, virtualization and applications. What we've really built is a common consistent platform that addresses these problems of performance, security, and configurations, in a holistic manner and that’s the main thing that our customers buy from us today.

Free trial download

Most of our customers start off with the free trial download. It’s a very simple process. Visit www.accelops.com/download and download a virtual appliance trial that you can install in your data center within your firewall very quickly and easily.

Getting started with the AccelOps product is pretty simple. You fire up the product and enter the credentials needed to access the devices to be monitored. We do most of it agentlessly, and so you just enter the credentials, the range that you want to discover and monitor, and that’s it. You get started that way and you hit Go.


The product then uses this information to determine what's in the environment. It automatically establishes relationships between the elements, applies the rules and policies that come out of the box with the product, along with some basic thresholds that are already in the product, so that you can start measuring results. Within a few hours of getting started, you'll have measurable results, trends, graphs, and charts to look at and gain benefits from.

Gardner: It seems that as we move toward cloud and mobile that at some point or another organizations will hit the wall and look for this automation alternative.

Kumar: It's about automation and distributed analytics, and about getting very specific with the information that you have, so that you can make more predictable, 99.9 percent correct decisions, and do that in an automated manner. The only way you can do that is if you have a platform that's rich enough and scalable, and that allows you to then reach that ultimate goal of automating most of the management of these diverse and disparate environments.

That’s something that's sorely lacking in products today. As you said, it's all brute-force today. What we have built is a very elegant, easy-to-use way of managing your IT problems, whether it’s from a security standpoint, performance management standpoint, or configuration standpoint, in a single integrated platform. That's extremely appealing for our customers, both enterprise and cloud-service providers.

I also want to take this opportunity to encourage those of you listening to or reading this podcast to come meet our team at the 2011 Gartner Data Center Conference, Dec. 5-9, at Booth 49 and learn more. AccelOps is a silver sponsor of the conference.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: AccelOps. Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.

You may also be interested in: