Tuesday, September 14, 2010

Want client virtualization? Time then to get your back-end infrastructure act together

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

We've all heard about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.

But today's business and economic drivers demand more than good technology. First, there needs to be a clear rationale for change -- both business and economic. Second, there need to be proven methods for moving to client virtualization at low risk, and in ways that lead to both high productivity and lower total costs over time.

Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper, more flexible client platform support from back-end servers will become more the norm and less the exception over time.

Client devices and application types will also be dynamically shifting both in numbers and types, and crossing the chasm between the consumer and business spaces. The new requirements for business mobile use point to the need for planning and proper support of the infrastructures that can accommodate these edge, wireless clients.

To help guide businesses on client virtualization infrastructure requirements, learn more about client virtualization strategies and best practices that support multiple future client directions, and see why such virtualization makes sense economically, we went to Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization. The interview is conducted by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Nordhues: In desktop virtualization, what really comes out to the user device is just pixel information. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.

When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment -- and make sure that you're doing a proper analysis and are actually ready.

On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We're on a path where the hardware pieces are there to deliver on that.

But you have to look at the data center and its capacity to house the additional servers, storage, and networking that have to go in to support the users.

So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.

This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the businesses by moving those folks to do more innovation, and to free up cycles to do that, instead of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

Where we're headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you're an office worker and you're just getting applications virtualized out to you. You're going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.

Maybe you're more of a power user, and you need that whole desktop environment provided by VDI. We'll provide reference architectures with a wire-it-once type of infrastructure, including storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks or months instead of minutes.

Also, a hybrid solution could in the future deliver VDI plus server-based computing together and cover your whole gamut of users, from the lowest task-oriented user all the way up to the highest-end power users that you have.

And, we're going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.

Why VDI now?

It's a digital generation of millions of new folks entering the workforce, and they've grown up expecting to be mobile and increasingly global. So, we need computing environments that don't require us to report to a post in an office building in order to get work done.

We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees in organizations don't work at their company's headquarters, and they work differently.

When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.

Delivering packaged services out to the end user is something that’s still being worked out by software providers, and you're going to see some more elements of that come out as we go through the next year.



And, of course, there's the impact of security, which is always at the top of customers' lists. We have customers out there, large enterprise accounts, who are spending north of $100 million a year just to protect themselves from internal fraud.

With client virtualization, the security is built in. You have everything in the data center. You can’t have users on the user endpoint side, which may be a thin client access device, taking files away on USB keys or sticks.

It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don't want ... on top of your IT infrastructure.

And there is really a catalyst as well in Windows 7, available since its launch late last year. Many organizations are looking at their transition plans there. It's a natural time to look at doing the desktop differently than it has been done in the past.

Reference architectures support all clients

We've launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.

For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.

We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400-500 range.

We're looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it come completely packaged.

Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-it-once infrastructure that can deliver whatever the needs are for your broad user community.

What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all this for you. Here is the server and storage and all the way out to the thin client solution. We've tested it. We've engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment." So, you take that system integration task away from the customer, because HP has done it for them.

We have a number of customer references. I won’t call them out specifically, but we do have some of these posted out on HP.com/go/clientvirtualization, and we continue to post more of our customer case studies out there. They are across the whole desktop virtualization space. Some are on server-based computing or sharing applications, some are based on VDI environments, and we continue to add to those.

HP also has an ROI or TCO calculator that we put together specifically for this space. You show a customer a case study and they say, "Well, that doesn’t really match my pain points. That doesn’t really match my problem. We don’t have that IT issue," or "We don’t have that energy, power issue."

We created this calculator so that customers can put in their own data. It's a fairly robust tool: you can enter what your desktop environment is costing you today, what it would cost to put in a client virtualization environment, and what you can expect as a return on investment. So, it's a compelling part of the discussion.
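The kind of comparison such a calculator performs can be sketched in a few lines. The structure below is generic, and every dollar figure is a hypothetical placeholder, not HP's model or data:

```python
# Minimal sketch of a cost-per-seat TCO comparison.
# All dollar figures are hypothetical placeholders, not HP's data.

def tco_per_seat(hardware_cost, hardware_life_yrs, annual_support, annual_energy):
    """Annualized total cost of ownership for one seat."""
    return hardware_cost / hardware_life_yrs + annual_support + annual_energy

# Traditional desktop: a full PC per user and heavy desk-side support.
desktop = tco_per_seat(hardware_cost=800, hardware_life_yrs=4,
                       annual_support=600, annual_energy=60)

# VDI seat: a thin client plus an amortized share of servers and storage.
vdi = tco_per_seat(hardware_cost=300 + 700, hardware_life_yrs=5,
                   annual_support=250, annual_energy=25)

print(f"desktop ${desktop:.0f}/seat/yr, VDI ${vdi:.0f}/seat/yr, "
      f"savings ${desktop - vdi:.0f}")  # desktop $860, VDI $475, savings $385
```

With these placeholder inputs the VDI seat wins on annual support and energy even though its up-front hardware share is higher, which is the trade-off the real tool lets customers quantify with their own numbers.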

Obviously, with any new computing technology, the underlying consideration is always cost. In this case, a lot of customers look at it from a cost-per-seat perspective, and this is no different, which is why we have provided the tool and the consulting around it.

On that same website that I mentioned, HP.com/go/clientvirtualization, we have our technical white papers that we've published, along with each of these reference architectures.

For example, if you pick the VDI reference architecture that will support 1,000-plus users in general, there is a 100-page white paper that talks about exactly how we tested it, how we engineered it, and how it scales with VMware View or with Microsoft Hyper-V plus Citrix XenDesktop.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Monday, September 13, 2010

HP gets more than security benefits from ArcSight acquisition, it gets closer to comprehensive BI for IT

The build, buy or partner equation has favored "buy" once again as HP moves aggressively to dominate IT operations management and governance software and services.

HP on Monday announced the intention to buy 10-year-old ArcSight for $1.5 billion, rapidly filling out its software products portfolio again under Bill Veghte, Executive Vice President of the HP Software & Solutions group. HP has been on a tear after recently acquiring Fortify and 3PAR. I guess we should expect even more buying by HP as the economy and stock market make these companies attractive before their value increases. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

ArcSight -- with a $200 million revenue run rate and 35 percent annual top line growth -- might be best known for providing the means to snuff out cyber crime and user access and data management risks. And the systems log capture and management portfolio at ArcSight is also adept at helping with regulatory oversight requirements and compliance issues. To solve these problems, the company sells to the largest enterprises, including the US government and military, and financial, telco and retail giants.

But for me the real value for HP is in gaining a comprehensive platform and portfolio via ArcSight for total systems log management. Being able to manage and exploit the reams of ongoing log data across all data center devices offers huge benefits, even the ability to correlate business events and IT events for what I call BI for IT.

We're right on the cusp of reliable and penetrating predictive IT analysis, and HP needs to be in the vanguard on this. VMware just last month bought privately held Integrien for the same reason. The market is looking for de facto standard governance systems of record, and HP's other governance products plus ArcSight make this a market opportunity that is HP's to lose.

This predictive approach to IT failures -- of identifying and ameliorating system snafus before they impact applications and data performance -- stands as the progenitor of better IT operations continuity. The structured and unstructured systems data and analysis from ArcSight will help HP develop a constant feedback loop between build, manage, and monitor processes, to help ensure that enterprises remain secure and reliable in operations, says HP.

Consider, too, that managing security and dependability at the edge takes on a whole new meaning as enterprises dive more deeply into smartphones, mobile apps, netbooks, thin clients, and desktop virtualization. The need is to manage not just each of them, but all of them, in an orchestra of coordinated data and application access, provisioning, and compliance.

Virtualization drives need for governance


Oh, and then there's the virtualization revolution, which has only partly played out in enterprise IT and is growing fast. So how do you manage and govern fleeting virtual instances of servers, networking equipment, and storage? The logs. The log data. It's a sure way to gain a complete view of IT operations, even as that picture changes moment by moment.

Another complement to the ArcSight-HP match-up: All that log data needs to be crunched and reported, a function of BI-adept hardware and optimized systems, which, of course, HP has in spades.

So all this deep and wide governance capability from ArcSight is a strong complement to HP's Business Service Automation and Cloud Service Automation solutions, among several others. Given that HP already resells ArcSight's appliances (and soon, we're told, all-software products too), we should expect the combined solutions to be moving down-market to the SMBs pretty quickly. This global and massive market has also been a recent priority for HP across other products and services.

Don't just view the ArcSight purchase today through the lens of cyber security and compliance solutions. This is a synergistic acquisition for HP on many levels. The common denominator is comprehensive governance, and the next goal for the combined HP and ArcSight products and services is predictive BI for IT ... and correlating that all to the real-time business events and processes. That's the total business insight capability that companies so desperately need -- and only IT can provide -- to effectively manage complexity and risk.


Thursday, September 9, 2010

SAS joins crowded vendor landscape moving to bring affordable BI to the masses

We're only in the first years of the data-driven decade. More companies will be making more of their business decisions -- and earning added revenue -- from their own data services.

Investing in good data analytics infrastructure now allows companies to know themselves and their markets far better. It eliminates guessing and brings more of a real-time picture of their operations, challenges and opportunities.

Good data organizers can also then share or sell that data and analytics to partners and/or customers, and acquire meaningful additional outside data themselves from other data services purveyors.

The trick for IT is to allow their companies to extract business intelligence (BI) from these vast data sets at an affordable price. And more companies -- that is, small and medium businesses -- will want in on the data and analytics revolution. Competition will drive them to it.

So what's needed now is a change in the economics of business intelligence via value-oriented offerings for the mid-market. Traditional entry points for large data warehouses are often $500,000 and up, not to mention the ongoing operations costs and need to acquire data and systems management skills.

BI comes to wider audience

SAS at the A2010 conference last week launched Rapid Predictive Modeler (RPM), a service targeting non-analytical business users to help create more BI reports. SAS RPM joins the latest release of SAS Enterprise Miner 6.2, which includes an add-in for Microsoft Excel.

These steps toward making BI and reports available to more users and uses at a lower price will no doubt be welcome to SMBs and enterprises dripping in data, but struggling to make sense of it all.

We're only now seeing massively parallel data warehousing appliances priced at the $50,000 mark. And these appliances tend to be cheaper to administer and operate. Aster Data Systems, for example, recently came out with a lower-cost competitive solution dubbed MapReduce Data Warehouse Appliance – Express Edition. Aster also has a new CEO, Quentin Gallivan, announced today.

Aster, Netezza and Teradata are all focusing on the mid-market. Greenplum was recently bought by EMC. A recent Forrester report put Teradata, Oracle, IBM and Microsoft at the head of the data warehouse market, with Netezza, Sybase and SAP noted for niche deployments.

Oracle and HP teamed up two years ago on the Exadata appliance for Oracle warehouse workloads. And now Oracle is putting its Sun Microsystems acquisition to use for its own Exadata appliances line-up.

Expect a vendor slugfest on the lower end of the data warehousing and BI market in the next few years. It will be fascinating to see how these vendors enter the entry-level markets while also seeking to maintain high-end pricing for the largest users. There could be a value sweet spot in the middle.

We should therefore expect to see prices come down on these systems across the board, making the systems more attainable for even more types of uses and users.

Wednesday, September 8, 2010

HP product barrage uses integration, low-cost, simplicity to bring latest IT advances to price-sensitive SMBs

Figuring that small- and medium-sized businesses (SMBs) want the best in IT advances too, HP on Wednesday unleashed a barrage of products and services that use integration, low cost, and simplicity to bring cutting-edge enterprise IT capabilities to the global mid-market.

The new products and services -- ranging from the $329 HP ProLiant MicroServer to $424 minitower PCs to simplified virtualization, networking and storage bundles -- come from multiple organizations across HP, but with a singular Goldilocks target of “Just Right IT” for SMBs. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The slew of value-oriented offerings is also designed to give HP's various global channel partners a new horse to ride into town on as SMBs look beyond recession-reckoning for how to grow their operations while becoming more productive. The products and services are also available from HP directly.

HP is also putting financial muscle behind the channel partners and users by providing aggressive financing options: leasing, life-cycle asset management, and upgrade services. HP Financial Services is the second-largest captive IT leasing company in the world, said HP. Leasing provides SMBs with flexibility (with no or low upfront payments) and a path to migrate to newer technology.

While the value and utilization benefits of virtualization have been quickly adopted by larger companies and IT departments, the use of hypervisors has been slower in SMBs. To help solve that, HP has developed more complete virtualization environments using Virtualization Smart Bundles with Microsoft Hyper-V Server 2008 R2. The bundles target storage, servers and networking virtualization technology uses.

The SMB-targeted worker productivity releases include:
  • HP ProLiant MicroServer, an energy-efficient file server designed for businesses with up to 10 employees to centralize information and securely access files faster (at about half the size and 50 percent quieter than most entry-level servers)
The SMB-targeted storage management releases include:
The SMB-targeted networking and communications releases include:
  • HP VCX 9.5 IP Telephony system and 350x IP Phones (starting at $119), which enable the convergence of voice and data onto a single network infrastructure.
SMBs are where economists look for growth as economies emerge from recessions, and in developing countries. For years, though, large IT vendors have focused on the top ends of the IT market. It makes a lot of sense for HP to scale the technology and offerings down to SMBs -- a huge total market, poised for unprecedented growth in the world's most populous regions.

Fact is, too, that due to proliferating mobile devices and wireless networks, nearly all companies of any size need to deeply embrace technology and networking to remain competitive. Data explosion also makes it unavoidable to bring in managed storage and backup, not to mention the burgeoning requirements of security and managed access.

While many of us analysts harp on about the virtues and inevitability of cloud computing, for many small companies and in many regions, the promise of cloud cannot be considered until the basics of IT are modernized and managed.

Mobile devices alone cannot take the place of a LAN and managed storage. In many ways, these new HP products and bundles -- with their pricing and simplicity -- can be seen as stepping stones for SMBs to soon be able to exploit the value and potential of cloud-based services, too.

And then we actually might see these SMBs leap-frog their larger corporate brethren, rather than be seen as a lagging market category, in regards to IT productivity and enablement. And wouldn't that be exciting?


Friday, September 3, 2010

ZapThink defines IT transformation crisis points in 2020 vision framework

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


By Jason Bloomberg

In our last ZapFlash, The Five Supertrends of Enterprise IT, ZapThink announced our new ZapThink 2020 conceptual framework that helps organizations understand the complex interrelationships among the various forces of change impacting IT shops over the next 10 years, and how to leverage those forces of change to achieve the broader goals of the organization.

In many ways, however, ZapThink 2020 is as much about risk mitigation as it is about strategic benefit. Every element of ZapThink 2020 is a problem as well as an opportunity. Nowhere is this focus on risk mitigation greater than with ZapThink 2020's seven Crisis Points.

Defining a crisis point

Of course, life in general and business in particular are both filled with risks, and a large part of any executive's job description is dealing with everyday crises. A Crisis Point, however, goes beyond everyday, garden-variety firefighting. To be a Crisis Point, the underlying issue must be both potentially game-changing and largely unexpected. The element of surprise is what makes each Crisis Point especially dangerous – not that the crisis itself is necessarily a surprise, but rather, just how transformative the event promises to be.

Here then are ZapThink 2020’s seven Crisis Points, why they’re surprising, and why they’re game-changing. Over the next several months we’ll dive deeper into each one, but for now, here’s a high-level overview.
Collapse of enterprise IT – Enterprises who aren’t in the IT business stop doing their own IT, and furthermore, move their outsourced IT off-premise.

Why is it that so many enterprises today handle their own IT, and in particular, write their own software? They use office furniture, but nobody would think of manufacturing their own – except, of course, companies in the office furniture manufacturing business.

The game-changing nature of this Crisis Point is obvious, but what’s surprising will be just how fast enterprises rush to offload their entire IT organizations, once it becomes clear that the first to do so have achieved substantial benefits from this move.

IPv4 exhaustion –
Every techie knows that we’re running out of IP addresses, because the IPv4 address space only provides for about 4.3 billion IP addresses, and they’ve almost all been assigned.

IPv6 is around the corner, but very little of our Internet infrastructure supports IPv6 at this time. The surprise here is what will happen when we run out of addresses: the secondary market for IP addresses will explode.

As it turns out, a long time ago IANA assigned most IP addresses to a select group of Class A holders, who each got a block of about 16.8 million addresses.

Companies like Ford, Eli Lilly, and Halliburton all ended up with one of these blocks. How much money do you think they can make selling them once the unassigned ones are all gone?
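The arithmetic behind those figures is easy to check: IPv4 addresses are 32-bit, and a legacy "Class A" allocation corresponds to a /8 block, leaving 24 bits for hosts.

```python
# The IPv4 arithmetic behind the exhaustion story.
total_addresses = 2 ** 32   # 32-bit address space: 4,294,967,296 (~4.3 billion)
class_a_block = 2 ** 24     # one /8 ("Class A") block: 16,777,216 (~16.8 million)
possible_blocks = total_addresses // class_a_block  # only 256 such blocks exist

print(total_addresses, class_a_block, possible_blocks)
# 4294967296 16777216 256
```

With only 256 possible /8 blocks in the entire address space, it is clear why a single legacy allocation becomes so valuable once the free pool runs dry.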

Fall of frameworks –
Is your chief Enterprise Architect your CEO’s most trusted, important advisor? No? Well, why not?

After all, EA is all about organizing the business to achieve its strategic goals in the best way we know how, and the EA is supposed to know how. The problem is, most EAs are bogged down in the details, spending time with various frameworks and other artifacts, to the point where the value they provide to their organizations is unclear.

In large part the frameworks are to blame – Zachman Framework, TOGAF, DoDAF, to name a few. For many organizations, these frameworks are little more than pointless exercises in organizing terminology that leads to checklist architectures.

At this Crisis Point, executives get fed up, scrap their current EA efforts, and bring in an entirely new way of thinking about Enterprise Architecture. Does ZapThink have ideas about this new approach to EA? You bet we do. Stay tuned – or better yet, sign up for our newly revised Licensed ZapThink Architect SOA & Cloud Architecture Boot Camp.

Cyberwar –
Yes, most risks facing IT shops today are security related. Not a day goes by without another virus or Windows vulnerability coming to light.

But what happens when there is a concerted, professional, widespread, expert attack on some key part of our global IT infrastructure? It’s not a matter of if, it’s a matter of when.

The surprise here will be just how effective such an attack can be, and perhaps how poor the response is, depending on who the target is. Will terrorists take down the Internet? Maybe just the DNS infrastructure? Or will this battle be between corporations? Regardless, the world post-Cyberwar will never be the same.

Arrival of Generation Y –
These are the kids who are currently in college, more or less. Not only is this generation the “post-email” generation, but they have also grown up with social media.

When they hit the workforce they will hardly tolerate the archaic approach to IT we have today. Sure, some will change to fit the current system, but enterprises who capitalize on this generation’s new perspective on IT will obtain a strategic advantage.

We saw this generational effect when Generation X hit the workforce around the turn of the century – a cadre of young adults who weren’t familiar with a world without the Web. That generation was instrumental in shifting the Web from a fad into an integral part of how we do business today. Expect the same from Generation Y and social media.

Data explosion –
As the quantity and complexity of available information exceeds our ability to deal with such information, we’ll need to take a new approach to governance.

ZapThink discussed this Crisis Point in our ZapFlash The Christmas Day Bomber, Moore’s Law, and Enterprise IT. But while an essential part of dealing with the data explosion crisis point is a move to governance-driven Complex Systems, we place this Crisis Point in the Democratization of Technology Supertrend.

The shift in thinking will be away from the more-is-better, store-and-analyze school of data management to a much greater focus on filtering and curating information. We'll place ever-greater emphasis on small quantities of information by ensuring that the information is optimally valuable.

Enterprise application crash –
The days of “Big ERP” are numbered – as well as those of “Big CRM” and “Big SCM” and … well, all the big enterprise apps. These lumbering monstrosities are cumbersome, expensive, inflexible, and filled at their core with ancient spaghetti code.

There’s got to be a better way to run an enterprise. Fortunately, there is. And once enterprises figure this out, one or more of the big enterprise app vendors will be caught by surprise and go out of business. Will it be one of your vendors?
The ZapThink take

We can’t tell you specifically when each of these Crisis Points will come to pass, or precisely how they will manifest. What we can say with a good amount of certainty, however, is that you should be prepared for them. If one or another proves to be less problematic or urgent than feared, then we can all breathe a sigh of relief. But should one come to pass as feared, then the organizations who have suitably prepared for it will not only be able to survive, but will be able to take advantage of the fact that their competition was not so well equipped.

The real challenge with preparing for such Crisis Points is in understanding their context. None of them happens in isolation; rather, they are all interrelated with other issues and the broader Supertrends that they are a part of.

That’s where ZapThink comes in. We are currently putting together a poster that will help people at a variety of organizations understand the context for change in their IT shops over the next ten years, and how they will impact business. We’re currently looking for sponsors. Drop us a line if you’d like more information.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.


Tuesday, August 31, 2010

Process automation elevates virtualization use while transforming IT's function to app and cloud service broker

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization in data centers.

Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

Automation, policy-driven processes and best practices are offering more opportunities for optimizing virtualization so that server, storage, and network virtualization can move from points of progress into more holistic levels of adoption.

The goals then are data center transformation, performance and workload agility, and cost and energy efficiency. Many data centers are leveraging automation and best practices to attain 70 percent and even 80 percent adoption rates.

By taking such a strategic outlook on virtualization, process automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

To explore how automation can help achieve strategic levels of virtualization, BriefingsDirect brought together panelists Erik Frieberg, Vice President of Solutions Marketing at HP Software, and Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Frieberg: The three misconceptions I see a lot are, one, that automation and virtualization are just about reducing head count. The second is that automation doesn't have much impact on compliance. The third is that, because automation today happens mostly at the element level, people just don't understand how they would apply it to Tier 1 workloads.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.

When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see gaining market adoption of automating these physical things. What we're also seeing is the idea of moving beyond the individual element automation to full process automation.
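The element-versus-process distinction Frieberg draws can be sketched in a few lines of Python. The function names here are hypothetical stand-ins, not any vendor's API: each "element" function automates one resource, and the process layer strings them together so a single request provisions the whole business service.

```python
# Hypothetical sketch: element automation vs. process automation.
# Each element function automates one resource in isolation (stubbed here);
# the process layer composes them into one end-to-end workflow.

def provision_server(name):
    # Element automation: stand up one server.
    return f"server:{name}"

def configure_network(server):
    # Element automation: attach the server to the right network segment.
    return f"net-for-{server}"

def provision_database(server):
    # Element automation: create a database on an existing server.
    return f"db-on-{server}"

def provision_business_service(name):
    """Process automation: one request drives every element step, in order."""
    server = provision_server(name)
    return [server, configure_network(server), provision_database(server)]

if __name__ == "__main__":
    print(provision_business_service("crm"))
```

The point of the composition is that the requester asks for a business service, not for the individual building blocks.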

Self-service provisioning

As companies expand their use of automation to full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.

This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure, using less power, but with a notable absence of people. These large data centers have very few people working in them, yet at the same time they're delivering applications and services at a much higher rate than IT traditionally provided.

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made beyond just reduction in headcount or less work effort. We see clients having to look at things like improving availability, being able to do migrations, streamlined backup capabilities, and improved fault-tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case to start moving to a higher percentage of virtualization.

One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

More than just asset reduction

The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.

But, when we start coupling virtualization with improved process and improved governance -- reducing the number of OS instances, rationalizing applications, and addressing those kinds of broader process issues -- then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services.



Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment and this environment of hybrid IT, where you're consuming more business services, is really moving beyond what a lot of people can manage. The idea is that they are looking at automation to make their life easier, to operate IT in a compliant way, and also deliver on the overall business goals of a more agile IT.

Companies go through roughly three phases of maturity when they do this. The first is that a lot of automation revolves around "run book automation" (RBA) -- automating the run book, that physical book of scripts and processes that IT is supposed to follow.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring a device, resetting a server, or checking why an application isn’t working.

So, as we look at maturity, you’ve got to standardize on a set of ways. You have to do things consistently. When you standardize methods, you then find out you're able to do the second level of maturity, which is consolidate.

Transforming how IT operates

Vogel: It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures -- a private cloud that sits within your own site, with the ability to burst seamlessly to a public cloud for excess capacity, and to transfer workloads seamlessly between the private cloud and a public cloud provider as needed.
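The bursting behavior described above reduces to a simple placement rule. This is a toy sketch under an assumed fixed private capacity, not any vendor's scheduler: workloads fill the private cloud first, and the overflow goes to a public provider.

```python
# Toy cloud-bursting placement logic. PRIVATE_CAPACITY is an assumed
# number of VM slots available on-premises.
PRIVATE_CAPACITY = 10

def place_workloads(demand):
    # Fill the private cloud first; burst the overflow to a public cloud.
    private = min(demand, PRIVATE_CAPACITY)
    public = demand - private
    return {"private": private, "public": public}

print(place_workloads(14))   # {'private': 10, 'public': 4}
```

A real scheduler would also weigh data-locality, compliance, and cost constraints before bursting, but the fill-then-overflow shape is the core idea.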

We're seeing a shift, with IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, provided internally on a private cloud, or run on a dedicated piece of hardware. IT now has more choices than ever in how it goes about procuring that service.

But it becomes very important to start talking about how we govern that, how we control who has access, how we can provision, what gets provisioned and when. ... It's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.
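The economics Vogel points to can be made concrete with back-of-the-envelope arithmetic. All the figures below are illustrative assumptions, not actual provider rates: a metered public-cloud instance versus an equivalent on-premises server amortized over three years.

```python
# Illustrative arithmetic only -- every price here is an assumption.
CLOUD_HOURLY_RATE = 0.12        # assumed $/hour for a midsize instance
HOURS_PER_MONTH = 730

ONPREM_CAPEX = 6000.0           # assumed server purchase price
ONPREM_MONTHLY_OPEX = 150.0     # assumed power, cooling, and admin share
AMORTIZATION_MONTHS = 36        # three-year depreciation

cloud_monthly = CLOUD_HOURLY_RATE * HOURS_PER_MONTH
onprem_monthly = ONPREM_CAPEX / AMORTIZATION_MONTHS + ONPREM_MONTHLY_OPEX

print(f"cloud:   ${cloud_monthly:.2f}/month")
print(f"on-prem: ${onprem_monthly:.2f}/month")
```

Even with generous on-premises assumptions, the per-instance gap is what makes the cloud model look compelling -- which is exactly why the governance question in the next section matters, since uncontrolled provisioning multiplies that per-instance cost.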

Governance crosses boundary to strategy

Without the proper governance in place, we can actually see costs increase. But when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics -- probably the most compelling change in IT, from an economic perspective, within the last 10 years.

Frieberg: If you want to automate and virtualize an entire service, you’ve got to get 12 people to get together to look at the standard way to roll out that environment, and how to do it in today’s governed, compliant infrastructure.

The coordination required, to use a term used earlier, isn’t just linear. It sometimes becomes exponential. So there are challenges, but the rewards are also exponential. This is why it takes weeks to put these into production. It isn’t the individual pieces. You're getting all these people working together and coordinated. This is extremely difficult and this is what companies find challenging.

The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.
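That three-to-four-month payback target is simple division. With assumed figures for the upfront investment and the monthly savings (both invented for illustration):

```python
# Back-of-the-envelope payback calculation with assumed figures.
investment = 50000.0        # assumed upfront automation/tooling cost
monthly_savings = 15000.0   # assumed reduction in monthly operations cost

payback_months = investment / monthly_savings
print(f"payback in {payback_months:.1f} months")   # about 3.3 months
```

The "continuous payback" approach Frieberg describes amounts to keeping each increment of investment small enough that this ratio stays in the range of a few months.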

Vogel: We've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from physical machine to a virtual machine, but rather it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, as a network component, as a storage device, how those things come together, and how we virtualize the service itself as opposed to all of those unique components.

It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Monday, August 30, 2010

HP eyes automated apps deployment, 'standardized' private cloud creation with integrated CloudStart package

Clearly seeing a sweet spot amid complex and costly application support for Microsoft Exchange, SharePoint and SAP R/3 implementations, HP on Monday delivered a CloudStart package of turnkey private cloud infrastructure capabilities with a self-service SaaS portal included.

Delivering the package at the VMworld conference in San Francisco, HP is taking a practical approach to creating cloud and shared-services deployment models that make quick economic sense, targeting costly and sprawling server farms that support seas of Microsoft, SAP and other "out of the box" business applications as services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In doing so, HP is moving quickly to try to carve out a leadership position for the fast (30 days, they say) set-up of private clouds, coupled with the ease of a SaaS-based deployment, maintenance and ongoing operations portal that implements and supports the clouds and the applications they host. The targeting of costly and often inefficient Microsoft Exchange and SharePoint farms also points to the creeping separation of Microsoft's and HP's infrastructure -- and cloud -- strategies.

At the same time, HP's cloud hardware, software and services packaging via CloudStart exploits HP's product strengths while setting the stage for enterprise application stores, service catalogs of metered apps as services, more choices of moving to hybrid clouds, and easy segues to multiple sourcing and hosting options, all of which play into HP's Enterprise Services (nee EDS) on the hosting side.

CloudStart is also what I believe is only the opening salvo in a comprehensive private cloud initiative and strategy drive that HP aims to win. Expect more developments through the fall on HP Cloud Service Automation (CSA) and applications lifecycle management products, services and professional services support offerings.

HP's VMworld news today also comes on the heels of a slew of private cloud product and services offerings last week.

Partners form ecosystem approach

The HP CloudStart package -- with third-party partner ecosystem players like Intel, Samsung, VMware and Carnegie Mellon -- combines the features of HP BladeSystem Matrix, Converged Infrastructure, the Cloud Service Automation stack, StorageWorks, and other governance and management offerings. That comes on top of HP's globally available server and networking hardware portfolios. HP says, however, that CloudStart is designed to integrate well with an enterprise's existing heterogeneous platforms, any hypervisor, and third-party and open-source middleware.

Such mission-critical aspects as disaster recovery, security, storage efficiency, governance, patches support, compliance and audits support, and use metering and charge-backs billing are also included in the CloudStart offerings and road map, HP said.

HP also announced Cloud Maps for use with apps and solutions from VMware, SAP, Oracle and Microsoft to significantly speed application deployment via tested, cloud-ready app configurations. Cloud Maps are imported directly into private cloud environments, enabling organizations to develop a catalog of cloud services.

The combination of the cloud elements could lead to a "standardized" approach for creating and expanding private clouds throughout an enterprise, said Paul Miller, vice president, Solutions and Strategic Alliances, Enterprise Servers, Storage and Networking at HP. The solution is designed to be deployed on-premises but uses an HP-operated, off-premises and SaaS setup and operations portal.

And that SaaS, self-service aspect could be a key to the practical deployment of enterprise clouds, which HP sees as rapidly growing in interest in a "multi-source IT world," even if enterprises are not quite sure how to begin. HP recognizes that moving from a non-cloud problem set of complexity and sprawl to a cloud-based world of complexity and sprawl sort of defeats the purpose and economics.

IT leaders need cloud road map

"When CIOs have a simplified way to map their path to the private cloud, including all the necessary components from infrastructure and applications to services, they are more likely to identify a comprehensive and realistic deployment scenario for their organization," said Matt Eastwood, group vice president, Enterprise Platform Group, IDC, in a release. "With the HP CloudStart solution, clients now have a way to accelerate the adoption of service-oriented environments for a private cloud that matches the speed, flexibility and economies of public cloud without the risk or loss of control."

So CloudStart works to consolidate, integrate, and converge the cloud support elements -- and in doing so creates a compelling alternative to IT infrastructure as usual. And maybe a standard on-ramp to the use of heterogeneous private clouds?

The HP CloudStart solution is offered now in Asia-Pacific and Japan and expected to be available globally in December.

I see the self-service portal as a critical differentiator, one that could also bring the "app stores" model familiar from consumer and entertainment uses to the enterprise apps space. Once a private cloud has been deployed, and if it is managed via an HP portal, applications in a service catalog could then be chosen and deployed in a common manner, all with a managed, pay-as-you-go metered model or other SLAs. Indeed, other apps within the enterprise could also be brought into the cloud to be metered and charged back by usage to the business users.
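The metered, charge-back-by-usage idea can be sketched minimally. The service names and hourly rates below are invented for illustration; this is not HP's portal API:

```python
# A minimal usage-metering and chargeback sketch, assuming a flat
# hourly rate per catalog service. Rates and names are illustrative.
from collections import defaultdict

RATE_PER_HOUR = {"email": 0.05, "crm": 0.20}   # assumed $/hour per service

usage = defaultdict(float)   # (department, service) -> hours consumed

def meter(department, service, hours):
    # Record consumption as the portal observes it.
    usage[(department, service)] += hours

def chargeback(department):
    """Total bill for one department across all metered services."""
    return sum(hours * RATE_PER_HOUR[svc]
               for (dept, svc), hours in usage.items()
               if dept == department)

meter("sales", "crm", 100)
meter("sales", "email", 40)
print(f"sales owes ${chargeback('sales'):.2f}")   # 100*0.20 + 40*0.05 = $22.00
```

Per-department bills like this are what let IT present itself as an SLA-driven service center rather than an undifferentiated cost center.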

Kind of reminds me of getting the values of SOA but having someone else build it out.

Accountants love this model, as it helps move IT from a cost center to an SLA-driven service center. Over time, a variety of hybrid cloud offerings -- perhaps leveraging the standardized CloudStart deployment model and common billing model -- could be explored and transitioned to. That is, HP could then go to the enterprises using CloudStart and, via the management portal, offer to run those or other apps in its own data centers -- perhaps substantially cutting the total cost of apps delivery.

This way, the enterprise app store and service catalog becomes the interface between IT managers and the service vendors. IT becomes a procurement and brokering function, amid -- one hopes -- a vibrant market of cloud services offerings. That makes IT more like any other mature business function ... materials, logistics, supply chain, HR, energy, facilities, and so on.

Future of IT?

Here's where the future of IT is headed. Whatever vendor/supplier/service provider (and its ecosystem) gets to IT as a service first and best, and then offers the best long-term value, support, management and reliability ... wins.

HP clearly wants to be on the short list of such winning providers.


Friday, August 27, 2010

Platform Computing steps up with easy-entry solution for building private clouds

Platform Computing has paved the way to faster private cloud adoption with a low-risk, low-cost way for companies to evaluate their use of cloud computing. The Platform ISF Starter Pack, announced this week, will enable architects and IT managers to get a cloud sandbox environment up and running in less than 30 minutes, the company says.

The $4,995 Starter Pack announcement comes as a slew of vendors are focused on the adoption path for private clouds. More private cloud developments are expected at next week's VMworld conference.

Platform ISF manages application workloads across multiple virtual machine (VM) technologies and provisioning tools. It includes self-service, automated provisioning and chargeback capabilities. It supports multiple VM technologies, including ESX, Xen, KVM and Hyper-V, as well as popular provisioning tools, such as Red Hat Satellite, IBM xCAT, Symantec Altiris, and Platform Cluster Manager. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

“Organizations have plenty of toolkits to choose from as they evaluate private cloud, but they require multiple tools that users must string together themselves,” said James Pang, Vice President Product Management for Platform. “What’s more, these toolkits can cost $50,000 or more, and require 30-plus days of onsite consulting to build and customize an evaluation environment. We wanted to provide a cheap and easy way for users to get up and running quickly with a single product."

Software and best practices

The ISF Starter Pack includes software, best-practices advice, and hands-on help to set up a private cloud:
  • One-year Platform ISF term license for 10 sockets, including support

  • Half-day orientation training

  • Half-day cloud building consultation

  • Integration advice for Platform ISF with your internal tools