Wednesday, February 1, 2012

EMC's Hadoop strategy cuts to the chase

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

To date, Big Storage has been locked out of Big Data; it's been all about direct attached storage, for several reasons. First, Advanced SQL players have typically optimized their architectures, from data structures (columnar) to unique compression algorithms and liberal use of caching, to juice response times over hundreds of terabytes. On the NoSQL side, it's been about cheap, cheap, cheap along the Internet data center model: have lots of commodity stuff and scale it out. Hadoop was engineered exactly for such an architecture; rather than speed, it was optimized for sheer linear scale.

Over the past year, most of the major platform players have planted their table stakes with Hadoop. Not surprisingly, IT household names are seeking to somehow tame Hadoop and make it safe for the enterprise.

Up 'til now, anybody with armies of the best software engineers that Internet firms could buy could brute-force their way to scaling out humongous clusters and, if necessary, invent their own technology, then share and harvest from the open source community at will. That's hardly a suitable scenario for the enterprise mainstream, so the common thread behind the diverse strategies of IBM, EMC, Microsoft, and Oracle toward Hadoop has been, not surprisingly, to make Hadoop more approachable.

What's been conspicuously absent so far is a play from Big Optimized Storage. The conventional wisdom is that SAN and NAS are premium, architected systems whose costs might be prohibitive when you talk petabytes of data.

Similarly, there has so far been a different operating philosophy behind the first-generation implementations from the NoSQL world, one that assumed that parts would fail and that five-nines service levels were overkill. And anyway, the design of Hadoop brute-forced the solution: replicate so that three copies of the data are distributed around the cluster, as hardware is cheap.

As Big Data gains traction in the enterprise, some of it will certainly fit this pattern of something being better than nothing, as the result is unique insights that would not otherwise be possible. For instance, if your running analysis of Facebook or Twitter goes down, it probably won’t take the business with it. But as enterprises adopt Hadoop – and as pioneers stretch Hadoop to new operational use cases such as what Facebook is doing with its messaging system – those concepts of mission-criticality are being revisited.

And so, ever since EMC announced last spring that its Greenplum unit would start supporting and bundling different versions of Hadoop, we’ve been waiting for the other shoe to drop: When would EMC infuse its Big Data play with its core DNA, storage?

Today, EMC announced that its Isilon networked storage system was adding native support for Apache Hadoop’s HDFS file system. There were some interesting nuances to the rollout.
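
To make the plumbing concrete: a Hadoop client addresses HDFS as a network endpoint and doesn't much care whether commodity direct attached disks or an Isilon head answers on the other side. Here is a minimal sketch using the Python hdfs package, which speaks the WebHDFS REST protocol; the host name, port, and paths are hypothetical, and the assumption that a given deployment exposes WebHDFS (rather than only the native HDFS wire protocol) is mine for illustration, not part of EMC's announcement.

    # Minimal sketch: a Hadoop client only sees an HDFS endpoint, not the
    # storage behind it. Host, port, and paths below are hypothetical.
    from hdfs import InsecureClient  # pip install hdfs (a WebHDFS client)

    client = InsecureClient("http://namenode.example.com:50070", user="hadoop")

    # Ask for Hadoop's classic three-way replication on write; a storage
    # system that supplies its own data protection may treat this differently.
    client.write("/data/clicks/2012-02-01.log",
                 data=b"sample clickstream record\n",
                 overwrite=True,
                 replication=3)

    print(client.list("/data/clicks"))                   # directory listing
    print(client.status("/data/clicks/2012-02-01.log"))  # file metadata

The detail worth noticing is the replication parameter: the three-copy scheme described above is a client-visible setting, and it is exactly the sort of behavior a storage array with its own protection schemes can satisfy differently under the covers.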

Big vendors feeling their way

It's interesting to see how IT household names are cautiously navigating their way into unfamiliar territory. EMC becomes the latest, after Oracle and Microsoft, to calibrate its Hadoop strategy in public.

Oracle announced its Big Data Appliance last fall before it lined up its Hadoop distribution. Microsoft ditched its Dryad project built around its HPC Server. Now EMC has recalibrated its Hadoop strategy; when it first unveiled that strategy last spring, the spotlight was on MapR's proprietary alternative to the HDFS file system of Apache Hadoop. It's interesting that vendors' initial announcements have either been vague or have been tweaked as they've waded into the market. More about EMC's shift below.


For EMC, HDFS is the mainstream

MapR's strategy (and IBM's along with it, regarding GPFS) has prompted debate and concern in the Hadoop community about commercial vendors forking the technology. As we've ranted previously, Hadoop's growth will be tied not only to the megaplatform vendors that support it, but also to the third-party tools and solutions ecosystem that grows around it.

For such a thing to happen, ISVs and consulting firms need to have a common target to write against, and having forked versions of Hadoop won’t exactly grow large partner communities.

Regarding EMC, the original strategy was two Greenplum Hadoop editions: a Community Edition with a free Apache distro and an Enterprise Edition that bundled MapR, both under the Greenplum HD branding umbrella. At first blush, it looked like EMC was going to earn the bulk of its money from the proprietary side of the Hadoop business.

What's significant is that the new announcement of Isilon support pertains only to the HDFS open source side. More to the point, EMC is rebranding and subtly repositioning its Greenplum Hadoop offerings: Greenplum HD is the Apache HDFS edition with optional Isilon support, and Greenplum MR is the MapR version, a niche offering targeted at advanced Hadoop use cases that demand higher performance.

Coming atop recent announcements from Oracle and Microsoft, which have come down clearly on the side of OEM'ing Apache rather than anything limited or proprietary, this amounts to an unqualified endorsement of Apache Hadoop/HDFS as not only the formal, but also the de facto, standard.

This reflects emerging conventional wisdom that the enterprise mainstream is leery of lock-in to anything that smells proprietary for a technology where it is still on the learning curve. Other forks may emerge, but they will not be at the base file system layer. This leaves IBM and MapR pigeonholed – admittedly, there will be API compatibility, but clearly both are swimming upstream.

Central storage is the newest battleground

As noted earlier, Hadoop's heritage is the classic Internet data center scale-out model. The advantage is that, given Hadoop's highly linear scalability, organizations could expand their clusters quite easily by plugging in more commodity servers and disk. Pioneers and purists would scoff at the notion of an appliance approach, because the whole idea was simply scaling out inexpensive, commodity hardware rather than paying premiums for big vendor boxes.

In blunt terms, the choice is whether you pay now or pay later. As mentioned before, do-it-yourself compute clusters require sweat equity: you need engineers who know how to design, deploy, and operate them. The flip side is that many, arguably most, corporate IT organizations lack either the skills or the capital. There are various solutions to what might otherwise appear a Hobson's choice:

  • Go to a cloud service provider that has already created the infrastructure, such as what Microsoft is offering with its Hadoop-on-Azure services;
  • Look for a happy, simpler medium such as Amazon’s Elastic MapReduce on its DynamoDB service;
  • Subscribe to SaaS providers that offer Hadoop applications (e.g., social network analysis, smart grid as a service) as a service;

  • Get a platform and have a systems integrator put it together for you (key to IBM’s BigInsights offering, and applicable to any SI that has a Hadoop practice)
  • Go to an appliance or engineered systems approach that puts Hadoop and/or its subsystems in a box, such as with Oracle Big Data Appliance or EMC’s Greenplum DCA. The systems engineering is mostly done for you, but the increments for growing the system can be much larger than simply adding a few x86 servers here or there (Greenplum HD DCA can scale in groups of 4 server modules). Entry or expansion costs are not necessarily cheap, but then again, you have to balance capital cost against labor.
  • Surround Hadoop infrastructure with solutions. This is not a mutually exclusive strategy; unless you're Cloudera or Hortonworks, which make their business bundling and supporting the core Apache Hadoop platform, most of the household names will bundle frameworks, algorithms, and eventually solutions that in effect place Hadoop under the hood. For EMC, the strategy is its recent announcement of a Unified Analytics Platform (UAP) that provides collaborative development capabilities for Big Data applications. EMC is (or will be) hardly alone here.

With EMC’s new offering, the scale-up option tackles the next variable: storage. This is the natural progression of a market that will address many constituencies, and where there will be no single silver bullet that applies to all.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.


Tuesday, January 31, 2012

Enterprise architects play key role in transformation, data analytics value -- but they need to act fast, say Open Group speakers

Good data management, analytics, and helping to shape the goals of the business are keys to transforming the enterprise through impactful enterprise architecture (EA). That was the theme, from different perspectives, presented by a series of plenary speakers this week at The Open Group Conference in San Francisco.

Jeanne Ross, Director and Principal Research Scientist at MIT's Center for Information System Research, opened Monday's plenary session, telling the attendees that the stakes are high for EA, which needs to show swift success in the new digital economy. Enterprise architects also now need to help their organizations better use new services and instill a "value cycle." [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Coming from the siloed past in IT, companies are now moving to business service-driven processes across various resources, Ross said. But they need to recognize the forces around consumption of such services, not just the implementation.

Making good data management a priority, with a "single source of truth," is also at the heart of making EA valuable, said Ross. Ensuring the quality of data and the speed of data refresh will do more to raise the standing of enterprise architects than just about anything else, she said. Ross studies how firms develop competitive advantage through the implementation and reuse of digitized platforms.

She is also the co-author of three books: IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, and IT Savvy: What Top Executives Must Know to Go from Pain to Gain.

I also interviewed Ross on enterprise transformation issues before the conference.

IT-enablement isn't enough, Ross said, because companies typically under-utilize new systems and applications. It's not that we can't build them, she said of systems, but that companies aren't using them to their potential. Architects need to consider this and then market and evangelize solutions.

And EAs need to be more involved with making quality data take center stage in their companies. "You don't get good analytics with bad data," Ross said. "The secret to good EA is to put information in every person's hands so they can use data better." And that in turn will help transform the business and spur added innovation using IT systems and good architecture principles.

Most senior executives aren't very good at combining business and technology strategies, Ross said, and she outlined the architect's elevated role in helping their bosses deliver increased business value:
  • Help senior execs clarify business goals
  • Identify architectural capabilities that can be readily exploited
  • Present options and their implications for business goals
  • Build capabilities incrementally
She closed out, getting applause from the audience, by predicting, "Some day CIOs are going to report to the enterprise architect, because that's the way it ought to be."

Impressive cost reduction

The second plenary speaker, Celso Guiotoko, Corporate Vice President and CIO of Nissan Motor Co., Ltd., told how business value sits at the top of Nissan's IT principles, with information as an asset next, followed by reducing complexity.

Using these principles, Nissan in 2005 developed "BEST" as an IT mid-term plan and significantly improved the efficiency of its information systems. BEST is an acronym for business alignment, EA, selective sourcing, and technology simplifications.

This was followed in 2009 with the development of the "Change" program, which provided the basis for further advances by changing people, technology, and "process." And, in 2011, the next IT mid-term plan "VITESSE" was launched, designed to bring direct profit to the company. VITESSE encompasses value, innovation, technology, simplification, and service excellence. Through the various initiatives, Nissan has reduced IT cost by over 40 percent, going from a cost per user of $1.09 to $0.63.

The transformed enterprise

Andy Mulholland, Global Chief Technology Officer and Corporate Vice President at Capgemini, focused on the transformed enterprise and cloud trends, as well as the effect of new devices and social networking. Forty million tablets and 70 million smartphones are having a huge impact on how workers and consumers expect to work and shop.

The "bring your own device" phenomenon is forcing a change in thinking for enterprises, Mulholland said, as two environments are developing -- inside IT and outside IT. Typically back-end activities operate inside the firewall, while front-end people and activities operate outside the firewall, yet people nowadays want to be able to use smartphones and tablets for both personal and work tasks.

This has led to a situation in which workers are increasingly going outside IT to buy services. Mulholland quoted a Gartner prediction that up to 35 percent of IT expenditures will be outside the IT department by 2015. Other industry analysts like IDC have placed the figure higher.

Because of this, IT faces a huge “re-integration project” to bring together the inside and outside services in a rational way, Mulholland said, adding that the transformed enterprise needs to focus on the productivity of people and innovative business models.

I interviewed Mulholland a few weeks ago and we delved even deeper into the cloud duality issues now coming to the fore of enterprise technology issues and planning. I was also intrigued by a Wall Street Journal piece today on how the US faces a new tech boom. It was aligned with much of what Mulholland was saying.

The key to doing this “re-integration project,” according to Mulholland, is governance, and the industry really lacks a good cloud governance model, meaning that many businesses are already in trouble. However, enterprises shouldn't let that get in the way of progress. Mulholland advised, "If business wants something radically different from you, don't try to stop it. Try to understand it and take control of it."

Driving IT transformation

Lauren States, Vice President and Chief Technology Officer, Cloud Computing and Growth Initiatives, IBM, emphasized that transforming the enterprise requires a huge emphasis on analytics, and a successful integration of analytics and IT.

States drew on IBM's decades-long journey of constant transformation, relying on business process excellence, values-based culture, and IT-enablement. This has led to $1.5 billion in IT savings since 2005 as well as avoiding over $20 million in expenses over five years with a private analytics cloud, she said.

According to States, CMOs are overwhelmingly underprepared for the data explosion, and they recognize the need to invest in and integrate technology and analysis, treating analytics as a business differentiator.

CEOs and CIOs are both highly focused on insights, clients, and people skills, States said, feeding into what she called the "new reality," the need to harvest and pass insights and build trusted relationships.

States' takeaway: We're at the beginning of a major change, much like the PC revolution three decades ago. The cloud's sweet spot now, she says, is in bringing new innovation and insights to marketing, sales and customer service.

No need to wait

Speaker Bill Rouse, executive director of the Tennenbaum Institute at Georgia Tech, said that many enterprises wait too long to change, with the decision to transform dragging on until the damage is beyond repair. As evidence, he said that in the past 25 years, 1,000 companies have dropped off the Fortune 500 list -- showing that enterprise transformation has a high failure rate, and that waiting for the right time to change is a risky business plan.

Moreover, enterprises seeking transformation need to look at the full ecosystem a business operates in to transform effectively, says Rouse. Business ecosystems are co-creating high-value services, expanding transformation across supply chains, says Rouse. This is an important new dimension, he added.

Using analytics better to support evidence-based decision making is transformative and should be a priority, says Rouse. And architecture-oriented thinking can be transformative in itself, he said.

Cyber security threats

On the topic of cyber security, plenary speaker Joseph Menn, cyber security correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet, made it clear that business as usual won't do.

Joe has covered security since 1999, for the Financial Times and, before that, for the Los Angeles Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning's Napster. I also recently interviewed him.

"It's in no one's interest to tell us how bad it really is" when it comes to cyber crime and security, said Menn. And the Stuxnet affair is huge as a harbinger of things to come, he said.

As a result, more taxpayer money will be needed for effective government-level defenses against cyber attacks, he suggested. But government intervention won't do the job alone. Increasingly, corporations will need to play more than just defense on attacks, many of which come from Russia and China and from groups that blend state and criminal interests.

Counterattacks may be a strong defense when it comes to cyber risks, and the US government may "turn a blind eye," says Menn. We may even see cyber crime bounty hunters that corporations hire on the QT to go after those that attack them, he said.

Meanwhile, IT groups and enterprise architects can play a bigger role. Knowing what you have helps you know when something has been taken, so improve tracking of assets, Menn told them. He also suggested that companies keep their most critical data offline, and protect their intellectual property by burying it in and among fake data.

Allen Brown, President and CEO of The Open Group, said that more than 400 corporations are now members of The Open Group, showing strong growth in the 12 years since its founding. TOGAF 9 certification rates are growing rapidly worldwide, he said.

FACE standard

In other news from The Open Group on Monday, the Future Airborne Capability Environment (FACE) Consortium announced the official release of the FACE Technical Standard, which provides guidelines for creating a common operating environment to support applications across multiple Department of Defense avionics systems. See my interview on FACE as it was just getting under way.

The standard is designed to enhance the U.S. military aviation community’s ability to address issues of limited software reuse and accelerate and enhance warfighter capabilities, as well as enabling the community to take advantage of new technologies more rapidly and affordably.

It is our hope this standard will accelerate the open and secure development of products within the Department of Defense’s Airborne community by enabling industry-government collaboration.

The FACE technical standard will enable developers to create and deploy a wide catalog of applications for use across the spectrum of military aviation systems through a common operating environment. Product development efforts by industry and procurements by government customer organizations are already underway based on the FACE standard.

“The introduction of the FACE Technical Standard is an important milestone in extending interoperability among the armed forces and creating a common platform for avionics that enables systems to work together across each of the branches of the U.S. military,” said Brown.

And on Tuesday, The Open Group announced the arrival of ArchiMate 2.0, the latest version of the organization's open and independent modeling language for enterprise architecture. This version is more tightly aligned to TOGAF, so enterprise architects using the language can improve the way key business and IT stakeholders collaborate and adapt to change.

ArchiMate 2.0 improves collaboration through clearer understanding across multiple functions, including business executives, enterprise architects, systems analysts, software engineers, business process consultants and infrastructure engineers, according to the release. The new standard enables the creation of fully integrated models of an organization's Enterprise Architecture, the motivation behind it, and the programs, projects and migration paths to implement it.

"By combining TOGAF and ArchiMate, TOGAF becomes more easy to apply in any organization," said Harmen van den Berg, partner and co-founder at BiZZdesign. "Having a reference model makes them both easier to apply in any industry or vertical."

He added: "Architects like to make models, and this now helps them to use those models to create change in the organization, for something that means more to the business."

Making the EA function a chief weapon of enterprise transformation in a time of roiling change and complexity: that's the main message from the conference. No time to wait.


Friday, January 20, 2012

CRM data integration provider Scribe boosts cloud offering with GUI synchronization services, developer program for connectors

Scribe Software, a customer relationship management (CRM) data integration provider, will next week launch Scribe Online Synchronization Services (SYS), the second major service delivered on the Scribe Online cloud integration platform.

According to the Manchester, NH-based company, Scribe Online provides a cloud-based alternative to integration middleware, and simplifies the integration experience without sacrificing performance or functionality. The goal is to allow companies to reap the benefits of integrated CRM data from a variety of sources and technologies in days, rather than months.

The timing is more than pretty good because CRM as a category is expanding, driven by businesses' recognition that rich data on customers (and partners) is essential for better productivity, and for leveraging cloud-enabled business innovation outside the company.

Many companies I speak with are looking to pull appropriate and relevant data in near real-time from many internal systems of record to augment the full picture of customers. They are looking to their CRM systems as the metadata repository of such integrated views. And now they want to bring in more data from more sources, including those outside their four walls.

And, of course, the power of knowing the most about customers -- and making the analysis from such data widely available to business units and functions across the enterprise -- can make or break a company. Across the full business cycle, relevant and insightful data on customers drives success, from product development to effective marketing, to help desk and support, to entering new markets.

Scribe, then, has developed its cloud offerings, built on Microsoft Azure and released last year, to make the instantiation of CRM data from as many sources as makes sense a function of the cloud, as well as on-premises. Such a hybrid approach to data integration makes even more sense than a hybrid approach to IT infrastructure services, if you ask me. You really need to be in the cloud to leverage the hybrid data integration benefits.

Now, Scribe has made it easier to leverage that cloud by making synchronization services for CRM data integration a drag-and-drop affair that many business users can accomplish. Furthermore, Scribe is releasing SPARK, a developer program to help foster a community effort around making more connections to more types of data available to more synchronization efforts.

“Synchronization Services builds on our commitment to deliver superior CRM integration to customers and partners in the cloud. SYS fills a void in the market for an integration tool that is affordable and easy to use,” said Lou Guercia, president and CEO of Scribe. “Until now, integration products have been either too basic or too complex.”

Developer program

Scribe, with the SPARK Solution Developer Program, is targeting software-as-a-service (SaaS) providers, channel partners, systems integrators, VARs, and other business technology consultants. This means that while enterprise IT departments are gearing up for hybrid cloud-based CRM integrations, the community of ISVs and VARs needs to move more quickly, to innovate and expand into new models.

The SPARK Solution Developer Program is designed to help solution providers quickly build data integration capabilities between their solutions and CRM, as well as any other application or endpoint on Scribe Online. This will fit very well, too, into the Salesforce.com ecosystem, and the Microsoft Dynamics one, as well.

Scribe expects that partner networks will share and extend customer data -- and value-added services on top of that joined and integrated data -- for a variety of additional business services, said Guercia. Integrated and automated marketing services providers like HubSpot, Marketo, and Eloqua, certainly come to mind, too.

“CRM is no longer just a contact management system. It’s a critical revenue enabler for the business. Companies that integrate customer data from all areas of the business benefit with increased sales and satisfied customers,” said Roger Hodskins, vice president of strategic alliances at Scribe.

Using Scribe's latest offering, SaaS independent software vendors (ISVs) who offer integration to more than one CRM vendor can extend their presence in multiple CRM markets. As customers expand the scope of CRM in their businesses, integration can readily incorporate the SaaS ISVs’ offerings with connections both to CRM and to other complementary applications, said Scribe.

For more information on Scribe SYS, sign up for the live weekly webinars or watch a four-minute demo video at scribesoft.com/online. Scribe Online SYS is also available free for 15 days at scribesoft.com/Free-Trials.


Thursday, January 19, 2012

Expert Chat on how HP ecosystem provides holistic support for VMware virtualized IT environments

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

Redefine the potential of your virtualization investments.
View the full Expert Chat presentation on VMware support best practices.

Advanced and pervasive virtualization and cloud computing trends are driving the need for a better, holistic approach to IT support and remediation.

And while the technology to support and fix virtualized environments is essential, it’s the people, skills, and knowledge to manage these systems that provide the most decisive determinants of ongoing performance success.

In a special BriefingsDirect sponsored podcast, created from a recent HP Expert Chat discussion on best practices for VMware environment support, HP experts explain how they have made the service and support of global virtualization market leader VMware a top priority.

For example, Cindy Manderson, Technical Solutions Consultant for Complex Problem Resolution and Quality for VMware Products at HP, provides case studies for how managed escalation and multi-vendor support around the globe can reduce downtime by 70 percent, with large ROI benefits as well.

Other HP experts in the discussion include Pat Lampert, Critical Service Senior Technical Account Manager and Team Leader, as well as Sumithra Reddy, HP Virtualization Engineer. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP and VMware are both sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Virtualization isn’t just server-by-server, but really impacts the entire data center. You need to think about it more holistically, particularly in regard to things like security, performance and how your brands and businesses are perceived across the globe. Many of the companies that I deal with day in and day out are up at 80 percent and even 90 percent virtualized.

When they think about virtualization, they go beyond just server virtualization. It’s really now looking at storage, applications, networks and even the end-user desktop experience, or desktop as a service (VDI).

Successful virtualization is no longer just about servers; it's about managing complexity when you get beyond the 20 percent or 30 percent level and expand into converged infrastructure virtualization without failures.

So how to take advantage of the best things about virtualization? Part of that means allowing your IT team to have access to other experienced support teams, from HP and VMware, around the world, 24×7, to help keep systems up and running. Such support also allows your IT team to progress, to learn as they go, and to be able to take advantage of more virtualization benefits over time.

Expert panel

So how do you go about attaining such benefits? How do you keep the positive side of virtualization on track? And how do you put in place an insurance policy around service and support?

Manderson: We have several different packages. Our highest level is mission-critical. In this particular process, you're assigned a team that's across the technology you have in your environment. But you also get a set of folks who look not just at the reactive support, and even some of the proactive, but at how your entire business is running according to the ITIL standard.

That is coupled with keeping you up and running, and we can also work with you on the type of support best suited for your environment.

Our critical and independent support includes onsite resources from HP that also include a lot of proactive support. In addition, they're more focused on specific management, but that would be more of an ITSM technology. We can look at that for you.

... We also have the hardware and software support. One of the cool things we have with our hardware support is support automation, our Insight Remote Support. That can notify HP that you're having a disk drive failure. Or we will call you and say that we know that disk drive is failing, or that something on a server or storage is about to fail.

You can even take that a step further to look inside at the Windows operating system. We're hardware agnostic on that operating system. We don't care about the vendor -- and I believe we are looking at expanding that automation to other operating systems. We have installation and startup services that we can actually go out and set up and configure the hardware and software at a site.

So we definitely integrate across all the multi-vendor services. We run the gamut between all the x86 operating systems, as well as our proprietary operating systems, our servers and storage. Again, we're no stranger to multi-vendor support and keeping the entire environment up and running.

... One of our most creative services would be Proactive Select, a product built around a series of credits. You can use these credits for planning a migration or an upgrade, or you can say you need some consulting time. You may need some performance work or some type of environmental assessment, and the credits can be used for that as well.

Gardner: When people do employ these services, how do they measure what the payoff is, the value of these services?

IDC study

Manderson: In 2010, IDC did a study. They went out and looked at the methodology, and this is out on our website. They saw that the customers who have the mission-critical services reduced their downtime by over 70 percent and increased their return on investment (ROI) substantially, to over 400 percent. The main benefit was in problem management as well as help desk calls, because these were alleviated due to the proactive nature, a lot of looking at the entire environment, and looking at the business processes.

So take a look at the study; it shows IDC's methodology. Looking at things proactively, with these support processes, can certainly help you reduce that downtime.

... I've been in the multi-vendor space for many, many years -- from applications to operating systems -- all with HP.

In 2002, when VMware came on the scene, HP became alliance partners with them. In 2003, we became a reseller, and thus began our support partnership. Then, in 2005, we also became an OEM. We have thousands of trained and certified Microsoft engineers and Linux professionals, too.

But we have the largest number of VMware-certified professionals. We're also the largest global VMware off-site training center, so HP does education on these technologies as well. We've trained over 20,000 students in the VMware space alone.

And we have had this very strong collaboration with VMware for many years and have support teams around the globe. In addition, we also offer the same level of training that VMware support engineers do. We actually go to their facilities and train right alongside them, too.

We further do this training virtually. The training is then recorded and made available on demand for reference, for folks who are not able to attend a scheduled course. There's definitely a very strong partnership, and as you see from our history with the other vendors as well as VMware, we are no strangers to multi-vendor support.

With all of the VMware products that HP sells, we provide support across the board. It runs the gamut from the vSphere operating system that installs on the x86 server, through enterprise management with vCenter, to virtual desktop infrastructure products like VMware ThinApp. We also support the converter product, getting into vCloud Director.

In addition to that, we have the ability to access our peers on the other teams across HP hardware support. This includes servers and storage, and our networking chain. We are quickly able to collaborate with them and pull together a virtual team to focus on the customer's whole environment, to provide a one-stop shop.

Expertise across technologies

Additionally, you saw that we’ve been in this multi-vendor support business for so many years, with many experts across the other technologies, such as Microsoft and Linux. Of course, the virtual machines (VMs) are running these operating systems. So if the contract is also with them, we can easily pull them in to help us work an end-to-end solution and support it.

Gardner: Let’s think about what happens when there are different levels of support at work. How does that shake-out?

Manderson: We're in a reactive support business. If the customer has a problem, they can either call in at their local region telephone number -- whether they are in America, Europe, or Asia Pacific. There are different phone numbers for them to call.

They can also log in via the web, and they'll get to our Level 1 engineers. They're a great organization and have solved over 85 percent of their cases.

If they have issues where they have to escalate, first they will be collaborating with us. We also have an online chat tool, where we are all in a virtual room, the Level 1 engineers, Level 2 engineers, etc. So we’ll be consulting and collaborating with them before they even get to a point of escalation.

If the case does end up needing escalation, chances are that the person they're already collaborating with will end up taking the case. That saves a lot of information transfer, as far as what type of server you have, what the firmware is, what the build level is, what the problem is, and so on.

Once it reaches Level 2 support, we can continue to collaborate; we can reach our teammates on the hardware teams, too, so we can look at the server and make sure that the environment is what we need it to be. If we can't resolve it, we can also go to Level 3 with VMware, at an offline service-partner level.

We have a great relationship with the folks that we work alongside with and would escalate calls to at VMware. We’re obviously not going into Level 1 at VMware because we’ve already done all that work, and we are a service partner. They'll go right up to our peers over at VMware and then we work together, while always owning the solution that we provide back to the customer.

Another part of our infrastructure as a support organization is that we have a single customer database. I can give an example. A call came in to our Level 1 French engineer. When this call came in, it was already the end of the day for the European folks, and the customer could not speak English. It was a critical down; their VMs were offline.

HP Virtual Room


So we worked in a virtual room, and they talked to us and brought the case to us here in the Americas time zone. We worked this case with another tool called HP Virtual Room, where we could actually all look at the customer's desktop in real time. They happened to have EVA storage, and we quickly got an EVA engineer engaged. Of course, we had to find a resource in the Americas, because the European folks had already left. So we were all looking in real time at the customer's environment, and we found out that they had locked the storage.

The EVA engineer helped get the storage back online while we all watched, and the French engineer translated for the customer in order to get it all resolved. We got it back online, and the customers were ready to go home.

We gave instructions on getting log files and we placed a call for follow-up for the daytime hours in Europe the next day. So our counterparts in European support teams picked that up and worked with the customers to resolution, to analyze exactly what happened and prevent it in the future.

We have another process in HP for the toughest cases: our escalation manager process. I was the lead resource for a particular case where we had a field team assisting a customer deploying a virtual desktop infrastructure (VDI) design. They had a third-party VDI vendor. They had HP hardware, servers, and virtual connects. They had our storage, and we didn't quite know where the bottleneck was. They were having performance issues, trying to run this VDI at two different locations with the hardware at one site.

The escalation manager was able to get the local office to borrow equipment, and then try to get performance and network traces. They had the Engineering Problem Management Resource (EPMR) lab in Houston trying to duplicate the problems.

Our escalation manager was able to drive the issue to completion, owning the actual escalation and all the action items, across not only the solution teams but also the local office, to keep it all on track. We knew where we were going. That was about a six-month case, but what we finally found was that the customer was on the technological edge, and the "pipe" needed for that performance just did not exist.

Site visits

Pat Lampert is a technical account manager and does site visits. The technical account managers do go out on site, so we're aware of the environment. We have the information about your environment documented in the database. When you call, we're not asking, "Now what kind of server is this? What's the firmware?" We know this, because we already have it documented. We could be calling you to say, "Server 3 is running a little off." We already know which VMware version it is on, because we have that information.

And because we have that, we can also offer proactive advice. We may know that there's a new firmware update, or that VMware just came out with a new build, and we have a place where you can go find the latest that's specific to your environment. This helps reduce further incidents, because we can be more proactive in helping you maintain your business.

Gardner: What are some of the most frequent questions you receive from the field?

Reddy: I'll address two questions that are frequently showing up. One is, what is the difference between the VMware ESXi image and an HP ESXi image?

Basically, HP takes the same ESXi image that VMware provides to customers. It then adds HP thin components for hardware management, and it also adds the latest Fibre Channel and network drivers. Once it's tested and certified, it's available for download from both the HP and VMware websites.

Major differences

And one of the major differences between the two images is that the VMware image is disk-installable only, whereas the HP image can be installed on a disk, a USB key, or an SD card.

The other question we're getting nowadays is how to upgrade from VCA4 to VCA5. As with any major upgrade, planning helps. The first thing I would do is understand the differences between ESX 4 and ESX 5 because, starting with ESX 5, there is no service console. So we need to understand what the architectural differences are.

Also learn about the new licensing policies. Then, use the System Analyzer that VMware provides to evaluate the current environments, and download, check, and complete the checklist. Once this is done, hopefully the upgrade will go smoothly.

Lampert: Another question that has come up from customers has to do with the added value of getting support directly from HP. It was partly addressed during the presentation we just gave. First of all, VMware does have a fine support organization. I have a couple of friends who work in VMware Support, and they do a good job of supporting their product.

HP, in addition to a similar level of expertise in the product, also offers expertise in HP hardware, especially if you have systems based on HP Blades. The infrastructure behind that often is tied very closely to the performance and availability of your ESX host. So when you call us, you will get someone who is not only very familiar with the VMware product, but also familiar with the HP hardware and able to pull in the proper resources for problems you might encounter running vSphere on HP hardware especially.

In addition to that, we have a partnership agreement with VMware, and when you call in for support through HP, you're getting that same level of service when we have to go to VMware to get answers to questions or fixes.

One other question that has come up is about our labs' ability to reproduce problems. We have two global labs, one in India and one in the United States. We have several static vSphere cluster configurations with a number of different types of servers already in those configurations, and the ability, when needed, to add specific models if there is a problem that's specific to a particular Blade or rack-mounted server model, or a particular card or something like that. So we're quite able to reproduce most problems that come in. We even have some Dell and IBM equipment in our lab.

Gardner: What other issues are users grappling with?

Reddy: One question I can answer is how to troubleshoot server crashes. When something goes wrong in ESX, we call it the "Purple Screen of Death." Often these are the result of hardware failure, but we still need to rule out the software. So we collect all the logs and look at them to see if it's a software issue. If it's not a software issue, then we engage the hardware team to see how we can get to the root cause and fix the issue.
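
As a toy illustration of that first triage step (collect the logs, then rule software in or out), here is a short Python sketch; the log bundle layout, file naming, and crash signatures below are illustrative assumptions, not HP's actual tooling.

    # Hypothetical triage sketch: scan an extracted log bundle for crash
    # signatures before deciding whether to engage the hardware team.
    # Directory layout, file naming, and signatures are all assumptions.
    import os

    SIGNATURES = (
        "Machine Check Exception",   # classic hardware-fault marker
        "Exception 14",              # page-fault signature often seen on PSODs
        "NMI",                       # non-maskable interrupt events
    )

    def scan_bundle(bundle_dir):
        """Yield (file, line number, text) for lines matching a signature."""
        for dirpath, _, files in os.walk(bundle_dir):
            for name in files:
                if not name.startswith("vmkernel"):   # assumed log naming
                    continue
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as log:
                    for lineno, line in enumerate(log, 1):
                        if any(sig in line for sig in SIGNATURES):
                            yield path, lineno, line.strip()

    if __name__ == "__main__":
        for path, lineno, text in scan_bundle("./vm-support-bundle"):
            print(f"{path}:{lineno}: {text}")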

Lampert: To dovetail with Sumithra's comment there, one of the questions I get frequently is what to do if you don't have a dump. Say the host hangs; that seems to be almost more common than the Purple Screen of Death. Some customers aren't aware that, through HP's Integrated Lights-Out management, there is the ability to generate a non-maskable interrupt (NMI) just by pressing a button, after saving a certain environment variable ahead of time on your ESX host.

KB article

There is a KB article on this, by the way; just search on NMI and core dumping in VMware. But with that setup, you can force a dump while a system is in a hung state, and that will usually assist us in troubleshooting and isolating what caused the hang, whether it be hardware or a problem with the ESX host software.

One question that came up ahead of time is what HP suggests as far as getting a handle on your inventory of VMs. I happened to be involved in field testing some new tools from HP that will be available in January and February regarding vSphere.

One of them is a holistic Blade and firmware analysis that takes into account the VMware environment on our Blade systems, which we are working on having ready soon. We have just completed field tests.

And the second is a really nifty inventory report HP has just put together. We're just completing field tests on that now, and it will be available soon. Basically, we install a small Perl script in the customer environment on any machine that has access to the vCenter host and has a vSphere CLI installed.

This Perl script crawls through the VMware environment and builds an XML file, which we then feed into a report generator here at HP. This can be used for us to gather information on customers, so that we have a clear picture of the environment ahead of time. But it will also be sold as a service to customers.
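
For a rough idea of what such a crawl looks like, here is a hedged sketch in Python using the pyVmomi bindings rather than Perl and the vSphere CLI; HP's actual script is not public, so the host, credentials, fields collected, and XML layout are all hypothetical.

    # A sketch of the inventory crawl described above, in Python with pyVmomi
    # instead of Perl. Host, credentials, fields, and XML layout are assumed.
    import ssl
    import xml.etree.ElementTree as ET

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com",        # hypothetical vCenter
                      user="readonly@vsphere.local",
                      pwd="secret",
                      sslContext=ssl._create_unverified_context())
    try:
        content = si.RetrieveContent()
        # Walk every VM in the vCenter inventory, recursively.
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.VirtualMachine], True)
        root = ET.Element("inventory")
        for vm in view.view:
            summary = vm.summary
            node = ET.SubElement(root, "vm", name=summary.config.name)
            ET.SubElement(node, "numCpu").text = str(summary.config.numCpu)
            ET.SubElement(node, "memoryMB").text = str(summary.config.memorySizeMB)
            ET.SubElement(node, "powerState").text = str(summary.runtime.powerState)
        # The XML file is what a downstream report generator would consume.
        ET.ElementTree(root).write("inventory.xml")
    finally:
        Disconnect(si)

The division of labor mirrors what's described in the interview: a small crawler runs next to vCenter and emits plain XML, and all of the report formatting happens elsewhere.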

The report is really quite nice, with all sorts of charts showing availability of machines, memory, and disk space.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.
Redefine the potential of your virtualization investments.
View the full Expert Chat presentation on VMware support best practices.

Wednesday, January 18, 2012

The Open Group releases SOA and cloud computing standards, updates OSIMM

The Open Group has announced this week the availability of two new industry standards to integrate fundamental elements of service oriented architecture (SOA) and cloud computing into a solution for enterprise architecture (EA). The new standards are: SOA Reference Architecture (SOA RA) and the Service-Oriented Cloud Computing Infrastructure Framework (SOCCI).

The Open Group also released updates to The Open Group Service Integration Maturity Model (OSIMM), which has now been ratified as an ISO and IEC International Standard (ISO/IEC 16680). OSIMM gives organizations a common model for developing a roadmap for achieving the right level of service adoption to meet business objectives. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

SOA RA is a blueprint for creating and evaluating SOA solutions, while SOCCI is the first Open Group cloud standard that outlines the concepts and architectural building blocks necessary for infrastructures to support SOA and cloud initiatives.

"In today's global competitive marketplace it is imperative that business and IT drivers are aligned," said Chris Harding, Director for Interoperability, The Open Group. "Each of the three standards is vendor-neutral and helps an organization of any size to design and implement the proper SOA and cloud solutions for its business objectives."

SOA RA is an industry standard reference architecture for the development of SOA solutions. Utilizing the SOA RA Standard, enterprise architects will have a common language and approach for creating SOA solutions that meet different organizational needs and bridge the gap between business and IT.

SOCCI is the industry's first cloud standard for enterprises that wish to provide infrastructure as a service in the cloud and SOA. Developed by The Open Group SOA and Cloud Work Groups, SOCCI is the realization of an enabling framework of service-oriented components for infrastructure to be provided as a service in SOA solutions and the cloud.

The standard details a set of common SOCCI elements and management building blocks for organizations to consider and identifies the synergies that can be realized through cohesive application of SOA and cloud-based principles. Using SOCCI, organizations can incorporate cloud-based resources and services into their infrastructure for increased agility and scale, and lower maintenance costs.

Proven best practices

OSIMM leverages proven best practices to allow consultants and IT practitioners to assess an organization's readiness and maturity level for adopting services in SOA solutions. By aligning business goals and assessing associated SOA services, IT practitioners can create a detailed roadmap for integrating services for SOA and cloud computing solutions into enterprises. With the recent ratification of OSIMM 2.0 by ISO and IEC, organizations worldwide have an extensible framework for understanding the value of implementing a service model, as well as a comprehensive guide for achieving their desired level of service maturity.

The SOA RA technical standard, SOCCI framework, and OSIMM 2.0 International standard are available for download from The Open Group Bookstore. These new standards can also be viewed online at: SOA Reference Architecture, Service-oriented Cloud Computing Infrastructure, Open Group Service Integration Maturity Model.

In addition to the standards news, The Open Group on Jan. 30 will begin its San Francisco conference to focus on the role played by IT and EA within enterprise transformation. Among the topics to be explored:
  • The differences between EA and enterprise transformation, and how they relate to one another
  • The use of EA to facilitate enterprise transformation
  • How EA can be used to create a foundation for enterprise transformation that the board and business-line managers can understand and use to their advantage
  • How EA facilitates transformation within IT, and how such transformation supports the transformation of the enterprise as a whole
  • How EA can help the enterprise successfully adapt to "disruptive technologies" like cloud computing and ubiquitous mobile access.
Among the speakers at the conference will be Andy Mulholland, the Global Chief Technology Officer and Corporate Vice President at Capgemini. In 2009, Andy was voted one of the top 25 most influential CTOs in the world by InfoWorld. And in 2010, his CTO Blog was voted best blog for business managers and CIOs for the third year running by Computer Weekly.

Andy recently participated in a BriefingsDirect podcast, in which he spoke about an upcoming Capgemini whitepaper, which draws distinctions between what cloud means to IT, and what it means to business -- while examining the complex dual relationship between the two.

Also speaking will be Jeanne Ross, Director and Principal Research Scientist at the MIT Center for Information Systems Research. Jeanne studies how firms develop competitive advantage through the implementation and reuse of digitized platforms.

Jeanne recently spoke with me about how adoption of EA leads to greater efficiencies and better business agility and explained how enterprise architects have helped lead the way to successful business transformations.

Also speaking is Joseph Menn, Cyber Security Correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet.

Joe has covered security since 1999, for the Financial Times and, before that, for the Los Angeles Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning's Napster.

As a lead-in to his Open Group presentation, entitled "What You're Up Against: Mobsters, Nation-States, and Blurry Lines," Joe recently joined BriefingsDirect to explore the current cyber-crime landscape, the underground cyber-gang movement, and the motive behind governments collaborating with organized crime in cyber space.

Registration remains open for The Open Group Conference in San Francisco, beginning Jan. 30.
