Tuesday, February 7, 2012

HP provides more picks and shovels to cloud miners

In two separate recent announcements, HP has affirmed its goal of being the neutral supplier of choice for all things cloud.

Last week, HP delivered HP Discovery and Dependency Mapping Advanced (DDMA) Content Pack 10, bringing with it the ability to better manage cloud instances across the enterprise-public cloud continuum, including deep discovery of virtualized workloads' performance inside Amazon and VMware vCloud clouds.

Then on Tuesday, HP further thrust its global market-leading LoadRunner performance testing suite -- via partners -- into development clouds, known as platform-as-a-service (PaaS) providers. This move is clearly aimed at the fast-growing mobile development and greenfield SMB development spaces.

Interestingly, neither the cloud operations efficiency benefits of the updated DDMA nor the HP LoadRunner-in-the-Cloud offering will be initially offered inside of any HP public clouds. These formerly enterprise-targeted development and operations tools are being extended to more private and public cloud uses -- but via cloud ecosystems, partners and channels. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Picks and shovels

While HP is not taking the arrival of its own public cloud offerings off the table -- indeed, it has committed to them in the past -- the company seems happy for now to develop the picks and shovels and provide them to the miners and the current mine owners.

The strategy lessens the potential for conflict that other cloud providers such as Microsoft, Google, Amazon, Salesforce.com and VMware can face (no mention yet of Microsoft Azure). And it makes HP more amenable as a supplier to those public clouds, which may well be of interest to them, given both HP's technologies and its vast, global installed base of enterprise customers.

Digging more deeply into the news items, the DDMA Content Pack 10 brings a critical part of the HP IT Performance Suite to more types of cloud uses, as well as back into more kinds of mainframes, particularly for the IBM iSeries servers. Reaching more deeply into legacy workloads and across various cloud and hybrid models allows for more automation of those apps and runtimes, and fosters far better change management when those loads need to be adjusted to accommodate varying demands.

HP is also enabling any IP-pingable device to be discovered, mapped, and managed via the various online deployments. The overall benefit is a more lifecycle-oriented approach to managing apps and devices across legacy and hybrid environments, and a single view -- as a business service -- of all the parts that support the apps and processes, regardless of their locations.
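
To make that idea concrete, here is a minimal, purely illustrative sketch of what IP-level discovery amounts to -- a reachability sweep that builds a crude device map. This is emphatically not HP DDMA code (DDMA discovers agentlessly over protocols such as SNMP and SSH and feeds a UCMDB); the subnet, probe ports, and inventory shape below are assumptions for the example.

```python
# Illustrative only: a toy discovery sweep in the spirit of "any IP-pingable
# device can be discovered and mapped." NOT HP DDMA code; the subnet, ports,
# and inventory shape are hypothetical.
import socket

SUBNET = "10.0.0"                # assumption: a /24 network to probe
PROBE_PORTS = (22, 80, 443)      # common management/service ports

def is_reachable(ip, port, timeout=0.3):
    """Return True if a TCP connection to ip:port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover():
    """Build a minimal device map: ip -> list of open probe ports."""
    inventory = {}
    for host in range(1, 255):
        ip = f"{SUBNET}.{host}"
        open_ports = [p for p in PROBE_PORTS if is_reachable(ip, p)]
        if open_ports:
            inventory[ip] = open_ports
    return inventory

if __name__ == "__main__":
    for ip, ports in discover().items():
        print(ip, "responds on", ports)
```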

Discovery capabilities have also been added for HP ServiceGuard, Glassfish open-source server and VMware Datastore. In addition, integration has also been enhanced to include CiscoWorks LAN Management Solution (LMS), Aperture VISTA, NNMi, Application Signature and Service-Now. Functionality has also been added to the integration of Troux. Finally, Content Pack 10 provides new features such as support for SAP JCo3, Oracle VM Server for SPARC, UCMDB to XML export and a BMC Atrium pull adapter.

Three partners

On the LoadRunner news, HP has so far signed three partners that will take the LoadRunner on-demand services out to their customers on the public clouds of their choice. The initial partners are Orasi Software Inc., Genilogix, and J9 Technologies. The partners will set the pricing, but the performance testing services are delivered on a pay-as-you-go basis.

"This is unique. It's the easiest, lowest-cost way to bring LoadRunner capabilities to the cloud," said Matt Morgan, senior director, Product and Solution Marketing, Software, HP.

Incidentally, the testing phase of the cloud PaaS proposition is essential for quick devops and rapid application development (RAD) benefits. It further allows any investments that enterprises have made in LoadRunner to be extended via the cloud providers to developers working on new mobile projects, or lets enterprises control and view testing results when using third-party developers.
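
For a sense of what such on-demand performance testing measures, here is a toy sketch -- emphatically not LoadRunner, which has its own scripting and controller/generator architecture -- that fires concurrent HTTP requests and reports latency percentiles. The target URL and worker counts are placeholders.

```python
# Illustrative only: a toy HTTP load generator showing the kind of metric a
# cloud-based performance test gathers. NOT LoadRunner; target and counts
# are hypothetical.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://example.com/"   # assumption: the app under test
WORKERS = 10
REQUESTS_PER_WORKER = 20

def timed_get(_):
    """Issue one GET and return its latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(TARGET, timeout=10) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

def run():
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        latencies = list(pool.map(timed_get, range(WORKERS * REQUESTS_PER_WORKER)))
    latencies.sort()
    print(f"requests: {len(latencies)}")
    print(f"median:   {statistics.median(latencies):.1f} ms")
    print(f"p95:      {latencies[int(len(latencies) * 0.95)]:.1f} ms")

if __name__ == "__main__":
    run()
```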

By straddling the cloud-enterprise ecosystem, HP may be able to bring more value to channel partners and end users -- especially SMBs -- than by trying to build the whole cloud first and putting in services later. It's the ecosystem of services, after all, not the location of them, that matters most.


Sunday, February 5, 2012

San Francisco Conference observations: Enterprise transformation, enterprise architecture, SOA and a splash of cloud computing

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.

By Chris Harding, The Open Group

This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation which, in simple terms, means changing how your business works to take advantage of the latest developments in IT.

Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennial generation. True to type, my server pulled out a cellphone with a device attached, through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.

Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman's Wharf. No lengthy phone negotiations with the Maitre d'. We were just connected with the resource that we needed, quickly and efficiently.

The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.

Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, "Don't small companies need architecture too?" Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.

Corporations don't come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resource through a private cloud platform.

The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.

My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had thought, five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.

Interest in SOA continues

But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to using TOGAF for SOA.

We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: Zapthink's Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).
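
For readers new to the idea, here is a minimal sketch of a RESTful resource -- resources named by URI, a uniform GET/PUT interface, no server-side session state -- the property set that makes REST attractive as a cloud foundation. The resource names and port are invented for the example.

```python
# Illustrative only: a minimal RESTful resource to ground the REST discussion.
# Uniform interface (GET/PUT), resources named by URI, no session state.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

RESOURCES = {"/widgets/1": {"name": "example widget"}}  # toy in-memory store

class RestHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = RESOURCES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not found"}).encode())

    def do_PUT(self):
        # The representation in the request body becomes the new resource state.
        length = int(self.headers.get("Content-Length", 0))
        RESOURCES[self.path] = json.loads(self.rfile.read(length))
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RestHandler).serve_forever()
```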

In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an "unconference" where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.

The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these - the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.

Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.

So now I'm on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It's just a shame that they can't manage a decent cup of coffee.

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.


Wednesday, February 1, 2012

EMC's Hadoop strategy cuts to the chase

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

To date, Big Storage has been locked out of Big Data. It’s been all about direct attached storage, for several reasons. First, Advanced SQL players have typically optimized architectures through data structure (using columnar), unique compression algorithms, and liberal use of caching to juice response times over hundreds of terabytes. On the NoSQL side, it’s been about cheap, cheap, cheap along the Internet data center model: have lots of commodity stuff and scale it out. Hadoop was engineered exactly for such an architecture; rather than speed, it was optimized for sheer linear scale.

Over the past year, most of the major platform players have planted their table stakes with Hadoop. Not surprisingly, IT household names are seeking to somehow tame Hadoop and make it safe for the enterprise.

Up 'til now, anybody with armies of the best software engineers that Internet firms could buy could brute-force their way to scaling out humongous clusters and, if necessary, invent their own technology, then share and harvest from the open source community at will. That is hardly a suitable scenario for the enterprise mainstream, so the common thread behind the diverse strategies of IBM, EMC, Microsoft, and Oracle toward Hadoop has been, not surprisingly, to make Hadoop more approachable.

What’s been conspicuously absent so far is a play from Big Optimized Storage. The conventional wisdom is that SAN and NAS are premium, architected systems whose costs might be prohibitive when you talk petabytes of data.

Similarly, the first-generation implementations from the NoSQL world have operated on a different philosophy, one that assumed parts would fail and that five-nines service levels were overkill. And anyway, the design of Hadoop brute-forced the solution: replicate so that three unique copies of the data are distributed around the cluster, because hardware is cheap.
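
The arithmetic behind "hardware is cheap" is worth a moment: with Hadoop's default replication factor of three, usable capacity triples into raw disk before any scratch space is counted. A back-of-envelope sketch, with hypothetical inputs:

```python
# Back-of-envelope arithmetic for Hadoop's default three-way replication:
# usable data times the replication factor gives raw disk needed. The
# numbers below are hypothetical inputs, not benchmarks.
def raw_capacity_tb(usable_tb, replication=3, temp_overhead=0.25):
    """Raw cluster disk needed for a given amount of usable data.

    temp_overhead reserves scratch space for MapReduce intermediates
    (a common rule of thumb, not an HDFS requirement).
    """
    return usable_tb * replication * (1 + temp_overhead)

print(raw_capacity_tb(100))  # 100 TB of data -> 375.0 TB of raw disk
```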

As Big Data gains traction in the enterprise, some of it will certainly fit this pattern of something being better than nothing, as the result is unique insights that would not otherwise be possible. For instance, if your running analysis of Facebook or Twitter goes down, it probably won’t take the business with it. But as enterprises adopt Hadoop – and as pioneers stretch Hadoop to new operational use cases such as what Facebook is doing with its messaging system – those concepts of mission-criticality are being revisited.

And so, ever since EMC announced last spring that its Greenplum unit would start supporting and bundling different versions of Hadoop, we’ve been waiting for the other shoe to drop: When would EMC infuse its Big Data play with its core DNA, storage?

Today, EMC announced that its Isilon networked storage system was adding native support for Apache Hadoop’s HDFS file system. There were some interesting nuances to the rollout.
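
The reason a storage array can slot in natively is that HDFS clients address the file system by URI, so the same client commands work whether the answering end is a NameNode/DataNode farm or a NAS head. A hedged sketch, with placeholder host names:

```python
# Illustrative only: HDFS clients address the file system by URI, which is
# what lets a NAS head with native HDFS support answer in place of a
# DataNode farm. Host names below are hypothetical placeholders.
import subprocess

# Against a conventional cluster, the URI names the NameNode...
subprocess.run(["hadoop", "fs", "-ls", "hdfs://namenode.example.com:8020/data"])

# ...while with native HDFS support on the storage array, the same client
# command can point at the array's network name instead.
subprocess.run(["hadoop", "fs", "-ls", "hdfs://isilon.example.com:8020/data"])
```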

Big vendors feeling their way

It’s interesting to see how IT household names are cautiously navigating their way into unfamiliar territory. EMC becomes the latest, after Oracle and Microsoft, to calibrate their Hadoop strategy in public.

Oracle announced its Big Data appliance last fall, before it had lined up its Hadoop distribution. Microsoft ditched its Dryad project built around its HPC Server. Now EMC has recalibrated its Hadoop strategy: when first unveiled last spring, the spotlight was on MapR's proprietary alternative to the HDFS file system of Apache Hadoop. It's interesting that vendors' initial announcements have either been vague or have been tweaked as they've waded into the market. More about EMC's shift below.


For EMC, HDFS is the mainstream

MapR’s strategy (and IBM’s along with it, regarding GPFS) has prompted debate and concern in the Hadoop community about commercial vendors forking the technology. As we’ve ranted previously, Hadoop’s growth will be tied not only to the megaplatform vendors that support it, but also to the third-party tools and solutions ecosystem that grows around it.

For such a thing to happen, ISVs and consulting firms need to have a common target to write against, and having forked versions of Hadoop won’t exactly grow large partner communities.

Regarding EMC, the original strategy was two Greenplum Hadoop editions: a Community Edition with a free Apache distro and an Enterprise Edition that bundled MapR, both under the Greenplum HD branding umbrella. At first blush, it looked like EMC was going to earn the bulk of its money from the proprietary side of the Hadoop business.

What’s significant is that the new announcement of Isilon support pertains only to the HDFS open source side. More to the point, EMC is rebranding and subtly repositioning its Greenplum Hadoop offerings: Greenplum HD is the Apache HDFS edition with the optional Isilon support, and Greenplum MR is the MapR version, niche-targeted toward advanced Hadoop use cases that demand higher performance.

Coming atop recent announcements from Oracle and Microsoft, which have come out clearly on the side of OEM’ing Apache rather than anything limited or proprietary, this amounts to an unqualified endorsement of Apache Hadoop/HDFS as not only the formal, but also the de facto, standard.

This reflects emerging conventional wisdom that the enterprise mainstream is leery about lock-in to anything that smells proprietary for technology where they still are in the learning curve. Other forks may emerge, but they will not be at the base file system layer. This leaves IBM and MapR pigeonholed – admittedly, there will be API compatibility, but clearly both are swimming upstream.

Central storage is the newest battleground

As noted earlier, Hadoop’s heritage has been the classic Internet data center scale-out model. The advantage is that, leveraging Hadoop’s highly linear scalability, organizations could expand their clusters quite easily by adding more commodity servers and disks. Pioneers or purists would scoff at the notion of an appliance approach, because it was always simply scaling out inexpensive, commodity hardware rather than paying premiums for big vendor boxes.

In blunt terms, the choice is whether you pay now or pay later. As mentioned before, do-it-yourself compute clusters require sweat equity – you need engineers who know how to design, deploy, and operate them. The flipside is that many, arguably most, corporate IT organizations either lack the skills or the capital. There are various solutions to what might otherwise appear a Hobson’s Choice:

  • Go to a cloud service provider that has already created the infrastructure, such as what Microsoft is offering with its Hadoop-on-Azure services;
  • Look for a happy, simpler medium such as Amazon’s Elastic MapReduce on its DynamoDB service (a minimal launch sketch follows this list);
  • Subscribe to SaaS providers that offer Hadoop applications (e.g., social network analysis, smart grid as a service) as a service;

  • Get a platform and have a systems integrator put it together for you (key to IBM’s BigInsights offering, and applicable to any SI that has a Hadoop practice)
  • Go to an appliance or engineered systems approach that puts Hadoop and/or its subsystems in a box, such as with Oracle Big Data Appliance or EMC’s Greenplum DCA. The systems engineering is mostly done for you, but the increments for growing the system can be much larger than simply adding a few x86 servers here or there (Greenplum HD DCA can scale in groups of 4 server modules). Entry or expansion costs are not necessarily cheap, but then again, you have to balance capital cost against labor.
  • Surrounding Hadoop infrastructure with solutions. This is not a mutually exclusive strategy; unless you’re Cloudera or Hortonworks, which make their business bundling and supporting the core Apache Hadoop platform, most of the household names will bundle frameworks, algorithms, and eventually solutions that in effect place Hadoop under the hood. For EMC, the strategy is their recent announcement of a Unified Analytics Platform (UAP) that provides collaborative development capabilities for Big Data applications. EMC is (or will be) hardly alone here.
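
To make the cloud-service option at the top of the list concrete, here is a minimal sketch of launching a Hadoop cluster through Amazon EMR's API, shown via boto3, a present-day client (the original-era Ruby CLI differed). The instance types, counts, release label, and role names are assumptions, and the default EMR roles must already exist in the account.

```python
# Illustrative only: launching a managed Hadoop cluster on Amazon EMR.
# Names, counts, and the release label are hypothetical.
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="toy-hadoop-cluster",
    ReleaseLabel="emr-6.15.0",
    Applications=[{"Name": "Hadoop"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,            # one master, two core nodes
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    JobFlowRole="EMR_EC2_DefaultRole",  # default roles must already exist
    ServiceRole="EMR_DefaultRole",
)
print("cluster id:", response["JobFlowId"])
```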

With EMC’s new offering, the scale-up option tackles the next variable: storage. This is the natural progression of a market that will address many constituencies, and where there will be no single silver bullet that applies to all.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.


Tuesday, January 31, 2012

Enterprise architects play key role in transformation, data analytics value -- but they need to act fast, say Open Group speakers

Good data management, analytics, and helping to shape the goals of the business are keys to transforming the enterprise through impactful enterprise architecture (EA). That was the theme, from different perspectives, presented by a series of plenary speakers this week at The Open Group Conference in San Francisco.

Jeanne Ross, Director and Principal Research Scientist at MIT's Center for Information System Research, opened Monday's plenary session, telling the attendees that the stakes are high for EA, which needs to show swift success in the new digital economy. Enterprise architects also now need to help their organizations better use new services and instill a "value cycle." [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Coming from the siloed past in IT, companies are now moving to business service-driven processes across various resources, Ross said. But they need to recognize the forces around consumption of such services, not just the implementation.

Making good data management -- a "single source of truth" -- a priority is also at the heart of making EA valuable, said Ross. Ensuring the quality of data and the speed of data refresh will do more to raise the standing of enterprise architects than just about anything else, she said. Ross studies how firms develop competitive advantage through the implementation and reuse of digitized platforms.

She is also the co-author of three books: IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, and IT Savvy: What Top Executives Must Know to Go from Pain to Gain.

I also interviewed Ross on enterprise transformation issues before the conference.

IT-enablement isn't enough, Ross said, because companies typically under-utilize new systems and applications. It's not that we can't build them, she said of systems, but that companies aren't using them to their potential. Architects need to consider this and then market and evangelize solutions.

And EAs need to be more involved with making quality data center stage in their companies. "You don't get good analytics with bad data," Ross said. "The secret to good EA is to put information in every person's hands so they can use data better." And that in turn will help transform the business and spur added innovation using IT systems and good architecture principles.

Most senior executives aren't very good at combining business and technology strategies, Ross said, and she outlined the architect's elevated role in helping their bosses deliver increased business value:
  • Help senior execs clarify business goals
  • Identify architectural capabilities that can be readily exploited
  • Present options and their implications for business goals
  • Build capabilities incrementally
She closed out, getting applause from the audience, by predicting, "Some day CIOs are going to report to the enterprise architect, because that's the way it ought to be."

Impressive cost reduction

The second plenary speaker, Celso Guiotoko, Corporate Vice President and CIO of Nissan Motor Co., Ltd., told how business value sits at the top of Nissan's IT principles, with information as an asset next, followed by reducing complexity.

Using these principles, Nissan in 2005 developed "BEST" as an IT mid-term plan and significantly improved the efficiency of its information systems. BEST is an acronym for business alignment, EA, selective sourcing, and technology simplification.

This was followed in 2009 with the development of the "Change" program, which provided the basis for further advances by changing people, technology, and "process." And, in 2011, the next IT mid-term plan "VITESSE" was launched, designed to bring direct profit to the company. VITESSE encompasses value, innovation, technology, simplification, and service excellence. Through the various initiatives, Nissan has reduced IT cost by over 40 percent, going from a cost per user of $1.09 to $0.63.

The transformed enterprise

Andy Mulholland, Global Chief Technology Officer and Corporate Vice President at Capgemini, focused on the transformed enterprise and cloud trends, as well as the effect of new devices and social networking. Forty million tablets and 70 million smartphones are having a huge impact on how workers and consumers expect to work and shop.

The "bring your own device" phenomenon is forcing a change in thinking for enterprises, Mulholland said, as two environments are developing -- inside IT and outside IT. Typically back-end activities operate inside the firewall, while front-end people and activities operate outside the firewall, yet people nowadays want to be able to use smartphones and tablets for both personal and work tasks.

This has led to a situation in which workers are increasingly going outside IT to buy services. Mulholland quoted a Gartner prediction that up to 35 percent of IT expenditures will be outside the IT department by 2015. Other industry analysts like IDC have placed the figure higher.

Because of this, IT faces a huge “re-integration project” to bring together the inside and outside services in a rational way, Mulholland said, adding that the transformed enterprise needs to focus on the productivity of people and innovative business models.

I interviewed Mulholland a few weeks ago and we delved even deeper into the cloud duality issues now coming to the fore of enterprise technology issues and planning. I was also intrigued by a Wall Street Journal piece today on how the US faces a new tech boom. It was aligned with much of what Mulholland was saying.

The key to doing this “re-integration project,” according to Mulholland, is governance, and the industry really lacks a good cloud governance model, meaning that many businesses are already in trouble. However, enterprises shouldn't let that get in the way of progress. Mulholland advised, "If business wants something radically different from you, don't try to stop it. Try to understand it and take control of it."

Driving IT transformation

Lauren States, Vice President and Chief Technology Officer, Cloud Computing and Growth Initiatives, IBM, emphasized that transforming the enterprise requires a huge emphasis on analytics, and a successful integration of analytics and IT.

States drew on IBM's decades-long journey of constant transformation, relying on business process excellence, values-based culture, and IT-enablement. This has led to $1.5 billion in IT savings since 2005 as well as avoiding over $20 million in expenses over five years with a private analytics cloud, she said.

According to States, CMOs are overwhelmingly underprepared for the data explosion; they recognize the need to invest in and integrate technology and analysis, and to treat analytics as a business differentiator.

CEOs and CIOs are both highly focused on insights, clients, and people skills, States said, feeding into what she called the "new reality," the need to harvest and pass insights and build trusted relationships.

States' takeaway: We're at the beginning of a major change, much like the PC revolution three decades ago. The cloud's sweet spot now, she says, is in bringing new innovation and insights to marketing, sales and customer service.

No need to wait

Speaker Bill Rouse, executive director of the Tennenbaum Institute at Georgia Tech, said that many enterprises wait too long to change, with the decision to transform dragging on until the damage is beyond repair. As evidence, he said that in the past 25 years, 1,000 companies have dropped from the Fortune 500 list -- showing that enterprise transformation has a high failure rate, and that waiting for the right time to change is a risky business plan.

Moreover, enterprises seeking transformation need to look at the full ecosystem a business operates in to effectively transform, said Rouse. Business ecosystems are co-creating high-value services, expanding transformation across supply chains. This is an important new dimension, he added.

Using analytics better to support evidence-based decision making is transformative and should be a priority, says Rouse. And architecture-oriented thinking can be transformative in itself, he said.

Cyber security threats

On the topic of cyber security, plenary speaker Joseph Menn, cyber security correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet, made it clear that business as usual won't do.

Menn has covered security since 1999, now for the Financial Times and before that for the Los Angeles Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning's Napster. I also recently interviewed him.

"It's in no one's interest to tell us how bad it really is" when it comes to cyber crime and security, said Menn. And the Stuxnet affair is huge as a harbinger of things to come, he said.

As a result, more taxpayer money will be needed for effective government-level defenses against cyber attacks, he suggested. But government intervention won't do the job alone. Increasingly, corporations will need to play more than just defense on attacks, many of which come from Russia and China and from groups that blend state and criminal interests.

Counterattacks may be a strong defense when it comes to cyber risks, and the US government may "turn a blind eye," said Menn. We may even see cybercrime bounty hunters that corporations hire on the QT to go after those who attack them, he said.

Meanwhile, IT groups and enterprise architects can play a bigger role. Knowing what you have helps you know when something has been taken, so improve tracking of assets, Menn told them. He also suggested that companies keep their most critical data offline, and protect their intellectual property by burying it in and among fake data.

Allen Brown, President and CEO of The Open Group, said that more than 400 corporations are now members of The Open Group, showing strong growth over the past 12 years. TOGAF 9 certification rates are growing rapidly worldwide, he said.

FACE standard

In other news from The Open Group on Monday, the Future Airborne Capability Environment (FACE) Consortium announced the official release of the FACE Technical Standard, which provides guidelines for creating a common operating environment to support applications across multiple Department of Defense avionics systems. See my interview on FACE as it was just getting under way.

The standard is designed to enhance the U.S. military aviation community’s ability to address issues of limited software reuse and accelerate and enhance warfighter capabilities, as well as enabling the community to take advantage of new technologies more rapidly and affordably.

It is our hope this standard will accelerate the open and secure development of products within the Department of Defense’s Airborne community by enabling industry-government collaboration.

The FACE technical standard will enable developers to create and deploy a wide catalog of applications for use across the spectrum of military aviation systems through a common operating environment. Product development efforts by industry and procurements by government customer organizations are already underway based on the FACE standard.

“The introduction of the FACE Technical Standard is an important milestone in extending interoperability among the armed forces and creating a common platform for avionics that enables systems to work together across each of the branches of the U.S. military,” said Brown.

And on Tuesday, The Open Group announced the arrival of ArchiMate 2.0, the latest version of the organization's open and independent modeling language for enterprise architecture. This version is more tightly aligned to TOGAF, so enterprise architects using the language can improve the way key business and IT stakeholders collaborate and adapt to change.

ArchiMate 2.0 improves collaboration through clearer understanding across multiple functions, including business executives, enterprise architects, systems analysts, software engineers, business process consultants and infrastructure engineers, according to the release. The new standard enables the creation of fully integrated models of an organization's Enterprise Architecture, the motivation behind it, and the programs, projects and migration paths to implement it.

"By combining TOGAF and ArchiMate, TOGAF becomes more easy to apply in any organization," said Harmen van den Berg, partner and co-founder at BiZZdesign. "Having a reference model makes them both easier to apply in any industry or vertical."

He added: "Architects like to make models, and this now helps them to use those models to create change in the organization, for something that means more to the business."

Making the EA function a chief weapon of enterprise transformation in a time of roiling change and complexity, that's the main message from the conference. No time to wait.


Friday, January 20, 2012

CRM data integration provider Scribe boosts cloud offering with GUI synchronization services, developer program for connectors

Scribe Software, a customer relationship management (CRM) data integration provider, will launch next week Scribe Online Synchronization Services (SYS), the second major service delivered on the Scribe Online cloud integration platform.

According to the Manchester, NH-based company, Scribe Online provides a cloud-based alternative to integration middleware, and simplifies the integration experience without sacrificing performance or functionality. The goal is to allow companies to reap the benefits of integrated CRM data from a variety of sources and technologies in days, rather than months.

The timing is more than pretty good because CRM as a category is expanding, driven by businesses' recognition that rich data on customers (and partners) is essential for better productivity, and for leveraging cloud-enabled business innovation outside the company.

Many companies I speak with are looking to pull appropriate and relevant data in near real-time from many internal systems of record to augment the full picture of customers. They are looking to their CRM systems as the metadata repository of such integrated views. And now they want to bring in more data from more sources, including those outside their four walls.

And, of course, the power of knowing the most about customers -- and making the analysis from such data widely available to business units and functions across the enterprise -- can make or break a company. Across the full business cycle, relevant and insightful data on customers drives success, from product development to effective marketing, to help desk and support, to entering new markets.

Scribe, then, has developed its cloud offerings -- built on Microsoft Azure and released last year -- to make the instantiation of CRM data from as many sources as makes sense a function of the cloud, as well as on-premises. Such a hybrid approach to data integration makes even more sense than a hybrid approach to IT infrastructure services, if you ask me. You really need to be in the cloud to leverage the hybrid data integration benefits.

Now, Scribe has made it easier to leverage that cloud by making synchronization services for CRM data integration a drag-and-drop affair that many business users can accomplish. Furthermore, Scribe is releasing SPARK, a developer program to help foster a community effort around making more connections to more types of data available to more synchronization efforts.
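
As a rough sketch, one synchronization pass boils down to fetching records changed since a watermark and upserting them into the target, matched on a natural key. This is not Scribe's API; the endpoints, field names, and watermark scheme below are invented for illustration.

```python
# Illustrative only: the shape of a point-to-point sync pass like those a
# CRM synchronization service automates. Endpoints, field names, and the
# watermark scheme are hypothetical; real connectors speak each product's API.
from datetime import datetime, timezone

def fetch_changed(source, since):
    """Return records in `source` modified after the `since` watermark."""
    return [r for r in source if r["modified"] > since]

def upsert(target, records, key="email"):
    """Insert or update records in `target`, matched on a natural key."""
    index = {r[key]: r for r in target}
    for rec in records:
        index[rec[key]] = {**index.get(rec[key], {}), **rec}
    target[:] = list(index.values())

# One sync pass: pull changes from the source system into the CRM copy.
since = datetime(2012, 1, 1, tzinfo=timezone.utc)
erp_contacts = [{"email": "a@x.com", "phone": "555-0100",
                 "modified": datetime(2012, 1, 15, tzinfo=timezone.utc)}]
crm_contacts = [{"email": "a@x.com", "name": "Alice"}]

upsert(crm_contacts, fetch_changed(erp_contacts, since))
print(crm_contacts)  # Alice's record now carries the newer phone field
```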

“Synchronization Services builds on our commitment to deliver superior CRM integration to customers and partners in the cloud. SYS fills a void in the market for an integration tool that is affordable and easy to use,” said Lou Guercia, president and CEO of Scribe. “Until now, integration products have been either too basic or too complex.”

Developer program

Scribe, with the SPARK Solution Developer Program, is targeting software-as-a-service (SaaS) providers, channel partners, systems integrators, VARs, and other business technology consultants. This means that while enterprise IT departments are gearing up for hybrid cloud-based CRM integrations, the community of ISVs and VARs needs to move more quickly, to innovate and expand into new models.

The SPARK Solution Developer Program is designed to help solution providers quickly build data integration capabilities between their solutions and CRM, as well as any other application or endpoint on Scribe Online. This will fit very well into the Salesforce.com ecosystem, and the Microsoft Dynamics one as well.

Scribe expects that partner networks will share and extend customer data -- and value-added services on top of that joined and integrated data -- for a variety of additional business services, said Guercia. Integrated and automated marketing services providers like HubSpot, Marketo, and Eloqua certainly come to mind, too.

“CRM is no longer just a contact management system. It’s a critical revenue enabler for the business. Companies that integrate customer data from all areas of the business benefit with increased sales and satisfied customers,” said Roger Hodskins, vice president of strategic alliances at Scribe.

Using Scribe's latest offering, SaaS independent software vendors (ISVs) who offer integration to more than one CRM vendor can extend their presence in multiple CRM markets. As customers expand the scope of CRM in their businesses, integration can readily incorporate the SaaS ISVs’ offerings with connections both to CRM and to other complementary applications, said Scribe.

For more information on Scribe SYS, sign up for live weekly webinars or watch a four-minute demo video at scribesoft.com/online. Scribe Online SYS is also available free for 15 days at scribesoft.com/Free-Trials.


Thursday, January 19, 2012

Expert Chat on how HP ecosystem provides holistic support for VMware virtualized IT environments

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

Redefine the potential of your virtualization investments.
View the full Expert Chat presentation on VMware support best practices.

Advanced and pervasive virtualization and cloud computing trends are driving the need for a better, holistic approach to IT support and remediation.

And while the technology to support and fix virtualized environments is essential, it’s the people, skills, and knowledge to manage these systems that provide the most decisive determinants of ongoing performance success.

In a special BriefingsDirect sponsored podcast, created from a recent HP Expert Chat discussion on best practices for VMware environment support, HP experts explain how they have made the service and support of global virtualization market leader VMware a top priority.

For example, Cindy Manderson, Technical Solutions Consultant for Complex Problem Resolution and Quality for VMware Products at HP, provides case studies for how managed escalation and multi-vendor support around the globe can reduce downtime by 70 percent, with large ROI benefits as well.

Other HP experts in the discussion include Pat Lampert, Critical Service Senior Technical Account Manager and Team Leader, as well as Sumithra Reddy, HP Virtualization Engineer. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP and VMware are both sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Virtualization isn’t just server-by-server, but really impacts the entire data center. You need to think about it more holistically, particularly in regard to things like security, performance and how your brands and businesses are perceived across the globe. Many of the companies that I deal with day in and day out are up at 80 percent and even 90 percent virtualized.

When they think about virtualization, they go beyond just server virtualization. It’s really now looking at storage, applications, networks and even the end-user desktop experience, or desktop as a service (VDI).

Successful virtualization is no longer just about servers; it's about managing complexity when you get beyond the 20 percent or 30 percent level and expand into converged infrastructure virtualization without failures.

So how to take advantage of the best things about virtualization? Part of that means allowing your IT team to have access to other experienced support teams, from HP and VMware, around the world, 24×7, to help keep systems up and running. Such support also allows your IT team to progress, to learn as they go, and to be able to take advantage of more virtualization benefits over time.

Expert panel

So how do you go about attaining such benefits? How do you keep the positive side of virtualization on track? And how do you put in place an insurance policy around service and support?

Manderson: We have several different packages. Our highest level is the mission-critical one. In this particular process, you're assigned a team that spans the technologies you have in your environment. But you also get a set of folks who look at not just the reactive support, and even some of the proactive, but at how your entire business is running according to the ITIL standard.

That is coupled with keeping you up and running, and we also can work with you on a type that would be best suited for your environment.

Our critical service and independent support packages include onsite resources from HP, along with a lot of proactive support. In addition, they're more focused on specific management -- more of an ITSM approach. We can look at that for you.

... We also have the hardware and software support. One of the cool things we have with our hardware support is support automation, our Insight remote support. It can notify HP that you're having a disk drive failure. Or we will call you and say that we know a disk drive is failing, or that something on a server or storage is about to fail.

You can even take that a step further to look inside at the Windows operating system. We're hardware agnostic on that operating system. We don't care about the vendor -- and I believe we are looking at expanding that automation to other operating systems. We have installation and startup services, where we can actually go out and set up and configure the hardware and software at a site.

So we definitely integrate across all the multi-vendor services. We run the gamut between all the x86 operating systems, as well as our proprietary operating systems, our servers and storage. Again, we're no stranger to multi-vendor support and keeping the entire environment up and running.

... One of our most creative services is Proactive Select, a product built around a series of credits. You can use these credits for planning a migration or an upgrade, or for consulting time. You may need some performance work, or some type of environmental assessment, and the credits can be used for that, too.

Gardner: When people do employ these services, how do they measure what the payoff is, the value of these services?

IDC study

Manderson: In 2010, IDC did a study -- they went out and looked at the methodology, and it's out on our website. They saw that customers who have the mission-critical services reduce their downtime by over 70 percent and see quite a high return on investment (ROI), over 400 percent. The main benefit was in problem management, as well as in help desk calls, because these were alleviated due to the proactive nature of the services, a lot of looking at the entire environment, and looking at the business processes.

So take a look at the study; it shows IDC's methodology. Looking at things proactively, with these support processes, can certainly help you reduce that downtime.

... I've been in the multi-vendor space for many, many years -- from applications to operating systems -- all with HP.

In 2002, when VMware came on the scene, HP became alliance partners with them. In 2003, we became a reseller, and thus began our support partnership. Then, in 2005, we also became an OEM.
We have thousands of trained and certified Microsoft engineers and Linux professionals, too. But we have the largest number of VMware-certified professionals, and we also have the largest global VMware off-site training center. So HP does education on these technologies as well. We’ve trained over 20,000 students in the VMware space alone.

And we have had this very strong collaboration with VMware for many years, and have support teams around the globe. In addition, our engineers receive the same training that VMware support engineers do. We actually go to their facilities and train right alongside them, too.

We further do this training virtually. The training is then recorded and made available on demand for reference, for folks who are not able to attend a scheduled course. There's definitely a very strong partnership, and as you see from our history with the other vendors as well as VMware, we are no strangers to multi-vendor support.

With all of the VMware products that HP sells, we provide support across them all. It runs the gamut from the vSphere operating system that installs on the x86 server, through enterprise management with vCenter, to virtual desktop infrastructure products like VMware ThinApp. We also support the Converter product, on up to vCloud Director.

In addition to that, we have the ability to access our peers on the other teams across HP hardware support. This includes servers and storage, and our networking chain. We are quickly able to collaborate with them and pull together a virtual team to focus on the customer's whole environment, to provide a one-stop shop.

Expertise across technologies

Additionally, you saw that we’ve been in this multi-vendor support business for so many years, with many experts across the other technologies, such as Microsoft and Linux. Of course, the virtual machines (VMs) are running these operating systems. So if the contract is also with them, we can easily pull them in to help us work an end-to-end solution and support it.

Gardner: Let’s think about what happens when there are different levels of support at work. How does that shake-out?

Manderson: We're in a reactive support business. If the customer has a problem, they can either call in at their local region telephone number -- whether they are in America, Europe, or Asia Pacific. There are different phone numbers for them to call.

They can also log in via the web, and they'll get to our next available Level 1 engineer. They're a great organization and solve over 85 percent of their cases.

If they have issues where they have to escalate, first they will be collaborating with us. We also have an online chat tool, where we are all in a virtual room, the Level 1 engineers, Level 2 engineers, etc. So we’ll be consulting and collaborating with them before they even get to a point of escalation.

If the case does end up needing escalation, chances are the engineer they're already collaborating with will end up taking that case. That saves a lot of information transfer, as far as what type of server you have, what the firmware is, what the build level is, what the problem is, and so on.

Once it reaches Level 2 support, we can continue to collaborate; we can reach our teammates on the hardware teams, too, so we can look at the server and make sure that the environment is what we need it to be. If we can't resolve it, we can also go to Level 3 with VMware, at a service-partner level.

We have a great relationship with the folks we work alongside and escalate calls to at VMware. We're obviously not going into Level 1 at VMware, because we've already done all that work, and we are a service partner. Cases go right up to our peers over at VMware, and then we work together, while always owning the solution that we provide back to the customer.

Another part of our infrastructure as a support organization is that we have a single customer database. I can give an example. A call came in to our Level 1 French engineer. When this call came in, it was already the end of the day for the European folks, and the customer could not speak English. It was a critical down; their VMs were offline.

HP Virtual Room


So we worked in a virtual room, they talked to us, and the case was brought to us here in the Americas' time zone. We worked this case with another tool called HP Virtual Room, where we could actually all look at the customer's desktop in real time. They happened to have EVA storage, and we quickly got an EVA engineer engaged. Of course, we had to find a resource in the Americas, because the European folks had already left. So we were all looking in real time at the customer's environment, and found out that they had locked the storage.

The EVA engineer helped get the storage back online while we all watched, and the French engineer translated for the customer in order to get it all resolved. We got it back online, and the customers were ready to go home.

We gave instructions on getting log files and placed a follow-up call for the daytime hours in Europe the next day. Our counterparts on the European support teams picked that up and worked with the customer to resolution, to analyze exactly what happened and prevent it in the future.

We have another process in HP that we can bring in on tough cases, our escalation manager process. I was the lead resource for a particular case where we had a field team assisting a customer deploying a virtual desktop infrastructure (VDI) design. They had a third-party VDI vendor. They had HP hardware, servers, and Virtual Connects. They had our storage, and we didn't quite know where the bottleneck was. They were having performance issues, trying to run this VDI at two different locations with the hardware at one site.

The escalation manager was able to get the local office to borrow equipment, and then to get performance and network traces. They had the Engineering Problem Management Resource (EPMR) lab in Houston trying to duplicate the problems.

Our escalation manager was able to drive the issue to completion across not only the solution teams, but the local office, owning the actual escalation with all the action items to keep this all on track. We knew where we were going. That was about a six-month case, but what we finally found was that the customer was on the technological edge, and the "pipe" to deliver that performance just did not exist.

Site visits

Pat Lampert is a technical account manager and does site visits. The technical account managers do go out on site, so we're aware of the environment. We have the information about your environment documented in the database. When you call, we're not saying, "Now what kind of server is this? What's the firmware?" We know, because we already have it documented. We could be calling you to say, "Server 3 is running a little off." We already know which VMware version it is on, because we have that information.

And because we have that, we can also offer proactive advice. We can know that there's a new firmware update, or VMware just came out with a new build, and we have a place where you can go find the latest that's specific to your environment. So this helps to reduce further incidents, because we can be more proactive to help you maintain your business.

Gardner: What are some of the most frequent questions you receive from the field?

Reddy: I'll address two questions that frequently show up. One is, what is the difference between the VMware ESXi image and an HP ESXi image?

Basically, HP takes the same ESXi image that VMware provides to customers. It then adds thin HP components for hardware management, along with the latest Fibre Channel and network drivers. Once it's tested and certified, it's available for download from both the HP and VMware websites.

Major differences

And one of the major differences between the two images is that the VMware image is disk installable only, whereas the HP image can be installed on a disk, a USB key, or an SD card.
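
One way to picture the relationship is as a set difference over package manifests: the HP image is the VMware base plus OEM additions. A toy sketch follows; the package names are invented stand-ins, not actual VIB lists from either vendor.

```python
# Illustrative only: the OEM image as "base plus additions," expressed as a
# set difference over package manifests. Package names are invented.
vmware_base_image = {"esx-base", "esx-tboot", "net-e1000", "scsi-mptsas"}
hp_image = vmware_base_image | {
    "hp-cim-providers",   # hardware health/management components
    "hp-ams",             # agentless management service
    "net-latest-driver",  # newer NIC driver rolled in by the OEM
}

print("HP additions:", sorted(hp_image - vmware_base_image))
print("Common base: ", sorted(hp_image & vmware_base_image))
```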

The other question we're getting nowadays is how to upgrade from VCA4 to VCA5. As with any major upgrade, planning helps. The first thing I would do is understand the differences between ESX 4 and ESX 5, because starting with ESX 5 we have no service console. So we need to understand what the architectural differences are.

Also learn about the new licensing policies. Then, use the System Analyzer that VMware provides to evaluate the current environments, and download, check, and complete the checklist. Once this is done, hopefully the upgrade will go smoothly.

Lampert: Another question that has come up from customers has to do with the added value of getting support directly from HP. It was partly addressed during the presentation we just gave. First of all, VMware does have a fine support organization. I have a couple of friends who work in VMware Support, and they do a good job of supporting their product.

HP, in addition to a similar level of expertise in the product, also offers expertise in HP hardware, especially if you have systems based on HP Blades. The infrastructure behind that is often tied very closely to the performance and availability of your ESX host. So when you call us, you will have someone who is not only very familiar with the VMware product, but also familiar with the HP hardware and able to pull in the proper resources for problems you might encounter running vSphere on HP hardware.

In addition to that, we have a partnership agreement with VMware, and when you call in for support through HP, you're getting that same level of service when we have to go to VMware to get answers to questions or fixes.

One other question that has come up is about our labs' ability to reproduce problems. We have two global labs, one in India and one in the United States. We have several static vSphere cluster configurations, with a number of different types of servers already in those configurations, and the ability, when needed, to add specific models if there is a problem that's specific to a particular Blade or rack-mounted server model, or a particular card, or something like that. So we're quite able to reproduce most problems that come in. We even have some Dell and IBM equipment in our lab.

Gardner: What other issues are users grappling with?

Reddy: One question I can answer is how to troubleshoot server crashes. When something goes wrong in ESX, we call it the "Purple Screen of Death." Often, these are the results of hardware failure, but we still need to rule out the software. So we collect all the logs and look at them to see if it's a software issue. If it's not a software issue, then we engage the hardware team to see how we can get to the root cause and fix the issue.
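
As a purely illustrative aid to "rule out the software first," here is a toy triage pass that scans a vmkernel log for hardware-versus-software signatures. The signature strings and log path are examples only, not an authoritative diagnostic list or HP's actual tooling.

```python
# Illustrative only: a toy triage pass over a vmkernel log. The signature
# strings are examples, not an authoritative list.
import re

HARDWARE_HINTS = re.compile(r"MCE|NMI|Uncorrectable|IPMI", re.IGNORECASE)
SOFTWARE_HINTS = re.compile(r"PF Exception|Panic|ASSERT", re.IGNORECASE)

def triage(log_path):
    """Bucket suspicious log lines and venture a crude hardware/software verdict."""
    hw, sw = [], []
    with open(log_path, errors="replace") as log:
        for line in log:
            if HARDWARE_HINTS.search(line):
                hw.append(line.rstrip())
            elif SOFTWARE_HINTS.search(line):
                sw.append(line.rstrip())
    verdict = "suspect hardware" if len(hw) >= len(sw) else "suspect software"
    return verdict, hw[:5], sw[:5]

if __name__ == "__main__":
    print(triage("/var/log/vmkernel.log"))
```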

Lampert: To dovetail with Sumithra's comment there, one of the questions I get frequently is what to do if you don't have a dump. Say the host hangs; that seems to be almost more common than the Purple Screen of Death. Some customers aren't aware that through HP's Integrated Lights-Out management, there is the ability to generate a non-maskable interrupt (NMI) just by pressing a button, having saved a certain environment variable ahead of time on your ESX host.

KB article

There is a KB article on this, by the way, if you just search on NMI and core dumping in VMware. But with that setup, you can force a dump while a system is in a hung state, and that will usually assist us in troubleshooting and isolating what caused the hang, whether it's the hardware or a problem with the ESX host software.

One question that came up ahead of time is what HP suggests as far as getting a handle on an inventory of VMs. I happened to be involved in field testing some new tools from HP, regarding vSphere, that will be available in January and February.

One of them is a holistic Blade and firmware analysis that takes into account the VMware environment on our Blade systems, which we are working on having ready soon. We have just completed field tests.

And the second is a really nifty Inventory Report HP has just put together. We're just completing field tests on that now. It will be available soon. Basically, we install a small Perl script in the customer environment on any machine that has access to the vCenter host and has a vSphere CLI installed.

This Perl script crawls through the VMware environment and builds an XML file, which we then feed into a report generator here at HP. It can be used by us to gather information on customers, so we have a clear picture of the environment ahead of time. But it will also be sold as a service to customers.
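
For flavor, here is what such a crawler's output might look like, sketched with the Python standard library rather than Perl, and with hard-coded records standing in for vCenter data. The element and attribute names are hypothetical, not HP's actual schema.

```python
# Illustrative only: building an inventory XML file of the kind a vCenter
# crawler might emit. Element and field names are hypothetical.
import xml.etree.ElementTree as ET

records = [  # (cluster, host, vm, memory_mb) -- stand-ins for vCenter data
    ("ClusterA", "esx01.example.com", "web01", 4096),
    ("ClusterA", "esx01.example.com", "db01", 8192),
]

root = ET.Element("inventory")
for cluster, host, vm, mem in records:
    vm_el = ET.SubElement(root, "vm", cluster=cluster, host=host, name=vm)
    ET.SubElement(vm_el, "memoryMB").text = str(mem)

# Write the file that would be fed to the downstream report generator.
ET.ElementTree(root).write("inventory.xml", xml_declaration=True)
print(ET.tostring(root, encoding="unicode"))
```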

The report is really quite nice, with all sorts of charts showing availability of machines, memory, and disk space.