Thursday, April 12, 2012

SAP and databases no longer an oxymoron

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In its rise to leadership of the ERP market, SAP shrewdly placed bounds around its strategy: it would stick to its knitting on applications and rely on partnerships with systems integrators to get critical mass implementation across the Global 2000. When it came to architecture, SAP left no doubt of its ambitions to own the application tier, while leaving the data tier to the kindness of strangers (or in Oracle’s case, the estranged).

Times change in more ways than one – and one of those ways is in the data tier. The headlines of SAP acquiring Sybase (for its mobile assets, primarily) and subsequent emergence of HANA, its new in-memory data platform, placed SAP in the database market. And so it was that at an analyst meeting last December, SAP made the audacious declaration that it wanted to become the #2 database player by 2015.

Of course, none of this occurs in a vacuum. SAP’s declaration that it will become a front-line player in the database market threatens to destabilize existing relationships with Microsoft and IBM, as longtime SAP observer Dennis Howlett commented in a ZDNet post. OK, sure, SAP is sick of leaving money on the table for Oracle. But if the database is the thing, then to meet its stretch goals, says Howlett, SAP and Sybase would have to grow that part of the business by a cool 6x – 7x.

But SAP would be treading down a ridiculous path if it were just trying to become a big player in the database market for the heck of it. Fortunately, during SAP’s press conference announcing its new mobile and database strategies, chief architect Vishal Sikka tamped down the #2 aspirations: that’s really not the point – it’s the apps that count, and increasingly, it’s the database that makes the apps. Once again.

Main point

Back to our main point: IT innovation goes in waves. During the emergence of client/server, innovation focused on the database, where the need was mastering SQL and relational table structures; during the latter stages of client/server and the subsequent waves of Web 1.0 and 2.0, activity shifted to the app tier, which grew more distributed.

With the emergence of Big Data and Fast Data, energy shifted back to the data tier, given the efficiencies of processing data, big or fast, inside the data store itself. Not surprisingly, when you hear SAP speak about HANA, they describe an ability to perform more complex analytic problems or compound operational transactions. It’s no coincidence that SAP now states that it’s in the database business.

So how will SAP execute its new database strategy? Given the hype over HANA, how does SAP convince Sybase ASE, IQ, and SQL Anywhere customers that they’re not headed down a dead end street?

That was the point of the SAP announcements, which, in the press release, stated the near-term roadmap but shed little light on how SAP would get there. Specifically, the announcements were:
  • SAP HANA is now going GA, and at the low (SMB) end SAP is coming out with aggressive pricing: roughly $3,000 for SAP BusinessOne on HANA and $40,000 for HANA Edge.

  • Ending a 15-year saga, SAP will finally port its ERP applications to Sybase ASE, with a tentative target date of year end. HANA will play a supporting role as the real-time reporting adjunct platform for ASE customers.
  • Sybase SQL Anywhere would be positioned as the mobile front end database atop HANA, supporting real-time mobile applications.
  • Sybase’s event stream (CEP) offerings would have optional integration with HANA, providing convergence between CEP and BI, with rules used to strip key event data for persistence in HANA. In so doing, analysis of event streams could be integrated with, or directly correlated against, historical data.
  • Integrations are underway between HANA and IQ with Hadoop.
  • Sybase is extending its PowerDesigner data modeling tools to address each of its database engines.
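One of the bullets above describes rules stripping key event data from streams for persistence. A rough, hypothetical sketch of that pattern follows; the rule set, field names, and event shapes are invented for illustration and are not SAP/Sybase APIs.

```python
# Rules decide which raw events are worth persisting; only the key
# fields of matching events are kept for later correlation with
# historical data. All names here are illustrative.
RULES = {
    "trade": lambda e: e["size"] > 10_000,   # persist only block trades
    "sensor": lambda e: e["temp"] > 90.0,    # persist only anomalies
}

KEY_FIELDS = ("kind", "ts", "id")

def filter_events(stream):
    """Yield reduced records for events that match a persistence rule."""
    for event in stream:
        rule = RULES.get(event["kind"])
        if rule and rule(event):
            yield {k: event[k] for k in KEY_FIELDS if k in event}

events = [
    {"kind": "trade", "ts": 1, "id": "a", "size": 50_000},
    {"kind": "trade", "ts": 2, "id": "b", "size": 100},
    {"kind": "sensor", "ts": 3, "id": "s1", "temp": 95.5},
]
persisted = list(filter_events(events))  # two of three events survive
```

In a real CEP-to-HANA integration the rules would live in the event engine and the reduced records would land in the in-memory store, but the shape of the flow is the same: filter at the stream, persist only what analytics will need.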
Most of the announcements, like HANA going GA or Sybase ASE supporting SAP Business suite, were hardly surprises. Aside from go-to-market issues, which are many and significant, we’ll direct our focus on the technology roadmaps.

We’ve maintained that if SAP were serious about its database goals, it had to do three basic things:
  1. Unify its database organization. The good news is that it has started down that path as of January 1 of this year. Of course, org charts are only the first step as ultimately it comes down to people.
  2. Branding. Although long eclipsed in the database market, Sybase still has an identifiable brand and would be the logical choice; for now SAP has punted.
  3. Cross-fertilize technology. Here, SAP can learn lessons from IBM which, despite (or because of) acquiring multiple products that fall under different brands, freely blends technologies. For instance, Cognos BI reporting capabilities are embedded into Rational and Tivoli reporting tools.
Heavy lifting

The third part is the heavy lift. For instance, given that data platforms are increasingly employing advanced caching, it would at first glance seem logical to blend some of HANA’s in-memory capabilities into the ASE platform. Architecturally, however, that would be extremely difficult, as one of HANA’s strengths – dynamic indexing – would be hard to implement in ASE.

On the other hand, given that HANA can index or restructure data on the fly (e.g., organize data into columnar structures on demand), the question is: does that make IQ obsolete? The short answer is that while memory keeps getting cheaper, it will never be as cheap as disk, and therefore IQ could evolve into near-line storage for HANA.
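As a toy illustration of why on-demand columnar organization helps analytics (this is the general idea, not HANA's actual mechanism): pivoting row-oriented records into per-column arrays means an aggregate scans only the one column it needs, instead of walking every full row.

```python
# Row-oriented records, as an operational system would store them.
rows = [
    {"region": "EMEA", "units": 120, "price": 9.5},
    {"region": "APJ",  "units": 80,  "price": 11.0},
    {"region": "EMEA", "units": 40,  "price": 9.5},
]

def to_columnar(records):
    """Pivot row-oriented records into one list per column."""
    return {key: [r[key] for r in records] for key in records[0]}

cols = to_columnar(rows)
total_units = sum(cols["units"])  # the scan touches only 'units'
```

With millions of rows and wide records, reading one contiguous column instead of every row is the difference the column stores (and HANA's on-demand columnar reorganization) are exploiting.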

Of course that raises the question of whether Hadoop could eventually perform the same function. SAP maintains that Hadoop is too slow and therefore should be reserved for offline cases; that’s certainly true today, but given developments with HBase, it could easily become fast and cheap enough for SAP to revisit the IQ question a year or two down the road.

Not that SAP Sybase is sitting still with Hadoop integration. It is adding MapReduce and R capabilities to IQ (SAP Sybase is hardly alone here, as most Advanced SQL platforms offer similar support). SAP Sybase is also providing capabilities to map IQ tables into Hadoop Hive, slotting IQ in as an alternative to HBase.

In effect, that’s akin to a number of strategies for putting SQL layers inside Hadoop (in a way, similar to what the lesser-known Hadapt is doing). And of course, like most of the relational players, SAP Sybase also supports bulk ETL/ELT loads from HDFS into HANA or IQ.
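The bulk ETL/ELT load pattern can be sketched roughly as follows, with Python's built-in sqlite3 standing in for HANA or IQ and an in-memory string standing in for a delimited file staged out of HDFS; a real pipeline would use the vendor's HDFS connector and bulk loader instead.

```python
import csv
import io
import sqlite3

# Stand-in for a file extracted from HDFS into a staging area.
staged = io.StringIO("2012-04-01,widget,3\n2012-04-02,gadget,7\n")

# Stand-in for the relational target (HANA/IQ in the article's scenario).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, sku TEXT, qty INTEGER)")

# Bulk-load the staged rows; transform (type coercion) happens inline.
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    ((day, sku, int(qty)) for day, sku, qty in csv.reader(staged)),
)
conn.commit()

# Once loaded, the data is queryable with ordinary SQL.
(total,) = conn.execute("SELECT SUM(qty) FROM sales").fetchone()
```

The point of the pattern is that Hadoop does the cheap, offline crunching, while the relational engine serves the fast queries on the loaded result.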

On SAP’s side for now is the paucity of Hadoop talent, so pitching IQ as an alternative to HBase may help soften the blow for organizations seeking to get a handle on Big Data. But in the long run, we believe that SAP Sybase will have to revisit this strategy. Because, if it’s serious about the database market, it will have to amplify its focus to add value atop the new realities on the ground.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Tuesday, April 10, 2012

Top 10 ways HP is different and better when it comes to cloud computing

Today HP filled out its cloud computing strategy, a broad-based set of products and services that (finally!) set a date for its public cloud debut: May 10.

See my separate earlier blog on all the news. Disclosure: HP is a long-time sponsor of my BriefingsDirect podcasts. I've come to know HP very well in the past 20 years of covering them as a journalist, analyst and content producer. And that's why I'm an unabashed booster of HP now, despite its well-documented cascade of knocks and foibles.

By waiting to tip its hand on how it will address the massive cloud opportunity, HP has clearly identified Cloud as a Business (CaaB) as the real, long-term opportunity. And HP appreciates that the more it can propel CaaB forward for as many businesses, organizations and individuals as possible, the more successful it will be, too.

Top 10 reasons

Here's why the cloud, as we now know it, is the best thing that could have happened to HP, and why HP is poised to excel from its unique position to grow right along with the global cloud market for many years. These are the top 10 reasons HP is different and better when it comes to cloud computing:

  1. Opportune legacy. HP is not wed to a profits-sustaining operating system platform, integration middleware platform, database platform, business applications suite, hypervisor, productivity applications suite, development framework, or any other software infrastructure that limits its ability to rapidly pursue cloud models without being badly, or even fatally, hurt financially.

  2. Ecumenical support. HP has supported all the major operating environments -- Unix, Linux, Windows -- as well as all major open-source and commercial IT stacks, middleware, virtual machines and applications suites across development and deployment, longer and more broadly than anyone, anywhere. This includes product, technology, and services. Nothing prevents HP from doing the same as other innovators arrive -- or adjusting as incumbents leave. HP's robust support of OpenStack and KVM now continues this winning score.

  3. Cloud-value software. The legacy computing software products that HP is deeply entrenched with -- application development lifecycle, testing and quality assurance, performance management, systems management, portfolio management, business services management, universal configuration management databases, enterprise service bus, SOA registry, IT financial management suite (to name a few) -- are all cloud-enablement value-adds. And HP has a long software-as-a-service (SaaS) heritage in test and development and other applications delivery. These are not millstones on the path to full cloud business model adoption, they are core competencies.

  4. The right hardware. HP has big honking Unix and high-performance computing platforms, yes, but it bet big and rightly on energy- and space-efficient blades and x86 architecture racks, rooms, pods and advanced containers. HP saw the future rightly in virtualization for servers, storage and networking, and its various lines of converged infrastructure hardware and storage acquisitions are very much designed of, by, and for super-efficient, fit-for-purpose uses like cloud.

  5. Non-sacred cash cows. HP has a general dependency on revenue from PCs and printers, for sure. But, unlike other cash-cow dependencies from other large IT vendors, these are not incompatible with large, robust and ruthless cloud capitalization. PCs and printers may not be growing like they used to, but high growth in cloud businesses won't be of a zero-sum nature with traditional hardware clients either. As with item number 1 above, the interdependencies do not prohibit rapid cloud model pursuit.

  6. Security, still the top inhibitor to cloud adoption, is not a product but a process born of experience, knowledge, and implementation at many technology points. HP wisely built, bought and partnered to architect security and protection into its products and services broadly. As the role of public cloud provider as first line of defense and protection to all its users grows, HP is in excellent shape to combine security services across hybrid cloud implementations and cloud ecosystem customers.

  7. Data and unstructured information. Because HP supports many databases, commercial and open source, it can act as a neutral partner in mixed environments. Its purchase of Autonomy gives it unique strength in making unstructured data as analyzed, controlled and managed as structured, relational data -- even combining the analytics value between and among them. The business value of data is in using it at higher, combined abstractions across clouds, a role HP can do more in, but nothing should hold it back. It's already providing MySQL cloud data services.

  8. Technology services and professional services. HP, again the partner more than interloper, developed technology services that support IT, help desks, and solutions-level build support requirements, but also did not become a global systems integrator like IBM, where channel conflict and "coopetition" work against ecosystem-level synergies. It is these process synergies now -- a harmonic supply chain among partners for IT services delivery (not systems-level integration) -- that cloud providers need in order to grow. HP can foster the technology services and new kinds of integrator of services along a cloud business process continuum that please both enterprise customers and vertical cloud providers.

  9. Management, automation, remote services. Those enterprises and small and medium businesses (SMBs) making the transition from virtualization and on-premises data centers to cloud and hybrid models want to keep their hands firmly on the knobs of their IT, but see less and less of the actual systems. Remote management, unified management, and business service management are keystones to hybrid computing, and HP is a world leader. Again, these are core competencies for advanced cloud use by both enterprises and cloud providers. And HP's performance management insights and continuous improvement across cloud and business services become a key differentiator.

  10. Neutrality, trust, penetration and localization. While HP is in bed with everyone in global IT ecosystems, they are not really married. The relationship with Microsoft is a perfect example. HP is large enough not to be bullied (we'll see how Oracle does with that), but not too aggressive such that HP makes enemies and loses the ability to deliver solutions to the end users because of conflict in the channel or ecosystem. Cloud of clouds services and CaaB values will depend on trust and neutrality, because the partner to the cloud providers is the partner to the users. Both need to feel right about the relationship. HP may be far ahead of all but a few companies in all but a few markets in this role.
Full cloud continuum

The breadth and depth of HP's global ambitions evident from today's news shows its intent on providing a full cloud kit continuum -- from code to countless hybrid cloud options. Most striking for me is HP's strategy of enabling cloud provisioning and benefits for any enterprise, large or small. There are as many on-ramps to cloud benefits realization as there are types of users, as there should be. Yet all the cloud providers need to be competitive in their offerings, and they themselves will look to outsource that which is not core.

As Ali Shadman, vice president and chief technologist in HP's Technology Consulting group, told me, HP's cloud provider customers need to deliver Apple "iCloud-like services," and they need help to get there fast. They need help knowing how to bill and invoice, to provide security, to find the right facilities. HP then is poised to become the trusted uncle with a host of cloud strengths for these myriad cloud providers worldwide as they grow and prosper.

This is different from selling piecemeal the means to virtualized private-cloud implementations, or putting a different meter on hosted IT services. This is not one-size-fits-all cloud APIs, or Heathkits for cloud hackers. This is not cloud in a box. HP is approaching cloud adoption as a general business benefit, as a necessary step in the evolution of business as an on-demand architectural advancement. It knows that clouds themselves are made up of lots of services, and HP wants a big piece of that supply chain role -- as partner, not carnivore.

Shadman said that HP is producing the type of public cloud that appeals to serious and technical providers and enterprises, not hobbyists and -- dare I say it -- Silicon Valley startups on a shoestring. This is business-to-business cloud services for established global, regional and SMB enterprises.

HP is showing that it can have a public cloud, but still be a partner with those building their own targeted public clouds. By viewing clouds as a supply chain of services, HP seeks to empower ecosystems at most every turn, recognizing that 1,000 points of cloud variability is the new norm.

We should expect managed service providers, independent software vendors, legacy enterprise operators to all want a type of cloud that suits their needs, their heritage and best serves their end customers. They will be very careful who they align with, who they outsource their futures to.

Of course, HP is seeking to differentiate itself from Amazon, Google, Microsoft, IBM, Oracle, VMware, Citrix and Rackspace. It seems to be doing so through a maturity-model approach to cloud, not positioning itself as the only choice, or one size fits all. HP wants to grow the entire cloud pie. HP seems confident that if cloud adoption grows, it will grow well, too, perhaps even better.

In other words, HP is becoming a booster of cloud models for any type of organization or business or government, helping them to not only build or acquire cloud capabilities, but seeding the business and productivity rationale for cloud as a strategy … indefinitely.

This is very much like Google's original strategy over the past 10 years: anything that propels the Web forward for as many businesses, organizations and individuals as possible makes Google more successful with its search and advertising model. This has worked, and continues to work, well for Google. It places a higher abstraction on the mission -- more than just selling more ads, but growing the whole pie.

I believe we're early enough in the cloud game that the emphasis should be on safely enabling the entire cloud enterprise, on growing the pie for everyone. It's too soon to try to carve off a platform or application lock and expect to charge a toll for those caught in some sort of trap. Those days may actually be coming to an end.

HP takes its cloud support approach public while converging its offerings around OpenStack, KVM

HP today announced major components and details for its HP Converged Cloud strategy, one that targets enterprises, service providers and small- and medium-sized businesses (SMBs) with a comprehensive yet flexible approach designed to "confidently" grow the cloud computing market fast around the globe.

HP has clearly exploited the opportunity to step back and examine how the cloud market has actually evolved, and taken pains to provide those who themselves provide cloud-enabled services with an architected path to business-ready cloud services. From its new Cloud Maps to HP Service Virtualization 2.0, HP is seeking to hasten and automate how other people's cloud services come to market.

And finally, HP has put a hard date on its own public cloud offering, called HP Cloud Services, which enters public beta on May 10. See my separate analysis blog on the news. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP's public cloud is geared more as a cloud provider support service set, rather than a componentized infrastructure-as-a-service (IaaS) depot for end-user engineers, start-ups and shadow IT groups. HP Cloud Services seems designed more to appeal to mainstream IT and developers in ISVs, service providers and enterprises, a different group than early public cloud entrant and leader Amazon Web Services has attracted.

There's also an urgency in HP's arrangement of products and services enabling a hastened path to hybrid computing values, built on standards and resisting lock-in, with an apparent recognition that very little in cloud's future will be either fully public or private. HP has built a lot of its cloud architecture around "hardened open source technology from OpenStack," and chose the GNU-licensed KVM hypervisor for its implementation.

The OpenStack allegiance puts HP squarely in an open source ecosystem of cloud providers, including Rackspace, Red Hat and Cisco. IBM is said to be on board with OpenStack too. And former OpenStack charter contributor Citrix recently threw its efforts behind CloudStack under the Apache Foundation.

HP's OpenStack and KVM choices also keep it at odds with its otherwise-partners Microsoft and VMware.

There were also more details today on HP's data-services-in-the-cloud strategy, apparently built on MySQL. We should expect more data services from HP, including information services from its recent Autonomy acquisition. The data and "information-as-a-service" support trend could be a big wind in HP's cloud sails, and undercut its competitors' on-premises cash flow.

Data drives on-premises infrastructure spend now, but increased cloud spend in the future. As for the latter, what the data engine/platform under the cloud hood is matters less than whether the skills in the market -- like SQL -- can use it readily, as is the case with the xSQL crowd.

Furthermore, the economics of data services hosting may favor HP. If HP can help cloud providers to store, manage and analyze MySQL and related databases and information as a service with the most efficiency, then those providers using HP cloud support not only beat out on-premises data services on cost, they beat out other non-HP cloud providers, too. Could data analytics services then become a commodity? Yes, and HP could make it happen, while making good revenue on the infrastructure and security beneath the data services.

The announcement

But back to today's news:
  • Beginning May 10, HP Cloud Services will deliver its initial public IaaS offerings as a public beta. These include elastic compute instances or virtual machines, online storage capacity, and accelerated delivery of cached content. Still in private beta will be a relational database service for MySQL and a block storage service that supports movement of data from one compute instance to another.

  • HP’s new Cloud Maps extends HP CloudSystem by adding prepackaged templates that create a catalogue of application services, cutting the time to create new cloud services for enterprise applications from months to minutes, says HP.

  • HP Service Virtualization 2.0 tests the quality and performance of cloud or mobile applications without disrupting production systems, and includes access to restricted services in a simulated, virtualized environment.

  • HP Virtual Application Networks helps speed application deployment, automate management, and ensure network service-level agreements (SLAs) in delivering cloud and virtualized applications across the HP FlexNetwork architecture.

  • HP Virtual Network Protection Service adds security at the network virtualization management layer to help mitigate common threats.

  • HP Network Cloud Optimization Service helps clients enhance their network to improve cloud-based service delivery up to 93 percent compared to traditional download techniques, says HP.

  • HP Enterprise Cloud Services, outsourcing cloud management for private clouds, business continuity and disaster recovery services and unified communications.

  • In targeting a vertical industry capability, Engineering Cloud Transformation Services are designed to help product development and engineering design teams move to cloud.
As part of the announcement, HP also delivered new Cloud Security Alliance training courses.

More information about HP’s new cloud solutions and services is available at http://www.hp.com/go/convergedcloud2012.

Monday, April 9, 2012

Learn why Ducati races ahead with private cloud and a virtualization rate approaching 100 percent

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

High-performance motorcycle designer and manufacturer Ducati Motor Holding has greatly expanded its use of virtualization and is speeding toward increased private cloud architectures.

With a server virtualization rate approaching 100 percent, Ducati has embraced virtualization rapidly in just the past few years, with resulting benefits of application flexibility and reduced capital costs. As a result, Ducati has also embraced private cloud models now across both its racing and street bike businesses.

To learn more about the technical and productivity benefits of virtualization and private clouds, we're joined by Daniel Bellini, the CIO at Ducati Motor Holding in Bologna, Italy. He's interviewed by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Bellini: Probably most people know about Ducati and the fact that Ducati is a global player in sports motorcycles. What some people may not know is that Ducati is not a very big company. It's a relatively small company, selling little more than 40,000 units a year and has around 1,000 employees.

At the same time, we have all the complexities of a multinational manufacturing company in terms of product configuration, supply chain, and distribution network. Virtualization makes it possible to match all these business requirements with the available human and economic resources.

Gardner: Some people like to gradually move into virtualization, but you've moved in very rapidly and are at almost 98 percent. Why so fast?

Bellini: Because of the company’s structure. Ducati is a privately owned company. When I joined the company in 2007, we had a very aggressive strategic plan that covered business, process, and technology. Given the targets we had to meet in just three to four years, it was absolutely a necessity to move quickly into virtualization to enable all the other projects.

Gardner: Of course, you have many internal systems. You have design, development, manufacturing, and supply chain, as you mentioned. So, there's great complexity, if not very large scale. What sort of applications didn’t make sense for virtualization?

Legacy applications

Bellini: The only applications that didn't make sense for virtualization are legacy applications, applications that I'm going to dismiss. Looking at the application footprint, I don’t think there is any application that is not going into virtualization.

Gardner: And now to this notion of public cloud versus private cloud. Are you doing both or one versus the other, and why the mix that you’ve chosen?

Bellini: Private cloud is already a reality in Ducati. Over our private cloud, we supply services to all our commercial subsidiaries. We supply services to our assembly plant in Thailand or to our racing team at racing venues. So private cloud is already a reality.

In terms of public cloud, honestly, I haven’t seen any real benefit in the public cloud yet for Ducati. My expectation of the public cloud would be to have something with virtually unlimited scalability, both upward and downward.

My idea is something that can provide virtually unlimited power when required and can go down to zero immediately, when not required. This is something that hasn't happened yet. At least it’s not something that I've received as a proposal from a partner yet.

Gardner: How about security? Are there benefits for the security and control of your intellectual property in the private cloud that are attractive for you?

Bellini: Security is something that is common to all applications. I wouldn’t say that there's a specific link between the private cloud and security, but we always take charge of security as part of any design we bring to production, be it in the private cloud or just for internal use.

Gardner: And because Ducati is often on the cutting edge of design and technology when it comes to your high-performance motorcycles, specifically in the racing domain, you need to be innovative. So with new applications and new technologies, has virtualization in a private cloud allowed you to move more rapidly to be more agile as a business in the total sense?

Bellini: This was benefit number one. Flexibility and agility were benefit number one. What we've done in the past years is absolutely incredible compared to what technology allowed before. We've been able to deploy applications, solutions, services, and new architectures in an incredibly short time. The only requirement was careful ordering and infrastructure planning up front, but having done that, all the rest has been incredibly quick compared to that previous period.

Gardner: It’s also my understanding that you’re producing more than 40,000 motorcycles per year and that being efficient is important for you. How has virtualization helped you be conservative when it comes to managing costs?

Limited investment

Bellini: Virtualization has enabled us to support the business in very complex projects and rollouts, in delivering solution infrastructures in a very short time with very limited initial investment, which is always one thing that we have to consider when we do something new. In a company like Ducati, being efficient, being very careful and sensitive about cash flows, is a very important priority.

The private cloud and virtualization especially has enabled us to support the business and to support the growth of the company.

Gardner: Let’s look a little bit to the future, Daniel. How about applying some of these same values and benefits to how you deliver applications to the client itself, perhaps desktop virtualization, perhaps mobile clients in place of PCs or full fat clients. Any thoughts about where the cloud enables you to be innovative in how you can produce better client environments for your users?

Bellini: Client desktop virtualization and the new mobile devices are a few things that are on our agenda. Actually, we have already been using desktop virtualization for a few years, but now we’re looking into providing services to users who are away and in high demand.

The second thing is mobile devices. We're seeing a lot of development and new ideas there. It's something that we're following carefully and closely, and something that I expect will turn into something real at Ducati, probably in the next 12-18 months.

Gardner: Any thoughts or words of wisdom for those who are undertaking virtualization now? If you could do this over again, is there anything that you might do differently that you could share with others as they approach this?

Bellini: My suggestion would be just embrace it, test it, design it wisely, and believe in virtualization. Looking back, there is nothing that I would change with respect to what we've done in the last few years. My last advice would be to not be scared by the initial investment, which is something that is going to be repaid in an incredibly short time.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Tuesday, April 3, 2012

HP Expert Chat focuses on how IT can enable cloud adoption while maintaining control and governance

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

View the full Expert Chat presentation on cloud adoption best practices.

Cloud computing has sparked the imagination of business leaders, who see it as a powerful new way to be innovative and gain first-mover advantages -- with or without traditional IT's consent.

This now means that the center of gravity for IT services is shifting toward the enterprise's boundaries -- moving increasingly outside the firewall. So how can companies have it both ways -- exploit cloud's promise, but also retain the rigor and control that internal IT governance enables?

In a special BriefingsDirect sponsored podcast created from a recent HP Expert Chat discussion on how best to pursue cloud computing models, I moderated a session with Singapore-based Glenn West, the Data Center and Converged Infrastructure Lead for Asia-Pacific and Japan in HP's Technology Services Organization. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In our wide-ranging discussion, we deliver actionable recommendations for how IT can support cloud benefits, re-architect data centers with low risk, and quickly take on the role of cloud-benefits enabler.

Here are some excerpts:

Glenn West: The cloud is an exciting environment, and it's changing things in quite incredible ways. We're going to be focused now on how cloud is enabling data center transformation.

In the data center today, there are quite a few challenges, both from the external world, as well as from internal changes. In the external space, there are regulatory risks, natural disasters, legal challenges, and obviously technologies are changing.

Whether the IT department chooses to change or not, businesses are changing anyway. This is putting pressure on the data center. It must adapt and transform. Internally, greater agility and consolidation are needed, and green initiatives to cut costs are adding to the pressure for change.

So all of these things are causing the data center to converge, and this convergence is pushing the cloud.

What is a data center? At HP we take a very holistic approach. We'll start at the bottom, from the facility point of view -- the location, the building, the mechanical and the electrical. Data center densities are growing quite rapidly and electrical costs are rising incredibly fast. So the facility is very important in the operational cost of the data center.
Whether the IT department chooses to change or not, businesses are changing anyway. This is putting pressure on the data center.


If data centers change, management and efficient operation become even more important. Then control, governance, and the organization play key parts. Without the right organizational structure, it's very difficult to manage your clouds.

In reality, cloud computing is an irresistible force. It's moving forward, and things are changing.

Scalable and elastic

So what does cloud mean? Cloud means going to a more service-driven model. It's scalable and it's elastic. Think about the public cloud space. How do you handle it when something is very, very popular? One day, it may have a hundred users and the next day it becomes the next hot thing for that instant in time. Then, the demand goes away.

If we use a traditional model, we can’t afford to have the infrastructure, but this pay-per-use is the foundation of cloud. We start looking at a service concept delivered and consumed over the Internet as needed.

The key word that keeps coming up is service, service orientation, the elasticity and the pay-per-use. Clouds ideally are multi-tenant. That can be within a company or outside a company.

Cloud is about automation, orchestration, automated control, and a service catalog. All of a sudden, instead of calling somebody and saying, "I need this done," you have a portal. You say, "I want a SharePoint site," and boom. It's created.
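The portal-plus-catalog pattern West describes can be sketched in a few lines. The catalog items, resource sizes, and approval rules below are hypothetical, not any real HP product API:

```python
# Minimal sketch of a self-service catalog behind a portal.
# Items, sizes, and approval rules are made up for illustration.
CATALOG = {
    "sharepoint_site": {"cpu": 2, "ram_gb": 4,  "approval": "automatic"},
    "vm_small":        {"cpu": 1, "ram_gb": 2,  "approval": "automatic"},
    "vm_large":        {"cpu": 8, "ram_gb": 32, "approval": "manager"},
}

def request_service(name, requester):
    """Look up a catalog item and 'provision' it when no approval is needed."""
    item = CATALOG[name]
    if item["approval"] == "automatic":
        return f"{name} provisioned for {requester}"
    return f"{name} queued for {item['approval']} approval"

print(request_service("sharepoint_site", "alice"))  # sharepoint_site provisioned for alice
print(request_service("vm_large", "bob"))           # vm_large queued for manager approval
```

The point of the catalog is that standard requests never touch a human; only the exceptions queue for approval.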

It's radically different from traditional IT. You move away from managing servers, and you manage services. In the data center, over the next couple of years, the focus is going to be on private clouds. There will be public cloud providers for certain things, but the focus is going to be on the private side.

Private cloud will slowly push into a hybrid model and then gradually add public cloud services. Initially, the majority of private cloud will be infrastructure as a service.

The key drivers of this are agility and speed. When a business unit says they need it tomorrow, they're not joking. The agility that a private cloud provides unlocks a lot of opportunity in the business. It also relieves the pressure to go to a public cloud supplier outside the IT framework.

Management and processing
The challenges over the next few years are management and process: how do we fund and charge back, and what is the whole business model? Then comes building the cloud service interface and the service description. All of this comes before the technology. Cloud is more than just the technology. It's also about people and process.

Only a small portion of workloads will fit in the cloud today, but things are moving rapidly. We were talking about the future: look at the current sprawl that's occurring. If IT doesn't get out in front, this will probably get worse. But if the cloud is managed properly, then IT sprawl can be reduced, controlled, and slowly moved into a more standardized structure.

As we move into the cloud and talk about private cloud, the service function of IT becomes a reality, and this is referred to as hybrid delivery. Hybrid delivery is when you start looking at the different ways of providing services, whether they are outsourced, private cloud-based, or public cloud-based.

You start looking at becoming a service broker, which is the point at which you say that for this particular service, it makes the best sense to source it here. Then you start to manage it and fully optimize your services.
As we move into the cloud and talk about private cloud, the service function of IT becomes a reality, and this is referred to as hybrid delivery.


Going further out, by 2015, 18 percent of all IT delivery will be public cloud, 28 percent will be private cloud, and the rest will be in-house or outsourced. You can see the rapid change going forward.

Gardner: What kind of applications do you think we are going to see? When you mention the service enablement, these different cloud models, I think people want to know what sorts of applications will be coming first in terms of applicability to these models?

West: If you're referring to public cloud, the first ones are often collaboration applications. Those were the first to move into the public cloud space. Things like SharePoint, email, and calendaring applications were the early-adopter models.

Later we have seen CRM applications move. Slowly but surely, you're seeing more and more application types, especially when you start looking at infrastructure as a service (IaaS). It’s not so much the type of application, but the type of application load.

As you see, the traditional model is all about selling products: fixed costs, fixed assets. Everything is fixed. But when you look at a service model, it's pay-per-use. It's flexibility, it's choice, but also a bit of uncertainty. In the traditional model you have controls, but in the service model, it's all about adaptability and change.

Big gap

So there's a big gap here. On one side, we're all about things being fixed; on the other side, we're moving to being cloud-ready, to hybrid services and hybrid service delivery. So how do we get across this great divide? We really need a bridge, a way to move across this big change.

The way we change this is through transformation. It's a journey. Cloud is not something that you can wake up one day and say, "We're going to have it executed instantly." You have to look at it as going through levels of maturity.

This maturity model starts at the bottom. Some organizations are already at the beginning of this journey. They've already started standardizing, or they may have started virtualizing, but it’s a process. You have to get to the point where you're looking at moving up. It’s not just about technology.
View the full Expert Chat presentation on cloud adoption best practices.
Obviously, you have to get to the point where you're consuming cloud services. If you look at the movement to cloud, you can see it pulling organizations in, driven by the rapid adoption by the leaders in cloud. There's a great push from the business side. Businesses are hearing their customers talk about cloud and cloud-based applications. So there is a pull there.

Also from the data center itself, there is a push. The IT sprawl, the difficulty in management, are all pushing towards cloud quite rapidly.
Businesses are hearing their customers talk about cloud and cloud-based applications. So there is a pull there.


The question is, where are we now? Right now, a lot of companies are in this environment where they have started virtualizing. They've moved up a bit and they've started doing some optimization. So they're right at the edge of this.

But to move forward you need to look at changing more than just some of the technology. You also need to look at the people, the technology, and the process in order to bring organizational maturity to the point that it’s starting to be service enabled. Then, you're starting to leverage the agility of cloud.

If you are just simply virtualized, then guess what, you're not going to see the benefit that cloud offers. You need to increase in all of these areas.

Gardner: As we look at the continuum, how do organizations continue to cut costs while they're going about this transformation?

West: This journey is quite interesting. To a large degree, the cost optimization is built in. When you start the journey with standardization, you start reducing cost there. As you virtualize, you get another level of cost reduction. At each step, as you move to a shared-service model and a service orientation, you connect things to the business. You start tying IT costs to business costs.

Further optimized

Moving up to the point of elasticity, things are further optimized. This whole process is about optimization, and when you start talking about optimization, you're talking about driving down the costs.

Currently, between the beginning of this journey and the end of this journey, we're reducing cost as we go. Each stage of this is another level of cost reduction.

We mentioned that the cloud isn't just about technology. Obviously, technology is part of it, but it's also about automation and self-service portals. The cloud is about speed. Imagine the old traditional process: "Let me work out the capital equipment required. Let me get that approved. Let me write the PO."

As part of the process, you also have to get into standardization. You have to get into service lifecycle. A cloud that never throws anything away is not an optimized cloud. Having a complete service lifecycle, from beginning to end, is important.
In IT, a cloud without a chargeback model will be a cloud that is over-utilized and running out of control.


Usage and chargeback are key elements as well. Anything that's free always has a long queue. In IT, a cloud without a chargeback model will be over-utilized and running out of control. Having a way of allocating and charging back to the consuming parties, be it an internal or outside customer, is very important.
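A chargeback model like the one West describes can be sketched as simple metered allocation. The resource names, unit rates, and usage figures below are hypothetical, purely for illustration:

```python
# Minimal sketch of usage-based chargeback: each consuming party pays
# its metered usage times a unit rate. Rates and figures are made up.
RATES = {
    "vcpu_hours": 0.05,  # dollars per vCPU-hour
    "storage_gb": 0.10,  # dollars per GB-month
    "backup_gb":  0.03,  # dollars per GB-month
}

def monthly_charge(usage):
    """Sum each metered resource times its unit rate, rounded to cents."""
    return round(sum(RATES[resource] * qty for resource, qty in usage.items()), 2)

# Usage reported by the metering layer for two internal customers.
finance   = {"vcpu_hours": 2000, "storage_gb": 500, "backup_gb": 300}
marketing = {"vcpu_hours": 800,  "storage_gb": 120}

print(monthly_charge(finance))    # 100 + 50 + 9 = 159.0
print(monthly_charge(marketing))  # 40 + 12 = 52.0
```

Even a toy model like this makes the queue-shortening point: once consumption has a price, demand self-regulates.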

Elements often forgotten in cloud are people and service orientation. In a traditional IT organization, you have a storage manager and a network manager. In cloud, you have service managers. The whole structure changes. It doesn't necessarily reduce or increase roles, but the roles are different. It's about relationship management, capacity management, and vendor management -- different terms from traditional IT.

Opportunities for cloud


High-performance computing, web services, database services, collaboration -- high-volume, frequently requested, standardized, and repeatable workloads. That pretty well identifies the great opportunities for private cloud.

Once you start moving into public cloud, you need to understand how things will scale with the business, meaning you need to look at the variability of costs. They need to be tied to the level of business.

Things like backup ability, interoperability and standards, and security are additional things that we need to look at as we move into public cloud services and the hybrid model.

Let's talk about what can move to hybrid cloud models. First, core activities that are essential to the business are not suitable for public cloud; those are best in a private cloud. But things that are not unique or immediate, not a differentiator, or that are cost-driven are ideal for public cloud.

Basically, core activities are very good candidates for private cloud, and less-core or cost-driven activities are more ideal for a public or hybrid cloud offering.

Lock-in and neutrality?

Gardner: Glenn, what prevents us from getting locked into one cloud and not being able to move out?

West: There are a significant number of cloud standards at every level, and HP does everything it can to remain part of those standards and to support them. The cloud industry is moving fast, and cloud is about openness. Without that openness, a private cloud doesn't have the ability to burst to public cloud.

Gartner actually did a study, and found that HP is one of the most open players in the industry, when it comes to cloud. A significant number of the public cloud suppliers actually use our equipment. We make a point of being totally open.

The cloud industry as a whole, because of the interoperability requirements, has to be inherently open.
There are a significant number of cloud standards at every level, and HP does everything it can to remain part of those standards and to support those standards.


Gardner: So it's not only important to pick the technologies, but it's very important to pick the partners when you start to get into these strategies?

West: That’s absolutely right. If the viewpoint of the company that you're getting your cloud from is to lock you in, then you're going to get locked in. But if the company is pushing hard to stay open, then you can see it, and there are plenty of materials available to show who is trying to do lock-in and who is trying to do open standards.

What do we need to think about here? Flexibility is obviously important. Interoperability -- and I think Dana nailed that one on the head -- being able to work across multiple standards is important. The cloud is about agility. Having a resource pool and workflow that can move around the resource pools on demand means that you have to have great interoperability.
View the full Expert Chat presentation on cloud adoption best practices.
Data privacy and compliance issues come into play, especially if we move from a private cloud into public cloud or hybrid offerings. Those things are important, especially on the compliance side, where the cloud supports data being anywhere.

Some requirements, depending on the industry, actually restrict data movement. Skill-sets are important. Recovery and performance management -- all of these things can be managed in the cloud with the right automation, the right tools, and the right people.

Greatest flexibility

We've talked about moving forward and now we're getting into the full IT service broker concept. This is where we have the greatest flexibility. One of the things you said very well was about dynamic sourcing. We can look at the workflow and we can push and share these workflows internally and externally to multiple cloud providers and act as a service broker, optimizing as we go.

You should have this even from a corporate point of view. You could be a service provider where you take those services and you broker and manage those services across multiple delivery methodologies.

At this point, the organization has to get very good at service-level agreement (SLA) management. SLA management, as you control costs and manage workflows through all of this, is very important. When we start talking about going across multiple clouds, advanced automation becomes very important as well.

As we start looking at the future data center, it is very business-driven. You have multiple ways of sourcing your IT services. So you have both physical and virtual services in the appropriate mix, and it's changing practically on a daily basis, as business needs demand.
As we start looking at the future data center, it is very business-driven. You have multiple ways of sourcing your IT services.


Let's talk about the physical side and the changes in the data center. One thing that looks quite interesting, if we look at resiliency, is that a lot of data centers are looking at moving further up the resiliency tiers, and each level brings a significantly increased cost -- a practically exponential cost increase.

Once you implement cloud within your data center, you suddenly get a lot more flexibility, because instead of building a single Tier 4 data center, using the efficiency of cloud, you could potentially build Tier 2 data centers and have greater resiliency and greater agility.
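West's Tier 4 versus Tier 2 point can be checked with back-of-the-envelope math. This sketch assumes the commonly cited Uptime Institute availability targets (roughly 99.741 percent for Tier 2, 99.995 percent for Tier 4), and that sites fail independently with workloads free to move between them -- which is exactly the flexibility the cloud layer adds:

```python
# Back-of-the-envelope resiliency math for independent sites with
# free workload failover. Availability figures are the commonly cited
# Uptime Institute targets, used here only as assumptions.
TIER_2 = 0.99741  # single Tier 2 site
TIER_4 = 0.99995  # single Tier 4 site

def combined(availability, sites):
    """Probability that at least one of N independent sites is up."""
    return 1 - (1 - availability) ** sites

two_tier2 = combined(TIER_2, 2)
print(f"{two_tier2:.5%}")  # two Tier 2 sites together exceed one Tier 4 site
print(two_tier2 > TIER_4)
```

Under these assumptions, a pair of cheaper Tier 2 sites out-performs a single Tier 4 site on availability; the real-world caveat is that site failures are never perfectly independent.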

The big change is in the way data center physical infrastructure is done, but the thing that's changing quite rapidly is density. If you look at it in a traditional data center, infrastructure is reasonably low to moderate density.

When you start looking at a cloud-enabled data center, high density is the norm. Greater efficiency in power, space, and cooling is typical of cloud-enabled data centers. It becomes a true IT resource pool, where anything can run anywhere -- quite a different model.

The density change is radical. The power per rack and the cooling all change. Even in traditional data centers, things such as structured cabling and power have to have flexibility and the ability to change.

Orchestration also becomes important. If you start looking at a cloud-enabled data center, everything needs to scale. All the cost factors should scale with the amount of business.

Standardization and efficiency


The standardization level changes as well. Standardizing configurations allows rapid redeployment of equipment. Finally, there's efficiency -- dynamic power and cooling that follows the workloads.

These are pretty radical changes from traditional data centers. Data centers are evolving. If you look at traditional data centers, they were quite monolithic -- one large floor, one large building, that’s pretty well it.

They are slowly moving up to multi-tiered data centers, followed by flexible data centers that share resource utilization, where everything can change.

When you look at most organizations across the different areas -- categories, types, culture -- the technology is there. If you looked at a company today, it would have different levels of maturity in different areas. Maturity modeling is a scorecard, or grade card, that lets you understand where you are compared to the industry. The point is that different areas have different levels of maturity.

The problem for cloud is that we need something a little different. We need an even playing field across all of the areas, so that organizational maturity, culture, staff, and best practices are all at an even level of maturity for cloud to work.
This maturity modeling is a scorecard or a grade card that lets you understand where you are compared to the industry.


If you bought the best technology, but you didn’t upgrade your governance or the culture, and you didn’t implement the best practices, it won’t work. The best infrastructure without proper service portfolio management, for example, just isn’t going to work. For cloud to work properly, you must actually look at increasing maturity across all areas of your data center, both the people and the process.

Some of the criteria for cloud include technology, consolidation, virtualization, management, governance, the people, process and services, and the service level. Managing the service level can often reduce your cost quite significantly in cloud.

On the process side, adopt ITIL and look at process automation and process management. The organizational structure and roles are quite different in cloud.

Think services. Understand what you have. Decide what your core and your context are. What is the foundation of your business, and what could you start considering moving into the public cloud?

Get your business units and your businesses on your side. Standardize, look at automating processes, and explore infrastructure convergence. Then introduce your portal and make sure you have chargeback. Start with non-critical or green-field areas -- green field being your new activities. Then slowly move into a hybrid approach.

Optimize further

Evolve, optimize, benchmark, cycle through -- and optimize further. HP has been doing this for a while. We did a very large transformation ourselves, and out of that journey we've created a huge amount of intellectual property. We have a Transformation Experience Workshop that helps organizations understand what changes are needed. We can get people talking and moving, creating a vision together.

We have data-center services for optimization and the physical change of data centers, and then we have comprehensive data center transformation services and a road map. So get some action going. Let's start doing the transformation.

A great way to do this is a one-day Cloud Transformation Experience Workshop. It's done in panels with key decision makers, and it allows you to start building a foundation for how to go through this transformation journey.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.
View the full Expert Chat presentation on cloud adoption best practices.
You may also be interested in:

Thursday, March 29, 2012

Ariba CMO Tim Minahan on how networked economy benefits spring from improved business commerce and cloud processes

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Ariba.

It's impossible to weigh the implications of cloud computing without examining the context of all the other major business and technology trends in play today.

To gauge the potential business benefits -- and threats -- from cloud, we also need to examine the confluence of these developments and how together they change businesses and business ecosystems.

The implications of companies collaborating more efficiently over extended information networks are appropriately then the theme for the upcoming Ariba LIVE Conference in Las Vegas, which kicks off on April 10.

To help better understand the complex drivers of the next wave of business productivity, BriefingsDirect interviewed Ariba’s Chief Marketing Officer, Tim Minahan, in advance of the conference. The networked economy comes as a consequence, says Minahan, of the major business and IT trends of the day -- those being cloud computing, mobile, social, and big data.

To appreciate the full impact of the networked economy, we need to recognize how previously internal processes are now becoming increasingly externalized. Cloud computing, the force behind a lot of this extended business process innovation, has let loose the imagination of businesses to consider how to do commerce anew.

The good news is that shaking up the status quo is enabling massive efficiencies with more active market benefits, shared by more participants, as they cooperate and collaborate in entirely new ways.

As a leader in cloud-based collaborative commerce, Ariba has a unique vantage point on where the networked economy is headed. To gain a better idea of how business networks will drive the future of commerce, Minahan recently sat down with BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Minahan: We're experiencing another major shift in IT-generated productivity.
The next wave of productivity is going to allow companies to begin to extend those processes, extend that information, and extend alignment of their processes with their external partners.
The forces that you mentioned -- mobility and collaboration through communities or social networks -- are conspiring with cloud computing.

As a result, the next wave of productivity is going to allow companies to begin to extend those processes, extend that information, and extend alignment of their processes with their external partners, whether they be customers that they want to collaborate with more closely, or suppliers that they want to align with better and drive efficiencies with, or other partners, financial partners, logistics partners, etc., that take part in a process that the company needs to collaborate around.

We think this convergence of companies collaborating more efficiently over extended information networks is one that's going to drive this next wave of productivity.

Think about it. It’s here today. We still may be in our infancy, but you've got to think about what’s happening today. Think about it in your personal lives. That’s probably the best example. The magic of Facebook is not in its clean interface or in its news feed feature. The magic of Facebook is that it has created the world’s largest network of personal connections.

Register now for the April 10 Ariba LIVE conference in Las Vegas.

Similarly, the beauty of Amazon.com is not that it offers the best prices on books. It’s that it offers the world’s largest and most convenient network for personal shopping.

Enterprise phenomenon


This is not just a consumer phenomenon. It’s an enterprise one. More and more businesses are looking beyond the four walls of the enterprise to extend their processes and systems, to connect and collaborate more efficiently with their customers, with their suppliers, and other trading partners, whether they're across the street or around the world.

Gardner: What's fascinating to me about that, Tim, is we had Metcalfe’s Law, where the more participants on a network, the more valuable it becomes. We certainly see that with Facebook. But what’s happening in addition is that the amount of activity that these folks do, the commerce, their actions, their priorities, is all data that could be captured and analyzed, not just data individually, but collectively.

So we're not just getting value from the network through participation. We're getting insights that filter back into how we conduct business. How important is this notion of captured analytics in this networked environment?

Minahan: It’s absolutely critical. We'll go back to those models. If you look at any of the personal networks that we rely on everyday, whether it’s Facebook, Twitter, Google, Netflix, or Amazon, they have three things in common.

Number one, as you stated earlier, they're in the cloud. These are cloud-based applications. You don’t need to install hardware or custom-configure it. You don’t need to install or manage or maintain software. It is all accessible through a web browser, whether you are in Peoria or Paris.
That gives you an additional level of trust in the purchase decision you're about to make.


The second component, as we just stated, is that all of these are ultimately networks. They are large communities of individuals, and oftentimes companies, that are digitally connected.

Think about Amazon. You don’t think about connecting to each individual merchant. Whether you want to buy a book or a blender, the merchants are all connected for you. You don’t think about settling out with Visa or MasterCard and how to integrate to them. They're all connected in there for you.

And to your last part, you have cloud-based technology, communities, and then capabilities. That's where the intelligence in the community comes in.

Again, think about Amazon. You want to buy something and you get expert opinions on "folks who have bought this product have also bought this product." That's community intelligence that helps you make a more informed decision.
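The "folks who have bought this also bought" intelligence Minahan describes can be sketched as simple co-occurrence counting over order histories. The order data below is made up, and a real recommender is far more sophisticated than raw co-occurrence:

```python
from collections import Counter

# Toy "also bought" sketch: rank items by how often they appear in the
# same order as the item being viewed. Data is hypothetical.
orders = [
    {"book", "blender"},
    {"book", "lamp"},
    {"book", "blender", "lamp"},
    {"blender", "kettle"},
]

def also_bought(item):
    """Other items ranked by co-occurrence with `item` across orders."""
    counts = Counter()
    for order in orders:
        if item in order:
            counts.update(order - {item})
    return [name for name, _ in counts.most_common()]

print(also_bought("kettle"))        # ['blender']
print(sorted(also_bought("book")))  # blender and lamp each co-occur twice
```

The community's aggregate behavior, not any one shopper's, is what produces the recommendation -- which is Minahan's point about network-based models.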

Secondly, you get peer opinions, other participants in the community that rate the product that you are considering buying. That gives you an additional level of trust in the purchase decision you're about to make.

These are the types of things that are only available in a network-based model. These are the types of things that are also available to businesses in a business network-type model.

Behavioral shift

Gardner: Well, we've certainly seen a behavioral shift in people’s adoption, and even enthusiasm for these sorts of activities. We're seeing it in their personal lives. When we now apply this to enterprises, to B2B activities, to commerce, we can find that the processes are uniquely actionable and automated in the cloud, even more so than in an enterprise system of record, which could be fairly brittle.

But what we're also seeing -- and I think it's fairly unique -- is the ability to adjust on the fly. So we have automation and governance, but we also have exception management. We have a confluence of actionable automation that's governed and managed with insight, but we can also adjust these things on the go. I think that's also fairly new to this networked environment.

Minahan: Absolutely. The power of a network using Metcalfe's Law is that each new member delivers incremental value to every existing member. Part of that is it does allow you as a business to be far more responsive. It allows you to make more informed decisions, as we talked about. You're not just making decisions based on your own input, but you're making decisions based on relationship and transaction history, as well as community opinion of a particular trading partner.
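Metcalfe's Law, as Minahan applies it here, falls out of simple counting: n members can form n(n-1)/2 pairwise links, so the n-th member to join creates n-1 new links -- incremental value for every existing member. A minimal sketch:

```python
# Metcalfe's Law as counting: network value tracks the number of
# possible pairwise links among members.
def connections(n):
    """Possible pairwise links among n members (n choose 2)."""
    return n * (n - 1) // 2

def marginal_links(n):
    """New links created when the n-th member joins: n - 1."""
    return connections(n) - connections(n - 1)

print(connections(10))     # 45
print(marginal_links(11))  # the 11th member links to all 10 existing members
```

Because the marginal contribution grows with network size, later joiners add more value than earlier ones did -- the network effect Minahan describes.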

In a networked environment, you can quickly find new peers or partners to help you execute a process. You can get informed community input on how that partner or peer has performed in the past, so you can make an educated decision about when and how to find alternative sources of supply, for example, or find potential employees you could be matched with through the network.

Those are the types of things that a network model allows you to do, to make more informed decisions, to be much more responsive, and ultimately, have far greater transparency and visibility into the process.
Ariba is facilitating collaborative business commerce, allowing buyers, sellers, and other parties involved in the commerce process to reach outside their four walls.


Gardner: So we have gone into at least the business level, beyond this notion of individual networks to systems of record and core business functions being networked, being loosely coupled, and therefore part of a larger business process.

Ariba has, I think, some of the critical business functions in its sights to extend further into this networked value. Tell me a little bit about some of the core themes at Ariba LIVE and why networking -- taking advantage of some of these larger trends that we have talked about -- applies to business processes and some of the core business functions that all companies share?

Minahan: At the end of the day, Ariba is facilitating collaborative business commerce, allowing buyers, sellers, and other parties involved in the commerce process to reach outside their four walls, to connect their systems and their processes to get greater transparency into that. That's a theme that's carrying through to Ariba LIVE.

Ariba LIVE’s theme will be around this networked enterprise, and how you enable a networked enterprise and what it means for you as a buyer, as a seller, or as a chief financial officer.

Industry leading

When you look across the agenda for Ariba LIVE, it's filled with industry-leading companies that have already embraced this network approach. Whether it's Anglo American, one of the largest mining companies, talking about how they are leveraging a networked model to identify, develop, and collaborate with sources of supply and new suppliers in the most remote regions of the world, driving a sustainable supply chain.

Whether it's Sodexo, one of the largest food-service companies that's creating some innovative ways in which they are using the network to support their highly perishable, fast-churn supply chain, and gain insight, both for them and for their trading partners. Procurement and IT have worked together to develop a networked model that allows them to be highly responsive in a very perishable supply-chain type environment.

Or whether it's GlaxoSmithKline that's leveraging a network-based environment to not only automate its invoicing process, but to help optimize cash flow, both for them and their partners.

In addition, you have a host of other companies that are driving this networked supply chain model.

We also have a spotlight keynote speaker who has driven real innovation in the federal sector. The first CIO of the United States, Vivek Kundra, is going to talk about how he developed the Cloud First Policy within the Federal Government.

That means not just lowering the TCO of deploying technology to automate existing processes, but creating a new community, a new interface, a new way for the US Federal Government to connect and share information and big data with its constituents. And that's the type of thing that's going on in the public sector, as well as in the private sector.

I have the honor of being the MC this year, and it is an honor when you look out across the keynotes that we have and see what they've been able to accomplish by adopting this networked model and how it has allowed them to drive not just their supply chain strategy, but their business strategy. We think it's going to be a phenomenal show.

High-profile innovations

In addition to that content, obviously because it is Ariba LIVE, we'll be launching some very interesting and high-profile innovations, as well as partnerships that will help buyers and sellers simplify their commerce process and get more easily connected to their trading partners.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: Ariba. Register for Ariba LIVE.
You may also be interested in: