Friday, September 3, 2010

ZapThink defines IT transformation crisis points in 2020 vision framework

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


By Jason Bloomberg

In our last ZapFlash, The Five Supertrends of Enterprise IT, ZapThink announced our new ZapThink 2020 conceptual framework that helps organizations understand the complex interrelationships among the various forces of change impacting IT shops over the next 10 years, and how to leverage those forces of change to achieve the broader goals of the organization.

In many ways, however, ZapThink 2020 is as much about risk mitigation as it is about strategic benefit. Every element of ZapThink 2020 is a problem, as well as an opportunity. Nowhere is this focus on risk mitigation greater than with ZapThink 2020’s seven Crisis Points.

Defining a crisis point

Of course, life in general and business in particular are both filled with risks, and a large part of any executive’s job description is dealing with everyday crises. A Crisis Point, however, goes beyond everyday, garden-variety firefighting. To be a Crisis Point, the underlying issue must be both potentially game-changing and largely unexpected. The element of surprise is what makes each Crisis Point especially dangerous – not that the crisis itself is necessarily a surprise, but rather just how transformative the event promises to be.

Here then are ZapThink 2020’s seven Crisis Points, why they’re surprising, and why they’re game-changing. Over the next several months we’ll dive deeper into each one, but for now, here’s a high-level overview.
Collapse of enterprise IT – Enterprises that aren’t in the IT business stop doing their own IT and, furthermore, move their outsourced IT off-premises.

Why is it that so many enterprises today handle their own IT, and in particular, write their own software? They use office furniture, but no enterprise would think of manufacturing its own – unless, of course, it’s in the office furniture business.

The game-changing nature of this Crisis Point is obvious, but what’s surprising will be just how fast enterprises rush to offload their entire IT organizations, once it becomes clear that the first to do so have achieved substantial benefits from this move.

IPv4 exhaustion –
Every techie knows that we’re running out of IP addresses, because the IPv4 address space only provides for about 4.3 billion IP addresses, and they’ve almost all been assigned.

IPv6 is around the corner, but very little of our Internet infrastructure supports IPv6 at this time. The surprise here is what will happen when we run out of addresses: the secondary market for IP addresses will explode.

As it turns out, a long time ago IANA assigned most IP addresses to a select group of Class A holders, who each got a block of about 16.8 million addresses.

Companies like Ford, Eli Lilly, and Halliburton all ended up with one of these blocks. How much money do you think they can make selling them once the unassigned ones are all gone?
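
To put the numbers behind that argument in one place, here is a minimal back-of-the-envelope sketch using Python's standard ipaddress module. The 10.0.0.0/8 network is simply a stand-in for any legacy Class A block; the point is the size of the space, not any particular holder's allocation.

```python
# Back-of-the-envelope numbers behind IPv4 exhaustion: the total IPv4 space,
# the size of a legacy "Class A" (/8) block, and IPv6 for comparison.
import ipaddress

ipv4_total = 2 ** 32                              # ~4.3 billion addresses
class_a_block = ipaddress.ip_network("10.0.0.0/8")  # stand-in for any /8
ipv6_total = 2 ** 128                             # effectively inexhaustible

print(f"Total IPv4 addresses:      {ipv4_total:,}")
print(f"Addresses in one /8 block: {class_a_block.num_addresses:,}")  # 16,777,216
print(f"Total IPv6 addresses:      {ipv6_total:.3e}")
```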

Fall of frameworks –
Is your chief Enterprise Architect your CEO’s most trusted, important advisor? No? Well, why not?

After all, EA is all about organizing the business to achieve its strategic goals in the best way we know how, and the EA is supposed to know how. The problem is, most EAs are bogged down in the details, spending time with various frameworks and other artifacts, to the point where the value they provide to their organizations is unclear.

In large part the frameworks are to blame – Zachman Framework, TOGAF, DoDAF, to name a few. For many organizations, these frameworks are little more than pointless exercises in organizing terminology that leads to checklist architectures.

At this Crisis Point, executives get fed up, scrap their current EA efforts, and bring in an entirely new way of thinking about Enterprise Architecture. Does ZapThink have ideas about this new approach to EA? You bet we do. Stay tuned – or better yet, sign up for our newly revised Licensed ZapThink Architect SOA & Cloud Architecture Boot Camp.

Cyberwar –
Yes, most risks facing IT shops today are security related. Not a day goes by without another virus or Windows vulnerability coming to light.

But what happens when there is a concerted, professional, widespread, expert attack on some key part of our global IT infrastructure? It’s not a matter of if, it’s a matter of when.

The surprise here will be just how effective such an attack can be, and perhaps how poor the response is, depending on who the target is. Will terrorists take down the Internet? Maybe just the DNS infrastructure? Or will this battle be between corporations? Regardless, the world post-Cyberwar will never be the same.

Arrival of Generation Y –
These are the kids who are currently in college, more or less. Not only is this generation the “post-email” generation, they have grown up with social media.

When they hit the workforce, they will hardly tolerate the archaic approach to IT we have today. Sure, some will adapt to fit the current system, but enterprises that capitalize on this generation’s new perspective on IT will gain a strategic advantage.

We saw this generational effect when Generation X hit the workforce around the turn of the century – a cadre of young adults who had never known a world without the Web. That generation was instrumental in shifting the Web from a fad into an integral part of how we do business today. Expect the same from Generation Y and social media.

Data explosion –
As the quantity and complexity of available information exceeds our ability to deal with such information, we’ll need to take a new approach to governance.

ZapThink discussed this Crisis Point in our ZapFlash The Christmas Day Bomber, Moore’s Law, and Enterprise IT. But while an essential part of dealing with the data explosion crisis point is a move to governance-driven Complex Systems, we place this Crisis Point in the Democratization of Technology Supertrend.

The shift in thinking will be away from the more-is-better, store-and-analyze school of data management to a much greater focus on filtering and curating information. We’ll place increasingly greater emphasis on smaller quantities of information, ensuring that the information we do keep is optimally valuable.
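
To make the filter-and-curate idea concrete, here is a minimal, hypothetical sketch in Python: rather than storing every event for later analysis, keep only the records that clear a relevance bar and stamp them with curation metadata. The record fields and the scoring rule are invented for illustration and are not drawn from the ZapFlash.

```python
# Hypothetical sketch: filter-and-curate instead of store-everything.
# The record shape and the relevance rule are invented for illustration.
from datetime import datetime, timezone

def relevance(record: dict) -> float:
    """Toy scoring rule: flagged records matter, routine ones mostly don't."""
    return 1.0 if record.get("flagged") else 0.2

def curate(stream, threshold=0.5):
    """Keep only records worth acting on, stamped with when they were curated."""
    for record in stream:
        if relevance(record) >= threshold:
            yield {**record, "curated_at": datetime.now(timezone.utc).isoformat()}

raw_events = [
    {"id": 1, "flagged": True,  "detail": "watchlist name match"},
    {"id": 2, "flagged": False, "detail": "routine ticket purchase"},
]
print(list(curate(raw_events)))  # only the flagged event survives
```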

Enterprise application crash –
The days of “Big ERP” are numbered – as well as those of “Big CRM” and “Big SCM” and … well, all the big enterprise apps. These lumbering monstrosities are cumbersome, expensive, inflexible, and filled at their core with ancient spaghetti code.

There’s got to be a better way to run an enterprise. Fortunately, there is. And once enterprises figure this out, one or more of the big enterprise app vendors will be caught by surprise and go out of business. Will it be one of your vendors?
The ZapThink take

We can’t tell you specifically when each of these Crisis Points will come to pass, or precisely how they will manifest. What we can say with a good amount of certainty, however, is that you should be prepared for them. If one or another proves to be less problematic or urgent than feared, then we can all breathe a sigh of relief. But should one come to pass as feared, then the organizations who have suitably prepared for it will not only be able to survive, but will be able to take advantage of the fact that their competition was not so well equipped.

The real challenge with preparing for such Crisis Points is in understanding their context. None of them happens in isolation; rather, they are all interrelated with other issues and the broader Supertrends that they are a part of.

That’s where ZapThink comes in. We are putting together a poster that will help people at a variety of organizations understand the context for change in their IT shops over the next ten years, and how those changes will impact the business. We’re currently looking for sponsors. Drop us a line if you’d like more information.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

You may also be interested in:

Tuesday, August 31, 2010

Process automation elevates virtualization use while transforming IT's function to app and cloud service broker

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The trap of unchecked virtualization complexity can have a stifling effect on the advantageous spread of virtualization in data centers.

Indeed, many enterprises may think they have already exhausted their virtualization paybacks, when in fact, they have only scratched the surface of the potential long-term benefits.

Automation, policy-driven processes and best practices are offering more opportunities for optimizing virtualization so that server, storage, and network virtualization can move from points of progress into more holistic levels of adoption.

The goals then are data center transformation, performance and workload agility, and cost and energy efficiency. Many data centers are leveraging automation and best practices to attain 70 percent and even 80 percent adoption rates.

By taking such a strategic outlook on virtualization, process automation sets up companies to better exploit cloud computing and IT transformation benefits at the pace of their choosing, not based on artificial limits imposed by dated or manual management practices.

To explore how automation can help achieve strategic levels of virtualization, BriefingsDirect brought together panelists Erik Frieberg, Vice President of Solutions Marketing at HP Software, and Erik Vogel, Practice Principal and America's Lead for Cloud Resources at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Vogel: Probably the biggest misconception that I see with clients is the assumption that they're fully virtualized, when they're probably only 30 or 40 percent virtualized. They've gone out and done the virtualization of IT, for example, and they haven't even started to look at Tier 1 applications.

The misconception is that we can't virtualize Tier 1 apps. In reality, we see clients doing it every day. The broadest misconception is what virtualization can do and how far it can get you. Thirty percent is the low-end threshold today. We're seeing clients who are 75-80 percent virtualized in Tier 1 applications.

Frieberg: The three misconceptions I see a lot are, one, automation and virtualization are just about reducing head count. The second is that automation doesn't have as much impact on compliance. The third is if automation is really at the element level, they just don't understand how they would do this for these Tier 1 workloads.

You're starting to see the movement beyond those initial goals of eliminating people to ensuring compliance. They're asking how do I establish and enforce compliance policies across my organization, and beyond that, really capturing or using best practices within the organization.

When you look at the adoption, you have to look at where people are going, as far as the individual elements, versus the ultimate goal of automating the provisioning and rolling out a complete business service or application.

When I talk to people about automation, they consistently talk about what I call "element automation." Provisioning a server, a database, or a network device is a good first step, and we see gaining market adoption of automating these physical things. What we're also seeing is the idea of moving beyond the individual element automation to full process automation.

Self-service provisioning

As companies expand their use of automation to full services, they're able to reduce that time from months down to days or weeks. This is what some people are starting to call cloud provisioning or self-service business application provisioning. This is really the ultimate goal -- provisioning these full applications and services versus what is often IT’s goal -- automating the building blocks of a full business service.
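
As a rough illustration of the distinction being drawn here, the sketch below contrasts element automation with full-service provisioning: each small function stands in for an automated low-level task, and one composite call stands in for a self-service request that rolls out the whole business service. The names and steps are assumptions for illustration only, not any HP product API.

```python
# Hypothetical sketch: element automation vs. full-service provisioning.
# Each "element" function stands in for an automated low-level task; the
# provision_service() function composes them into one self-service request.
# None of these names correspond to a real product API.

def provision_server(name: str) -> dict:
    return {"type": "server", "name": name, "status": "running"}

def provision_database(name: str) -> dict:
    return {"type": "database", "name": name, "status": "online"}

def configure_network(service: str) -> dict:
    return {"type": "network", "service": service, "status": "configured"}

def provision_service(service_name: str) -> list:
    """Full process automation: one request rolls out the whole business service."""
    return [
        provision_server(f"{service_name}-app01"),
        provision_database(f"{service_name}-db01"),
        configure_network(service_name),
    ]

# A self-service request: weeks of coordinated element-level tasks collapse
# into a single, repeatable call.
for component in provision_service("order-entry"):
    print(component)
```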

This is where you're starting to see what some people call the "lights out" data center. It has the same amount or even less physical infrastructure using less power, but you see the absence of people. These large data centers just have very few people working in them, but at the same time, are delivering applications and services to people at a highly increased rate rather than as traditionally provided by IT.

Vogel: One of the challenges that our clients face is how to build the business case for moving from 30 percent to 60 or 70 percent virtualized. This is an ongoing debate within a number of clients today, because they look at that initial upfront cost and see that the investment is probably higher than what they were anticipating. I think in a lot of cases that is holding our clients back from really achieving these higher levels of virtualization.

In order to really make that jump, the business case has to be made beyond just reduction in headcount or less work effort. We see clients having to look at things like improving availability, being able to do migrations, streamlined backup capabilities, and improved fault-tolerance. When you start looking across the broader picture of the benefits, it becomes easier to make a business case to start moving to a higher percentage of virtualization.

One of the things we saw early on with virtualization is that just moving to a virtual environment does not necessarily reduce a lot of the maintenance and management that we have, because we haven’t really done anything to reduce the number of OS instances that have to be managed.

More than just asset reduction

The benefits are relatively constrained, if we look at it from just a physical footprint reduction. In some cases, it might be significant if a client is running out of data-center space, power, or cooling capacity within the data center. Then, virtualization makes a lot of sense because of the reduction in asset footprint.

But, when we start looking at coupling virtualization with improved process and improved governance, thereby reducing the number of OS instances, application rationalization, and those kinds of broader process type issues, then we start to see the big benefits come into play.

Now, we're not talking just about reducing the asset footprint. We're also talking about reducing the number of OS instances. Hence, the management complexity of that environment will decrease. In reality, the big benefits are on the logical side and not so much on the physical side.

Frieberg: What we're seeing in companies is that they're realizing that their business applications and services are becoming too complex for humans to manage quickly and reliably.

The demands of provisioning, managing, and moving in this new agile development environment, and in this environment of hybrid IT where you're consuming more business services, are really moving beyond what a lot of people can manage. The idea is that they are looking at automation to make their life easier, to operate IT in a compliant way, and also deliver on the overall business goals of a more agile IT.

Companies are almost going through three phases of maturity when they do this. The first aspect is that a lot of automation revolves around "run book automation" (RBA), which is this physical book that has all these scripts and processes that IT is supposed to look at.

But, what you find is that their processes are not very standardized. They might have five different ways of configuring your device, resetting the server, and checking why an application isn’t working.

So, as we look at maturity, you’ve got to standardize on a set of ways. You have to do things consistently. When you standardize methods, you then find out you're able to do the second level of maturity, which is consolidate.

Transforming how IT operates

Vogel: It becomes more than just talking about the hardware or the virtualization, but rather a broader question of how IT operates and procures services. We have to start changing the way we are thinking when we're going to stand up a number of virtual images.

When we start moving to a cloud environment, we talk about how we share a resource pool. Virtualization is obviously key and an underlying technology to enable that sharing of a virtual resource pool.

We're seeing the virtualization providers coming out with new versions of their software that enable very flexible cloud infrastructures.

This includes the ability to create hybrid cloud infrastructures, which are partially a private cloud that sits within your own site, and the ability to burst seamlessly to a public cloud as needed for excess capacity, as well as the ability to seamlessly transfer workloads in and out of a private cloud to a public cloud provider as needed.
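
To ground the bursting idea, here is a minimal sketch of a capacity-threshold placement policy, under the assumption that overflow simply spills to a public cloud. Real cloud-management tooling weighs many more factors (cost, data locality, compliance), and the capacity figures below are invented.

```python
# Hypothetical sketch of a hybrid-cloud bursting policy: run workloads in the
# private cloud until a utilization threshold is reached, then place the
# overflow in a public cloud. Threshold, capacity, and workloads are all
# illustrative assumptions, not any vendor's algorithm.

PRIVATE_CAPACITY = 100      # arbitrary capacity units in the private cloud
BURST_THRESHOLD = 0.80      # burst once the private pool is 80% utilized

def place_workload(demand: int, private_used: int) -> str:
    projected = (private_used + demand) / PRIVATE_CAPACITY
    return "private-cloud" if projected <= BURST_THRESHOLD else "public-cloud"

used = 0
for workload, demand in [("batch-reports", 30), ("web-tier", 40), ("dev-test", 25)]:
    target = place_workload(demand, used)
    if target == "private-cloud":
        used += demand
    print(f"{workload}: {demand} units -> {target}")
```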

We're seeing a shift toward IT becoming more of a service broker, where services are sourced and not just provided internally, as was traditionally done. Now, they're sourced from a public cloud provider or a public-service provider, or provided internally on a private cloud or on a dedicated piece of hardware. IT now has more choices than ever in how they go about procuring that service.

But it becomes very important to start talking about how we govern that, how we control who has access, how we can provision, what gets provisioned and when. ... It's a much bigger problem and a more complicated problem as we start going to higher levels of virtualization and automation and create environments that start to look like a private cloud infrastructure.

I don’t think anybody will question that there are continued significant benefits, as we start looking at different cloud computing models. If we look at what public cloud providers today are charging for infrastructure, versus what it costs a client today to stand up an equivalent server in their environment, the economics are very, very compelling to move to a cloud-type of model.
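
As a purely illustrative back-of-the-envelope comparison (every figure below is an invented assumption, not actual pricing from HP or any cloud provider), the arithmetic behind that claim looks something like this:

```python
# Purely illustrative comparison of an on-premises server vs. a public cloud
# instance. Every number is an invented assumption for the sake of the
# arithmetic, not actual pricing from any vendor.

HOURS_PER_YEAR = 24 * 365

# Hypothetical on-premises server, amortized over three years.
server_capex = 6000.0          # purchase price
annual_power_cooling = 800.0
annual_admin_share = 1500.0    # slice of admin labor attributed to this box
on_prem_per_year = server_capex / 3 + annual_power_cooling + annual_admin_share

# Hypothetical public cloud instance billed by the hour.
cloud_rate_per_hour = 0.34
cloud_per_year = cloud_rate_per_hour * HOURS_PER_YEAR

print(f"On-premises, per year:  ${on_prem_per_year:,.2f}")
print(f"Public cloud, per year: ${cloud_per_year:,.2f}")
```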

Governance crosses boundary to strategy

Without the proper governance in place, we can actually see cost increase, but when we have the right governance and processes in place for this cloud environment, we've seen very compelling economics, and it's probably the most compelling change in IT from an economic perspective within the last 10 years.

Frieberg: If you want to automate and virtualize an entire service, you’ve got to get 12 people to get together to look at the standard way to roll out that environment, and how to do it in today’s governed, compliant infrastructure.

The coordination required, to use a term used earlier, isn’t just linear. It sometimes becomes exponential. So there are challenges, but the rewards are also exponential. This is why it takes weeks to put these into production. It isn’t the individual pieces. You're getting all these people working together and coordinated. This is extremely difficult and this is what companies find challenging.

The key goal here is that we work with clients who realize that you don’t want a two-year payback. You want to show payback in three or four months. Get that payback and then address the next challenge and the next challenge and the next challenge. It's not a big bang approach. It's this idea of continuous payback and improvement within your organization to move to the end goal of this private cloud or hybrid IT infrastructure.

Vogel: We've developed a capability matrix across six broad domains to look at how a client needs to start to operationalize virtualization as opposed to just virtualizing a physical server.

We definitely understand and recognize that it has to be part of the IT strategy. It is not just a tactical decision to move a server from physical machine to a virtual machine, but rather it becomes part of an IT organization’s DNA that everything is going to move to this new environment.

We're really going to start looking at everything as a service, as opposed to as a server, as a network component, as a storage device, how those things come together, and how we virtualize the service itself as opposed to all of those unique components.

It really becomes baked into an IT organization’s DNA, and we need to look very closely at their capability -- how capable an organization is from a cultural standpoint, a governance standpoint, and a process standpoint to really operationalize that concept.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, August 30, 2010

HP eyes automated apps deployment, 'standardized' private cloud creation with integrated CloudStart package

Clearly seeing a sweet spot amid complex and costly applications support for Microsoft Exchange, SharePoint and SAP R/3 implementations, HP on Monday delivered a CloudStart package of turnkey private cloud infrastructure capabilities with a self-service, SaaS portal included.

Delivered at the VMworld conference in San Francisco, HP is taking a practical approach for creating cloud and shared services deployment models that make quick economic sense by targeting costly and sprawling server farms that support seas of Microsoft, SAP and other "out of the box" business applications as services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In doing so, HP is moving quickly to try and carve out a leadership position for the fast (30 days, they say) set-up of private clouds, coupled with the ease of a SaaS-based deployment, maintenance and ongoing operations portal that implements and supports the clouds and the applications they support. The targeting of costly and often inefficient Microsoft Exchange and SharePoint farms also points to the creeping separation of Microsoft's and HP's infrastructure -- and cloud -- strategies.

At the same time, HP's cloud hardware, software and services packaging via CloudStart exploits HP's product strengths while setting the stage for enterprise application stores, service catalogs of metered apps as services, more choices of moving to hybrid clouds, and easy segues to multiple sourcing and hosting options, all of which play into HP's Enterprise Services (nee EDS) on the hosting side.

CloudStart is also what I believe is only the opening salvo in a comprehensive private cloud initiative and strategy drive that HP aims to win. Expect more developments through the fall on HP Cloud Service Automation (CSA) and applications lifecycle management products, services and professional services support offerings.

HP's VMworld news today also comes on the heels of a slew of private cloud product and services offerings last week.

Partners form ecosystem approach

The HP CloudStart package -- with third-party partner ecosystem players like Intel, Samsung, VMware and Carnegie Mellon -- combines the features of HP BladeSystem Matrix, Converged Infrastructure, Cloud Service Automation stack, StorageWorks, and other governance and management offerings. That on top of the globally available HP server hardware and networking hardware portfolios. HP says, however, that CloudStart is designed to integrate well with an enterprise's existing heterogeneous platforms, any hypervisor, and third-party and open source middleware.

Such mission-critical aspects as disaster recovery, security, storage efficiency, governance, patch support, compliance and audit support, and usage metering and chargeback billing are also included in the CloudStart offerings and road map, HP said.

HP also announced Cloud Maps for use with apps and solutions from VMware, SAP, Oracle and Microsoft to significantly speed application deployment via tested, cloud-ready app configurations. Cloud Maps are imported directly into private cloud environments, enabling customers to develop a catalog of cloud services.

The combination of the cloud elements could lead to a "standardized" approach for creating and expanding private clouds throughout an enterprise, said Paul Miller, vice president, Solutions and Strategic Alliances, Enterprise Servers, Storage and Networking at HP. The solution is designed to be deployed on-premises but uses an HP-operated, off-premises and SaaS setup and operations portal.

And that SaaS, self-service aspect could be a key to the practical deployment of enterprise clouds, which HP sees as rapidly growing in interest in a "multi-source IT world," even if enterprises are not quite sure how to begin. HP recognizes that moving from a non-cloud problem set of complexity and sprawl to a cloud-based world of complexity and sprawl sort of defeats the purpose and economics.

IT leaders need cloud road map

"When CIOs have a simplified way to map their path to the private cloud, including all the necessary components from infrastructure and applications to services, they are more likely to identify a comprehensive and realistic deployment scenario for their organization," said Matt Eastwood, group vice president, Enterprise Platform Group, IDC, in a release. "With the HP CloudStart solution, clients now have a way to accelerate the adoption of service-oriented environments for a private cloud that matches the speed, flexibility and economies of public cloud without the risk or loss of control."

So CloudStart works to consolidate, integrate, and converge the cloud support elements -- and in doing so creates a compelling alternative to IT infrastructure as usual. And maybe a standard on-ramp to the use of heterogeneous private clouds?

The HP CloudStart solution is offered now in Asia-Pacific and Japan and expected to be available globally in December.

I see the self-service portal as a critical differentiator, one that could also lead to the "app store" model we know from consumer and entertainment uses moving into the enterprise apps space. Because once a private cloud has been deployed, and if it is managed via an HP portal, applications in a service catalog could then be chosen and deployed in a common manner, all with a managed, pay-as-you-go metered model or other SLAs. Indeed, other apps within the enterprise could also be brought into the cloud to be metered and charged back by usage to the business users.
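
To make the metered, pay-as-you-go idea concrete, here is a hypothetical chargeback sketch; the service names, rates, and usage records are invented for illustration and imply nothing about HP's actual billing model.

```python
# Hypothetical sketch of metered chargeback: each business unit is billed for
# the cloud services it actually consumed. Rates and usage data are invented.

RATES = {  # dollars per unit-hour (illustrative only)
    "exchange-mailbox": 0.10,
    "sharepoint-site": 2.50,
    "sap-instance": 40.00,
}

usage = [
    {"unit": "finance",   "service": "sap-instance",     "unit_hours": 720},
    {"unit": "marketing", "service": "sharepoint-site",  "unit_hours": 310},
    {"unit": "finance",   "service": "exchange-mailbox", "unit_hours": 9000},
]

def chargeback(records):
    """Roll usage records up into a bill per business unit."""
    bills = {}
    for r in records:
        bills[r["unit"]] = bills.get(r["unit"], 0.0) + RATES[r["service"]] * r["unit_hours"]
    return bills

for unit, amount in chargeback(usage).items():
    print(f"{unit}: ${amount:,.2f}")
```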

Kind of reminds me of getting the values of SOA but having someone else build it out.

Accountants love this model, as it helps move IT from a cost center into an SLA-driven service center. Over time a variety of hybrid cloud offerings -- perhaps leveraging the standardized CloudStart deployment model and common billing model -- could be explored and transitioned to. That is, HP could then go to the enterprises using CloudStart and, via the management portal, offer to run those or other apps on its data centers -- perhaps substantially cutting the total costs of apps delivery.

This way, the enterprise app store and service catalog becomes the interface between the IT managers and the service vendors. IT becomes a procurement and brokering function, amid -- one hopes -- a vibrant market of cloud services offerings. It makes IT into more like any other mature business function ... like materials, logistics, supply chain, HR, energy, facilities, etc.

Future of IT?

Here's where the future of IT is headed. Whatever vendor/supplier/service provider (and its ecosystem) gets to IT as a service first and best, and then offers the best long-term value, support, management and reliability ... wins.

HP clearly wants to be on the short list of such winning providers.

You may also be interested in:

Friday, August 27, 2010

Platform Computing steps up with easy-entry solution for building private clouds

Platform Computing has paved the way to faster private cloud adoption with a low-risk, low-cost way for companies to evaluate their use of cloud computing. The Platform ISF Starter Pack, announced this week, will enable architects and IT managers to get a cloud sandbox environment up and running in less than 30 minutes, the company says.

The $4,995 Starter Pack announcement comes as a slew of vendors are focused on the adoption path for private clouds. More private cloud developments are expected at next week's VMworld conference.

Platform ISF manages application workloads across multiple virtual machine (VM) technologies and provisioning tools. It includes self-service, automated provisioning and chargeback capabilities. It supports multiple VM technologies, including ESX, Xen, KVM and Hyper-V, as well as popular provisioning tools, such as Red Hat Satellite, IBM xCAT, Symantec Altiris, and Platform Cluster Manager. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

“Organizations have plenty of toolkits to choose from as they evaluate private cloud, but they require multiple tools that users must string together themselves,” said James Pang, Vice President Product Management for Platform. “What’s more, these toolkits can cost $50,000 or more, and require 30-plus days of onsite consulting to build and customize an evaluation environment. We wanted to provide a cheap and easy way for users to get up and running quickly with a single product."

Software and best practices

The ISF Starter Pack, which costs $4,995, includes software, best-practices advice, and help to set up a private cloud:
  • One-year Platform ISF term license for 10 sockets, including support

  • Half-day orientation training

  • Half-day cloud building consultation

  • Integration advice for Platform ISF with your internal tools
You may also be interested in:

Thursday, August 26, 2010

Trio of cloud companies collaborate on new private cloud platform offerings

A trio of cloud ecosystem companies have collaborated to offer an integrated technology platform that aims to deliver a swift on-ramp to private and hybrid cloud computing models in the enterprise.

newScale, rPath and Eucalyptus Systems are combining their individual technology strengths in a one-two-three punch that promises to help businesses pump up their IT agility through cloud computing. [Disclosure: rPath is a sponsor of BriefingsDirect podcasts.]

The companies will work with integration services provider MomentumSI to deliver this enterprise-ready cloud platform, integrating infrastructure for private and hybrid clouds with enterprise IT self-service and system automation.

No cloud-in-a-box

From my perspective, cloud solutions won’t come in a box, nor are traditional internal IT technologies and skills apt to seamlessly spin up mission-ready cloud services. Neither are cloud providers so far able to provide custom or "shrink-wrapped" offerings that conform to a specific enterprise’s situation and needs. That leaves a practical void, and therefore an opportunity, in the market.

This trio of companies is betting that self-service private and hybrid cloud computing demand will continue to surge as companies press IT departments to deliver on-demand infrastructure services readily available from public clouds like Amazon EC2. Many IT organizations aren’t ready to make that leap, however; they don’t have the infrastructure or process maturity to transition to the public cloud. That’s where the new solution comes in.

Incidentally, you should soon expect similar cloud starter packages of technology and services, including SaaS management capabilities, from a variety of vendors and partnerships. Indeed, next week's VMworld conference should be rife with such news.

The short list of packaged private cloud providers includes VMware, Citrix, TIBCO, Microsoft, HP, IBM, Red Hat, WSO2, RightScale, RackSpace, Progress Software, Platform Computing and Oracle/Sun. Who else would you add to the list? [Disclosure: HP, Progress, Platform Computing and WSO2 are sponsors of BriefingsDirect podcasts].

Well, add Red Hat, which this week preempted VMworld with news of its own path to private cloud offerings, saying only Red Hat and Microsoft can offer the full cloud lifecycle parts and maintenance. That may be a stretch, but Red Hat likes to be bold in its marketing.

Behind the scenes

Here’s how the newScale, rPath and Eucalyptus Systems collaboration looks under the hood. newScale, which provides self-service IT storefronts, brings its e-commerce ordering experience to the table. newScale’s software lets IT run on-demand provisioning, enforce policy-based controls, manage lifecycle workloads and track usage for billing.

rPath will chip in its technologies for automating system development and maintenance. With rPath in the mix, the platform can automate system construction, maintenance, and on-demand image generation for deployment across physical, virtual and cloud environments.

For its part, Eucalyptus Systems, an open source private cloud software developer, will offer infrastructure software that helps organizations deploy massively scalable private and hybrid cloud computing environments securely. MomentumSI comes in on the back end to deliver the solution.

It's hard to imagine that full private and/or hybrid clouds are fully ready from any single vendor. And who would want that, given the inherent risk of lock-in a one-stop cloud shop would entail? Best-of-breed and open source components work just as well for cloud as for traditional IT infrastructure approaches. Server, storage and network virtualization may make the ecosystem approach even more practical and cost-efficient for private clouds. Pervasive and complete management and governance are the real keys.

My take is that ecosystem-based solutions are the first, best way that many organizations will actually use and deploy cloud services. The technology value triumvirate of newScale, rPath and Eucalyptus -- with the solution practice experience of MomentumSI -- is an excellent example of the ecosystem approach most likely to become the way that private cloud models actually work for enterprises for the next few years.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Tuesday, August 17, 2010

Modern data centers require efficiency-oriented changes in networking with eye on simplicity, automation

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Special Offer: Gain insight into best practices for transforming your data center by downloading three whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

As data center planners seek to improve performance and future-proof their investments, the networking leg on the infrastructure stool can no longer stand apart. Advances such as widespread virtualization, increased modularity, converged infrastructure, and cloud computing are all forcing a rethinking of data center design.

And so the old rules of networking need to change, because specialized, labor-intensive and homogeneous networking systems need to be brought into the total modern data center architecture. The increasingly essential role of networking in data center transformation (DCT) needs to stop being a speed bump and instead cut complexity while spurring on adaptability and flexibility.

Networking must be better architected within -- and not bolted onto -- the DCT future. The networking-inclusive total architecture needs to accommodate the total usage patterns and requirements of both today and tomorrow -- and with an emphasis on openness, security, flexibility, and sustainability.

To learn more about how networking is changing, and how organizations can better architect networking into their data centers future, BriefingsDirect assembled two executives from HP, Helen Tang, Worldwide Data Center Transformation Solutions Lead, and Jay Mellman, Senior Director of Product Marketing in the HP Networking Unit. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Tang: As we all know, in 2010 most IT organizations are wrestling with the three Cs -- reducing cost, reducing complexity, and also tackling the problem of hitting the wall on capacity from a space and energy perspective.

The reason it's happening is because IT is really stuck between two different forces. One is the decades of aging architecture, infrastructure, and facilities they have inherited. The other side is that the business is demanding ever faster services and better improvements in their ability to meet requirements.

The confluence of that has really driven IT to ... a series of integrated data center projects and technology initiatives that can take them from this old integrated architecture to an architecture that’s suited for tomorrow’s growth.

DCT ... includes four things: consolidation, whether it's infrastructure, facilities or application; virtualization and automation; continuity and sustainability, which address the energy efficiency aspect, as well as business continuity and disaster recovery; and last, but not least, converged infrastructure.

Networking involves common problems, solutions

Networking actually plays in all these areas, because it is the connective tissue that enables IT to deliver services to the business. It's very critical. In the past this market has been largely dominated by perhaps one vendor. That’s led to a challenge for customers, as they address the cost and complexity of this piece.

[With DCT] we've seen just tremendous cost reduction across the board. At HP, when we did our own DCT, we were able to save over a billion dollars a year. For some of our other customers, France Telecom for example, it was €22 million in savings over three years -- and it just goes on and on, both from an energy cost reduction, as well as the overall IT operational cost reductions.

Mellman: Today’s architecture is very rigid in the networking space. It's very complex with lots of specialized people and specialized knowledge. It's very costly and, most importantly, it really doesn’t adapt to change.

The kind of change we see, as customers are able to move virtual machines around, is exactly the kind of thing we need in networking and don’t have. So there has been a dramatic change in what's demanded of networking in a data center context.

Within the last couple of years ... customers were telling us that there were so many changes happening in their environments, both at the edge of the network, but also in the data center, that they felt like they needed a new approach.

Look at the changes that have happened in the data center just in the last couple of years -- the rise of virtualization and being able to actually take advantage of that effectively, the pressures on time to market in alignment with the business, and the increasing risk from security and the increasing need for compliance.

Rapid rise in network connections


For example, there's the sheer number of connections, as we went from single large servers to multiple racks of servers, and to multiple virtual machines for services -- all of which need connectivity. We have different management constructs between servers, storage, and networking ... that have been very difficult to deal with.

Tie all these together, and HP felt this is the right time [for a change]. The other thing is that these are problems that are being raised in the networking space, but they have direct linkage to how you would best solve the problem.

We've been in the business for 25 to 30 years and we are successfully the number two vendor in the industry selling primarily at the edge. ... We can now do a better job because we can actually bring the right engineering talent together and solve [networking bottlenecks] in an appropriate way. That balances the networking needs with what we can do with servers, what we can do with storage, with software, with security and with power and cooling, because often times, the solution may be 90 percent networking, but it involves other pieces as well.

There are opportunities where we go from more than 210 different networking components required to serve a certain problem down to two modules. You can kind of see that's a combination of consolidation, convergence, cost reduction, and simplicity, all coming together.

We saw a real requirement from customers to come in and help them create more flexibility, drive risk down, improve time to service and take cost out of the system, so that we are not spending so much on maintenance and operation, and we can put that to more innovation and driving the business forward.

Need for simplicity that begets automation


A couple of key rules drive this. The first is simplicity: the job of a network admin needs to be made as simple, and as automated and orchestrated, as the jobs of SysAdmins or SAN Admins today.

The second is that we want to align networking more fully with the rest of the infrastructure, so that we can help customers deliver the service they need when they need it, to users in the way that they need it. That alignment is just a new model in the networking space.

Finally, we want to drive open systems, first of all because customers really appreciate that. They want standards and they want to have the ability to negotiate appropriately, and have the vendors compete on features, not on lock-in.

Open standards also allow customers to pick and choose different pieces of the architecture that work for them at different points in time. That allows them, even if they are going to work completely with HP, the flexibility and the feeling that we are not locking them in. What happens when we focus on open systems is that we increase innovation and we drive cost out of the system.

What we see are pressures in the data center, because of virtualization, business pressures, and rigidity, giving us an opportunity to come in with a value proposition that really mirrors what we’ve done for 25 years, which is to think about agility, to think about alignment with the rest of IT, and to think about openness and really bringing that to the networking arena for the first time.

For example, we have a product called Virtual Connect, which has a management concept called Virtual Connect Enterprise Manager. It allows the networking team and the server teams to work off the same pool of data. Once the networking team allocates connectivity, the server team can work within that pool, without having to always go back to the networking team and ask for the latest new IP address and new configurations.
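
A simple sketch of that shared-pool idea, using Python's standard ipaddress module: the networking team allocates one block up front, and the server team carves addresses out of it on demand without a new request each time. The 10.20.0.0/22 block and hostnames are placeholders; this illustrates the working model, not how Virtual Connect Enterprise Manager is actually implemented.

```python
# Minimal sketch of working from a pre-allocated connectivity pool.
# Addresses and hostnames are placeholders for illustration only.
import ipaddress

# Networking team: allocate one block, once.
pool = ipaddress.ip_network("10.20.0.0/22")
available = pool.hosts()            # iterator over usable host addresses

# Server team: draw addresses on demand, without a new networking request.
def next_address() -> ipaddress.IPv4Address:
    return next(available)

for vm in ["web01", "web02", "db01"]:
    print(f"{vm} -> {next_address()}")
```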

HP is really focused on how we bring the power of that orchestration, and the power of what we know about management, to allow these teams to work together without requiring them, in a sense, to speak the same language, when that’s often the most difficult thing that they have to do.

When we look at agility and the ability to improve time-to-service, we are often seeing an order of magnitude or even two orders of magnitude [improvement] by taking a rollout process that might take months -- and turning it into hours or days.

With that kind of flexibility, you avoid the silos, not necessarily just in technology, but in the departments, as requests from the server and storage teams to the networking team. So, there are huge improvements there, if we look at automation and risk. I also include security here.

It's very critical, as part of these, that security be embedded in what we're doing, and the network is a great agent for that. In terms of the kinds of automation, we can offer single panes of glass to understand the service delivery and very quickly be able to look at not only what's going on in a silo, but look at actual flows that are happening, so that we can actually reduce the risk associated with delivering the services.

Cost cuts justify the shift


Finally, in terms of cost, we're seeing -- at the networking level specifically -- reductions on the order of 30 percent to as high as 65 percent by moving to these new types of architectures and new types of approaches, specifically at the server edge, where we deal with virtualization.

HP has been recognizing that customers are increasingly not being judged on the quality of an individual silo. They're being judged on their ability to deliver service, do that at a healthy cost point, and do that as the business needs it. That means that we've had to take an approach that is much more flexible. It's under our banner of FlexFabric.

Tang: The traditional silos between servers and storage and networking are finally coming down. Technology has come to an inflection point. We're able to deliver a single integrated system, where everything can be managed as a whole that delivers incredible simplicity and automation as well as significant reduction in the cost of ownership.

[To learn more] a good place to go is www.hp.com/go/dct. That’s got all kinds of case studies, video testimonials, and all those resources for you to see what other customers are doing. The Data Center Transformation Experience Workshop is a very valuable experience.

Mellman: There are quite a few vendors out there who are saying that the future is all about cloud and the future is all about virtualization. That ignores the fact that the lion's share of what's in a data center still needs to be kept.

You want an architecture that supports that level of heterogeneity and may support different kinds of architectural precepts, depending on the type of business, the types of applications, and the type of pressures on that particular piece.

What HP has done is try to get a handle on what that future is going to look like without prescribing that it has to be a particular way. We want to understand where these points of heterogeneity will be and what will be able to be delivered by a private cloud, public cloud, or by more traditional methods and bring those together, and then net it down to architectural things that make sense.

We realize that there will be a high degree of virtualization happening at the server edge, but there will also be a high degree of physical servers, especially for some big apps that may not be virtualized for a long time -- Oracle, SAP, some of the Microsoft things. Even when they are, they are going to be done with potentially different virtualization technologies.

Physical and virtual

Even with a product like Virtual Connect, we want to make sure that we are supporting both physical and virtual server capabilities. With our Converged Network Adaptors, we want to support all potential networking connectivity, whether it’s Fibre Channel, iSCSI, Fibre Channel over Ethernet or server and data technology, so that we don’t have to lock customers into a particular point of view.

We recognize that most data centers are going to be fairly heterogeneous for quite a long time. So, the building blocks that we have, built on openness and built on being managed and secure, are designed to be flexible in terms of how a customer wants to architect.

It’s best having the customer just step back and say, "Where is my biggest pain point?" The nice thing with open systems is that you can generally address one of those, try it out, and start on that path. Start with a small workable project and get a good migration path toward full transformation.
Special Offer: Gain insight into best practices for transforming your data center by downloading three whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

HP buys Fortify, and it's about time!

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

What took HP so long? Store that thought.

As we’ve stated previously, security is one of those things that have become everybody’s business. Traditionally it was the domain of security professionals focused on perimeter defense, but the exposure of enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and you name it. Security was never part of the computer science curriculum.

But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Otherwise, developers are caught between a rock and a hard place – the pressures of software delivery require skills like speed and agility, and a discipline of continuous integration, while security requires the mental processes of chess players.

At this point, most development/ALM tools vendors have not actively pursued this additional aspect of quality assurance (QA); there are a number of point tools in the wild that may not necessarily be integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both have so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.

Raising the ante

Last year IBM Rational raised the ante with its acquisition of Ounce Labs, providing "white box" static scans of code – in essence, applying debugger-type approaches. Ideally, both should be complementary – just as you debug, then dynamically test code for bugs, do the same for security: white box static scan, then black box hacking test.
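
For a concrete sense of the kind of defect a white-box static scan flags, here is a classic example in Python: SQL built by string concatenation from untrusted input, which a static analyzer would report as an injection risk, alongside the parameterized form that passes. The table and data are invented for illustration.

```python
# Classic defect class that "white box" static analysis tools flag:
# SQL built by string concatenation from untrusted input.
# Table, data, and input are invented for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: the input is spliced straight into the query text.
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())            # returns rows it should not

# Fixed: a parameterized query keeps data out of the SQL grammar.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```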

Over the past year, HP and Fortify have been in a mating dance as HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify's SCA manages this stage as a workflow, and with integration to HP Quality Center, autopopulates defect tracking. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We’ll save discussion of Fortify’s methodology for some other time, but suffice it to say that it was previously part of HP’s plans to integrate security issue tracking as part of its Assessment Management Platform (AMP), which provides a higher level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.

In our mind, we wondered what took HP so long to consummate this deal. Admittedly, while the software business unit has grown under now departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure”, its resources are heavily preoccupied with digesting Palm and 3Com (not to mention, EDS).

The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think that the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.

Finally!

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Friday, August 13, 2010

Google needs to know: What does Oracle really want with Android?

The bombshell that Oracle is suing Google over Java intellectual property in mobile platform powerhouse Android came as a surprise, but in hindsight it shouldn't have.

We must look at the world through the lens that all guns are pointed at Google, and that means that any means to temper its interests and blunt it's potential influence are in play and will be used.

By going for Google's second of only two fiscal jugular veins in Android (the other being paid search ads), Oracle has mightily disrupted the entire mobile world -- and potentially the full computing client market. By asking for an injunction against Android based on Java patent and copyright violations, Oracle has caused a huge and immediate customer, carrier and handset channel storm for Google. Talk about FUD!

Could Oracle extend its injunction requests to handset makers and, more disruptively, to mobile carriers, developers, or even end users? Don't know, but the uncertainty means a ticking bomb for the entire Android community. Oracle's suits therefore can't linger. Time is on Oracle's side right now. Even Google counter-suing does not stop the market pain and uncertainty from escalating.

We saw how that pain works when RIM suffered intellectual property claims against its BlackBerry devices, when RIM was up against a court-ordered injunction wall. Fair or not, right or not, they had to settle and pay to keep the product and their market cap in the right motion. And speed was essential because investors are watching, wondering, worrying. Indeed, RIM should have caved sooner. That's the market-driven, short-term "time is not on our side" of Google's dilemma with Oracle's Java.

When Microsoft had to settle with Sun Microsystems over similar Java purity and license complaints a decade back, it was a long and drawn out affair, but the legal tide seemed to be turning against Microsoft. So Microsoft settled. That's the legal-driven, long-term "time is not on our side" of Google's dilemma with Oracle's Java.

Google is clearly in a tough spot. And so we need to know: What does Oracle really want with Android?

Not about the money

RIM's aggressors wanted money and got it. Sun also needed money (snarky smugness aside) too, and so took the loot from Microsoft and made it through yet another fiscal quarter. But Oracle doesn't need the money. Oracle will want quite something else in order for the legal Java cloud over Android to go away.

Oracle will probably want a piece of the action. But will Oracle be an Android spoiler ... and just work to sabotage Android for license fees as HP's WebOS and Apple's iOS and Microsoft's mobile efforts continue to gain in the next huge global computing market, that is for mobile and thin PC clients?

Or, will Oracle instead fall deeply, compulsively in love with Android ... Sort of a Phantom of the Opera (you can see Larry with the little mask already, no?), swooping down on the sweet music Google has been making with Android, intent on making that music its own, controlled from its own nether chambers, albeit with a darker enterprise pitch and tone. Bring in heavy organ music, please.

Chances are that Oracle covets Android, believes its teachings through Java technology (the angel of class libraries) entitles it to a significant if not controlling interest, and will hold dear Christine ... err, Android, hostage unless the opera goes on the way Oracle wants it to (with license payments all along the way). Bring in organ music again, please.

Trouble is, this phantom will not let his love interest be swept safely back into the arms of Verizon, HTC, Motorola and Samsung. Google will probably have to find a way to make music with Oracle on Android for a long time. And they will need to do the deal quickly and quietly, just like Salesforce.com and Microsoft recently did.

What, me worry?

How did Google let this happen? It's not just a talented young girl dreaming of nightly rose-strewn encores, is it?

Google's mistake is that it has acted like a runaway dog in a nighttime meat factory, with its fangs into everything but with very little fully ingested (apologies to Steve Mills for usurping his analogy). In stepping on every conceivable competitor's (and partner's) toes with hubristic zeal -- yet only having solid success and market domination in a very few areas -- Google has made itself vulnerable with its newest and extremely important success, Android.

Did Google do all the legal blocking and tackling? Maybe it was a beta legal review? Did the Oracle buy of Sun catch it off-guard? Will that matter when market perceptions and disruption are the real leverage? And who are Google's friends now when it needs them? They are probably enjoying the opera from the 5th box.

Android is clearly Google's next new big business, with prospects of app stores, legions of devoted developers, myriad partners on the software and devices side, globally pervasive channels through the mobile carriers, and the potential to extend the same into the tablet and even thin PC arena. Wow, sounds a lot like what Java could have been, what iOS is, and what WebOS wants to be.

And so this tragic and ironic double-cross -- Java coming back to stab Google in the heart -- delivers like an aria, one that is sweet music mostly to HP, Apple, and Microsoft. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

[UPDATE: The stakes may spread far beyond the mobile market into the very future of Java. Or so says Forrester analyst Jeffrey Hammond, who argues that, in light of Oracle’s plans to sue Google over Android, “…this lawsuit casts the die on Java’s future."

"Java will be a slow-evolving legacy technology. Oracle’s lawsuit links deep innovation in Java with license fees. That will kill deep innovation in Java by anyone outside of Oracle or startups hoping to sell out to Oracle. Software innovation just doesn’t do well in the kind of environment Oracle just created," said Hammond.]

See related coverage: