Wednesday, October 27, 2010

New managed and automated paths to private clouds provide swifter adoption at lower risk for more enterprises

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Businesses are looking to cloud-computing models to foster agility and improve time-to-market for new services. Yet attaining cloud benefits can founder without higher levels of unified server, data, network, storage, and applications management.

These typically disparate forms of management must now come together in new ways to mutually support a variety of different cloud approaches -- public, private, and hybrid. Without adoption of such Business Service Automation (BSA) capabilities, those deploying applications on private and hybrid clouds will almost certainly encounter increased complexity, higher risk, and stubborn cost structures.

This latest BriefingsDirect discussion therefore focuses on finding low-risk, high-reward paths to cloud computing by using increased automation and proven reference models for cloud management -- and by breaking down traditional IT management silos. In doing so, the progression toward cloud benefits will come more quickly, at lower total cost, and with an ability to rapidly scale to even more applications and data.

We're here with two executives from HP Software & Solutions to learn more about what BSA is and why it's proving essential to managed and productive cloud computing adoption: Mark Shoemaker, Executive Program Manager for Cloud Computing in the Software & Solutions Group at HP, and Venkat Devraj, Chief Technology Officer for Application Automation, also in HP’s Software & Solutions Group. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Shoemaker: There is hardly a place we go that we don’t end up talking to our customers about cloud. Most of the enterprise customers we talk to are looking at private cloud, the internal cloud solution that they own, that they then provide to their business partners, whether that’s the development teams or other elements in their business. Most of them are looking to build on the virtualization work that they've already done.

They want to improve their productivity and definitely get better utilization out of what they already have. They want IT to be a better partner to the business. What that means is shortening the time the business has to wait for services.

Devraj: There is also an interesting micro trend that’s occurring. A lot of the application teams, end-user business teams, are getting increasingly sophisticated. They're learning about private cloud implementations. Consequently, they're demanding levels of service from IT that are difficult to provide without a private cloud.

For example, because of things like agile development methodologies, application teams are doing a lot more application deployments and code releases than ever before. It's not uncommon to see dozens of application releases for different applications happening during the same day.

IT operations are bombarded with these requirements and requests, and they are unable to keep up using yesterday's processes, which are relatively static. These application teams and business unit teams are quite influential.

They're even willing to fund specific initiatives to allow their teams to work in self-service mode, and IT ops are finding themselves in reactive mode. They have to support them, make their internal processes more fluid and dynamic, and leverage technology that allows that kind of dynamism.

... The third-party companies, the cloud providers, the pure-play server enablers, have an unfair advantage. Because they were started relatively recently, in the last few years, they have the advantage of standardized platforms and delivery units.

A lot to deliver

They can say, "Okay, I'm going to deliver only Linux-based platforms, Windows-based platforms, or certain applications." When you look at the typical enterprise today, however, IT has a lot more to deliver.

There is a lot of prevailing heterogeneity in terms of multiple software platforms and versions. There is a lack of standardization. It's very difficult to talk about cloud and delivery within the enterprise in the same breath, when you look at these kinds of technical challenges.

As a result, IT is under a lot of pressure -- but they have to deliver given the kinds of challenges that they face. That's going to require a lot of education and access to the right kind of technology, training, and guidance.

Shoemaker: Just to add to Venkat's comment, we're seeing the business driving IT and demanding that agility and that flexibility. We talk to a lot of customers whose own coworkers have taken corporate credit cards, gone out into the public cloud, procured space, and begun developing outside of IT. IT really has to get in front of this. They have to manage all this.

... The one thing that's different about cloud is that it really is a supply chain. It's the supply chain of IT technology that the business consumes. If you think about what a supply chain is, it's something that's got to be repeatable. It has to be governed, and it provides a baseline or foundation and building blocks to build those services, which you can then customize on top for the business.

So, the farther up that you can go with your standard building blocks, the less difficult it is to manage and focus on the custom business-facing functions on the front-end.

Cloud has helped us out here in a lot of ways. One of the challenges IT has always had is getting the business to consume standards. Because of a lot of hype in the market, the business is absolutely convinced that they get it, and they want the business benefits that cloud offers.

Even if the business decides to go to a public cloud, they still have to consume those elements in a standard fashion. There's no way out of that.

Devraj: And yet, the software used by these enterprises tends to be disparate, heterogeneous, and requires a lot of domain knowledge to be able to manage, resulting in significant delays and bottlenecks associated with service delivery. Those processes just don’t scale in the cloud.

Different platforms

At Stratavia we had built a patented technology to manage and control varied software stacks, such as databases, web servers, application servers, and even well-known packaged applications, including Microsoft Exchange, Oracle E-Business Suite, and SAP.

The content that I talk about becomes an abstraction layer, where the customer, the end user, the people who consume the services, see a very easy-to-understand service catalog. They can click on it, choose some menu options and some values from a drop-down box, specify exactly what they need, and have the response come back in minutes or hours, rather than days or weeks, as is traditionally the case.

For example, just at the database layer, within the enterprise, it's very common to see four or five different platforms in use, such as DB2, SQL Server, Oracle, and so on. By automating the operations management lifecycle around these layers, Stratavia has made it possible for the enterprise to deliver and manage these assets as a service within the context of the cloud.
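To make that concrete, here is a minimal, hypothetical sketch in Java of the kind of abstraction layer such automation implies. The interface, class names, and connection strings are illustrative assumptions, not Stratavia or HP product APIs: a catalog request carries a few menu choices, and a platform-specific provisioner fulfills it.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative only -- not Stratavia/HP product code. The service catalog collects a few
    // menu choices and hands them to whichever platform-specific provisioner was requested.
    interface DatabaseProvisioner {
        String provision(Map<String, String> request); // returns a connection string when done
    }

    class OracleProvisioner implements DatabaseProvisioner {
        public String provision(Map<String, String> request) {
            // Real automation would create the instance, apply standards, register it, and so on.
            return "jdbc:oracle:thin:@dbhost:1521/" + request.get("name");
        }
    }

    class SqlServerProvisioner implements DatabaseProvisioner {
        public String provision(Map<String, String> request) {
            return "jdbc:sqlserver://dbhost;databaseName=" + request.get("name");
        }
    }

    public class ServiceCatalog {
        // Dispatch a catalog request to the right platform-specific automation.
        static DatabaseProvisioner forPlatform(String platform) {
            if ("oracle".equalsIgnoreCase(platform)) return new OracleProvisioner();
            if ("sqlserver".equalsIgnoreCase(platform)) return new SqlServerProvisioner();
            throw new IllegalArgumentException("Unsupported platform: " + platform);
        }

        public static void main(String[] args) {
            Map<String, String> request = new HashMap<String, String>();
            request.put("name", "ordersdb");
            request.put("size", "small");
            System.out.println(forPlatform("oracle").provision(request));
        }
    }

The person requesting "a small database named ordersdb" never sees the platform-specific steps; supporting DB2 would mean adding one more implementation behind the same catalog entry.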

As more and more of HP's and Stratavia's joint customers started seeing value in that capability, HP brought Stratavia under its BSA/Business Technology Optimization umbrella.

There's a big gap in IT today, which is IT/Ops engineering or IT/Ops architecture. That's a big missing silo within IT/Ops. A lot of the operators today who rely on scripts, command-line stuff, and point-and-click tools need to evolve toward more of an architect's approach. They need to take stock of the big picture, take the tribal knowledge they carry in their heads, look at the out-of-the-box content that HP provides, and select the content that corresponds to that knowledge.

That way, when they go into the cloud, the underlying management concerns -- things like compliance and governance -- don't get out of whack. They're able to successfully take that knowledge, put it in there, and then, in their new role as architects or engineering folks, they're able to watch, measure, and make modifications as appropriate.

So, the role that people play, that key subject matter experts play, is very crucial as part of walking before running with automation.

Gardner: Now that you have mentioned Stratavia, and for the benefit of our listeners and readers, HP has acquired Stratavia, and there was also quite a bit of related product and service news on Sept. 15 around BSA as the acquisition was unveiled.

Shoemaker: Obviously, the Stratavia acquisition was a huge, huge win for us, and it puts us in a great position to help our customers transform their infrastructure. ... Several other things have happened in the last 60 days. We had VMworld, where we presented a cohesive strategy for infrastructure and even PaaS built on our BladeSystem Matrix hardware platform, Converged Infrastructure. We've combined that with two other pieces, along with Cloud Service Automation (CSA) software.

CloudStart is a consulting and professional services-led engagement capability, where we come in and work with the customer to get that transformation process nailed, so we can quickly get them moving toward the cloud benefits.

On the back end of that, there is another piece that we announced called Cloud Maps, which is really more knowledge, but in a different capacity, in that it offers downloadable templates, preconfigured applications, and best practices for sizing.

Cloud is a solution

We see the Stratavia acquisition fueling this fire, because in the end, cloud is a solution, and a solution needs content, and content wins. Content is what the customer is able to consume and use day one, when the solution is in. So it's important. And we've done a lot there.

We now have a best-in-class content provider in Stratavia that’s come on board to help round out the capabilities and add more into what the customer can get out of our solutions in very quick order.

All that sits on a recently refreshed BSA portfolio, with significant enhancements and new capabilities across network, server, and storage automation, that really makes all this happen.



... Let's face it, a lot of CIOs are looking at a data center that's packed full of applications that they probably don't feel they have a good handle on. Now, cloud is coming into the picture, and they've got two things to do here.

Number one, they need to start applying those new business methodologies to IT around providing cloud and the things that go with it, but they also have a transformation piece to go along with that. And that can be very daunting.

What we've done is look at the experience of helping previous customers do that work, and we've applied that to CloudStart and Cloud Maps, CloudStart being the planning and the upfront work that you need to get done.

So, we're right there with you. You don’t have to read chapter one of the book.

Then, as we put the infrastructure in with CSA for Matrix, we're embedding some of the CSA software inside the BladeSystem Matrix frame. So you have a way to build infrastructure as a service (IaaS) and manage it through the platform throughout the lifecycle.

Then, on the back end of that, we have the preconfigured application templates. If I need a SQL Server image to put into the system, I can pull that from Cloud Maps, build it into a framework, and offer it very quickly. I don't have to go and figure out how to size this piece or what the golden template looks like for this application.

It's really about obtaining a running start into the cloud, and one that’s not going to leave you wanting in a year or two. You have to be careful. Cloud is a great enablement technology and a lot of people are looking at IaaS, but that’s the starting point for it, and then you have to manage everything that you put inside of that as well.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Monday, October 25, 2010

FuseSource gains new autonomy to focus on OSS infrastructure model, Apache Community innovation, cloud opportunities

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: FuseSource.

The FUSE family of software is now under the FuseSource name and has today gained new autonomy from Progress Software with its own corporate identity.

Part of the IONA Technologies acquisition by Progress Software in 2008, FuseSource has now become its own company -- still owned by Progress, but more independent -- so it can aggressively pursue its open source business model and leverage the strengths of the community development process.

In anticipation of today's news, our discussion here targets the rapid growth, increased relevance, and new market direction for major open source middleware and integration software under the Apache license.

We'll also look at where FuseSource projects are headed in the near future. [NOTE: Larry Alston also recently joined FuseSource as president.]

Even as the IT mega vendors are consolidating more elements of IT infrastructure, and in some cases buying up open-source projects and companies, open source for enterprises and service providers alike has never been more popular or successful. Virtualization, cloud computing, mobile computing, and services orientation are all supporting more interest and increased mainstream use of open-source infrastructure.

Here now to discuss how FuseSource is therefore evolving we're joined by Debbie Moynihan, Director of Marketing for FuseSource, and Rob Davies, Director of Engineering for FuseSource. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Moynihan: Over the past couple of years, there has been a lot of focus on cost reduction, and that resulted in a lot of people looking at open source who maybe wouldn’t have looked at it in the past.

The other thing that’s really happened with open source is that some of the early adopters who started out with a single project have now standardized on FuseSource products across the entire organization. So there are many more proof-points of large global organizations rolling out open source in mission-critical production environments. Those two factors have driven a lot of people to think about open source, and to start adopting open source.

Then, the whole cloud trend came along. When you think about scaling in the cloud, open source is perfect for that. You don't have to think about the licensing cost as you scale up. So, there are a lot of trends that have been happening, and they have been really helpful. We're very happy that they're helping push open source into the mainstream.

From a FuseSource perspective, we've been seeing over 100 percent growth each year in our business, and that’s part of the reason for some of the things we're going to talk about today.

Davies: We've been around in this space for a while, but the earlier adopters who were just trying it out in distinct groups are now rolling it out into broader production. Because of that, there is this snowball effect. People see that larger organizations are actually using open source for their infrastructure and their integration. That gives them more confidence to do the same.

I recently spoke to a large customer of ours in the telco space. They had this remit: any open source that came in, they wouldn't put into mission-critical situations until they had kicked the tires for a good while -- at least a couple of years.

But because there has been this push for more open source projects following open standards, people are now more willing to have a go using open source software.

Snowball effect

In fact, if you look at the numbers of some of our larger customers, they are using Apache ServiceMix and Apache ActiveMQ to support many thousands of business transactions, and this is business-critical stuff. That alone is enough to give people more confidence that open source is the right way to go.

... When you look at cloud, there are different issues you have to overcome. There is the issue about deploying into the cloud. How do you do that? If you're using a public cloud, there are different mechanisms for deploying stuff. And there are open source projects already in existence to make that easier to do.

This is something we have found internally as well. We deploy a lot of internal software when we're doing our large-scale testing. We make choices about which particular vendors we're going to use, so we have to abstract the way we're doing things. We did that as an open source project, which we have been using internally.

When you get to the point of deploying, it’s how do you actually interface with these things? There is always going to be this continuing trend towards standards for integration. How are you going to integrate? Are you going to use SOAP? Are you going to use RESTful services? Would you like to use messaging, for example, to actually interface into an integration structure?

You have to have choice. You can’t really dictate to use it this way or the other way. You've got to have a whole menu of different options for connecting. This is what we try to provide in our software.

We always try to be agnostic about the technology, as well as about how you connect to the infrastructure that we provide. But we also tend to be as open as we can about the different ways of hooking these disparate systems together. That's the only way you can really be successful in providing something like integration as a service in a cloud-like environment. You have to be completely open.
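As an illustration of that menu-of-options idea, here is a minimal sketch using the Java DSL of Apache Camel, one of the Apache projects behind the FUSE family (it assumes camel-core, camel-jetty, and activemq-camel on the classpath). The endpoint URIs, queue names, and port are hypothetical, and a SOAP front end could be added the same way, for example with camel-cxf, without touching the shared route.

    import org.apache.activemq.camel.component.ActiveMQComponent;
    import org.apache.camel.CamelContext;
    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    public class OpenConnectivityExample {
        public static void main(String[] args) throws Exception {
            CamelContext context = new DefaultCamelContext();

            // Embedded, non-persistent broker so the example is self-contained
            context.addComponent("activemq",
                    ActiveMQComponent.activeMQComponent("vm://demo?broker.persistent=false"));

            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Option 1: RESTful-style HTTP clients post to this endpoint
                    from("jetty:http://0.0.0.0:8080/orders").to("direct:process");

                    // Option 2: messaging clients drop the same payload on a queue
                    from("activemq:queue:orders.in").to("direct:process");

                    // One shared pipeline, agnostic to how the message arrived
                    from("direct:process")
                            .log("Received order: ${body}")
                            .to("activemq:queue:orders.processed");
                }
            });

            context.start();
            Thread.sleep(60000); // keep the routes up for a minute, then shut down
            context.stop();
        }
    }

Whichever door a consumer picks, HTTP or messaging, the payload lands in the same processing pipeline, which is what keeps the integration layer agnostic to how clients connect.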

Best of both worlds

Moynihan: Progress is launching a new company called FuseSource that will be completely focused on the open source business model. We're really excited as a team. The FuseSource team has been an independent business unit since IONA was acquired by Progress Software. We have been fairly independent within the company, but separated as our own company, we'll be able to be completely independent in terms of how we do our marketing, sales, support, services, and engineering.

When you're part of a large organization, there are certain processes that everyone is supposed to follow. Within Progress, we are doing things slightly differently (or very differently depending on the area) because the needs of the open source market are different. So being our own company we'll have that independence to do everything that makes sense for the open-source users, and I'm pretty excited about that.

From a practical perspective, the business model is very different. In traditional enterprise software sales, there is a license fee, typically a large upfront cost relative to the entire cost over the lifetime of that software. Then, you have your annual maintenance charges and your services, training, and things like that.

From an open source perspective, there is typically no upfront license cost. Our model has no license cost at all. It's a subscription support model, where there is a monthly fee, but the way that it is accounted for and the way that it works with the customer is very different. That's one of the reasons we split out our business: the way that we work with customers and the way they consume the software are very different. It's a month-to-month subscription support charge, with no license charge.

That’s also the reason people like cloud. You pay as you go. You scale as you go. And you don’t have that upfront capital expenditure cost. For new projects, it can be really hard to get money right now. All these benefits are why we're seeing so much growth in FuseSource.

While we do have some level of product management for open source, a lot of it is based around packaging, delivery, licensing, and these types of things, because our engineers are hearing directly from customers on a moment-by-moment basis. They're seeing the feedback in the community, getting out there, and partnering with our customers. So, from an economic perspective, the model is different.

Now, being backed by Progress Software provides the benefit that customers can have the assurance that we're backed by a large organization. But having FuseSource as a standalone company, as you said, gives us that independence around decision making and really being like a startup.

We'll be able to have our own processes in any functional area that we need to best meet the needs of the open source users.

Davies: From a technical perspective, it's really good for us. The shackles are off. There's a sudden reinvigoration that's moving things forward. We've got a lot of really good ideas that we want to push out and roll out over the coming year, particularly enhancing the products we already have, but also moving into new areas.

There's a big excitement, like you would expect with a startup. It just feels like a startup mentality. People are very passionate about what they're doing inside FuseSource.

It's even more so, now that we have become autonomous of Progress. Not that working inside Progress was a bad thing, but we were constrained by some of the rigors and procedures that you have to go through when you are part of a larger organization. Because those shackles have been taken away, we can actually start innovating more in the direction we really want to take our software. It's really good.

Moynihan: From a customer perspective, this change will have a small but significant impact. We are continuing to do everything that we have been doing, but we will be able to have even more independence in the way that we do things. So it will all be beneficial to customers.

We have also launched a new community site at FuseSource.com, which we're pretty excited about. We were planning to do that and we've been working on that for several months. That just provides some additional usability and ability to find things on the site.

Overall, it will be really good for our customers. We've talked with them, and they're pretty excited about it.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: FuseSource.


Tuesday, October 5, 2010

HP leverages converged infrastructure across IT spectrum to simplify branch offices and container-based data centers

The trend toward converged infrastructure -- a whole greater than the sum of the traditional IT hardware, software, networking and storage parts -- is going both downstream and upstream.

HP today announced how combining and simplifying the parts of IT infrastructure makes the solution value far higher on either end of the applications distribution equation: At branch offices and the next-generation of compact and mobile all-in-one data center containers.

Called the HP Branch Office Networking Solution, the idea is that engineering the fuller IT and communications infrastructure solution, rather than leaving the IT staff and -- even worse -- the branch office managers to do the integrating, not only saves money, it allows the business to focus just on the applications and processes. This focus, by the way, on applications and processes -- not the systems integration, VOIP, updates and maintenance -- is driving the broad interest in cloud computing, SaaS and outsourcing. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP's announcements today in Barcelona are also marked by an emphasis on an ecosystem-of-partners approach, especially for the branch office solution, which packages 14 brand-name apps, appliances and networking elements to make smaller sub-organizations an integrated part of the larger enterprise IT effort. The partner applications include WAN acceleration, security, unified communications and service delivery management.

Appliances need integration too

You could think of it as a kitchen counter approach to appliances, which work well alone but don't exactly bake the whole cake. Organizing, attaching and managing the appliances -- with an emphasis on security and centralized control for the whole set-up -- has clearly been missing in branch offices. The E5400 series switch accomplishes the convergence of the discrete network appliances. The HP E5400 switch with new HP Advanced Services ZL module is available worldwide today with pricing starting at $8,294.

Today's HP news also follows a slew of product announcements last month that targeted the SMB market, and the "parts is parts" side of building out IT solutions.

To automate the branch office IT needs, HP is bringing together elements of the branch IT equation from the likes of Citrix, Avaya, Microsoft, and Riverbed. They match these up with routers, switches and management of the appliances into a solution. Security and access control across the branches and the integrated systems are being addressed via HP TippingPoint security services. These provide granular control of application access, with the ability to block access to entire websites – or features – across the enterprise and its branches.

Worried about too much Twitter usage at those branches? The new HP Application Digital Vaccine (AppDV) service delivers specifically designed filters to the HP TippingPoint Intrusion Prevention System (IPS), which easily control access to, or dictate usage of, non-business applications.

The branch automation approach also supports a variety of network types, which frees the branch offices to exploit more types of application delivery: from terminal-served apps, to desktop virtualization, to wireless and mobile. The all-WiFi office might soon only need a single, remotely and centrally managed, locked-down rack in a lights-out closet, with untethered smartphones, tablets and notebooks as the worker nodes. Neat.

When you think of it, the new optimized branch office (say 25 seats and up) should be the leader in cloud adoption, not a laggard. The HP Branch Office Networking Solution -- with these market-leading technology partners -- might just allow the branches to demonstrate a few productivity tricks to the rest of the enterprise.

Indeed, we might just think of many more "branch offices" as myriad nodes within and across global enterprises, where geography becomes essentially irrelevant. Moreover, the branch office is the SMB, supported by any number and type of service providers, internal and external, public and private, SaaS and cloud.

Data centers get legs

Which brings us to the other end of the HP spectrum for today's news. The same "service providers" that must support these automated branch offices -- in all their flavors and across the org chart vagaries and far-flung global locations -- must also re-engineer their data centers for the new kinds of workloads, wavy demand curves, and energy- and cost-stingy operational requirements.

So HP has built a sprawling complex in Houston -- the POD-Works -- to build an adaptable family of modular data centers -- the HP Performance Optimized Datacenter (POD) -- in the shape of 20- and 40-foot tractor-trailer-like containers. As we've seen from some other vendors, these mobile data centers in a box demand only that you drive the things up, lock the brake and hook up electricity, water and a high-speed network. I suppose you could also drop them on the roof with a helicopter, but you get the point.

But in today's economy, the efficiency data rules the roost. The HP PODs deliver 37 percent more efficiency and cost 45 percent less than a traditional brick-and-mortar data center, says HP.

The custom-designed container is stuffed with highly engineered racks and cooling, optimized networks and storage, as well as the server horsepower -- in this case HP ProLiant SL6500 Scalable Systems, from 1 to 1,000 nodes. While HP is targeting these at high-performance computing and service provider needs -- those that are delivering high-scale and/or high transactional power -- the adaptability and data-center-level design may well become more the norm than the exception.

The PODs are flexible at supporting the converged infrastructure engines for energy efficiency, flexibility and serviceability, said HP. And the management is converged too, via Integrated Lights-Out Advanced (ILO 3), part of HP Insight Control.

The POD parts to be managed are essentially as many as eight servers, or up to four servers with 12 graphics processing units (GPUs), in single four-rack-unit enclosures. The solution further includes the HP ProLiant s6500 chassis, the HP ProLiant SL390s G7 server and the HP ProLiant SL170s G6 servers. These guts can be flexibly scaled up to accommodate flexible POD designs, for a wide variety and scale of data-center-level performance and application support requirements.

Built-in energy consciousness

You may not want to paint the containers green, but you might as well. The first release features optimized energy efficiency with HP ProLiant SL Advanced Power Manager and HP Intelligent Power Discovery to improve power management, as well as power supplies designed for 94 percent greater energy efficiency, said HP.

The energy savings start with delivering more than a teraFLOP per unit of rack space to increase compute power for scientific rendering and modeling applications. Other uses may well make themselves apparent.

Have data center POD, will travel? At least the wait for a POD is more reasonable. With HP POD-Works, PODs can be assembled, tested and shipped in as little as six weeks, compared with the year or longer it takes to build a traditional brick-and-mortar data center, said HP.

Hey, come to think of it -- for those not blocking it with the TippingPoint IPS -- I wish Twitter had a few of these PODs on the bird strings instead of that fail whale. Twitter should also know that multiple PODs or a POD farm can support large hosting operations and web-based or compute-intensive applications, in case they want to buy Google or Facebook.

Indeed, as cloud computing gains traction, data centers may be located (and co-located) based on more than whale tails. Compliance with local laws, business continuity, and the need to best serve all those thousands of automated branch offices might also spur demand for flexible and efficient mobile data centers.

Converged infrastructure may have found a converged IT market, even one that spans the globe.


Friday, October 1, 2010

Leo Apotheker needs to target HP's forgotten businesses

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP toward a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues have more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business.

But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting for which Wall Street cheered, but insiders jeered. Converged Infrastructure has been the mantra, reminding us one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion which was concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

Needs to compete

The converged infrastructure strategy was a play at the CTO's office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo, the rumors of feelers made to IBM Software's Steve Mills, the successful offer to Leo Apotheker, and the agreement for Ray Lane to serve as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice "to set a calm tone" and signal that there won't be a massive, debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market.

The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership needed to jolt SAP out of its inertia.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise business applications market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, providing a critical mass IT services and data center outsourcing business. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real.

HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from business software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively "affordable," given that its stock is roughly 10 to 15 percent off its 2007 high. But SAP would obviously be the most challenging given the scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure stuff on its plate, it's back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn't need.

Little choice

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to actively bid. Otherwise, its best bet is to revive the relationship, which would give both HP and SAP the time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

Salesforce.com would make a logical stab as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application set, as it uses a proprietary stored procedures database architecture. That would make it difficult to integrate with other prospective ERP SaaS acquisitions, which would otherwise be the next logical step to growing the business software footprint.

Informatica is often brought up -- if HP is to salvage its Neoview and Knightsbridge BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter -- that would give HP a far more credible presence in the analytics space. Then it would have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece in its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be, as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.


Financial services firms look to cloud, grid, and cluster to allay fears over data explosion, says survey

Look for a sharp uptick in cloud computing from financial services firms over the next two years, along with similar increases in cluster and grid technologies. This increased interest comes from a concern over the current data explosion and the firms' lack of scalable environments, insufficient capacity to run complex analytics, and contention for computing resources.

These findings come from a recent survey conducted by Wall Street & Technology in conjunction with Platform Computing, SAS, and the TABB Group. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

Completed in July, the survey found noteworthy differences in the challenges being faced by both buy- and sell-side firms, with sell-side institutions more likely to report a lack of a scalable environment, insufficient capacity to run complex analytics, and contention for computing resources as significant challenges.

According to the survey, data proliferation and the need to better manage it are at the root of many of the challenges being faced by financial institutions of all sizes. Two-thirds (66 percent) of buy-side firms and more than half (56 percent) of sell-side firms are grappling with siloed data sources. The silo problem is being exacerbated by organizational constraints, including policies prohibiting data sharing and access, network bandwidth issues and input/output (I/O) bottlenecks.

Too much data

Ever-increasing data growth is also cause for concern, with firms reporting that they are dealing with too much market data. Sixty-six percent of respondents didn't think their analytics infrastructures would be able to keep pace with demand over time.

Both buy- and sell-side firms plan to increase their focus on liquidity and counterparty risk in the next 12 months. Counterparty risk management was ranked as the highest priority for the sell side (45 percent) with liquidity risk following at 43 percent. Liquidity risk and counterparty risk scored high for the buy side with 36 percent and 33 percent, respectively.

The financial institutions plan to turn to a combination of technologies including cloud computing and grid technologies. Within the next two years, 51 percent of all respondents are considering or likely to invest in cluster technology, 53 percent are considering or likely to buy grid technology, and 57 percent are considering or likely to purchase cloud technology.

The report, “The State of Business Analytics in Financial Services: Examining Current Preparedness for Future Demands,” is available for download at http://www.grid-analytics.wallstreetandtech.com. (Registration required.) Wall Street & Technology, in conjunction with the survey sponsors, will host a webinar to discuss in-depth key findings of the survey on October 7 at 12 pm ET/9 am PT. For more information, visit: http://tinyurl.com/2ulcesm.


Tuesday, September 28, 2010

Automated governance: Cloud computing's lynchpin for success or failure

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a transcript or download a copy. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.

Management and governance are the arbiters of success or failure when we look across a cloud services ecosystem and the full lifecycle of those applications. That's why governance is so important in the budding era of cloud computing.

As cloud-delivered services become the coin of the productivity realm, how those services are managed as they are developed, deployed, and used -- across a services lifecycle -- increasingly determines their true value.

And yet governance is still too often fractured, poorly extended across the development-and-deployment continuum, and often not able to satisfy the new complexity inherent in cloud models.

One key bellwether for future service environments and for defining the role and requirements for automated cloud governance is in applications development, which due to the popularity of platform as a service (PaaS) is already largely a services ecosystem.

Here to help us explain why baked-in visibility across services creation and deployment is essential, please join Jeff Papows, President and CEO of WebLayers and the author of Glitch: The Hidden Impact of Faulty Software, and John McDonald, CEO of CloudOne Corp. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
McDonald: Cloud, from a technology perspective, is more about some very sophisticated tools that are used to virtualize the workloads and the data and move them live from one bank of servers to another, and from one whole data center to another, without the user really being aware of it. But, fundamentally, cloud computing is about getting access to a data center that’s my data center on-demand.

Fundamentally, the easiest way to remember it is that cloud is to hardware as software as a service (SaaS) is to software. Basically, for CloudOne, we're providing IBM Rational Development tools both through cloud computing and SaaS.

... There's a myth that development is something that we ought to be tooling up for, like providing power to a building or water service. In reality, that’s not how it works at all.

There are people who come and go with different roles throughout the development process. The front-end business analysts play a big role in gathering requirements. Then, quite often, architects take over and design the application software or whatever we are building from those requirements. Then, the people doing the coding, developers, take over. That rolls into testing and that rolls into deployment. And, as this lifecycle moves through, these roles wax and wane.

But the traditional model of getting development tools doesn't really work that way at all. You usually buy all of the tools that you will ever need up front, usually with a large purchase, put them on servers, and let them sit there until the people who are going to use them log in and use them. But, while they are sitting there, taking up space and your capital expense budget, and not being used, that's waste.

The cloud model allows you to spin up and spin down the appropriate amount of software and hardware to support the realities of the software development lifecycle. The money that you save by doing that is the reason you can open any trade magazine and the first seven pages are all going to be about cloud.

It's allowing customers of CloudOne and IBM Rational to use that money in new, creative, interesting ways to provide tools they couldn't afford before, to start pilots of different, more sophisticated technologies that they wouldn't have been able to gather the resources to do before. So, it's not only a cost-savings statement, it's also ease of use, ease of start-up, and an ability to get more for your dollar from the development process. That's a pretty cool thing all the way around.

Papows: A lot of what's going on in cloud computing is not a particularly new thing. It's what we used to think of as hosting or outsourcing. What's happening now is that the world is becoming more mobile, and 20 percent of our IT capacity is focused on new application development.

We have to get more creative and more distributed about the talent that contributes to those critical application development projects. ... Design-time governance is the next logical thing in that continuum, so that all of the inherent risk mitigation associated with governance in that IT context can be applied to application development in a hybrid model that's both geographically and organizationally distributed.

When you try to add some linear structure and predictability to those hybrid models, the constant that can provide some order and some efficiency is not purely technology-based. It's not just the virtualization, the added virtual machine capacity, or even the middleware to include companies like WebLayers or tools like Rational. It's the process that goes along with it. One of the really important things about design-time governance is the review process.

Governance is a big part of the technology toolset that institutionalizes that review process and adds that order to what otherwise can quickly become a bit chaotic.

McDonald: The challenge of tools in the old days was that they were largely created during a time where all the people and the development project were sitting on the same floor with each other in a bunch of cubes in offices.

The challenges of development have caused companies to look at outsourcing and off-shoring -- but, even more simply, consider the merger of my bank and your bank. Then we have groups of developers in two different cities, or we bought a packaged application, and the best skill to help us integrate it is actually from a third-party partner in a completely different city or country. Those tools have shown their weaknesses, even in just getting your hands on them.

How do I punch a hole through the firewall to give you a way to check in your code problems? The cloud allows us to create a dedicated new data center that sits on the Internet and is accessible to all, wherever they are, and in whatever time zone they are working, and whatever relationship they have to my company.

That frees things up to be collaborative across company boundaries. But with that freedom comes a great challenge in unifying a process across all of those different people, and getting a collaborative engine to work across all those people.

It's almost a requirement, to keep the wheels on the bus, to have some ability to manage the process, stay in compliance with regulations, and capture the information about how decisions were made in such distributed ways that they are traceable and reviewable. It's really not possible to achieve such a distributed development environment without that governance guidance.

Papows: We're dealing with some challenges for the first time that require out-of-the-box thinking. I talk about this in "Glitch." We have reached a point where there are a trillion connected devices on the Internet as of February of this year. There are a billion embedded transistors for every human being on the planet.

You’ve read about or heard about or experienced first hand the disasters that can happen in production environments, where you have some market-facing application, where service is lost, where there is even brand damage or economic consequences.

... Everybody intellectually buys into governance, but nobody individually wants to be governed. Unless you automate it, unless you provide the right stack of tools and codify the best practices and libraries that can be reusable, it simply won’t happen. People are people, and without the automation to make it natural, unnatural things get applied some percentage of the time, and governance can’t work that way.
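As a deliberately simple illustration of codifying a best practice so it runs without anyone having to remember it, here is a hypothetical design-time check in Java; the naming rule and class names are invented for this sketch and are not WebLayers' actual technology.

    import java.util.ArrayList;
    import java.util.List;

    // Illustrative only -- a toy design-time policy check, not WebLayers' actual API.
    // The point: codify a best practice as an automated rule so it is applied every time.
    public class NamingPolicyCheck {

        // Hypothetical rule: service names must be UpperCamelCase and end in "Service".
        static List<String> findViolations(List<String> serviceNames) {
            List<String> violations = new ArrayList<String>();
            for (String name : serviceNames) {
                if (!name.matches("[A-Z][A-Za-z0-9]*Service")) {
                    violations.add(name + " does not match the naming convention");
                }
            }
            return violations;
        }

        public static void main(String[] args) {
            List<String> candidates = java.util.Arrays.asList(
                    "CustomerService", "order_lookup", "BillingService");
            for (String v : findViolations(candidates)) {
                System.out.println("Policy violation: " + v);
            }
        }
    }

Trivial as it is, a check like this runs the same way on every build or check-in, which is the point being made here: automated governance gets applied every time, where manual review only happens some percentage of the time.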

McDonald: Developers view themselves quite often as artists. They may not articulate it that way, but they often see themselves as artists and their palette is code.

As such, they immediately rankle at any notion that, as artists, they should be governed. Yet, as we’ve already established, that guidance for them around the processes, methods, regulations, and so on is absolutely critical for success, really in any size organization, but beyond the pale in a distributed development environment. So, how do you deal with that issue?

Well, you embed it into their entire environment from the very first stage. In most companies, this starts with trying to decide what projects to undertake, which in a lot of companies is mainly an over-glorified email argument.

Governance must be process-friendly

Governance has to be embedded at every step of the way, gently nudging, and sometimes shuttling, all these players back into the right line when it comes to ensuring that the result of their effort is compliant with whatever it is that it needs to be compliant with.

In short, you've got to make it part of, and embedded into, every stage of the development process, so that it largely disappears and becomes such a natural extension of the tool that no one along the way realizes they are being governed.

WebLayers was the very first partner we reached out to, to say, "Can you go down this journey with us together, as we begin developing these workbenches, these integrated toolsets, and delivering them through the cloud on-demand?" We already know and see that embedding governance in every layer is something we have to be able to do out of the gate.

The team at WebLayers was phenomenal in responding to that request. We were able to take several base instances of various Rational tools, embed WebLayers technology into them, and, based on how the cloud works, archive those and put them up in our library, where they can be pulled down off the shelf, cloned, and instantiated for the various customers coming into our pipeline who want to experience this technology and what we are doing.

Better safe than sorry

... The avoidance of things going badly is unfortunately very difficult to measure. That is something that everyone who attempts to do a cloud-delivered development environment and does the right thing by embedding in it the right governance guidance should know coming out of the gate. The best thing that’s going to happen is you are not going to have a catastrophe.

That said, one of the neat things about having a common workbench, and having the kinds of reporting and metrics that it can measure -- meaning IBM Jazz, along with the WebLayers technology -- is that I can get a very detailed view of what's going on in my software factory at every turn of the crank, and where things are coming off the rails a little bit.

Papows: There's an age-old expression that you're so close to the forest you can't see the trees. Well, I think in the IT business we're sometimes so deeply embedded in the bark that we can't see anything.

We've been developing, expanding, deploying, and reinventing on a massive scale so rapidly for the last 30 years that we've reached a breaking point where, as I said earlier -- between the complexity curves, the lack of elasticity in human capital, and the explosion in the number of mobile computing devices and their propensity for accessing all of this back-end infrastructure and applications -- something fundamentally has to change. It's a problem on a scale that can't be overcome by simply throwing more bodies at it.

Creative solutions

Secondly, in the current economy, very few CIOs have elastic budgets. We have to do as an industry what we've done from the very beginning, which is to automate, innovate, and find creative solutions to combat the convergence of all of those digital elements into what would otherwise be a perfect storm.

So SaaS, cloud computing, automated governance, forms of artificial intelligence, Rational tooling, consistent workbench methodologies, all of these things are the instruments of getting ourselves out of the corner that we have otherwise painted ourselves in.

I don't want to seem like an alarmist or try to paint too big a storm cloud on the horizon, but this is simply not something that's going to happen or be resolved in a business-as-usual fashion.

That, in fact, is where companies like CloudOne are able to expand and leap productivity equations for companies in certain segments of the market. That's where automation, whether it's Rational, WebLayers, or another piece of technology, has got to be part of the recipe of getting off this limb before we saw it off behind us.

McDonald: If you have any inclination at all to see what it is that Jeff and I are telling you, give it a whirl, because it's very simple.

That's one of the coolest things of all about this whole model, in my mind. There is simply no barrier for anyone to give this a try. In the old model, if you wanted to give the technology a try, you had better start with your calculator. And you had better get the names and addresses of your board of directors, because you're going there eventually to get the capital approval and so on to even get a pilot project started, in many cases, with some of these very sophisticated tools.

This is just not the case anymore. With the CloudOne environment, you can sign on this afternoon with a web-based form to get an instance of, let's say, Team Concert set up for you with WebLayers technology embedded in it, in about 20 minutes from when you push "submit," and it's absolutely free for the first model. From there, you grow only as you need to, user by user. It's really quite simple to give this concept a try, and it's really very easy.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a transcript or download a copy. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.


Friday, September 24, 2010

Demise of enterprise IT departments: A pending crisis point

This guest post comes courtesy of Ronald Schmelzer, senior analyst at Zapthink.

By Ronald Schmelzer

In ZapThink’s deep conversations with CIOs and other IT decision makers, we find that there’s broad agreement on the multitude of forces conspiring to change every aspect of the way the enterprise does IT.

Yet at the same time, everybody’s in denial that these changes will happen to them. For us as outsiders, it certainly looks like many enterprise IT decision-makers acknowledge that the world is changing -- but deny that they are part of that same world.

Of course, such executives simply have their head in the sand. If change is to occur, it will happen to the vast majority of enterprises, not just the minority.

This realization drives the Crisis Points of the ZapThink 2020 vision. However, ZapThink is not advocating any of these crisis points. Rather, we are observing that these crises are coming, whether or not companies are ready for them.

In particular, we believe that companies will reach a crisis point as they seek to outsource IT. However, we aren't advocating that companies outsource all their IT efforts. Rather, we are observing that the siren call of offloading IT assets in the form of cloud computing and outsourcing is a significant trend, and one that is leading to a crisis point.

And without a strong rudder, many companies will indeed be dashed on the rocks. This ZapFlash blog post provides greater detail on this particular crisis point: The pending demise of the enterprise IT department, or what we’ve called in previous ZapFlashes the Collapse of Enterprise IT.

Outsourcing and cloud computing: Different parts of the same story


Part of the reason for the visceral response to our Crisis Points ZapFlash is that there’s inherent fear when talking about outsourcing IT functions. Part of the fear comes from the fact that many people confuse outsourcing with offshoring.

Outsourcing is the purchasing of a service from an outside vendor to replace the performance of the task within the organization’s internal operations. Offshoring, on the other hand, is the movement of labor from a region of high cost (such as the United States) to one of comparatively lower cost (such as India).

People fear the latter because it means subcontracting existing work to other people, thus displacing jobs at home. However, the former has been going on for hundreds of years. Indeed, many companies exist solely because they are completing tasks that their customers would rather not undertake themselves.

Almost six years ago, we talked about how service-oriented architecture (SOA) and outsourcing go hand in hand, for the simple reason that SOA requires organizations to think about their resources, processes, and capabilities in ways that are loosely coupled from the specifics of their implementation, location, and consumption. Indeed, the more companies implement SOA, the more they can outsource processes that are not strategic or competitive for the organization.
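To make that loose-coupling point concrete, here is a minimal, hypothetical sketch in Java (the service name, interface, and logic are illustrative assumptions, not ZapThink's example): consumers bind to a service contract, so the work behind it can move in-house, to an outsourcer, or into the cloud without the callers changing.

// Consumers depend only on this contract, not on where or how it runs.
public interface CreditCheckService {
    boolean isCreditworthy(String customerId, double requestedAmount);
}

// Today the contract might be fulfilled internally ...
class InHouseCreditCheck implements CreditCheckService {
    @Override
    public boolean isCreditworthy(String customerId, double requestedAmount) {
        // Placeholder internal scoring rule for illustration only
        return requestedAmount < 10000;
    }
}

// ... tomorrow it could be fulfilled by an outsourced or cloud-hosted provider,
// with no change to any consumer that depends only on CreditCheckService.
class OutsourcedCreditCheck implements CreditCheckService {
    @Override
    public boolean isCreditworthy(String customerId, double requestedAmount) {
        // Placeholder for a call to an external provider's endpoint
        return true;
    }
}

Because the dependency is on the contract rather than the implementation, the organization is free to source the capability wherever it is cheapest or best run.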

But it’s a mistake to assume the collapse of the enterprise IT department is due entirely to outsourcing the functions of IT to third parties.



Furthermore, the more companies outsource their functions, the more they are motivated to implement SOA to facilitate the consumption of those outsourced capabilities. It should be no surprise, therefore, that the combination of SOA and a challenging economic environment has motivated many companies to see outsourcing as a legitimate strategy for their IT organizations, regardless of whether they move to offshoring.

But it’s a mistake to assume the collapse of the enterprise IT department is due entirely to outsourcing the functions of IT to third parties. Outsourcing is a part of the story, but so is cloud computing. In much the same way that third-party firms can offload parts of IT in the outsourcing model, cloud computing offers the ability to offload other aspects of the IT department. Cloud computing provides both technological and economic benefits for distributing and offloading resources, functions, processes, and even data onto location-independent infrastructures.

While many enterprises are currently pursuing a private model for cloud computing, there are far too many economic benefits of the public model to ignore. Most likely, we will see hybrid cloud approaches, where organizations keep certain mission-critical features behind the firewall on the corporate premises while they shift the rest to lower-cost, more agile third-party locations. The net result of this shift is continued erosion of the scope of responsibility for internal IT organizations.

The holistic perspective of the five supertrends

The demise-of-enterprise-IT crisis point emerges from the fact that companies will rush into this vision of outsourced IT without first thinking through the dramatic impact that this transition will have throughout their organizations.

For such organizations, the value of our ZapThink 2020 vision is that it pulls together multiple trends and delineates the interrelationships among them. One of the trends most closely related to the demise of the IT organization is the increased formality of, and dependence on, governance, as organizations pull together the business side of governance (GRC, or governance, risk, and compliance) with the technology side of governance (IT governance and, to an increasing extent, SOA governance). Over time, CIOs become CGOs (Chief Governance Officers), as their focus shifts away from technology.

As the enterprise owns fewer and fewer of the organization’s IT assets, the role and responsibility of enterprise IT practitioners will be less about the mechanics of getting systems to work, integrating them with each other, and operating them, and more about the management of the one resource that remains constant: information. After all, IT is information technology, not computer or systems technology.

If you can successfully tackle these questions with a coherent, holistic strategy, then you have defused the risk inherent in the move to outsourcing and/or cloud computing.



With this perspective, it’s essential to view the shift to outsourcing and cloud computing holistically with all the other changes happening in the enterprise IT environment.

For example, the move to democratization of technology means that non-IT practitioners will be utilizing and creating IT capabilities and assets without the control of the IT organization. How will IT practitioners manage the sole enterprise IT asset (information) given that they cannot manage the systems in which that asset flows? As organizations realize the global cubicle vision of IT, how will enterprise IT practitioners and architects enable distributed information without losing GRC visibility?

As systems become increasingly interconnected with deep interoperability despite their increasing distributed nature, how can enterprise IT practitioners make sure the systems as a whole continue to provide value and avoid chaotic disruptions despite the fact that the organization doesn’t own or operate them? As organizations move to more iterative, agile forms of complex systems engineering where new capabilities emerge from compositions of existing ones, how will movements to cloud computing and outsourcing help or hurt those efforts?

If you can successfully tackle these questions with a coherent, holistic strategy, then you have defused the risk inherent in the move to outsourcing and/or cloud computing. On the other hand, if you rush into cloud computing and outsourcing strategies without thinking through all the issues we’ve discussed in this ZapFlash, you’ll be sunk before you know it.

The ZapThink take

Just like the Sirens calling to Odysseus in Homer’s Odyssey, the call of outsourcing and cloud computing will lead many enterprise IT ships to wreck on the rocks unless they can lash themselves to the mast of a holistic perspective on where the industry as a whole is heading. More importantly, the broad shifts in the industry that ZapThink’s 2020 vision of enterprise IT illuminates compel companies to think more broadly about their one constant enterprise IT asset: information.

If it no longer matters where your IT is physically located, or whether you actually own or operate the IT systems you depend on, then what IT department do you really need, and what is it really doing? The answer: less hands-on technology and more governance, a sea change that represents the demise of the enterprise IT organization. Whether or not this transition develops into a full-blown crisis is entirely up to you.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.


SPECIAL PARTNER OFFER


SOA and EA Training, Certification, and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.