Sunday, January 3, 2010

Getting on with 2010 and celebrating ZapThink’s 10-year anniversary

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

It’s hard to believe that ZapThink will be a full decade old in 2010. For those of you that don’t know, ZapThink was founded in October 2000 with a simple mission: record and communicate what was happening at the time with XML standards.

From that humble beginning, ZapThink has emerged as a (still small) advisory and education powerhouse focused on Service-Oriented Architecture (SOA), Cloud Computing, and loosely coupled forms of Enterprise Architecture (EA).

Oh, how things have changed, and how they have not. As is our custom, we’ll use this first ZapFlash of the year to look retrospectively at the past year and the upcoming future. But we’ll also wax a bit nostalgic and poetic as we look at the past 10 years and surmise where this industry might be heading in the next decade.

2009: A year of angst

The Times Square Alliance has it right in celebrating Good Riddance Day just prior to New Year’s Eve. There’s a lot that we can be thankful to put behind us. Anne Thomas Manes started out the year with an angst-filled posting declaring that SOA is Dead. Getting past the misleading headline, many in the industry came to the quick realization that SOA is far from dead, but rather going into a less hyped phase.

And for that reason we’re glad. We say good riddance to vendor hype, consulting firm over-selling, and the general proliferation of misunderstanding that plagued the industry from 2000 until this point (SOA is Web Services? SOA is integration middleware? Buy an ESB get a SOA?). We can now declare that the vendor marketing infatuation with SOA is dead and they have a new target in mind: Cloud Computing.

Last year we predicted that SOA would be pushed out of the daily marketing buzz and replaced by Cloud Computing as the latest infatuation of the marketingerati. Specifically we said, “We expect the din of the cloud-related chatter to turn into a real roar by this time next year. Everything SOA-related will probably be turned into something cloud-related by all the big vendors, and companies will desperately try to turn their SOA initiatives into cloud initiatives.”

Oh, boy, were we right ... in spades. Perhaps this wasn’t the most remarkable of predictions, though. Every analyst firm, press writer, and book author was positively foaming at the mouth with Cloud-this and Cloud-that. Of course, if history is any lesson, 90% of what’s being spouted is EAI-cum-SOA-cum-Cloud marketing babble and intellectual nonsense.

But history also teaches us that people have short-term memories and won’t remember. They’ll continue to buy the same software and consulting services warmed over as new tech, with only a few enhancements, mostly to the user interface and system integration.

We also predicted a boom year for SOA education and training, which ended up panning out, for the most part. ZapThink now generates the vast majority of its revenues from SOA training and certification, which has become a multi-million dollar business for us, by itself.

ZapThink is not alone in realizing this boom of EA and SOA training spending. We’ve seen the rapid emergence of a wide range of EA frameworks, SOA methodologies, and disciplines benefiting from a rapid increase in EA and SOA training expenditures. We also predicted that ZapThink would double in size, which hasn’t exactly happened. Instead, we’ve decided to grow through use of partners and contractors – a much wiser move in an economy that has proven to be sluggish throughout 2009.

Yet, not all of our predictions panned out. We promised that there would be one notable failure and one notable success universally and specifically attributed to SOA in 2009, and I can’t say that this happened. If it had, we’d all know about it.

Rather, we saw the continued recession of SOA into the background as other, more highly hyped and visible initiatives got the thumbs-up of success or the mark of failure. In fact, perhaps this is how it should have been all along. Why should we all know with such grand visibility if it was SOA that succeeded or failed? Indeed, failure or success can rarely be solely attributable to any form of architecture. So, I think it’s possible to say that the prediction itself was misguided. Maybe we should instead have asked for raises for all those involved in SOA projects in 2009.

2010 and beyond: Where are things heading?

It’s easy to have 20/20 hindsight, however. It’s much more difficult to make predictions for the year ahead that aren’t just the obvious no-brainers that anyone who has been observing the market can make. Sure, we can assert that the vendors will continue to consolidate, IT spending will rebound with improving economic conditions, and that cloud computing will continue its inevitable movement through the hype cycle, but that wouldn’t be providing you with any information. Rather, we believe that we can stick our necks out a bit to make some predictions for 2010.

In 2010, we predict that:
  • Open Source SOA infrastructure will dominate – Lack of interest by venture capitalists and consolidation by the Big Five IT infrastructure providers will result in such a lack of choice for SOA infrastructure solutions that end users will flock to open source alternatives. As a result, 2010 will be the year that open source SOA infrastructure finally gains enough adoption that it will be on the short list for most large SOA implementations. We’ll see (finally) a robust open source SOA registry/repository offering, SOA management solutions, SOA governance offerings, and SOA infrastructure solutions that rival commercial ones in terms of performance, reliability, and support.


  • The Rich Internet Application (RIA) market wars are over – Put a fork in it, it’s done. Good try, Microsoft Silverlight. Nice effort, RIA startups and commercial vendors. Customers have spoken. Adobe Flash and open source Ajax solutions based on JavaScript have won. Yes, there will be niches and industries where Silverlight and other commercial solutions might be appropriate and gain traction, but we see way too many (awesome quality) open source jQuery (and Prototype) solutions out there and too much adoption of Flash by the end user base for this trend to go away. And Java on the client? Feggetaboutit – that time has come and gone. As a result, this will be the end of ZapThink’s coverage of the space. Just as we declared the Native XML Database market done in 2002, so too we declare this market contest over.


  • Cloud privacy & security issues put to rest – Already we’re seeing people anguishing about Cloud’s unreliability, insecurity, and lack of privacy. Really? You think people didn’t realize this when they made their Cloud investments in the first place? There’s simply too much economic benefit in running services and applications in a dynamically scalable way on someone else’s infrastructure. The Cloud providers won’t be giving up any time soon. Nor will IT implementers. This means that there will be a credible solution to these problems, and it will become well understood and implemented by year’s end. If you’re looking for a company to start in 2010 that will have a huge, ready customer base and potential for multi-million dollar valuations with an exit in 18-24 months, then this is the place to look. Start a cloud privacy/reliability/security company that addresses current pain points and you’ll win. We’ll just take 5 percent for the suggestion, thanks.

But all these 2010 predictions are still too easy. Since we’ve been around for the past decade, perhaps we should make some predictions about the decade ahead? Where will IT and SOA and EA be in 2020? Most ironically, we believe that not much will really change in the enterprise software landscape. If you were to fall asleep in Rip Van Winkle-esque fashion today and wake up on January 1, 2020, you’d find that:
  • Mainframes will still exist — Look, folks: if they haven’t been subsumed by all the movements of the past 30 years, they won’t be gone in another 10. Mainframes and legacy systems are here to stay. Invest in mainframe-related stocks.


  • We’ll still be talking about Enterprise Architecture – One of the biggest lessons of the past 10 years is that the business still doesn’t understand or value enterprise architecture. CIOs are still, for the most part, business managers who treat IT as a cost center or as a resource they manage on a project-by-project and acquisition-by-acquisition basis. Long-term planning? Put enterprise architects in control of IT strategy? Forget it. In much the same way that the most knowledgeable machinists and assembly line experts would never get into management positions at the automakers, so too will we fail to see EA grab its rightful reins in the enterprise. We’ll still be talking about how necessary, under-implemented, and misunderstood EA will be in 2020. You’ll see the same speakers, trainers, and consultants, but with a bit more grey on top (if they don’t already have it now).


  • More things in IT environments we don’t control – IT is in for long-term downward spending pressure. The technologies and methodologies emerging now (cloud, mobile, Agile, iterative, service-oriented) are only pushing more aspects of IT outside the internal environment and into environments that businesses don’t control. Soon, your most private information will be spread onto hundreds of servers and databases around the world that you can’t control and have no visibility over. You can’t fight this battle. Private clouds? Baloney. That’s like trying to stop tectonic shift. The future of IT is outside the enterprise. Deal with it.


  • IT vendors will still be selling 10 years from now what they’ve built (or have) today – There is nothing to indicate that the patterns of vendor marketing and IT purchasing have changed in the past 10 years or will change at all in the next 10 years. Vendors will still peddle their same warmed-over wares as new tech for the next 10 years. And even worse, end users will buy them. IT procurement remains a short-sighted, tactically project-focused affair that solves yesterday’s problems. It would require a huge shift in purchasing and marketing behavior to change this, and I regret that I don’t see that happening by 2020.

The ZapThink take


Some of the above predictions may seem gloomy. Perhaps the current recessionary environment is putting a haze on the positive visions of our crystal ball. More likely, however, is the fact that the enterprise IT industry is in a long-term consolidating phase.

IT is a relatively new innovation for the business, having been part of the lexicon and budgets of enterprises for perhaps 60 years at the longest. Just as the auto industry went through a rapid period of expansion and innovation from the beginning of the past century through the 1960s, only to be followed by consolidation and a slowing of innovation, so too will we see the same happen with enterprise IT.

In fact, it’s already begun. Five vendors control over 70 percent of all enterprise IT software and hardware expenditures. Enterprise end users will necessarily need to follow their lead as they do less of their own IT development and innovation in-house.


Now, this doesn’t apply to IT as a whole – we see remarkable advancement and development in IT outside the enterprise. As we’ve discussed many times before, there is a digital divide between the IT environment inside the enterprise and the environment we experience when we’re at home or using consumer-oriented websites, devices, and applications.

We expect that digital divide to continue to widen and, perhaps within the next 10 years, reach a point where enterprise IT investment will stagnate. Instead, the business will come to depend on outside providers for its technology needs. Wherever that goes, ZapThink has been there for the past 10 years, and we expect to be here another 10. In what shape and form we will be, that is for you, our customers and readers, to determine.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, December 21, 2009

HP's Cloud Assure for Cost Control allows elastic capacity planning to better manage cloud-based services

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.

Today's podcast discussion focuses on the economic benefits of cloud computing -- of how to use cloud-computing models and methods to control IT cost by better supporting application workloads.

As we've been looking at cloud computing over the past several years, a long transition is under way from traditional IT and architectural methods to this notion of cloud -- be it a private cloud, at a third-party location, or through some combination of the above.

Traditional capacity planning is not enough in these newer cloud-computing environments. Elasticity planning is what’s needed. It’s a natural evolution of capacity planning, but it’s in the cloud.

Therefore traditional capacity planning needs to be reexamined. So now we'll look at how to best right-size cloud-based applications, while matching service delivery resources and demands intelligently, repeatedly, and dynamically. The movement to a pay-per-use model also goes a long way toward promoting such matching of resources and demand, and it reduces wasteful application practices.

We'll also examine how quality control for these cloud applications in development reduces the total cost of supporting applications, while allowing for tuning and an appropriate way of managing applications in the operational cloud scenario.

Here to help unpack how Cloud Assure services can take the mystique out of cloud computing economics and to lay the foundation for cost control through proper cloud capacity management methods, we're joined by Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Ashizawa: Old-fashioned capacity planning focused on the peak usage of the application, and it had to, because when you were deploying applications in-house, you had to take into consideration that peak usage case. At the end of the day, you had to be provisioned correctly with respect to compute power. Oftentimes, with long procurement cycles, you'd have to plan for that.

In the cloud, because you have this idea of elasticity, where you can scale up your compute resources when you need them, and scale them back down, obviously that adds another dimension to old-school capacity planning.

The new way to look at it within the cloud is elasticity planning. You have to factor in not only your peak usage case, but your moderate usage case and your low-level usage as well. At the end of the day, if you are going to get the biggest benefit of cloud, you need to understand how you're going to be provisioned during the various demands of your application.

If you were to take, for instance, the old-school capacity-planning ideology to the cloud, you would provision for your peak use-case. You would scale up your elasticity in the cloud and just keep it there.

But if you do it that way, then you're negating one of the big benefits of the cloud. That's this idea of elasticity, and paying for only what you need at that moment.

One of the main reasons people consider sourcing to the cloud is that you have this elastic capability to spin up compute resources when usage is high and scale them back down when the usage is low. You don’t want to negate that benefit of the cloud by keeping your resource footprint at its highest level.
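To make the arithmetic behind that point concrete, here is a minimal sketch in Python. It is not an HP tool, and every number in it (hourly price, instance capacity, usage profile) is an assumption made up for illustration; it simply compares an old-school peak-provisioned footprint against one that scales hour by hour with demand.

    # Hypothetical illustration of why elasticity planning beats peak provisioning.
    # All numbers (hourly price, instance capacity, usage profile) are made up.

    HOURLY_PRICE = 0.40          # assumed cost per instance-hour
    USERS_PER_INSTANCE = 500     # assumed capacity of one instance

    # Assumed 24-hour usage profile: concurrent users per hour
    usage = [200, 150, 100, 100, 150, 400, 900, 1800, 2500, 2600, 2400, 2200,
             2300, 2500, 2400, 2100, 1800, 1500, 1200, 900, 700, 500, 350, 250]

    def instances_needed(users):
        """Smallest instance count that covers the given concurrent users."""
        return max(1, -(-users // USERS_PER_INSTANCE))  # ceiling division

    # Old-school capacity planning: provision for the peak and leave it there.
    peak_instances = instances_needed(max(usage))
    peak_cost = peak_instances * HOURLY_PRICE * len(usage)

    # Elasticity planning: scale the footprint hour by hour to match demand.
    elastic_cost = sum(instances_needed(u) * HOURLY_PRICE for u in usage)

    print(f"Peak-provisioned daily cost: ${peak_cost:.2f}")
    print(f"Elastic daily cost:          ${elastic_cost:.2f}")
    print(f"Savings from elasticity:     {100 * (1 - elastic_cost / peak_cost):.0f}%")

With these made-up numbers, the elastic footprint costs roughly half as much as the peak-provisioned one; the point is the comparison, not the figures, and elasticity planning is essentially this exercise run against your own usage curve and rates.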
[Editor's Note: On Dec. 16, HP announced three new offerings designed to enable cloud providers and enterprises to securely lower barriers to adoption and accelerate the time-to-benefit of cloud-delivered services.

This same week, Dana Gardner also interviewed HP's Robin Purohit, Vice President and General Manager for HP Software and Solutions, on how CIOs can contain IT costs while spurring innovation payoffs such as cloud architectures.

Also, HP announced, back in the spring of 2009, a Cloud Assure package that focused on security, availability, and performance.]
Making the road smoother

Ashizawa: What we're now bringing to the market works in all three cases [of cloud capacity planning]. Whether you're a private internal cloud, doing a hybrid model between private and public, or sourcing completely to a public cloud, it will work in all three situations.

The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost. The thing here, though, is that you might fall into a situation where you negate that benefit.

If you deploy an application in the cloud and you find that it’s underperforming, the natural reaction is to spin up more compute resources. It’s a very good reaction, because one of the benefits of the cloud is this ability to spin up or spin down resources very fast. So no more procurement cycles, just do it and in minutes you have more compute resources.

You may find yourself in a situation, though, where you have spun up more resources to try to improve performance, but it might not improve performance. I'll give you a couple of examples.

If your application is experiencing performance problems because of inefficient Java methods, for example, or slow SQL statements, then more compute resources aren't going to make your application run faster. But, because the cloud allows you to do so very easily, your natural instinct may be to spin up more compute resources to make your application run faster.

When you do that, you find yourself in a situation where your application is no longer right-sized in the cloud, because you have over-provisioned your compute resources. You're paying for more compute resources and you're not getting any return on your investment. When you start paying for more resources without return on your investment, you start to disrupt the whole cost benefit of the cloud.

Applications need to be tuned so that they are right-sized. Once they are tuned and right-sized, then, when you spin up resources, you know you're getting return on your investment, and it’s the right thing to do.

Whether you have existing applications that you are migrating to the cloud, or new applications that you are deploying in the cloud, Cloud Assure for cost control will work in both instances.

Cloud Assure for cost control solution comprises both HP Software and HP Services provided by HP SaaS. The software itself is three products that make up the overall solution.

The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale up the load to very high demands and scale back load to very low demand, and this is where you get your elasticity planning framework.

Moderate and peak usage

Ashizawa: The second solution from a software’s perspective is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.

The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage.

When you have this visibility of end user measurement at various load levels with Performance Center, resource consumption with SiteScope, and code level performance with HP Diagnostics, and you integrate them all into one console, you allow yourself to do true elasticity planning. You can tune your application and right-size it. Once you've right-sized it, you know that when you scale up your resources you're getting return on your investment.
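For readers who want to picture what that planning loop looks like, here is a minimal sketch of the idea. It is not HP code; the run_load_test helper is a hypothetical placeholder for whatever combination of load-testing, resource-monitoring, and diagnostics tooling you use, and the load levels and thresholds are assumptions.

    # A minimal elasticity-planning sketch. The helper below is a hypothetical
    # placeholder for what load, monitoring, and code-level diagnostics tools
    # would report; it is not a real HP API.

    def run_load_test(concurrent_users, instances):
        """Placeholder: drive load and return (avg_response_ms, avg_cpu_pct)."""
        raise NotImplementedError("wire this to your load and monitoring tools")

    LOAD_LEVELS = [100, 500, 1000, 2000]   # assumed low, moderate, and peak loads
    MAX_INSTANCES = 8
    TARGET_RESPONSE_MS = 2000

    def right_size(load):
        """Find the smallest footprint that meets the response-time target.

        If adding instances stops improving response time before the target is
        met, the bottleneck is in the application (slow SQL, inefficient
        methods), not in compute capacity: tune the code before buying more.
        """
        previous_ms = None
        for instances in range(1, MAX_INSTANCES + 1):
            response_ms, cpu_pct = run_load_test(load, instances)
            if response_ms <= TARGET_RESPONSE_MS:
                return instances, "right-sized"
            if previous_ms is not None and response_ms >= previous_ms * 0.95:
                return instances, "scaling is not helping: tune the application"
            previous_ms = response_ms
        return MAX_INSTANCES, "target not met at maximum footprint"

    # for load in LOAD_LEVELS:
    #     print(load, right_size(load))

Running the loop at each assumed load level is what turns the raw measurements into an elasticity plan: a footprint per demand level, plus a clear signal when more compute would be wasted spend.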

You want to get a grasp of the variable-cost nature of the cloud, and you want to make this variable cost very predictable. Once it’s predictable, then there will be no surprises. You can budget for it and you can also ensure that you are getting the right performance at the right price. ... If you're thinking about sourcing to the cloud and adopting it, from a very strategic standpoint, it would do you good to do your elasticity planning before you go into production or go live.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Learn more. Read a full transcript, or download a copy. Sponsor: Hewlett-Packard.

You may also be interested in:

Friday, December 18, 2009

Careful advance planning averts costly snafus in data center migration projects

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript of the podcast, or download a copy. Learn more. Sponsor: Hewlett-Packard.

The crucial migration phase when moving or modernizing data centers can make or break the success of these complex undertakings. Much planning and expensive effort goes into building new data centers, or in conducting major improvements to existing ones. But too often there's short shrift in the actual "throwing of the switch" -- in the moving and migrating of existing applications and data.

But, as new data center transformations pick up -- due to the financial pressures to boost overall IT efficiency -- so too should the early-and-often planning and thoughtful execution of the migration itself get proper attention. This podcast examines the best practices, risk mitigation tools, and requirements for conducting data center migrations properly.

To help pave the way to making data center migrations come off effectively, we're joined by three thought leaders from Hewlett-Packard (HP): Peter Gilis, data center transformation architect for HP Technology Services; John Bennett, worldwide director of Data Center Transformation Solutions at HP; and Arnie McKinnis, worldwide product marketing manager for Data Center Modernization at HP Enterprise Services.

The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: We see a great deal of activity in the marketplace right now of people designing and building new data centers. They have this wonderful new showcase site, and they have to move into it.

This growth, and these moves to other data centers, are fueled by a lot of different activities.

In many cases it's related to growth. The organization and the business have been growing. The current facilities were inadequate -- because of space or energy capacity reasons or because they were built 30 years ago -- and so the organization decides that it has to either build a new data center or perhaps make use of a hosted data center. As a result, they are going to have to move.

Whether they're moving to a data center they own, moving to a data center owned and managed by someone else, or outsourcing their data center to a vendor like HP, in all cases you have to physically move the assets of the data center from one location to another.

The impact of doing that well is awfully high. If you don't do it well, you're going to impact the services provided by IT to the business. You're very likely, if you don't do it well, to impact your service level agreements (SLAs). And, should you have something really terrible happen, you may very well put your own job at risk.

So, the objective here is not only to take advantage of the new facilities or the new hosted site, but also to do so in a way that ensures the right continuity of business services. That ensures that service levels continue to be met, so that the business, the government, or the organization continues to operate without disruption, while this takes place. You might think of it, as our colleagues in Enterprise Services have put it, as changing the engine in the aircraft while it's flying.

Gilis: If you don't do the planning, if you don't know where you're starting from and where you're going to, then it's like being on the ocean. Going in any direction will lead you anywhere, but it's probably not giving you the path to where you want to go. If you don't know where to go to, then don't start the journey.

Most of the migrations today are not migrations of the servers, the assets, but actually migrations of the data. You start building a next-generation data center -- most of the time with completely new assets that better fit what your company wants to achieve.

Migration is actually the last phase of a data center transformation. The first thing that you do is a discovery, making sure that you know all about the current environment, not only the servers, the storage, and the network, but the applications and how they interact. Based on that, you decide how the new data center should look.

... If you build your new engine, your new data center, and you have all the new equipment inside, the only thing that you need to do is migrate the data. There are a lot of techniques to migrate data online, or at least synchronize current data in the current data centers with the new data center.

Usually, what you find out is that you did not do a good enough job of assessing the current situation, whether that was the assessment of a hardware platform, server platform, or the assessment of a facility.



There's not that much difference between local storage, SAN storage, or network attached storage (NAS) and what you designed. The only thing that you design or architect today is that basically every server or every single machine, virtual or physical, gets connected to a shared storage, and that shared storage should be replicated to a disaster recovery site.

That's basically the way you transfer the data from the current data centers to the new data centers, where you make sure that you build in disaster recovery capabilities from the moment you do the architecture of the new data center. ... The moment you switch off the computer in the first data center, you can immediately switch it on in the new data center.
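As a rough illustration of that "synchronize, then switch" pattern, here is a minimal sketch. The paths, the target host, and the use of rsync are illustrative assumptions, not a recommendation of any particular storage or replication product, and the application hooks are placeholders for your own runbook.

    # A minimal sketch of staged data synchronization with a short cutover window.
    # Paths, hosts, and rsync usage are illustrative assumptions only.

    import subprocess
    import time

    SOURCE = "/data/app/"                 # hypothetical source volume
    TARGET = "newdc-host:/data/app/"      # hypothetical target in the new data center

    def sync_once():
        """One rsync pass; returns elapsed seconds as a rough proxy for delta size."""
        start = time.time()
        subprocess.run(["rsync", "-a", "--delete", SOURCE, TARGET], check=True)
        return time.time() - start

    # 1. Initial bulk copy while the application keeps running in the old data center.
    sync_once()

    # 2. Repeat incremental syncs until a pass completes quickly, meaning the
    #    remaining delta is small enough for a short cutover window.
    while sync_once() > 60:
        pass

    # 3. Cutover: stop the application in the old data center, run one final sync
    #    so the copies are identical, then start the application in the new one.
    # stop_application_old_dc()   # placeholder hooks for your own runbook
    # sync_once()
    # start_application_new_dc()

The same shape applies whether the replication is done with array-level tools, host-based mirroring, or file copies: bulk copy, shrink the delta, then a brief, controlled switch.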

McKinnis: From an outsourcing perspective, companies don't always do 100 percent outsourcing of that data-center environment or that shared computing environment. It may be part of it. Part of it they keep in-house. Part of it they host with another service provider.

What becomes important is how to manage all the multiple moving parts and the multiple service providers that are going to be involved in that future mode of operation. It's assessing what we currently have, but it's also designing what that future mode needs to look like.

There are all sorts of decisions a client has to work through to get to that point. In many cases, if you look at it from a technology standpoint, the point of decision is something around getting to end of life on a platform or an application. Or, there is a new licensing cycle, either from a support standpoint or an operating system standpoint.

There is usually something that happens from a technology standpoint that says, "Hey look, we've got to make a big decision anyway. Do we want to invest going this way, that we have gone previously, or do we want to try a new direction?"

Once they make that decision, we look at outside providers. It can take anywhere from 12 to 18 months to go through the full cycle of working through all the proposals and all the due diligence to build that trust between the service provider and the client. Then, you get to the point, where you can actually make the decision of, "Yes, this is what we are going to do. This is the contract we are going to put in place." At that point, we start all the plans to get it done.

. . . There are times when deals just fall apart, sometimes in the middle, and they never even get to the contracting phase.



There are lots of moving parts, and these things are usually very large. That's why, even though outsourcing contracts have changed, they are still large, are still multi-year, and there are still lots of moving parts.

Bennett: The elements of trust come in, whether you're building a new data center or outsourcing, because people want to know that, after the event takes place, things will be better. "Better" can be defined as: a lot cheaper, better quality of service, and better meeting the needs of the organization.

This has to be addressed in the same way any other substantial effort is addressed -- in the personal relationships of the CIO and his or her senior staff with the other executives in the organization, and with a business case. You need measurement before and afterward in order to demonstrate success. Of course, good, if not flawless, execution of the data center strategy and transformation is in play here.

The ownership issue may be affected in other ways. In many organizations it's not unusual for individual business units to have ownership of individual assets in the data center. If modernization is at play in the data center strategy, there may be some hand-holding necessary to work with the business units in making that happen. This happens whether you are doing modernization and virtualization in the context of existing data centers or in a migration. By the way, it's not different.

Be aware of where people view their ownership rights and make sure you are working hand-in-hand with them instead of stepping over them. It's not rocket science, but it can be very painful sometimes.

Gilis: You have small migration and huge migrations. The best thing is to cut things into small projects that you can handle easily. As we say, "Cut the elephant in pieces, because otherwise you can't swallow it."

Should be a real partnership

And when you work with your client, it should be a real partnership. If you don't work together, you will never do a good migration, whether it's outsourcing or non-outsourcing. At the end, the new data center must receive all of the assets or all of the data -- and it must work.

If you do a lot of migrations, and that's actually what most of the service companies like HP are doing, we know how to do migrations and how to treat some of the applications migrated as part of a "migration factory."

We actually built something like a migration factory, where teams are doing the same over and over all the time. So, if we have to move Oracle, we know exactly how to do this. If we have to move SAP, we know exactly how to do this.

That's like building a car in a factory. It's the same thing day in and day out, everyday. That's why customers are coming to service providers. Whether you go to an outsourcing or non-outsourcing, you should use a service provider that builds new data centers, transforms data centers, and does migration of data centers nearly every day.

Most of the time, the people that know best how it used to work are the customers. If you don't work with and don't partner directly with the customer, then migration will be very, very difficult. Then, you'll hit the difficult parts that people know will fail, and if they don't inform you, you will have to solve the problem.

McKinnis: Cloud computing has put things back in people's heads around what can be put out there in that shared environment. I don't know that we've quite gotten through the process of whether it should be at a service provider location, my location, or within a very secure location at an outsourced environment.

Where to hold data

I don't think they've gotten to that at the enterprise level. But, they're not quite so convinced about giving users the ability to retain data and do that processing, have that application right there, held within that confinement of that laptop, or whatever it happens to be that they are interacting with. They're starting to see that it potentially should be held someplace else, so that the risk of that data isn't held at the local level.

Bennett: Adopting a cloud strategy for specific business services would let you take advantage of that, but in many of these environments today cloud isn't a practical solution yet for the broad diversity of business services they're providing.

We see that for many customers it's the move from dedicated islands of infrastructure, to a shared infrastructure model, a converged infrastructure, or an adaptive infrastructure. Those are significant steps forward with a great deal of value for them, even without getting all the way to cloud, but cloud is definitely on the horizon.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript of the podcast, or download a copy. Learn more. Sponsor: Hewlett-Packard.

You may also be interested in:

Thursday, December 17, 2009

Executive interview: HP's Robin Purohit on how CIOs can contain IT costs while spurring innovation payoffs

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

The latest BriefingsDirect podcast delivers an executive interview with Robin Purohit, Vice President and General Manager for HP Software and Solutions.

I had the pleasure to recently sit down with Purohit to examine how CIOs are managing their IT budgets for 2010. During the economic recovery, the cost-containment conundrum of "do more for less" -- that is, while still supporting all of your business requirements -- is likely to remain the norm.

So this discussion centers on how CIOs are grappling with implementing the best methods for higher cost optimization in IT spending, while also seeking the means to improve innovation and business results. The interview coincides with HP's announcements this week at Software Universe in Germany on fast-tracks to safer cloud computing.

"Every CIO needs to be extremely prepared to defend their spend on what they are doing and to make sure they have a great operational cost structure that compares to the best in their industry," says Purohit.

The 25-minute interview is conducted by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Purohit: Well, just about every CIO I've talked to right now is in the middle of planning their next year’s budget. Actually, it's probably better to say preparing for the negotiation for next year’s budget. There are a couple of things.

The good news is that this budget cycle doesn’t look like last year’s. Last year’s was very tough, because the financial collapse really was a surprise to many companies, and it required people to very quickly constrain their capital spend, their OPEX spend, and just turn the taps off pretty quickly.

... [Now] they need to be able to prepare to make a few big bets, because the reality is that the smartest companies out there are using this downturn as an advantage to make some forward-looking strategic bets. If you don't do that now, the chances are that, two years from now, your company could be in a pretty bad position.

... There are a couple of pretty important things to get done. The first is to have an extremely good view of the capital you have, and where it is in the capital cycle. Getting all of that information that is timely, accurate, and at your fingertips, so you can enter the planning cycle, is extraordinarily important and fundamental.

When you are going to deploy new capital, always make sure that it's going to be able to be maintained and sustained in the lowest-cost way. The way we phrase this is, "Today's innovation is tomorrow’s operating cost."

When you do refresh, there are some great new ways of actually using capital on server storage and networking that's at a much lower cost structure, and much easier to operate, than the systems we had three or four years ago.

In the past, we’ve seen mistakes made, where people deployed new capital without really thinking how they were going to drive the long-term cost structure down in operating that new capital.

This is where we really see an opportunity: To help customers put in place IT financial management solutions, which are not just planning tools -- not just understanding what you have -- but essentially a real-time financial analytic application that is as timely and accurate as an enterprise resource planning (ERP) system, or a business intelligence (BI) system that's supporting the company’s business process.

New business agenda

Companies want to see the CIOs use capital to support the most important business initiatives they have, and usually those are associated with revenue growth -- expanding the sales force, new business units, some competitive program, or eventually a new ecommerce presence.

It's imperative that the CIO shows as much as possible that they're applying capital to things that clearly align with driving one of those new business agendas that's going to help the company over the next three years.

Now, in terms of how you do that, it's making sure that the capital spend that you have, that everything in the data center you have, is supporting a top business priority. It's the most important thing you can do.

One thing that won't change is that demand from the business will all of a sudden outstrip your supply of capital and labor. What you can do is make sure that every person you have, every piece of equipment you have, every decision you are making, is in the context of something that is supporting an immediate business need or a key element of business operation.

There are lots of opportunities to be disciplined in assessing your organization -- how you spend capital, how you use your capital, and what your people are working on. I wouldn't call it waste, but rather a matter of better discipline about whether what you're doing truly is business critical or not.

If you don't get the people and process right, then new technologies, like virtualization or blade systems, are just going to cause more headaches downstream, because those things are fantastic ways of saving capital today. Those are the latest and greatest technologies. Four or five years ago, it was Linux and Windows Server.

It also means there are more things, and more new things, to manage. If you don't have extremely disciplined processes that are automated, if your whole team isn't working from one playbook on what those processes are, and if there isn't a collaborative, as-automated-as-possible way for them to work on those processes, your operating costs are just going to increase as you embrace the new technologies that lower your capital spend. You've got to do both at the same time.

Say that you're a new CIO coming to an organization and you see a lack of standardization, a lack of centers of excellence, and a lot of growth through mergers and acquisitions. There is a ton of opportunity there to take out operating cost.

The right governance


We've seen customers generally take out 5 to 10 percent, when a new CIO comes on board, rationalizes everything that's being done, and introduces rigorous standardization. That's a quick win, but it's really there for companies that have been probably a little earlier in the maturity cycle of how they run IT.

A couple of new things that are possible now with the outsourcing model and the cloud model -- whether you want to call it cloud or software as a service (SaaS) -- is that there's an incredibly rich marketplace of boutique service shops and boutique technology providers that can provide you either knowledge or technology services on-demand for a particular part of your IT organization.

The cost structures associated with running infrastructure as a service (IaaS) are so dramatically lower and very compelling, so if you can find a trusted provider for that, cloud computing allows you to move at least the lower-risk areas over to experiment with those kinds of new techniques.

The other nice thing we like about cloud computing is that there is at least a perception that it is going to be pretty nimble, which means that you'll be able to move services in and out of your firewall, depending on where the need is, or how much demand you have.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

You may also be interested in:

Wednesday, December 16, 2009

Early thoughts on IBM buying Lombardi: Keep it simple

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

This has been quite a busy day, having seen IBM’s announcement come over the wire barely after the alarm went off. Lombardi has always been the little business process management (BPM) company that could.

In contrast to rivals like Pegasystems, which has a very complex, rule-driven approach, Lombardi’s approach has always been characterized by simplicity. In that sense, its approach mimicked that of Fuego before it was acquired by BEA, which of course was eventually swallowed up by Oracle.

We echo Sandy Kemsley’s thoughts of letdown about hopes for a Lombardi IPO. But even had the IPO been done, that would have postponed the inevitable. We agree with her that if IBM is doing this acquisition anyway, it makes sense to make Lombardi a first-class citizen within the IBM WebSphere unit.

Not surprisingly, IBM is viewing Lombardi for its simplicity. At first glance, it appears that Lombardi Teamworks, their flagship product, overlaps WebSphere BPM. Look under the hood, and WebSphere BPM is not a single engine, but the product of several acquisitions and internal development, including the document-oriented processes of FileNet and the application integration processes from Crossworlds.

So in fact Lombardi is another leg of the stool, and one that is considerably simpler than what IBM already has. In fact, this is very similar to how Oracle has positioned the old Fuego product alongside its enterprise BPM offering, which is built around IDS Scheer’s ARIS modeling language and tooling.

IBM’s strategy is that Lombardi provides a good way to open the BPM discussion at the department level. But significantly, on today's announcement call, IBM stated that once the customer wants to scale up, it would move the discussion to its existing enterprise-scale BPM technology. It cited a joint engagement at Ford -- where Lombardi works with the engineering department, while IBM works at the B2B trading partner integration level -- as an example of how the two pieces would be positioned going forward.

James Governor of RedMonk had a very interesting suggestion that IBM could leverage the Lombardi technologies atop some of its Lotus collaboration tools. We also see good potential synergies with the vertical industry frameworks as well.

The challenge for IBM is preserving the simplicity of Lombardi products, which tend to be more department-oriented and bottom-up, vs. the IBM offerings that are enterprise-scale and top-down. Craig Hayman, general manager of the application and integration middleware (WebSphere) division, admitted on the announcement call that IBM has “struggled” in departmental, human-centric applications. In part that is due to IBM’s top-down enterprise focus, and also the fact that all too often, IBM’s software is known more for richness than ease of use.

A good barometer of how IBM handles the Lombardi integration will be reflected in how it handles Lombardi Blueprint and IBM WebSphere BlueWorks BPM. Blueprint is a wonderfully simple hosted process-definition service, while BlueWorks is also hosted but far more complex, with heavy strains of social computing.

We have tried Blueprint and found it to be a very straightforward offering that simply codifies your processes, generating Word or PowerPoint documentation, and BPMN models. The cool thing is that if you use it only for documentation, you have gotten good value out of it – and in fact roughly 80 percent of Blueprint customers simply use it for that.

On today's call, Hayman said that IBM plans to converge both products. That's a logical move. But please, please, please, don’t screw up the simplicity of Blueprint. If necessary, make it a stripped down face of BlueWorks.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

New HP offerings enable telcos to deliver more safe cloud services fast

Hewlett-Packard (HP) has significantly elevated its efforts to become an indispensable full-service supplier to cloud computing aspirants, especially telecommunications, mobile and Internet service providers.

At Software Universe in Hamburg, Germany, HP today announced three new offerings designed to enable cloud providers and enterprises to securely lower barriers to adoption and accelerate the time-to-benefit of cloud-delivered services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Timing here is critical. As the end users of cloud services seek flexible infrastructure, IP voice, unified communications and call center automation, cloud providers need a fast track to such low-risk cloud capabilities. HP is also wasting no time as it competes yet more broadly against Cisco Systems in the race to become the mainstream means to cloud services.

Among the new offerings:
  • HP Operations Orchestration, which will automate the provisioning of services within the existing infrastructure, allowing businesses to seamlessly increase capacity through integration with such things as Amazon Elastic Compute Cloud. Look for other public cloud providers to offer this as well.

  • HP Communication as a service (CaaS), a cloud program that will enable service providers to offer small and mid-size businesses services delivered on an outsourced basis with utility pricing. CaaS includes an aggregation platform, four integrated communications services from HP and third parties, as well as the flexibility to offer other on-demand services.

  • HP Cloud Assure for Cost Control, designed to help companies optimize cloud costs and gain predictability in budgeting by ensuring that they right-size their compute footprints.
Cloud Assure was introduced by HP last Spring, and today's announcement moves it to the next level. Neil Ashizawa, manager of HP's Software-as-a-Service (SaaS) Products and Cloud Solutions, recently spoke with me about Cloud Assure for cost control. He told me:
"When we first launched Cloud Assure earlier this year, we focused on the top three inhibitors, which were security of applications in the cloud, performance of applications in the cloud, and availability of applications in the cloud. We wanted to provide assurance to enterprises that their applications will be secure, they will perform, and they will be available when they are running in the cloud.

"The new enhancement that we're announcing now is assurance for cost control in the cloud. Oftentimes enterprises do make that step to the cloud, and a big reason is that they want to reap the benefits of the cost promise of the cloud, which is to lower cost."
He then explained how Cloud Assure for cost control works:
"Cloud Assure for cost control solution comprises both HP Software and HP Services provided by HP SaaS. The software itself is three products that make up the overall solution.
  • "The first one is our industry-leading Performance Center software, which allows you to drive load in an elastic manner. You can scale up the load to very high demands and scale back load to very low demand, and this is where you get your elasticity planning framework.

  • "The second solution from a software’s perspective is HP SiteScope, which allows you to monitor the resource consumption of your application in the cloud. Therefore, you understand when compute resources are spiking or when you have more capacity to drive even more load.

  • "The third software portion is HP Diagnostics, which allows you to measure the performance of your code. You can measure how your methods are performing, how your SQL statements are performing, and if you have memory leakage."
These HP-driven means to attain cloud benefits sooner rather than later come in response to recent surveys in which industry executives clearly stated a need for more flexible computing options in the face of uncertain economic times. They want to be able to dial their delivery of services up and down, but without the time and cost of building out the capital-intensive traditional delivery models.

The market is also looking to cloud services -- be they on-premises, from third-parties or both -- to provide:
  • Elasticity – the ability to rapidly respond to changing business needs with automated provisioning of cloud and physical services
  • Cost control – by optimizing efficiency and gaining predictability of costs by ensuring cloud compute resources are “right sized” to support fluctuating business demands
  • Risk reduction – through automated service provisioning that reduces manual errors, non-compliance snafus, and downtime of business services and processes.
I think HP has correctly identified a weakness in the SaaS and cloud markets. In many cases, applications and productivity services came to market first, but lacked the enterprise-caliber infrastructure, management, and auditing and fiscal control mechanisms. Now, HP is bringing these traditional IT requirements to the cloud domains, and making them available to the large market of existing providers.

The cloud horse is now in front of the cart, which means the providers can do their jobs better, and more end users can adopt secure cloud services in ways that reassure their managers and adhere to their governance policies.

BriefingsDirect contributor Carlton Vogt provided editorial assistance and research on this post.


You may also be interested in:

Tuesday, December 8, 2009

Fujitsu ascends to new cloud offerings, expands data center to cover enterprises and ISVs

Many companies are intrigued by the potential cost savings and agility promised by cloud computing, but a lot of them are unsure about how and when to get in. Fujitsu is rising to the occasion with end-to-end cloud services designed to help both enterprises and independent software vendors (ISVs).

Fujitsu says its new solution will allow companies to migrate existing multi-platform and multi-vendor mission-critical systems to enterprise clouds. The benefit is that it will remove capital-intensive investments in technology and replace them with a pay-as-you-go strategy.

Scheduled for launch in the first quarter of 2010, the Fujitsu services have already attracted several ISVs, who plan to offer their own services to clients, using a software-as-a-service (SaaS) model. To accommodate the move, Fujitsu has upgraded its Sunnyvale, Calif. data center to the Tier III level and will support the cloud application programming interface (API).

Designed for enterprises in manufacturing, finance, healthcare, retail and other compute- and data-intensive industries, Fujitsu's cloud solutions include system construction, operations, maintenance services and full-featured vertical applications. In order to comply with vertical industry standards and regulations, retail transactional applications will be hosted in a payment-card industry (PCI) compliant data center and health care applications will be hosted in a Health Insurance Portability and Accountability Act (HIPAA) compliant environment.

Going green

In addition, the multi-million dollar data-center upgrade and expansion will more than double available raised floor space, reduce carbon emissions by 21 percent, and increase available power and cooling capabilities that will dramatically expand the data center’s effective capacity by over 800 percent.

The redesign leverages technology from Fujitsu, including its PalmSecure palm vein recognition technology for physical access control, Fujitsu 10-gigabit switch technology for core backbone fabric, and Fujitsu PRIMERGY server and ETERNUS storage technologies. Sunnyvale will join other premier Fujitsu Tier-III+ and Tier IV facilities in the Americas, including Dallas, Montreal and Trinidad, in delivering high-availability IT solutions.

Fujitsu recently announced enhancements to its Interstage Cloud business process management (BPM) service, which will be migrated to the new secure cloud platform as soon as it is available.

The goal of the cloud API submitted by Fujitsu to the Open Cloud Standards Incubator of the Distributed Management Task Force (DMTF) is to maintain interoperability among various cloud computing environments, so clients don't need to worry about vendor lock-in when adopting a particular cloud computing platform. Fujitsu plans to actively participate in the standardization process of the DMTF and aims to implement the API as part of its next-generation infrastructure-as-a-service (IaaS) platform.

Among the first ISVs to take advantage of the new cloud services offerings are CoolRock Software, an ISV specializing in email management software for archiving, ediscovery and collaboration, and Intershop Communications, a leading ecommerce solutions ISV.

You may also be interested in:

Monday, December 7, 2009

TIBCO borrows a Twitter page to bring better information to enterprise workers

TIBCO Software will release in 2010 software that lets people search for and then track corporate information by subject matter in a similar way to how they might follow people on Twitter.

This is a clear sign that the enterprise software and social software worlds are merging. Get ready to see a lot more.

The idea behind tibbr – the name an obvious play on “Twitter” – is to help people find information related to their particular tasks and jobs quickly and easily by searching for information based on its subject matter, and then subscribing to relevant feeds on those topics, the company said. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Lack of information isn’t the main problem for enterprise systems these days. What's really needed is a useful interface and method for getting to the precise information quickly and easily, to help business workers do their jobs more efficiently. By taking a page out of the social networking playbook, TIBCO aims to let people access corporate information via a Twitter-like "update." The result: workers can find the information they need faster, so, in theory, they perform with far higher productivity.

In an interview with All Things D’s Ben Worthen, TIBCO CEO Vivek Ranadive said he got the idea for tibbr when reading – what else? – Twitter. More specifically, he said the inspiration came while he read updates to the micro-blogging service made by NBA basketball player Shaquille O’Neal.

With people spending – or arguably wasting -- so much time on social-networking applications outside of their everyday work tasks, companies have been looking for ways to apply social-networking technologies like real-time collaboration, status updates and Web presence information inside the firewall. TIBCO obviously sees tibbr as one way to do it.

I expect we'll see more ways that the social wall interface makes its way into the business IT domain. This interface could easily replace the email in-box as the place workers tend to "live" during their jobs. Google Wave clearly also sees this as a good fit.

And, of course, no one "wall" will do. We should also expect an aggregation of walls that will follow us, and also adapt in terms of what takes priority on the personalized wall -- automated via policies -- based on what we are doing. Or where we are doing it. Or both.

As TIBCO describes tibbr, it will let people set “subjects” that represent a user, an application or a process relevant to what tasks or functions someone performs in an organization. Through tibbr, they can subscribe to feeds by category – for example, Finance or Accounts Payable – for specific information they think will be relevant to their jobs.
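To show the subject-following model in the simplest possible terms, here is a toy sketch. This is not TIBCO code; the class and method names are invented purely for illustration of the idea of subscribing to subjects rather than only to people.

    # Toy model of topic-based subscriptions: users follow subjects, and any
    # person, application, or process can post updates to a subject.

    from collections import defaultdict

    class SubjectFeed:
        def __init__(self):
            self.posts = defaultdict(list)          # subject -> list of updates
            self.subscriptions = defaultdict(set)   # user -> subjects followed

        def follow(self, user, subject):
            """Subscribe a user to a subject such as 'Finance' or 'Accounts Payable'."""
            self.subscriptions[user].add(subject)

        def post(self, subject, update):
            """An application, process, or person publishes an update to a subject."""
            self.posts[subject].append(update)

        def feed(self, user):
            """Everything the user follows -- the Twitter-like 'wall'."""
            return [(subject, update)
                    for subject in sorted(self.subscriptions[user])
                    for update in self.posts[subject]]

    wall = SubjectFeed()
    wall.follow("dana", "Accounts Payable")
    wall.post("Accounts Payable", "Invoice batch 1142 approved")
    print(wall.feed("dana"))

The actual service obviously adds far more (it is built on TIBCO's Silver cloud platform, as noted below), but the subject-centric subscription model is the core idea the sketch captures.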

Tibbr is based on Silver, TIBCO’s own cloud-computing infrastructure platform. TIBCO unveiled Silver earlier this year as a rapid-application development and delivery system for companies that want to deploy cloud computing but are unsure how to get started.

The company also is pushing tibbr’s foundation on open standards as an advantage for companies that want to integrate it with other applications so it can become a part of someone’s daily workflow.

TIBCO plans to test tibbr out on its own employees beginning on Dec. 14 before rolling it out to customers in early 2010.

BriefingsDirect contributor Elizabeth Montalbano provided editorial assistance and research on this post.