Tuesday, May 4, 2010

Just as the vendor-speak turns from SOA, the users are actually embracing it

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

There is a core disconnect between what gets analysts and journalists excited, and what gains traction with the customers who consume the technologies that keep our whole ecosystem in business.

OK, guilty as charged, we analysts get off on hearing about what’s new and what’s breaking the envelope, but that’s the last thing that enterprise customers want to hear.

Excluding reference customers (who have a separate set of motivations that often revolve around a vendor productizing something that would otherwise be custom developed), most want the tried and true, or at least innovative technology that has matured through the rough spots and is no longer version 1.0.

It’s a thought that crystallized as we bounced impressions of this year’s IBM Impact 2010 event off colleagues like Dorothy Alexander and Marcia Kaufman, who shared perceptions that, while this year’s headlines or trends seemed a bit anticlimactic, there was real evidence that customers were actually “doing” whatever it is that we associate with SOA.

[Note: See a roundup of Impact news.]

Forget about the architectural journeys that you’ve heard about with SOA; SOA is an enterprise architectural pattern that is a means to an end. It’s not a new argument; it was central to the "SOA is dead" debate that flared up with Anne Thomas Manes’ famous or infamous post of almost a year and a half ago, and of the subsequent debates and hand wringing that ensued.

IBM’s so-called SOA conference, Impact, doesn’t even include SOA in its name. But until now SOA was the implicit rationale for this WebSphere middleware stack conference to exist. Yet more and more the focus is about the stack that SOA enables; and more and more, about the composite business applications that IBM’s SOA stack enables.

IBM won’t call it the applications business, but when you put vertical industry frameworks, business rules, business process management, and analytics together, it’s not simply a plumbing stack, but a collection of software tools and vertical industry templates that become the new de facto applications that bolt atop and aside the core application portfolio that enterprises already have and are not likely to replace.

Something old, something new

In past years, this conference was used to introduce game changers, such as the acquisition of Webify that placed IBM Software firmly on the road to verticalizing its middleware.

This year the buzz was about something old becoming something new again. IBM’s acquisition of Cast Iron, as dissected well by colleagues Dana Gardner and James Governor, reflects the fact that after all these years of talking flattened architectures, especially using the ESB style, enterprise integration (or application-to-application, or A2A) hubs never went out of style. There are still plenty of instances of packaged apps out there that need to be interfaced.

The problem is no different from a decade ago, when the first wave of EAI hubs emerged to productize systems integration of enterprise packages. The EAI business model never scaled well in its time because of the need for too much customization; since then, experience, the commoditization of templates, and the emergence of cheap appliances have provided economic solutions to this model.

More importantly, the emergence of multi-tenanted SaaS applications, like Salesforce.com, Workday and many others, has imposed a relatively stable target data schema plus a need for integration of cloud and on-premises applications. Informatica has made a strong run with its partnership with Salesforce, but Informatica is part of a broader data integration platform that for some customers is overkill. By contrast, niche players like Cast Iron, which do only data translation, have begun to thrive with a Blue Chip customer list.
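The kind of data translation these hubs perform can be sketched minimally. This is a hypothetical illustration, not Cast Iron's actual technology; the field names, codes, and target schema below are invented for the example:

```python
# Hypothetical sketch of A2A data translation: mapping an on-premises
# record into a SaaS application's stable target schema. All field
# names and codes here are invented, not any vendor's actual schema.

FIELD_MAP = {
    "CUST_NAME": "AccountName",
    "CUST_PHONE": "Phone",
    "REGION_CD": "Region",
}

REGION_CODES = {"NE": "Northeast", "SW": "Southwest"}

def translate(record):
    """Rename fields per the map and decode region codes for the target schema."""
    out = {}
    for src_field, value in record.items():
        target = FIELD_MAP.get(src_field)
        if target is None:
            continue  # drop fields the target schema does not accept
        if src_field == "REGION_CD":
            value = REGION_CODES.get(value, value)
        out[target] = value
    return out

legacy = {"CUST_NAME": "Acme Corp", "CUST_PHONE": "555-0100",
          "REGION_CD": "NE", "INTERNAL_FLAG": "Y"}
print(translate(legacy))
# {'AccountName': 'Acme Corp', 'Phone': '555-0100', 'Region': 'Northeast'}
```

The point of packaging such mappings, as Cast Iron does, is that a stable SaaS target schema lets the mapping be built once and reused, rather than custom-developed per project as in the EAI era.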

Of course Cast Iron is not IBM’s first appliance play. That distinction goes to DataPower, which originally made its name with specialized appliances that accelerated compute-intensive XML processing and SSL encryption. While we were thinking about potential synergy, such as applying some of DataPower’s XML acceleration technology to A2A workloads, IBM’s middleware head Craig Hayman responded to us that IBM saw Cast Iron’s technology as a separate use-case. But they did demonstrate that Cast Iron’s software could, and would, literally run on DataPower’s own iron.

Of course, you could say that Cast Iron overlaps the application connectors from IBM’s Crossworlds acquisition, but those connectors, which were really overlay applications (Crossworlds used to call them “collaborations”), have been repurposed by IBM as BPM technology for WebSphere Process Server.

Arguably, there is much technology from IBM’s Ascential acquisition focused purely on data transformation that also overlaps here. But Cast Iron’s value add to IBM is the way those integrations are packaged, and the fact that they have been developed especially for integrations to and from SaaS applications – no more and no less.

IBM has gained the right-sized tool for the job. IBM has decided to walk a safe tightrope here; it doesn’t want to weigh down Cast Iron’s simplicity (a key strength) with added bells and whistles from the rest of its integration stack. But the integration doesn’t have to go in one direction – weighing down Cast Iron with richer but more complex functionality. IBM could go the opposite direction and infuse some of this A2A transformation as services that could be transformed and accelerated by the traditional DataPower line.

IBM faces a similar issue with Lombardi, a deal that it closed back in January. They’ve taken the obvious first step in “blue washing” the flagship Lombardi Teamworks BPM product, which is now rebranded IBM WebSphere Lombardi Edition and bundled with WebSphere Application Server 7 and DB2 Express under the covers.

The more pressing question is what to do with Lombardi’s elegantly straightforward Blueprint process definition tool and IBM WebSphere BlueWorks BPM, which is more of a collaboration and best-practices definition tool than a modeling tool (and still in beta). The good news is that IBM is doing the right thing in not cluttering Blueprint (now rebranded IBM BPM Blueprint), but the bad news is that there is still confusion from IBM’s mixed messages: a consistent branding umbrella, but uncertainty regarding product synergy or convergence.

Back to the main point however: while SOA was the original impetus for the Impact event, it is now receding to a more appropriate supporting role.


You may also be interested in:

Confluence of global trends ups ante for improved IT governance to prevent costly business 'glitches'

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WebLayers.


Headlines these days are full of big, embarrassing corporate and government "gotchas."

These complex snafus cost a ton of money, severely damage a company’s reputation, and most importantly, can hurt or even kill people.

From global auto recalls to bank failures to exploding oil rigs, and cyber crime that can expose the private information of millions of users, the scale and damage that technology-accelerated glitches can inflict on businesses and individuals has probably never been higher. So what is at the root?

Is it a technology-run-amok problem, or a complexity-spinning-out-of-control issue -- and why is it seemingly worse now?

A new book is coming out this summer that explores the relationship between glitches and technology, specifically the role of software use and development in the era of cloud computing.

It turns out the role and impact of governance over people, process, and technology comes up again and again in the new book.

BriefingsDirect's latest podcast discussion then focuses on the nature of, and some possible solutions for, a growing parade of enterprise-scale glitches. We interview the author of the book as well as a software expert from IBM to delve into the causes and effects of glitches and how governance relates to the problem and fixes.

Please join guests, Jeff Papows, President and CEO of WebLayers, and the author of Glitch: The Hidden Impact of Faulty Software, and Kerrie Holley, IBM fellow and Chief Technology Officer for IBM’s SOA Center of Excellence. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Papows: What we're actually seeing is the confluence of three primary factors that are creating an information technology perfect storm of sorts.

The first is a loss of intellectual capital. We saw, between 2000 and 2007, the first drop in computer science graduates. The second is merger and consolidation activity -- the other side of the recession of 2008 -- which has created massive complexity in these giant corporate IT mash-ups and critical back-office systems.

The third factor is just the sheer ubiquity of the technological complexity curve. It’s the magnitude of technology that’s now part of our social fabric, whether it’s literally one million transistors that now exist for every human being on the planet or the six billion network devices that exist in the world today, all of which are accessing the same critical back-office structures.

You take those three meta-level factors and put them together and we're making the morning broadcast news cycles now on a daily basis with more and more of these embarrassing things coming to light. They're not just inconvenient, but there are monumental economic consequences -- and we're killing people. Look at the recent glitches you have seen at places like Toyota.

One of the most heartbreaking things in the research for the book was on software that controls the radiation devices in our hospitals for cancer treatment. I ran across a bunch of research where, because of some software glitches and policy problems in terms of the way those updates were distributed, people with fairly nominal cancers received massive overdoses in radiation.

The medical professionals running these machines -- like much of our culture, because something is computerized -- just assume that it’s infallible. Because of the problems in governance or lack of governance policy, people were being over-radiated.

Holley: Jeff brought up some excellent points. The other thing that we see is that we've had this growth of distributed computing.

If we look at a lot of what businesses are trying to accomplish today, whether it’s a new business model, differentiation, or whatever they're trying to do to compete, what we are finding is that the complexity of that solution is pretty significant.

If we look at a lot of technologies that are out in the marketplace, unfortunately, in many cases they are siloed. They repair or they help with a part of the problem, but perhaps they're not holistic in dealing with the whole life-cycle. ... We just have an explosion of technologies that we have to integrate.

Secondly -- this is a point-in-time statement -- we're seeing rapid improvements in the technology to solve this. It hasn’t caught up, but I think it will. ... Along with that comes some of the challenges in terms of how we make this agile, and how we make it such that it doesn't break.

Papows: We've grown up for decades now where we just threw more and more bodies at the problem, as the technological curve grew.

There was always this never-ending economic rosy horizon, where you would just add more IT professionals and you would acquire and you’d merge systems.

In 2008, the economic malaise that we’re managing our way through changed all of that. Now, the only way out of this complexity curve that we’ve created is to turn the innovation that has been the hallmark of our industry back on ourselves.

That means automating and codifying all of the best practices and human capital that’s been in-place and learning for decades in the form of active policy management and inference engines in what we typically think of as SOA and design-time governance.

Really, all that means is automating those best practices and turning them inward, so that we’re governing ourselves as an industry in the same way that we would automate or govern many things. But now it’s no longer a "nice to have."

I would argue that it’s critical, because the complexity curve and the economics have crossed and there is no way to put this genie back in the bottle. There is no way to go backward.

There are lots of examples in the book [of what can go wrong] that may not be as ubiquitous as Toyota, but there are many cases of widespread health, power, energy, and security risks as a consequence of the lack of policy management or governance.

... We all need to say, "I am a computer science professional. We have reached a point in the complexity curve where I no longer scale." You have to start with an admission of fact. And the reality is that the demands placed on today's IT organizations, the magnitude of the existing infrastructure that needs to continue to be cared for, the magnitude of application demands for new systems and access points from all of this new technology, simply is not going to correlate without a completely different highly automated approach.

Holley: One of the nice things that the attention to SOA has brought to our marketplace is the recognition that we do need to focus on governance. I don’t know of a single client who’s got an SOA implementation who has not, as a minimum, thought about governance.

They may not be doing everything they want to do or should be doing, but governance is clearly on the attention span of everyone in terms of recognizing that it needs to be done.

... That governance is not only around the technology. It’s not only around the life-cycle of services. It’s not only around the use of addressing processes and addressing application development. Governance also focuses on the convergence that’s required between business and IT.

The synergistic relationship that we seek will be promoted through the use of governance. Change management specifically brings about a pretty significant focus, meaning that there will be a focus on the part of the business and the IT organizations and teams to bring about the results that are sought.

... A lot of what IBM has been talking about from a Smarter Planet standpoint is actually the exact issues that Jeff has talked about, which is that the world is getting more instrumented. There are more sensors. There is a convergence of a lot of different technology, SOA, business process management, mobile computing, and cloud computing.

Clearly, on one end of the spectrum, it’s increasing the complexity. On the other end of the spectrum, it’s adding tremendous value to businesses, but it mandates this attention to governance.

My book, which will be out later this year, is 100 SOA Questions: Asked and Answered. What my co-author [Ali Arsanjani] and I are trying to accomplish in the book, which distinguishes us from other SOA books in the marketplace, is based on thousands of questions that we’ve encountered over the decade in hundreds of projects where we’ve had first-hand roles as consultants, architects, and developers.

We provide the audience with a hands-on, prescriptive understanding of some of the more difficult questions, and not just have platitudes as answers, but really give the reader an answer they can act on.

Papows: If we don’t police our own industry, if we don’t get more serious about this governance, whether it’s IBM or WebLayers or some other technological help, we run the risk of seeing the headlines we’re seeing today become completely ubiquitous.

There's an old expression, "Everybody wants governance, but nobody wants to be governed." We run the risk, and I think we’ve tripped over it several times, where we get to the point where developers don’t want to be slowed down. There is this Big Brother-connotation at times to governance. We’ve got to explore a different cultural approach to it.

Governance, whether it’s design-time or run-time, is really about automating and codifying best practices, and it’s not done generically as was once taught. It can be, in my experience, very specific. The things we see Ford Motor Co. doing are very different. They're germane to their IT culture and organization.

What you need is a way to automate what you are doing, so that your best practices are enforced. I'd argue that rather than making distinctions between design and run-time governance, companies simply, one way or another, need to automate their best practices.
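The idea of codifying best practices as automated checks can be sketched in a few lines. This is a hypothetical illustration, not WebLayers' or IBM's actual products; the policies and service fields below are invented for the example:

```python
# A minimal sketch of design-time governance as automated policy checks.
# The policies and service-definition fields are invented for illustration;
# real governance suites codify far richer, company-specific rules.

POLICIES = [
    ("services must use HTTPS endpoints",
     lambda svc: svc.get("endpoint", "").startswith("https://")),
    ("services must declare an owner",
     lambda svc: bool(svc.get("owner"))),
    ("payloads must be versioned",
     lambda svc: svc.get("schema_version") is not None),
]

def audit(service):
    """Return the descriptions of every policy the service definition violates."""
    return [desc for desc, check in POLICIES if not check(service)]

svc = {"name": "orders", "endpoint": "http://internal/orders", "owner": "it-core"}
print(audit(svc))
# ['services must use HTTPS endpoints', 'payloads must be versioned']
```

Run at design time, such checks enforce best practices automatically instead of relying on reviewers to remember them -- which is the "turning innovation inward" that Papows describes.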

The business mandates of the corporations need to be reflected in an automated way that makes it manageable across the information technology life-cycle -- or you exist at your own peril.

Monday, May 3, 2010

IBM to build out hub for cloud of clouds with Cast Iron acquisition

Looking to improve its ability to integrate across various cloud ecosystems, IBM on Monday bought integration services provider Cast Iron Systems.

The latest addition to IBM's infrastructure portfolio comes amid a rolling thunder of acquisitions in the technology space, a result of confidence that the economy is improving and a validation that new service models like cloud are roiling the post-recession IT landscape.

Cast Iron Systems, in Mountain View, CA, is not a cloud provider, but rather a hub enabler of other cloud and SaaS services providers. Cast Iron, through appliances, software and services, also provides an integration capability between on-premises data and applications in enterprises and the various emerging cloud services providers.

Cast Iron recently delivered OmniConnect, a cloud integration solution that offers a single platform, rather than multiple products or on-premise tools, to accomplish cloud integrations.

With the addition of Cast Iron Systems to its portfolio, IBM will be able to offer clients a complete platform to integrate cloud applications from providers including Salesforce.com, Amazon, NetSuite and ADP with on-premise applications, such as SAP and JD Edwards, said IBM.

IBM needs to make sure the core value of integration, a mainstay of its WebSphere brands, does not slide up and out of the enterprise's data center and become controlled by the likes of cloud leaders Google, Microsoft, Amazon and Salesforce.com. I guess we can think of Cast Iron as a way to bring WebSphere to cloud integrations.

IBM expects the global cloud computing market to grow at a compounded annual rate of 28 percent from $47 billion in 2008 to $126 billion by 2012.

The Cast Iron buy propels IBM into extending its integration role across more types of integration, across cloud ecosystems, and toward becoming a traffic cop of sorts for web services and cloud API activities. Cast Iron, a privately held company founded in 2001, becomes part of IBM as of today. Terms of the sale were not released.

Security, integration, and customization form the top three hurdles that enterprises face in exploiting the benefits of cloud computing, said Craig Hayman, General Manager, Websphere, at IBM, in announcing the acquisition at the IBM Impact event in Las Vegas.

IBM sees the Cast Iron value as significantly helping on the integration portion of the cloud-weakness triumvirate. The buy is also clearly aimed at helping to simplify how integration is accomplished across ecosystems and a variety of integration styles. The addition of Cast Iron also bolsters IBM's DataPower line, given the appliance model Cast Iron has used, as well as IBM's Lombardi acquisition.

Building more Cast Iron systems and appliances using IBM hardware and software infrastructure can go a long way to cutting the total costs of providing these integration hubs.

"Cloud application use is exploding, but just because you like Salesforce.com doesn't mean you are going to throw out SAP, Oracle or other applications you have on-premise. It's a hybrid world where companies have a combination of cloud and on-premise locations," said Chandar Pattabhiram, vice president of Channel and Product Marketing for Cast Iron Systems, earlier this year. "You don't maximize the value of your cloud applications unless you get all the data into it – so you need integration."

UPDATE: Cast Iron competitor Boomi has some thoughts.

"IBM WebSphere has a history of buying appliance-based companies and so we think this is a good fit for Cast Iron," said Bob Moul, Boomi CEO. "Boomi on the other hand has been focused exclusively on the cloud computing space and has built the industry's number one integration cloud as pure Software-as-a-Service. We remain committed to the cloud and are convinced that this pure-SaaS integration approach is the best model to drive the continued success and expansion of the cloud computing industry.”

Hot on heels of smartphone popularity, cloud-based printing scales to enterprise mainstream

As enterprises focus more on putting applications and data into Internet clouds, a new trend is emerging that also helps them keep things tangibly closer to terra firma, namely, printed materials – especially from mobile devices.

Major announcements from HP and Google are drawing attention to printing from the cloud. But these two heavy-hitters aren’t the only ones pushing the concept. Lesser-known brands like HubCast and Cortado got out in front with cloud printing services that work to route online print orders to printer choices via the cloud. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Cloud printing is still a nascent concept, but some forward-thinking enterprises are moving to understand what printing from the cloud really means, what services are available, why they should give it a try, how to get started, and what’s coming next. Again, we’re early in the cloud printing game, but when Fortune 500 tech companies start advocating for a better way to print, it’s worth investigating.

HP’s Cloud Printing

HP is no stranger to cloud printing. The company is behind a service called MagCloud that lets self-publishers print on demand and sell through a web-based marketplace with no minimum orders. But a recent announcement suggests HP is looking to lead the charge into printing from the cloud for the broader enterprise ... and consumers.

Earlier this month, HP rolled out the ePrint Enterprise mobile printing solution developed in collaboration with RIM. It’s based on HP CloudPrint technology and works with BlackBerry smartphones. As HP describes it, CloudPrint lets users print documents from their mobile devices, computers and netbooks while they aren’t in the office on a LAN.

Essentially, CloudPrint blends cloud and web-services-based technologies to let people print anything—like reports, photos, emails, presentations, or documents—from anywhere. All you need is a destination network-connected printer. With CloudPrint and ePrint Enterprise, HP has a wide margin of enterprise printing needs covered.

Google’s Cloud Printing

Google got into the cloud printing game in mid-April. Dubbed Google Cloud Print, the search engine giant’s service will work with the Chrome operating system, where all applications are web apps. Google wanted to design a printing experience that would make it possible for web apps to give users the full printing capabilities that native apps have today. Access to the cloud is the one component all major devices and operating systems have in common.

Here’s how it works: Instead of relying on a single operating system—or drivers—to print, apps can use Google Cloud Print to submit and manage print jobs. Google Cloud Print will send the print job to the appropriate printer, with the particular options the user selected, and then return the job status to the app. But Google Cloud Print is still under development, which gives HP and other players a chance to gain market momentum.
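The submit/route/status pattern described above can be sketched as a toy broker. This is not the actual Google Cloud Print API; the class and method names below are invented to illustrate the cloud-broker model of printing:

```python
# A hypothetical sketch of the cloud-print-broker pattern: apps submit
# jobs to a broker, the broker routes them to a registered printer and
# returns a status. Names are invented; this is not any real print API.

class CloudPrintBroker:
    def __init__(self):
        self.printers = {}   # printer_id -> callable that renders a document
        self.jobs = {}       # job_id -> status string

    def register_printer(self, printer_id, handler):
        self.printers[printer_id] = handler

    def submit(self, job_id, printer_id, document):
        """App hands a job to the broker; the broker routes it and records status."""
        handler = self.printers.get(printer_id)
        if handler is None:
            self.jobs[job_id] = "failed: unknown printer"
        else:
            handler(document)
            self.jobs[job_id] = "done"
        return self.jobs[job_id]

broker = CloudPrintBroker()
broker.register_printer("office-laser", lambda doc: None)  # stand-in for real output
print(broker.submit("job-1", "office-laser", "quarterly-report.pdf"))  # done
```

The appeal of the model is that the app needs no printer drivers at all; only the broker knows how to reach each device, which is why it works identically from a web app, a netbook, or a phone.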

Cloud Printing Pioneers

Indeed, there are other players promoting printing from the cloud—and some could be considered pioneers. Hubcast is one of them. Hubcast bills itself as the only worldwide digital print delivery network. It routes your online print order to the high quality network printer closest to you. This way you don’t have to pay shipping charges for printing. Hubcast won the Gartner “Cool Vendor” Award back in 2008.

Meanwhile, Cortado offers one-stop mobile business software solutions that aim at the enterprise—including cloud printing. Cortado competes with HP, offering a free cloud printing app called Cortado Workplace for BlackBerry and iPhone that lets you print your documents to any printer reachable via Wi-Fi or Bluetooth. Enterprise customers can also get Cortado Corporate Server for use on their company network behind the firewall.

Why Print from the Cloud?


Road warriors, mobile workers and on-the-go professionals can see the value in being able to access information and personal documents from just about any device. The problem historically has been the need to install drivers that make printing possible. Keeping up to date with print drivers for the various printers you might encounter while out of the office is cumbersome at best and nearly impossible at worst.

HP has also invested heavily in new ways of publishing, of making the mashup of printing and cloud services a commercial opportunity, with even small-batch, location-focused publications possible via printers rather than presses.

Similarly, the latest user-focused cloud printing solutions that are integrated with mobile devices make publishing boundary-less and set the stage to boost productivity with the ability to print documents on the fly at places like FedEx, hotels, business centers or anywhere else along a professional’s travels that offer access to a printer. In other words, these solutions extend the corporate network and offer cross-platform conveniences that aren’t available through traditional printing options.

Getting started is easy. You just have to download an application to your BlackBerry or iPhone. Becoming an early adopter of cloud printing puts you on the cutting edge of business and could give you an advantage in a competitive marketplace.

Think about the possibilities of being able to print, sign and fax a document back to a client from just anywhere you happen to be. Cloud printing is poised to revolutionize the enterprise work environment in much the same way that cloud computing is transforming IT settings.

It also highlights the longer-term strength of cloud models, beyond cost savings from outsourcing. And that value is the powerful role that clouds play as integration platforms, enabling things that could not be done before and binding processes -- like printing -- that scale up and down easily and affordably.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Thursday, April 29, 2010

HP's 48Upper moves IT beyond the 'anti-social' motif into a more community-oriented, socialized flow future

I'm not saying that IT departments have a PR problem, but when was the last time you saw a button saying, "Hug an IT person today"?

What I am saying is that IT is largely misunderstood outside the walls of the IT environment. And the crusty silos inside of IT can make their own cultural connections tenuous, too.

The fact is that IT over the decades has been put into the unenviable role of having to say "No" more often than "Yes." At least that's the perception.

IT forms an expensive reality check as businesses seek to reinvent themselves, and sometimes even to adapt quickly in their own markets. Saying "No" isn't fun, but it's often the truth. This is because computers are tangible, complex and logical, and businesses are, well ... dynamic, human-oriented, emotion-driven, creative and crowd-driven. Computers take a long time to set up properly and are best managed centrally, while consensus-oriented businesses change their minds twice a quarter (for better or worse).

Yes, the IT guys inhabit the happy-go-lucky no-man's land between the gaping culture chasms of bits and bytes reality versus the bubbly new business models, market inflection points, and charisma-driven leadership visionaries' next big thing.

Worse, when asked to explain why "Yes" has to mean "No" to keep the IT systems from crapping out or security holes from opening, the business side of the enterprise usually gets a technical answer from the IT guys (and gals). It's like they are all speaking different languages, coming from different planets, with different cultural references. A recipe for ... well, something short of blissful harmony. Right?

Yet, at the same time, today's visionary business workers and managers keep finding "Yes" coming off the Web from the likes of Google, Amazon, Microsoft and the SaaS applications providers. The traditional IT department's restraint does not stack up so well against these free or low-cost Web-based wonders. The comparison might be unfair, but it's being made ... a lot.

Most disruptively, the social networks like Facebook, LinkedIn and Twitter are saying a lot more than just "Yes" to users -- they're saying, "Let's relate in whole new ways, kids!" The preferred medium of interaction has moved rapidly away from a world of email and static business application interfaces to "rivers" and "walls" of free-flowing information and group-speak insights. Actual work is indeed somehow getting done through friend conversations and chatty affinity groups linked by interests, concerns, proximity and even dynamic business processes.

So nowadays, IT has more than an image problem. It has a socialization problem that's not going away any time soon. So why shouldn't IT get social too in order to remain relevant and useful?

HP Software has taken notice, and is building out a new, yet-unreleased social media approach to how IT does business. It may very well allow IT to say "Yes" more often. But more importantly, socially collaborative IT can relate to itself and its constituents in effective and savvy new ways.

HP's goal is to foster far better collaboration and knowledge sharing among and between IT practitioners, as well as make the shared services movement align with the social software phenomenon in as many directions as possible. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Called 48Upper (apparently named after an HP skunk works location in Cupertino, CA), the new IT-focused collaboration and socialized interfaces approach is being readied for release at some point in mid-2010. There's already a web site stub at www.48upper.com and a YouTube video there that portrays a new cultural identity for IT.

I was intrigued by a recent introductory chat with HP's Matt Schvimmer, Senior Director, Marketing and Business Development at 48Upper. He explained that IT people are not reflective of white-lab-coat stereotypes, and that there's a huge opportunity to manage IT better using the tools now common among social networks and SaaS processes. His blog has more.

Matt was kind enough to share an early (dare I say, exclusive) look at an in-development (i.e., alpha) screen shot of 48Upper. It does meld worlds, for sure.


IT clearly needs to bridge its internal silos -- such as between development and operations, networks and servers, architects and developers. And, as stated, IT can go a long way to better communicate with the business users and leaders. So why shouldn't a Facebook-like set of applications and services accomplish both at once?

HP is not alone in seeing the value of mashups between social media methods and processes with business functions and governance. Salesforce.com has brought Chatter to the ERP suite (and beyond). Social business consultancies are springing up. Google Wave is making some of its own. Twitter and Facebook are finding their values extended deeply into the business world, whether sanctioned by IT or not.

What jumps out at me from 48Upper is how well social media interfaces and methods align with modern IT architectures and automation advances, such as IT shared services, SOA, cloud computing, and webby app development. A SOA is a great back-end for a social media front-end, so to speak.

An ESB is a great fit for an event-driven, policy-directed fabric of processes that is fast yet controlled. In a sense, SOA makes the scale and manageability of socialized business processes possible: the SOA can drive the application services as well as the interactions as social gatherings. Is it any wonder HP sees an opportunity here?
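
The ESB-as-social-fabric idea can be sketched with a toy publish/subscribe bus. This is illustrative only -- the class, topic, and role names are all made up, and no vendor product works exactly this way -- but it shows the "fast yet controlled" combination: events flow freely, while a policy check governs who may receive them.

```python
# Toy event bus with policy-directed delivery (hypothetical names throughout).

class MiniBus:
    def __init__(self):
        self.subscribers = {}   # topic -> list of (callback, role)
        self.policies = {}      # topic -> set of roles allowed to listen

    def allow(self, topic, role):
        """Governance step: clear a role to receive events on a topic."""
        self.policies.setdefault(topic, set()).add(role)

    def subscribe(self, topic, role, callback):
        self.subscribers.setdefault(topic, []).append((callback, role))

    def publish(self, topic, event):
        """Deliver the event only to subscribers whose role the policy permits."""
        delivered = 0
        for callback, role in self.subscribers.get(topic, []):
            if role in self.policies.get(topic, set()):
                callback(event)
                delivered += 1
        return delivered

bus = MiniBus()
bus.allow("build.failed", "developer")

seen = []
bus.subscribe("build.failed", "developer", seen.append)
bus.subscribe("build.failed", "guest", seen.append)   # not cleared by policy

count = bus.publish("build.failed", {"project": "portal", "status": "red"})
```

The point of the sketch is the separation of concerns: the conversation stays free-flowing, while the policy table -- not the participants -- decides who sees what.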

By applying governance to social media activities, the best of the new sharing can co-exist with IT's requirements for access and security control. And -- as all of this SOA-managed social activity churns along -- a ton of data and inference information is generated, allowing information management and business intelligence tools to be brought into the mix.

That sets up virtuous cycles of adoption refined by data-driven analytics that help shape the next fluid iteration of the business processes (modeled and managed, of course). It allows the best of people-level sharing and innovation to be empowered by IT, and by the IT workers.

So perhaps it's time for IT to find a new way of saying, "Yes." Or at least have a vibrant conversation about it.

Wednesday, April 28, 2010

VMforce: Cloud mates with Java marriage of necessity for VMware and Salesforce.com

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Go to any vendor conference and it gets hard to avoid what has become "The Obligatory Cloud Presentation" or "Slide." It's beyond the scope of this post to weigh hype vs. reality, but potential benefits like the elasticity of the cloud have made the idea too difficult to dismiss, even if most large enterprises remain wary of entrusting the brunt of their mission-critical systems to some external host, SAS 70 certification or otherwise.

So it’s not surprising that the cloud has become a strategic objective for VMware and SpringSource -- both before and after the acquisition that brought them together. VMware was busy forming its vCloud strategy to stay a step ahead of rivals seeking to commoditize VMware’s core virtualization hypervisor business, while SpringSource acquired CloudFoundry to take its expanding Java stack to the cloud (even as such options were becoming available for .NET and for emerging web languages and frameworks like Ruby on Rails).

Following last summer’s VMware-SpringSource acquisition, the obvious path would have placed SpringSource as the application development stack that elevates vCloud from raw infrastructure as a service (IaaS) to a full development platform. That remains the goal, but it’s hardly the shortest path to VMware’s strategic objectives.

At this point, VMware still is getting its arms around the assets that are now under its umbrella with SpringSource. As we speculated last summer, we should see some of the features of the Spring framework itself, such as dependency injection (which abstracts dependencies so developers don’t have to worry about writing all the necessary configuration files), applied to managing virtualization. But that’s for another time, another day.
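
The dependency-injection idea mentioned above is easier to see in miniature. The sketch below is in Python for brevity -- Spring itself wires Java beans from XML or annotations, with far more machinery -- and every name in it is hypothetical. The essence is that a component declares what it needs, and a container supplies it, so the component never hard-codes its own configuration:

```python
# Minimal constructor-injection sketch (illustrative only, not Spring's API).

class InMemoryStore:
    """A collaborator the service depends on; could be swapped for any other store."""
    def save(self, record):
        return f"stored {record}"

class OrderService:
    # The service declares its dependency; it never constructs the store itself.
    def __init__(self, store):
        self.store = store

class Container:
    """A toy IoC container: these registrations stand in for Spring's config files."""
    def __init__(self):
        self.registry = {}

    def register(self, name, factory):
        self.registry[name] = factory

    def resolve(self, name):
        # The container, not the component, decides how dependencies are built.
        return self.registry[name](self)

container = Container()
container.register("store", lambda c: InMemoryStore())
container.register("orders", lambda c: OrderService(c.resolve("store")))

service = container.resolve("orders")
result = service.store.save("order-42")
```

Swap the "store" registration and every consumer picks up the new implementation without a code change -- which is exactly why the same abstraction is attractive for managing virtualized resources.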

VMware’s more pressing need is to make vSphere the de facto standard for managing virtualization, and vCloud the de facto standard for cloud virtualization. (Actually, if you think about it, it is virtualization squared: OS instances virtualized from hardware, and hardware virtualized from infrastructure.)

In turn, Salesforce.com wants to become the de facto cloud alternative to Google, Microsoft, IBM and, when they get serious, Oracle and SAP. The dilemma is that Salesforce until now has built a walled garden. That was fine as long as the business was confined to CRM and to third-party AppExchange providers who piggybacked on Salesforce’s own multi-tenant infrastructure using its proprietary Force.com environment, with its “Java-like” Apex stored-procedures language.

But at the end of the day, Apex is not going to evolve into anything more than a Salesforce.com niche development platform, and Force.com is not about to challenge Microsoft .NET, or Java for that matter.

The challenge is that Salesforce, having made the modern incarnation of remote hosted computing palatable to the enterprise mainstream, now finds itself in a larger fishbowl: outgunned in sheer scale by Amazon and Google, and standing outside the on-premises enterprise Java mainstream. Salesforce Chairman and CEO Marc Benioff conceded as much at the VMforce launch this week, characterizing Java as “the No. 1 developer language in the enterprise.”

So VMforce is the marriage of two suitors, each needing its own leapfrog: VMware gains a ready-made, cloud-based Java stack with existing brand recognition, and Salesforce.com steps up to the wider enterprise Java mainstream opportunity.

Apps written using the Spring Java stack will gain access to Force.com's community and services such as search, identity and security, workflow, reporting and analytics, a web services integration API, and mobile deployment. But it also means dilution of some features that make the Force.com platform what it is; the biggest departure is away from the Apex stored-procedures architecture that runs directly inside the Salesforce.com relational database.

Salesforce pragmatically trades scalability of a unitary architecture for scalability through a virtualized one.

It really means that Salesforce morphs into a different creature, and now must decide whom it means to compete with, because it’s not just Oracle business applications anymore.

Our bet is that Salesforce splits the difference with Amazon, as other SaaS providers like IBM that don’t want to get weighed down by sunk costs have already done. If Salesforce wants to become the enterprise Java platform-as-a-service (PaaS) leader, it will have to ramp up capacity, and matching Amazon or Google in a capital-investment race is a nearly hopeless proposition.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Monday, April 26, 2010

HP rolls out application modernization tools on heels of Forrester survey showing need for better app lifecycle management

Application lifecycle productivity is proving an escalating challenge in today’s enterprise. Bloated app portfolios and obsolete technologies can stifle business agility and productivity, according to a new Forrester Research IT trends survey.

A full 80 percent of the IT decision-makers queried cited obsolete and overly complex technology platforms as having a "significant" or "critical" impact on application delivery productivity. Another 76 percent cited the negative impact of "cumbersome software development lifecycle processes," while 73 percent said it was "difficult to change legacy applications," according to Forrester's consulting division, which conducted the study. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A sharp focus on overcoming the challenges associated with improving applications quality and productivity is leading to a growing demand for applications modernization. Specifically, agility, cost reduction and innovation are driving modernization efforts, the Forrester survey concludes.

Fifty-one percent of Forrester’s respondents are currently modernizing software development lifecycle tools, including software testing processes. But are enterprises truly realizing the benefits of application modernization efforts?

On Monday, HP rolled out a set of application quality tools that focus on increasing business agility and reducing time to market to help more companies answer "yes" to that question. The new solutions are part of the HP Application Lifecycle Management portfolio, a key component of HP’s Application Transformation solutions to help enterprises manage shifting business demands.

New challenges, new tools

HP Service Test Management (STM) 10.5 and the enhanced HP Functional Testing 10.0 work to advance application modernization efforts in two ways. First, the tools make it easier for enterprises to focus on hindrances to application quality. Second, the tools improve the all-important line of sight between development and quality assurance teams.

“To maintain a competitive edge in today’s dynamic IT environment, it is critical for business applications to rapidly support changes without compromising quality or performance,” says Jonathan Rende, vice president and general manager of HP’s Business Technology Optimization Applications, Software and Solutions division.

HP STM 10.5 works to mitigate risk and improve business agility by setting the stage for more collaboration between development and quality assurance teams. Built on HP Quality Center, HP STM 10.5 helps enterprises increase testing efficiency and the overall throughput of application components and shared services.

Meanwhile, HP Functional Testing 10.0 ensures application quality to address changing business demands. It even offers a new Web 2.0 Feature Pack and Extensibility Accelerator that supports Web 2.0 apps and lets IT admins test any rich Internet application technology.

“It is critical for us, particularly in the financial industry, to react rapidly to development changes early on in the testing life cycles,” says Mat Gookin, test automation lead at SunTrust Banks. “We look to ... flexible technology that keeps application quality performance high and operations cost low so we can focus on preventing risks and providing value to our end users.”

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, April 23, 2010

Freed from data center requirements, cloud computing gives start-ups the fast-track to innovate, compete

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.

By Mike Kavis

Cloud computing is grabbing a lot of headlines these days. As we saw with SOA, there is a lot of confusion about what cloud computing is, a lot of resistance to change, and a lot of vendors repackaging their products and calling them cloud-enabled.

While many analysts, vendors, journalists, and big companies argue back and forth about semantics, economic models, and viability of cloud computing, start-ups are innovating and deploying in the cloud at warp speed for a fraction of the cost.

This raises the question, “Can large organizations keep up with the pace of change and innovation that we are seeing from start-ups?”

Innovate or die

Unlike large, well-established companies, start-ups don’t have the time or money to debate the merits of cloud computing. In fact, a start-up will have a hard time getting funded if it chooses to build data centers, unless building data centers is its core competency.

Start-ups are looking for two things: speed to market and a minimal burn rate. Cloud computing provides both. Speed to market comes from eliminating long procurement cycles for hardware and software, outsourcing various management and security functions to cloud service providers, and automatically scaling resources up and down as needed.
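
That elasticity point can be made concrete with a toy scaling policy. The thresholds and names below are invented purely for illustration -- real cloud providers expose far richer rules -- but the shape is the same: add capacity under load, shed it when idle, and never drop below a minimum footprint.

```python
# Toy autoscaling policy sketch (all thresholds are hypothetical).

def desired_instances(current, avg_utilization,
                      scale_out_above=0.75, scale_in_below=0.25, minimum=2):
    if avg_utilization > scale_out_above:
        return current + 1          # add capacity under load
    if avg_utilization < scale_in_below and current > minimum:
        return current - 1          # shed idle capacity to cut the burn rate
    return current                  # steady state: no change

fleet = 4
fleet = desired_instances(fleet, 0.90)   # busy hour: grows to 5
fleet = desired_instances(fleet, 0.10)   # quiet hour: shrinks back to 4
```

The burn-rate benefit falls straight out of the quiet-hour branch: capacity you aren't using is capacity you stop paying for, which no fixed data center can offer.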

A low burn rate can be achieved by avoiding the costs of physical data centers (cooling, rent, labor, etc.), paying only for the resources you use, and freeing up staff to work on core business functions.

I happen to be the CTO of a start-up, and without cloud computing we would not even be in business. We are a retail technology company that aggregates digital coupons from numerous content providers and automatically redeems those coupons in real time at the point of sale when customers shop.

To provide this service, we need highly scalable, reliable, and secure infrastructure in multiple locations across the nation and eventually across the globe. The amount of capital required to build these data centers ourselves and hire the staff to manage them is at least 10 times what we are spending to build our 100 percent cloud-based platform. There are a handful of large companies that own the paper coupon industry.

You would think they would easily be the leaders in the digital coupon industry. But these highly successful companies are so bogged down in legacy systems, and have so much invested in on-premise data centers, that they just cannot move fast enough, or build new digital solutions cheaply enough, to compete with a handful of start-ups racing to sign up retailers for this service.

Oh, the irony of it all! The bigger companies have a ton of talent, well-established data centers and best practices, and lots of capital. Yet the cash-strapped start-ups are able to innovate faster and cheaper, producing legacy-free solutions designed specifically for a new opportunity -- one driven by increased mobile usage and a surge in web and mobile coupon redemption rates under economic pressure.

My story is just one use case where we see start-ups grabbing accounts that used to be a honey pot for larger organizations. Look at the innovation coming out of the medical, education, home health services, and social networking sectors, to name a few, and you will see many smaller, newer companies providing superior products and services at lower cost (or free), and bringing them to market more quickly.

While bigger companies are trying to change their cultures to be more agile, to do “more with less” -- and to better align business and IT -- good start-ups just focus on delivery as a means of survival.

Legacy systems and company culture as anchors

Start-ups get to start with a blank sheet of paper and design solutions to specifically take advantage of cloud computing whether they leverage SaaS, PaaS, or IaaS services or a combination of all three. For large companies, the shift to the cloud is a much tougher undertaking.

First, someone has to sell the concept of cloud computing to senior management to secure funding for a cloud-based initiative. Second, most companies have years of legacy systems to deal with. Most, if not all, of these systems were never designed to be deployed outside an on-premise data center, or to integrate with systems that are.

Often, re-engineering existing systems to take advantage of the cloud is not economically feasible and has limited value for end users. If it ain't broke, don't fix it!

Smarter companies will launch new products and services in the cloud. This approach makes more sense, but there are still issues -- internal resistance to change, skill gaps, outdated processes and best practices, and a host of organizational challenges -- that can get in the way. As we witnessed with SOA, organizational change management is a critical element of successfully implementing any disruptive technology.

Resistance to change and communication silos can and will kill these types of initiatives. Start-ups don’t have these issues, or at least they shouldn’t. Start-ups define their culture from inception, and for most of them that culture is entrepreneurial by nature. The focus is on speed, low cost, and results.

Large companies also have tons of assets depreciating on the books and armies of people trained to manage stuff on-site. Many of these companies want the benefits of the cloud without giving up the control they are used to having. This often leads them down an ill-advised path to build private clouds within their own data centers.

To make matters worse, some even use the same technology partners that supply their on-premise servers, without properly evaluating the thought-leading vendors in this space. When you see people arguing about the economics of the cloud, this is why: the cloud is economically feasible when you do not procure and manage the infrastructure on-site.

With private clouds, you give up many of the benefits of cloud computing in return for control. Hybrid clouds offer the best of both worlds, but even hybrids add a layer of complexity and manageability that may drive costs higher than desired.

We see start-ups leveraging the public cloud for almost everything. There are a few exceptions where, due to customer demands, certain data are kept at the customer site or in a hosted or private cloud, but that is the exception, not the norm.

The ZapThink take

Start-ups will continue to innovate and leverage cloud computing as a competitive advantage, while large, well-established companies test the waters with non-mission-critical solutions first. Large companies will not be able to deliver at the speed of start-ups, due to legacy systems and organizational issues, and will thus concede certain business opportunities to start-ups.

Our advice is that larger companies create a separate cloud team that is not bound by the constraints of the existing organization, and let it operate as a start-up. Larger companies should also consider funding external start-ups that are working on products and services that fit their portfolios.

Finally, large companies should have their mergers-and-acquisitions departments actively looking for promising start-ups for strategic partnerships, acquisitions, or even buy-to-kill strategies. This approach lets larger companies focus on their core business while shifting the risks of failed cloud executions to the start-ups.

If you’re a Licensed ZapThink Architect and you’d like to contribute a guest ZapFlash, please email info@zapthink.com.

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.

You may also be interested in: