Wednesday, March 5, 2008

Cloud computing for enterprises, work it through your head

Here are some great quotes from a Hiperware white paper I just read:
In combination, cluster computing and multi-core computers have the potential to provide unprecedented performance, scalability and reliability for enterprise software.

Much of the significant benefit evident in the ideology of multicore and cluster computing -- lower costs, higher availability and scalability -- is effectively negated by the cost, time, risk and complexity involved in developing and deploying software that can run on these systems.

... What hinders businesses from taking advantage of multicore and clustered hardware is the lack of a simple means – such as a Rapid Application Development (RAD) method – so that software developers can quickly develop, test and deploy enterprise software on these systems.

By taking the engineering complexity away from multi-core and cluster-computing, Hiperware Platform makes it significantly easier for developers to write software that can be partitioned across multiple computers or CPU-cores or virtual machines.
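
To make the partitioning idea concrete, here's a minimal sketch using Python's standard multiprocessing module -- an illustration of the general fan-out pattern only, not Hiperware's API; the process_order function is a hypothetical stand-in for real enterprise work:

```python
# A minimal sketch of partitioning a workload across CPU cores using
# Python's standard multiprocessing module. Illustration only -- not
# Hiperware's API; process_order is a hypothetical placeholder.
from multiprocessing import Pool

def process_order(order_id):
    # Stand-in for real per-item enterprise work (validation, pricing, etc.)
    return order_id, order_id * 2

if __name__ == "__main__":
    order_ids = range(1000)
    # Pool() defaults to one worker process per available CPU core; the
    # same map-style pattern extends to clusters via distributed task queues.
    with Pool() as pool:
        results = pool.map(process_order, order_ids)
    print(len(results), "orders processed in parallel")
```
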
The new paper goes on to detail several enterprise computing use-case scenarios that show how cloud computing architectures and methodologies, if enterprise developers can exploit them, will rapidly deliver cost benefits.

Cloud computing is not just for Google and Amazon, folks. It will become synonymous first with high-performance computing, and then with good old enterprise mission-critical computing, in all its forms, in the coming years.

The new neat trick will be managing how the clouds and SOAs relate and interact. And that spells more integration as a service, and more federated policy management and enforcement as a service. It's a whole new abstraction for middleware.

Cloud computing could be the next big opportunity for middleware.

Tuesday, March 4, 2008

Splunk goes 'platform' to extend IT search benefits across more IT management functions

Gaining more insights early and often into what vast arrays of servers, routers and software stacks are actually doing has long been at the top of the IT wish list. Traditional IT management approaches force a trade-off between depth and comprehensive reach, meaning you can't get the full, integrated picture across mixed systems with sufficient clarity.

Splunk's approach to this problem has been to index and make searchable the flood of constantly generated log files emitted from IT systems, and then to align the time stamps to draw out business intelligence inferences about actual IT performance.
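
To see why time-stamp alignment matters, consider a toy sketch (mine, not Splunk's implementation) that merges log lines from several hypothetical hosts into one time-ordered stream and flags cross-system events that land close together:

```python
# A toy illustration of the time-stamp alignment idea: merge log lines
# from several systems into one time-ordered stream so events that occur
# close together can be correlated. Not Splunk's implementation.
from datetime import datetime, timedelta

logs = {
    "web01":  ["2008-03-04 10:15:02 ERROR checkout failed"],
    "db01":   ["2008-03-04 10:15:01 WARN  connection pool exhausted"],
    "router": ["2008-03-04 09:58:40 INFO  link state change"],
}

def parse(host, line):
    ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
    return ts, host, line[20:]

# Merge all sources into a single stream ordered by time stamp.
events = sorted(parse(h, l) for h, lines in logs.items() for l in lines)

# Flag events from different systems within a short window -- the raw
# material for inferring cause and effect across the stack.
window = timedelta(seconds=5)
for (t1, h1, m1), (t2, h2, m2) in zip(events, events[1:]):
    if h1 != h2 and t2 - t1 <= window:
        print(f"possible correlation: {h1} '{m1}' -> {h2} '{m2}'")
```
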

The San Francisco company took the IT information assembly and digestion process a step further two years ago by creating Splunk Base, an open reservoir of knowledge about searched IT systems that administrators can share and benefit from. [Disclosure: Splunk is a sponsor of BriefingsDirect podcasts, including this one on Splunk Base.]

Now, recognizing the power of mashed up services and Enterprise 2.0 tools for associating applications, services, and data, Splunk has gone "platform." Instead of only providing the fruits of IT search to sys admins and IT operators, Splunk has created the means to offer developers easy access to that data and the powerful inferences gleaned from comprehensive IT search. That means the data can go places no log file has gone before.

Through a common set of services and APIs, the Splunk Platform now allows developers and equipment makers to build and integrate applications that include IT-search generated data. Because Splunk collects and manages logs, configurations, messages, traps and alerts -- compiling statistics from nearly every IT component -- the makers of IT equipment can build better management and maintenance applications (not to mention billable services).

In trial use, the Splunk Platform has already been leveraged by OEMs and systems integrators, who bundle and embed Splunk with their own hardware, software and services. The opportunity for these OEMs and systems integrators is to create new business by offering ongoing maintenance and support value around their products and services.

What's more, the more applications that the various OEMs, service providers, hosting organizations, and service bureau outsourcers build on Splunk, the more those applications can be used in coordination, and the findings integrated for faster problem solving, greater threat response, heightened compliance reporting, and business intelligence insight into user activity and transactions.

I like this approach because gaining insight into total datacenter behavior in near real-time has been so difficult, but its importance is growing with the advances in virtualization, mixed-hosting arrangements, co-location, and SOA-based systems and infrastructure. In effect, both the complexity and heterogeneity of systems have kept growing, while the ability to gain common-denominator meta data about systems behaviors hasn't kept pace. We've long needed a way to make all systems "readable" in common ways.

With Splunk Platform and the applications it will spawn, IT information can now much better support and interact with distributed management applications. And we certainly need more innovative applications that can leverage this common meta data about systems to produce better management insight and quicker feedback from systems and users.

Taking this all a step further, many of these applications and services can and should support an ecosystem. By easily distributing their applications and gaining the ability to download other applications created by anyone in the Splunk ecosystem, IT managers and the makers of IT equipment will benefit. To kick-start the effort, the first Splunk-built application on the platform was announced this week. Splunk for PCI Compliance is available for download from SplunkBase.

The application provides 125 searches, reports and alerts to help satisfy PCI requirements, including secure remote access, file integrity monitoring, secure log collection, daily log review, audit trail retention, and PCI control reporting, says Splunk. The goal is to make it simpler and faster for IT managers to comply, to answer auditor questions, and to control access to sensitive systems data. Splunk has taken pains to provide security and access control to the sensitive data, while opening up access to the non-sensitive information for better analysis.

Fittingly, Splunk's foray into the developer world and applications ecosystems coincides with the company's release of Splunk 3.2, which now includes a Splunk for Windows version (on the same single code base that runs on Linux, Mac OS X, Solaris, FreeBSD and AIX). New features in Splunk 3.2 include transaction search and interactive field extraction to create easier ways for end users to generate their own applications. The update also extends the platform's capabilities with filesystem change monitoring, flexible roles, data signing and audit trails. A new REST API and SDKs for .Net and Python further open the platform to more developers.
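
To give a feel for what programmatic access to IT-search data can look like, here's a hedged sketch in Python; the endpoint path, parameters, and response fields are hypothetical placeholders, not Splunk's documented interface:

```python
# A hedged sketch of driving an IT-search platform over a REST API.
# The endpoint path, parameters, and response shape below are
# hypothetical placeholders, not Splunk's documented 3.2 interface.
import json
import urllib.parse
import urllib.request

BASE = "https://splunk.example.com:8089"       # assumed host and port
QUERY = "search sourcetype=syslog error | head 10"

req = urllib.request.Request(
    f"{BASE}/services/search",                  # hypothetical endpoint
    data=urllib.parse.urlencode({"q": QUERY}).encode(),
)
req.add_header("Authorization", "Basic ...")    # credentials elided

with urllib.request.urlopen(req) as resp:
    # Assume a JSON body with a "results" list of events.
    for event in json.load(resp).get("results", []):
        print(event.get("_time"), event.get("_raw"))
```
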

The Splunk Platform and associated ecosystem should quickly grow the means to bridge the gap between runtime actualities and design-time requirements. When developers can easily know more about what applications and systems do in the real world in real time, they can make better decisions and choices in the design and test phases. This obviously has huge time- and money-saving implications.

The need for such transparency will quickly grow as virtualization and a services-based approach to applications gain steam and acceptance. We have seen some very powerful productivity improvements as general enterprise data has been mined for business intelligence. Now it's time to better mine systems data for better IT intelligence.

Monday, March 3, 2008

Nexaweb Advance takes RIA value to the enterprise application modernization imperative

There are so many good reasons to modernize legacy and 3GL/4GL applications that enterprises are moving wholesale to modernization activities, changing entire classes of applications, and aligning them with SOA, SaaS, data center consolidation, ITIL, and energy-conservation/green initiatives.

Oh, and modernization allows you to gracefully get out of the costly fat PC client software support business and focus on the browser-only end points.

The building interest in virtualization is also a spur to getting out of the client/server business and making more applications Web-facing and services-based. These moves, in turn, allow for better organizing data into common warehouses and SANs, enabling BI and other benefits, while reducing storage and backup costs. Business continuity also gets a boost, because everything is on the server side (often on low-cost x86 Linux).

In short, what enterprises are really up to these days is datacenter transformation, the whole ball of wax, in which application modernization is an early and essential ingredient for beginning to enjoy the larger holistic productivity and cost benefits.

The trick is to keep those same older (and often mission-critical) applications performing well, with the rich GUIs that users expect, while quickly gaining the back-end integration flexibility to make the legacy logic part of the enterprise's SOA patterns.

For those applications deemed no longer mission-critical, application modernization allows for proper sunsetting. It is often worthwhile to cull out the still-valued logic, transactional mappings, and data -- and apply them anew to other applications or processes -- before pulling the plug.

Yep, so many reasons to modernize, so few ways to do it without pain, confusion, and cost. And so into this gaping need, Nexaweb today brings its rich Internet application (RIA) solution value with Nexaweb Advance. [Disclosure: Nexaweb is a sponsor of BriefingsDirect podcasts.]

For more on the whole rationale and business case for application modernization, check out a sponsored podcast I did with HP Services. ITIL v3 factors into this in a big way, so here's some background on that, too.

For Nexaweb, the end game for enterprises is flexible composite workflows, so the newest offerings are more than tools and platform; there's a professional services component to take the best practices and solutions knowledge to market as well. The process includes application asset capture and re-factoring (sort of like IT resources forensics), re-composition, deployment and then proper maintenance. In the bargain, you can gain an enhanced platform, increased automation, and services orientation.

The goal is to harvest all those stored procedures but target them to newer architectures -- from Struts to Spring -- and the move from client/server to Enterprise 2.0 is a leap-frog of sorts. The re-use of logic then allows those assets to be applied to model-driven architectures and the larger datacenter transformation values.

Nexaweb Advance pairs Nexaweb’s Enterprise Web Suite with automated code generation tools and professional services to deliver a model-driven architecture approach to the transformation of legacy PowerBuilder, ColdFusion, C++, VisualBasic, and Oracle Forms applications, according to the Burlington, Mass. company.

We have seen quite a bit of associating RIA values with SOA in the past few years, so I'm happy to see RIAs also becoming essential to other mainstream enterprise imperatives, like datacenter transformation.

Microsoft opens Pandora's box on online services, betting convenience is the killer app

Now that Microsoft has shown how online productivity applications and communications/groupware should be properly packaged, we can enter the new era of worker choice.

It's not that different from the choices developers have been making for years: Do you want the convenience of neat packaging (at the cost of flexibility and choice), or do you want to pick à la carte components that may best meet your needs and avoid lock-in?

Microsoft Online Services (MOS) is being launched for the U.S. today by Bill Gates at the annual Microsoft Office SharePoint Conference. The bevy of applications is designed to appeal to many kinds of users, and businesses of most sizes and character. A limited beta has been set up, with general availability during the second half of this year.

Core services will include Web-based e-mail, calendaring, contacts, shared workspaces, and webconferencing and videoconferencing over the Web. Microsoft is characterizing the services as part of its "software plus services" drive, so it's hard to tell how much of the "software" (that stuff installed on the PC or server) you'll need to use MOS.

Microsoft says these services will be "managed through a single Web-based interface," which sounds like a portal you'll need to log into to add or manage users. "IT professionals can monitor the performance of the services, add and configure users, submit and track support requests, and manage users and licenses," says Microsoft.

As in development, some shops like a nice big package, with per-developer-seat licenses. Others give their developers more choice on tools, utilities, desktop OS, and frameworks. They seem more interested in the work the developers do than in how they do it.

We could see a similar breakdown among more general computing users, given the MOS versus Google services offerings so far. This is more than a matter of style or taste: one model is born of and imbued with client/server, and the other is of and imbued with the Web. You know which is which.

So, in effect, Microsoft is placing a Web shell on its old model, just like it put a GUI shell on DOS with DOS 5, and another shell on that with Windows 95.

Of course on costs, the beauty and/or devil is in the details. This is a subscription service, designed for businesses, which will pay on a per-user subscription basis. Microsoft shops -- existing customers with Software Assurance on their Microsoft Client Access Licenses (CALs) -- will get a discount.

So there are two big issues here: total cost and convenience. And those will break down differently depending on whether you're a Microsoft "Assurance"-level user or a non-Microsoft user. We don't know the numbers yet, but they're going to be the real nut in this.

Microsoft will need to skate delicately on thin ice: the total cost must be close enough to what Assurance users already pay to keep them from moving too quickly, yet low enough that the Microsoft way to online SaaS remains competitive against Google and other providers of online productivity applications and communications/groupware as services.

And the way this is set up, it's almost as if Microsoft has given up on competing for individuals, students, SOHOs, and perhaps businesses of fewer than 50 people. It's almost as if they don't think they can compete with Google there -- at least not for the foreseeable future.

This is, then, about maintaining the base of small businesses and department-level buyers of Microsoft products. In essence, this is defense. It is designed to make it confusing or economically difficult to calibrate total costs, given the complexity of factoring installations, older apps, licenses, and the entire 20-year-old hairball.

And what Microsoft must do, in addition to making the true cost-benefit analysis murky, is to absolutely win on packaging and convenience. And this is where Google is vulnerable. Google has yet to show, aside from costs, how businesses of all sorts can adopt its services and approach in an easy-to-manage way that packages things up neatly for the IT folks and makes the transition from the hairball easy, convenient, and well-understood.

And so Google continues the march into businesses via the organic, user-generated interest and convenience level. Google takes the early lead on the individuals and younger, greenfield companies.

And Microsoft places a bulwark around its empire. This could be a long slog.

Sunday, March 2, 2008

OpSource releases OpSource Connect for better integrating SaaS and Web services

OpSource, a software as a service (SaaS) delivery company, is making it easier for SaaS and Web companies to consume and publish multiple Web services with the announcement of OpSource Connect, which will be a core component of the OpSource On-Demand Summer 2008 release.

OpSource Connect, which is available immediately, provides a common platform -- the OpSource Service Bus (OSB) -- that will enable integrating SaaS applications in the cloud with legacy enterprise applications behind the firewall, freeing SaaS applications from silos.

OpSource, of Santa Clara, Calif., says that its multi-tenant OSB will change the way companies build and deploy SaaS applications, as well as the way in which those applications interact with and reach new markets. According to OpSource, the OSB provides a "write once, integrate with all" capability for all SaaS applications and Web services.

SaaS is where the growth is expected to be for the foreseeable future. Gartner, for example, sees SaaS growing at a 22.1 percent compound annual rate, which is roughly double the growth of enterprise software as a whole.

Rumor has it that Microsoft isn't waiting around for Gartner to be proven right or wrong and is ramping up its cloud-based applications to mimic its shrink-wrapped offerings.

OpSource Connect APIs provide integration capability for any application. Companies can also use Boomi for OpSource Connect, a visual drag-and-drop application integration environment from Boomi, Inc. This allows integrations with popular non-OSB applications including Salesforce and NetSuite.

Behind-the-firewall integrations use OpSource Sockets, which provide integration with legacy enterprise applications such as SAP and Intuit QuickBooks. The first OpSource Sockets are based on Boomi Atoms, agents that reside behind the firewall and enable integration without the need for specialized software packages or hardware appliances.
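
The design point worth noting is that such agents dial out, so no inbound firewall holes are needed. Here's a generic Python sketch of that polling pattern; the endpoints and field names are illustrative assumptions, not OpSource's or Boomi's actual interfaces:

```python
# A generic sketch of the behind-the-firewall agent pattern: the agent
# initiates outbound HTTPS connections to the integration service, so
# no inbound firewall holes or hardware appliances are required. All
# names and endpoints are illustrative, not actual OpSource/Boomi APIs.
import json
import time
import urllib.request

SERVICE = "https://connect.example.com/agent/queue"   # assumed endpoint

def fetch_pending_jobs():
    # Poll the cloud service for integration work (outbound call only).
    with urllib.request.urlopen(SERVICE) as resp:
        return json.load(resp)

def run_against_local_app(job):
    # Placeholder: invoke the legacy app (e.g., SAP, QuickBooks) locally.
    print("executing", job["id"], "against", job["target"])

while True:
    for job in fetch_pending_jobs():
        run_against_local_app(job)
    time.sleep(30)   # simple polling interval
```
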

OpSource Connect APIs, Boomi for OpSource Connect and OpSource Sockets are available immediately.

When OpSource On-Demand Summer 2008 is released, OpSource Connect will add the ability to use the OSB not only to consume, but also to publish applications as Web services, allowing each application to become a platform in its own right.

OpSource is also creating a range of services to assist companies in integration and enabling applications. Among these are:

  • Web Services Enablement Program: To assist with enabling applications as Web services.
  • Certified Integrator Program: To provide assistance in integrating applications in the cloud or behind the firewall.
  • Application Directory: To make it easier for companies to find Web services that use the OSB.