Friday, May 1, 2009

New Open Group SOA book builds bridge over delta between IT and business services

Business people discovered services around the time the first cave man offered to start campfires in exchange for food.

IT people grokked service orientation sometime in the past decade but are still struggling to communicate their discovery to business people.

This may be an exaggeration, but it also helps explain the disconnect between business and IT that has plagued adoption of service-oriented architecture (SOA) to the point where some people have thrown up their hands and declared SOA dead.

In some ways the problem of getting business people to embrace SOA is due to this backward-incompatible approach, in the view of Chris Harding, forum director for The Open Group. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

“One of the things we are supposed to do is bring about alignment between the business and technical communities,” he said. “What we found when we were doing that is the business people have known what a service was for centuries if not millennia.

"And technical people have come across this wonderful new idea. And actually the alignment problem is to stop the technical people reinventing service in a new way that the business people don’t understand.”

To help get the business-technical alignment back on track, The Open Group is publishing The SOA Source Book.

Harding knows what you are thinking: What do we need with another SOA book?

He is quick to differentiate The SOA Source Book from all the other SOA titles now available.

To begin with, this is not your coder's SOA book. It does not tell you how to build a service. It is also not a publication of standards and guidelines that would have required a lengthy review and adoption process.

The SOA Source Book was created by members of The Open Group’s SOA Workgroup, who have day jobs architecting business applications. They are offering real world enterprise architecture experience in deploying services for business purposes.

Applying The Open Group Architecture Framework (TOGAF) approach, The SOA Source Book “aspires to be systematized common sense” in architecture and governance, Harding says.

It takes a flexible approach to implementation. For example, rather than advocating one model for SOA, the book suggests a number of models that can be used depending on what makes the most sense for the business application.

The SOA Source Book is also not your typical weighty tome that can double as a doorstop. Running exactly 100 pages in the PDF version, it features short, clear sentences in brief paragraphs focused on the many moving parts of an SOA implementation. Scanning the categories and subheads in the table of contents, the reader can quickly find information on a specific subject.

Rather than reading it from cover to cover, Harding anticipates, enterprise and IT architects will use it to quickly look up information they need for specific components or processes they are working on.

The SOA Source Book is available in both printed and electronic form (if you want to save a tree). More information is available.

Rich Seeley provided research and editorial assistance to BriefingsDirect on this blog. He can be reached at Writer4Hire.

Follow me on Twitter at http://twitter.com/Dana_Gardner.

rPath offers free management tool for applications aspiring to the cloud

rPath would like to be your applications' path to cloud computing.

The Raleigh, N.C.-based startup founded by Red Hat refugees Tim Buckley, executive chairman of the board, and Erik Troan, CTO, recently released a free downloadable version of rBuilder for managing application deployment to virtual or cloud-based environments as well as traditional glass houses. [Disclosure: rPath is a sponsor of BriefingsDirect podcasts.]

For IT managers looking at cloud deployment, rPath’s approach is to embrace as many flavors of the cloud as possible to deal with the fact that what is commonly called the cloud is really a bunch of non-standard environments varying from vendor to vendor.

rPath lists support for three clouds: Amazon EC2, Globus Alliance, and Bluelock. rBuilder also supports hypervisors, including VMware ESX, Citrix Xen and Microsoft Hyper-V.

As a startup with a limited budget for hardware, rPath eats its own cloud dog food. The company uses Amazon EC2 for some of its own applications, as Billy Marshall, chief strategy officer, explained in a Q&A interview with SearchSOA last fall. We also did an interview with Marshall on BriefingsDirect.

The new free version of rBuilder differs from the free rBuilder Online community version in that you can download it and run it behind your own firewall. And it differs from the commercial version in that it is restricted to 20 running system instances in production.

Once a user reaches 21, they have to “establish a commercial relationship with rPath.”

Also, users of the free version can only get support through the rBuilder Online community.

For shops looking to explore cloud computing, the free version of rBuilder appears to be a viable option. You can check out the system requirements and download instructions at rPathQuickStart.

Rich Seeley provided research and editorial assistance to BriefingsDirect on this blog. He can be reached at Writer4Hire.

Follow me on Twitter at http://twitter.com/Dana_Gardner.

Wednesday, April 29, 2009

PC 'security as a service' gains global cloud footprint with free Panda anti-virus offering

Cloud computing's utility and power in everyday life reached a notable new milestone today with Panda Security's free PC security service.

This delivery and two-way malware detection-access model makes a ton of sense, so much so that I expect we'll soon be seeing the cloud model deliver more than PC security and anti-virus/anti-spam services. The era of remote services for a slew of device support and maintenance -- of everything from cars to cell phones to home appliances -- is upon us.

Essentially anything that uses software and has network access can be supported efficiently and powerfully based on the Panda Security cloud model. Making the service free to home-based users is especially brilliant because it gains the Metcalfe's Law benefits of a valuable community to detect the malware, along with the ability to then sell that detection and prevention to business and professional users. [Disclosure: Panda Security is a sponsor of BriefingsDirect podcasts.]

Here's how it works, from Panda's release:
Consumers can download the free client protection product from http://www.cloudantivirus.com. ... The Panda Cloud Antivirus thin-client agent introduces a new philosophy for on-access asynchronous cloud-scanning. It combines local detection technologies with real-time cloud-scanning to maximize protection while minimizing resource consumption. This optimized model blocks malicious programs as they attempt to execute, while managing less dangerous operations via non-intrusive background scans.

Based on Panda's proprietary cloud computing technology called Collective Intelligence, Panda Cloud Antivirus harnesses the knowledge of Panda's global community of millions of users to automatically identify and classify new malware strains in almost real-time. Each new file received by Collective Intelligence is automatically classified in under six minutes. Collective Intelligence servers automatically receive and classify over 50,000 new samples every day. In addition, Panda's Collective Intelligence system correlates malware information data collected from each PC to continually improve protection for the community of users.
Panda says the model demands a lot less of a PC's resources, 5% versus 9% for other fat-client AV software approaches. That means older PCs can get protected better, cheaper, and longer. Far fewer people will need to upgrade the PC hardware just to keep it free from viruses. It's about time! Poor security should not be a business model for sellers of new computers and software.
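
To make the two-way model concrete, here is a minimal, hypothetical sketch of on-access cloud scanning. This is my own illustration, not Panda's code; the endpoint URL and verdict format are invented. The agent hashes a file locally, checks a small local signature cache, and only then asks a cloud service to classify the fingerprint:

```python
# Hypothetical sketch of thin-client, on-access cloud scanning -- not
# Panda's actual implementation. Endpoint URL and verdict schema invented.
import hashlib
import json
import urllib.request

LOCAL_SIGNATURES = set()  # would hold hashes of locally known-bad files
CLOUD_LOOKUP_URL = "https://cloud.example.com/classify"  # placeholder

def file_sha256(path: str) -> str:
    """Fingerprint the file so only a hash, never the file, leaves the PC."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def on_access_scan(path: str) -> bool:
    """Return True if the file may execute, False if it should be blocked."""
    fingerprint = file_sha256(path)
    if fingerprint in LOCAL_SIGNATURES:  # fast local detection first
        return False
    try:  # then a real-time cloud lookup against collective knowledge
        req = urllib.request.Request(
            CLOUD_LOOKUP_URL,
            data=json.dumps({"sha256": fingerprint}).encode(),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            verdict = json.load(resp).get("verdict", "unknown")
        return verdict != "malicious"
    except OSError:
        return True  # cloud unreachable: fall back to local-only judgment
```

The point of the asynchronous, two-way design is that the heavy classification work happens on the Collective Intelligence servers, which is why the local agent can stay thin and frugal with PC resources.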

I'm going to try this service on Windows XP Home running in Parallels on my Leopard iMac. I'll report back on how it works.

As I said, I hope this model succeeds because it really is a harbinger of how cloud-based services can improve and solve thorny problems in a highly efficient manner that combines the power of community with scale and automation. This may also go far in dissuading the creators of malware, because if a Panda-like model gets critical mass, the bad things will be squelched so fast that the effort becomes useless and therefore moot.

Panda Security, a privately held company based in Spain, could well see its services expand to include PC maintenance, remote and automated support, and even more SaaS applications and productivity services. I expect this burgeoning ball of PC services from the cloud ecology to become the real software plus services model. It will be very interesting to see which vendors and/or providers or partnerships can assemble the best solutions package first.

Incidentally, don't expect Microsoft to do this cloud-based security thing. It can't afford to kill off or alienate the third-party malware security providers by doing it all itself. Those days are long gone. The third parties, however, can now stretch their wings and fly. And they are.

Follow me on Twitter at http://twitter.com/Dana_Gardner.

Tuesday, April 28, 2009

Can software development aspire to the cloud?

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

As we're all too aware, the tech field has always been all too susceptible to the fad of the buzzword, which of course gave birth to another buzzword phenomenon, popularized as Gartner's Hype Cycles. But in essence the tech field is no different from the worlds of fashion or the latest wave in electronic gizmos – there's always going to be some new gimmick on the block.

But when it comes to cloud, we're just as guilty as the next would-be prognosticator: it figured into several of our top predictions for 2009. In a year of batten-down-the-hatches psychology, anything that saves or postpones costs and avoids long-term commitment, while preserving all options (to scale up or ramp down), is going to be quite popular, and under certain scenarios cloud services support all that.

And so it shouldn’t be surprising that roughly a decade after Salesforce.com re-popularized the concept (remember, today’s cloud is yesterday’s time-sharing), the cloud is beginning to shake up how software developers approach application development. But in studying the extent to which the cloud has impacted software development for our day job at Ovum, we came across some interesting findings that in some cases had their share of surprises.

ALM vendors, like their counterparts on the applications side, are still figuring out how the cloud will impact their business. While there is no shortage of hosted tools addressing different tasks in the software development lifecycle (SDLC), major players such as IBM/Rational have yet to show their cards. In fact, there was a huge gulf in cloud-readiness between IBM and HP, whose former Mercury unit has been offering hosted performance testing capabilities for 7 or 8 years and is steadily expanding hosted offerings to much of the rest of its BTO software portfolio.

More surprising was the difficulty of defining what Platform-as-a-Service (PaaS) actually means. There is the popular definition and then the purist one. For instance, cloud service providers such as Salesforce.com employ the term PaaS liberally in promoting their Force.com development platform, but in actuality development for the Force.com platform uses coding tools that run not on Salesforce's servers, but locally on the developer's own machines. Only once the code is compiled is it migrated to the developer's Force.com sandbox, where it is tested and staged prior to deployment. For now, the same principle applies to Microsoft Azure.

That throws plenty of ambiguity on the term PaaS – does it refer to development inside the cloud, or development of apps that run in the cloud? The distinction is important, not only to resolve marketplace confusion and realistically manage developer expectations, but also to highlight the reality that apps designed for running inside a SaaS provider's cloud are going to be architecturally different from those deployed locally. Using the Salesforce definition of PaaS, apps that run in its cloud are designed based on the fact that the Salesforce engine handles all the underlying plumbing. In this case, it also highlights the very design of Salesforce's Apex programming language, which is essentially a stored procedures variant of Java. It's a style of development popular from the early days of client/server, where the design pattern of embedding logic inside the database was viewed as a realistic workaround to the bottlenecks of code running from fat clients. Significantly, it runs against common design patterns for highly distributed applications, and of course against the principles of SOA, which are to loosely couple the logic and abstract it from the physical implementation. In plain English, this means that developers of apps to run in the cloud may have to make some very stark architectural choices.
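
To make that contrast concrete, here's a deliberately simplified sketch, in Python rather than Apex and purely my own illustration, of the same discount rule written stored-procedure style against the data layer versus exposed as a loosely coupled service:

```python
# Illustrative contrast only -- not Salesforce's actual model or APIs.

# Tightly coupled: the rule lives inside the data layer and assumes its schema.
class OrderTable:
    def __init__(self):
        self.rows = []

    def insert(self, order: dict) -> None:
        # "Stored procedure" style: business logic fires on the write itself.
        if order["amount"] > 1000:
            order["discount"] = 0.05
        self.rows.append(order)

# Loosely coupled: the rule is a standalone service with a neutral contract;
# any caller or any datastore can use it, and it can be redeployed anywhere.
def discount_service(amount: float) -> float:
    return 0.05 if amount > 1000 else 0.0

order = {"amount": 1500.0}
order["discount"] = discount_service(order["amount"])  # caller stays in charge
```

The tightly coupled version is fast and convenient, but it goes wherever the datastore goes, which is exactly the lock-in question raised next.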

The confusion over PaaS could be viewed as a battle over vendor lock-in. It would be difficult to port an application running in the Salesforce cloud to another cloud provider or transition it to on-premises deployment, because the logic is tightly coupled to Salesforce's plumbing. This also sets the stage for future differentiation of players like Microsoft, whose Software + Services is supposed to make the transition between cloud and on premises seamless; in actuality, that will prove more difficult unless the applications are written in a strict, loosely coupled, service-oriented manner. But that's another discussion that applies to all cloud software, not just ALM tools.

But the flipside of this issue is that there are very good reasons why much of what passes for PaaS involves on-premises development. And that in turn provides keen insights as to which SDLC tasks work best in the cloud and which do not.

The main don'ts consist of anything having to do with source code, for two reasons: network latency and IP protection. The first one is obvious: who wants to write a line of code and wait until it gets registered into the system, only to find out that the server or network connection went down and you have to retype your code again? Imagine how aggravating that would be with highly complex logic; obviously no developer, sane or otherwise, would have such patience. And ditto for code check-in/check-out, or for running the usual array of static checks and debugs. Developers have enough things to worry about without having to wait for the network to respond.

Of more concern, however, is the issue of IP protection: while your program is in source code and not yet compiled or obfuscated, anybody can get to it. The code is naked; it's in a language that any determined hacker can read. Now consider that unless you're automating a lowly task like queuing up a line of messages or printers, your source code is business logic that represents in software how your company does business. Would any developer wishing to remain on the payroll the following week dare place code in an online repository that, no matter how rigorous the access control, could be blown away by determined hackers for whatever nefarious purpose?

If you keep your logic innocuous or sufficiently generic (such as using hosted services like Zoho or Bungee Connect), developing online may be fine (we'll probably get hate mail on that). Otherwise, it shouldn't be surprising that no ALM vendor has yet placed, or is likely to place, code-heavy IDEs or source code control systems online. OK, Mozilla has opened the Bespin project, but just because you could write code online doesn't mean you should.

Conversely, anything that is resource-intensive, like performance testing, does well with the cloud because, unless you're a software vendor, you don't produce major software releases constantly. You occasionally need lots of resources to load and performance test those apps (whose code is compiled by that point anyway). That's a great use of the cloud, as HP's Mercury has been doing since around 2001.

Similarly, anything having to do with the social or collaboration aspects of software development lends itself well to the cloud. Project management, scheduling, task lists, requirements, and defect management all suit the cloud well, as these are at core group functions where communication is essential to keeping projects in sync and all members of the team – wherever they are located – literally on the same page. Of course, there is a huge caveat here – if your company designs embedded software that goes into products, it is not a good candidate for the cloud: imagine someone getting hold of Apple's project plans for the next version of the iPhone.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is a senior analyst at Ovum. His profile is here. You can reach him here.

Wednesday, April 22, 2009

Progress gives CEP a performance boost with multi-core support on Apama

Progress Software this week announced the release of an enhanced Parallel Correlator for its Apama Complex Event Processing (CEP) platform so it can take advantage of multi-core, multi-processor hardware.

Progress claims a seven-fold increase in CEP performance on an eight-core server in the company’s internal benchmark testing of this version of the Parallel Correlator in what the company described as “real-world customer scenarios.” [Disclosure: Progress is a sponsor of BriefingsDirect podcasts.]

John Bates, who founded Apama in 1999 to build out technology based on his research at Cambridge University in the UK, believes CEP is an easier technology sell to business users than service oriented architecture (SOA) because a clear case can be made for the ability of a product like Apama to execute high-speed transactions based on the identification of millisecond movements in the business environment. It can also provide business managers and executives with split-second snapshots of how they are doing in their markets.
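
To give a flavor of what that event processing looks like, here is a minimal, hypothetical sketch, my own illustration and not Apama code, that watches a stream of price ticks and flags any symbol moving more than 0.5 percent within a 50-millisecond window:

```python
# Hypothetical event-correlation sketch -- not the Apama correlator.
from collections import deque

WINDOW_MS = 50
THRESHOLD = 0.005  # a 0.5% price move

def correlate(ticks):
    """ticks: iterable of (timestamp_ms, symbol, price) in arrival order."""
    recent = {}  # symbol -> deque of (timestamp_ms, price) inside the window
    for ts, symbol, price in ticks:
        window = recent.setdefault(symbol, deque())
        window.append((ts, price))
        while window and ts - window[0][0] > WINDOW_MS:  # expire old ticks
            window.popleft()
        oldest_price = window[0][1]
        if oldest_price and abs(price - oldest_price) / oldest_price > THRESHOLD:
            yield (ts, symbol, "SIGNAL")  # a downstream rule would act here

for event in correlate([(0, "XYZ", 100.0), (30, "XYZ", 100.7)]):
    print(event)  # -> (30, 'XYZ', 'SIGNAL')
```

A production correlator like Apama runs many such patterns simultaneously over massive event streams, which is exactly where the new multi-core Parallel Correlator earns its keep.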

I guess we can think of CEP as SOA for high-octane business intelligence (BI), delivering transactional and real-time insights and inferences from tremendously complex and often massive streams of services (and more). Traditional BI mostly comes from read-only agglomerations of fairly static SQL data, some of which needs a lot of handholding before it gives up its analytics gems.

Incidentally, I also spoke this week with Cory Isaacson at CodeFutures, busy at the MySQL show, who has a lot to say these days about database sharding and how applying it to OSS databases like MySQL gets, among other things, more BI love from transactional read/write SQL data. More here.

Back to CEP ... It is being newly perceived by some as much more tangible to business users than the more nerdy benefits of SOA, such as reuse of services for more agile programming of new applications. Talk about the benefits of CEP and business users' eyes light up. Talk about the benefits of SOA, or even business process management (BPM), and their eyes can glaze over.

I should point out that my buddy at Active Endpoints, Alex Neihaus (another disclosure on their sponsorship of BriefingsDirect podcasts), would argue that CEP and SOA are the real somnolence inducers, and that BPM and visual orchestration form the far better point on the business value arrow around service swarms. Talk among yourselves ...

In making the latest Apama announcement, Progress touts an IDC report on CEP (excerpts) that included evaluation of the 2008 version of the Apama platform. IDC gave the Progress product high ratings in the categories of "Low Latency" (the speed of event processing), "Business User Control" (how it works for the business people), and "Deterministic Behavior" (the predictability and repeatability of the event processing programs).

And lo, although it is not mentioned in the Progress announcement, Apama did not get such high scores in the two other IDC categories, "Data Management" and "Complex Event Detection."

IDC does non-metaphysical squares, rather than Magic Quadrants, we should gather.

In the real world, the major market for CEP appears to be the beleaguered financial services industry and the government watchdog agencies that are overseeing it. This appears to be reflected in the Apama customers listed in this week's announcement, including JP Morgan, Deutsche Bank, and the FSA (Financial Services Authority) of the UK.

Written in the midst of this recession, the IDC report worries: "Because Apama is so closely identified with the financial markets, the current downturn is likely to negatively impact Apama's opportunity and growth prospects in the near term. Therefore, it is incumbent that Progress figure out how to cost effectively apply the technology to new markets with better short-term growth prospects."

At the beginning of last fall's financial system meltdown, Bates told a reporter that there may be a silver lining for CEP even in the midst of a banking crisis. He foresees potential for greater use of CEP by both government regulators and the financial institutions that need to supply more and more detailed data to show how they are complying with new regulations now being formulated, as well as with old regulations now being more rigorously enforced.

Too bad they can't apply it to card counting or my wife's algorithm-rich shuffling of copious coupons for generating a simple groceries list. Just start with the old one, I keep telling her.

Other industries that both Progress and IDC agree might provide new markets for CEP include transportation and inventory control systems based on RFID, and ERP systems for manufacturing. I continue to be intrigued, too, by mobile commerce (Google Voice, anyone?), laced with location services and other variables like weather.

CEP is going to advance the competitive capabilities of a lot of companies. What's less clear is how they will manage that along with their BI, SOA, cloud, and other must-do-somedays on the IT groceries list.

Rich Seeley provided research and editorial assistance to BriefingsDirect on this blog. He can be reached at Writer4Hire.