Monday, August 11, 2008

WSO2 data services provide catalyst for SOA, set stage for cloud-based data hosting models

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: WSO2.

Read a full transcript of the discussion.

In the past, data was structured, secure, and tightly controlled. The bad news is that the data was also walled off behind personnel, technologies, and process rigidity. Today, however, the demand is for just-in-time and inclusive data, moving away from a monolithic data system mentality to multiple sources of data that provide real-time inferences on consumers, activities, events, and transactions.

The move is to shift ownership of data's value to the very people who really need it, who help define its analysis, and who can best use it for business and consumption advantage. Analysis and productivity values rule the future of data as services.

But how to reconcile the best of the old with the needs of the new? How to use data services as on-ramps to SOA? How to bring data together for federated analysis? And how to use the power of open source licenses and community to encourage the further federation of data as standardized consumable services?

To answer these questions and to learn more about the quickly evolving data services landscape, I recently spoke with Paul Fremantle, the chief technology officer at WSO2; Brad Svee, the IT manager of development at travel management leader Concur Technologies; and James Governor, a principal analyst and founder at RedMonk.

Here are some excerpts from the resulting podcast:
The live, connected world needs to be managed properly and it's very, very difficult to build that as a single monolithic system. ... The [new] model is of keeping the data where it belongs and yet making it available to the rest of the world.

Our data is trapped in these silos, where each department owns the data and there is a manual paper process to request a report. Requesting a customer report takes a long time, and what we have been able to do is try to expose that data through Web services using mashup type UI technology and data services to keep the data in the place that it belongs, without having a flat file flying between FTP servers, as you talked about, and start to show people data that they haven't seen before in an instantly consumable way.

It seems clear that the status quo is not sustainable. There is inherent risk in the current system, and simply retrofitting existing data and turning it on as a service is not sufficient.

We have been trying to free up our data, as well as rethink the way all our current systems are integrated. We are growing fairly rapidly and as we expand globally it is becoming more and more difficult to expose that data to the teams across the globe. So we have to jump in and rethink the complete architecture of our internal systems.

The browser becomes the ubiquitous consumption point for this data, and we are able to mash up the data, providing a view into several different systems. Before, that was not possible. And for the additional piece of moving files between financial systems, for example, we no longer have to pull files; we can use Web services to send only the data that has changed, as opposed to a complete dump of the data, which really decreases our network bandwidth usage.
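The delta-only transfer described above is simple to sketch. Here is a minimal, hypothetical illustration in Python; the record layout and the hash-based change detection are my own assumptions for the example, not Concur's actual mechanism:

```python
import hashlib
import json

def record_hash(record: dict) -> str:
    """Stable fingerprint of a record, independent of key order."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

def compute_delta(current: dict, last_seen_hashes: dict) -> list:
    """Return only the records that are new or changed since the last sync."""
    delta = []
    for key, record in current.items():
        h = record_hash(record)
        if last_seen_hashes.get(key) != h:
            delta.append(record)
            last_seen_hashes[key] = h
    return delta

# A full dump would resend every row; the delta sends only what changed.
snapshot = {"42": {"id": 42, "amount": 100.0}, "43": {"id": 43, "amount": 7.5}}
seen = {}
print(len(compute_delta(snapshot, seen)))   # prints 2: first sync, everything is new
snapshot["42"]["amount"] = 101.0
print(len(compute_delta(snapshot, seen)))   # prints 1: only the changed row
```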

[Yet] most of the data that we are dealing with is fairly sensitive, and therefore almost all of it calls for at least per-user access control. When we are transporting data, we also have to make sure that it's encrypted, or at least digitally signed.

What we have built is what we call WSO2 Data Services, which is a component of our application server. The WSO2 Data Services component allows you to take any data source that is accessible through JDBC, such as MySQL, Oracle, or DB2 databases -- but, in addition, we also have support for Excel, CSV files, and various other formats -- and very simply expose it as XML.

Now this isn't just exposed as, for example, Web services. In fact, it can also be exposed through REST interfaces. It can be exposed through XML over HTTP, and can even be exposed as JSON (JavaScript Object Notation), which makes it very easy to build Ajax interfaces. It can also be exposed over JMS and other messaging systems.

So the fundamental idea here is that the database can be exposed through a simple mapping file into multiple formats and multiple different protocols, without having to write new code or without having to build new systems to do that.

What we're really replacing is ... where you might take your database and build an object relational map and then you use multiple different programming toolkits -- one Web services toolkit, one REST toolkit, one JMS toolkit -- to then expose those objects. We take all that pain away, and say, "All you have to do is a single definition of what your data looks like in a very simple way, and then we can expose that to the rest of the world through multiple formats."
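The "single definition, many formats" idea Fremantle describes can be sketched generically. The query, element names, and renderers below are illustrative only, standing in for the idea rather than WSO2's actual mapping-file syntax:

```python
import json
import sqlite3
from xml.etree.ElementTree import Element, SubElement, tostring

# The single "mapping": one named query, reused by every output format.
MAPPING = {"name": "customers", "query": "SELECT id, name FROM customers"}

def fetch_rows(conn):
    """Run the mapped query and return rows as plain dicts."""
    cur = conn.execute(MAPPING["query"])
    cols = [c[0] for c in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def as_json(rows):
    """Render the same rows for an Ajax/JSON consumer."""
    return json.dumps({MAPPING["name"]: rows})

def as_xml(rows):
    """Render the same rows for an XML/Web-services consumer."""
    root = Element(MAPPING["name"])
    for row in rows:
        item = SubElement(root, "row")
        for col, val in row.items():
            SubElement(item, col).text = str(val)
    return tostring(root, encoding="unicode")

# Demo with an in-memory database standing in for JDBC sources.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")
rows = fetch_rows(conn)
print(as_json(rows))
print(as_xml(rows))
```

The point is the shape of the solution: one data definition, with each wire format derived from it, rather than one object-relational map plus a separate toolkit per protocol.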

I think that open source is absolutely vital to making this work, because fundamentally what we're talking about is breaking down the barriers between different systems. As you say, every time you're pushing the proprietary software solution that isn't based on open standards, doesn't have open APIs, and doesn't have the ability to improve it and contribute back, you're putting in another barrier.

Everyone has woken up to this idea of collaboration through Web 2.0 websites, whether through Flickr or FaceParty or whatever. What the rest of the world is waking up to is what open-source developers have been discovering over the last five to 10 years. Open source is Web 2.0 for developers.

Open source drives this market of components, and it's exactly the same thing that happened in the PC market. As soon as there was an open standard that wasn't owned by a single company, the world opened up to people being able to make those components, work in peace and harmony, and compete on a level playing field. That's exactly where the open-source market is today.

That's exactly the sweet part in my opinion. I can shake and bake, I can code up a bunch of stuff, I can prototype stuff rapidly, and then my boss can sleep well at night, when he knows that he can also buy some support, in case whatever I cook up doesn't quite come out of the oven. I see there's a kind of new model in open source that I think is going to be successful because of that.
Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: WSO2.

Read a full transcript of the discussion.

Tuesday, August 5, 2008

Sybase rides growing database business into mobility innovation, says Chen at user conference

Sybase Chairman, CEO and President John Chen opened the TechWave Sybase user conference today with a slew of product announcements and a proud pointer to a growing database market -- one with double-digit revenue growth (even if you're not IBM, Microsoft, or Oracle).

Chen boasted 38 percent database revenue growth in Sybase's latest quarter, outstripping his formidable competitors. He told the crowd of IT users gathered in Las Vegas that the underlying database business is strong.

Sybase at the 10th annual event has announced tools, analytics, and mobile products that target Sybase's global customer base of developers, database administrators, and operators, as well as its burgeoning enterprise mobility infrastructure solutions providers.

"We're making a run at the major data warehouse providers ... and we think we can compete very well," said Chen. "Things are working very well for the company."

For mobility, in a year where the Apple iPhone gained the lion's share of attention (despite Symbian's dominance), Sybase is driving toward the "unwired enterprise" to bring the analytics world together with the mobile tier and handheld delivery world. "The more devices and the more operating systems ... the better Sybase will be," said Chen.

By any metric, data continues to explode across the IT landscape, said Chen. And the emphasis on real-time and deep-and-wide analytics is only accelerating. Chen calls it "decision-ready information."

Chen asked -- practically implored -- the users to invest more in their mobility strategies, to unwire their enterprises.

Sybase Senior Vice President Raj Nathan made mobile computing's overtaking of older forms of communications and computing the theme of his portion of today's keynote presentation. But the current IT infrastructure cannot adequately support this trend toward mobility and information delivery out to the mobile edge.

"If you look at who is accessing the data, it's no longer just the employees ... and the demands of the non-employee for accessing from outside the firewall ... is much different. Today's architectures will not meet this demand," said Nathan.

Applications also can no longer be designed to be purely transaction-centric, said Nathan. What developers have to deal with has changed. "It's not just transactional applications, it's analytics, mobile, and messaging applications," he said. "These applications come from outside the firewall and through a mobile device in an unstructured, ad hoc form."

This all requires a shift in IT architectures, said Nathan. We need message-oriented interfaces. Data, applications, and tools -- all need to adjust. You need to handle complex analytics as a part of the process, not an after-thought, he said.

"The demands of information are changing, and you need a different set of architecture paradigms to make this happen," said Nathan. "Information delivery is even more important. It's time to evolve this and not go through a full set of replacements."

Amazon invests in cloud deployment venture as Elastra raises another $12 million

Elastra Corp., a cloud computing startup with a focus on ease of deployment, today announced a second round of funding, including participation from Amazon, which continues its investment ramp-up in cloud-based ventures.

The Series B funding for the San Francisco, Calif.-based Elastra, totals $12 million. Other participants were Bay Partners and Hummer Winblad Venture Partners, which took part in the first round of funding last year.

The cloud topic continues to heat up, with today's announcement that AT&T is jumping in. More from ZDNet's Larry Dignan. It's a no-brainer for telcos to be in on this, and it sets the stage for more tension between software vendors and service providers.

As for Elastra's Cloud Server, it provides point-and-click configuration, push-button deployment, automated management, and dynamic monitoring of application infrastructure software and systems. The company's Elastic Computing Markup Language (ECML) and Elastic Deployment Markup Language (EDML) allow for extensibility and portability of applications across public and private clouds.

With this approach, businesses and IT organizations don't have to script, monitor, and scale their application infrastructure by hand, nor are they locked into “cloud silos” from a single provider.

For Amazon, which has been providing cloud infrastructure services for over two years, this is its second foray into funding cloud ventures in the last three weeks. In July, Amazon chipped in when Engine Yard raised $15 million.

I saw the potential for Elastra's approach, when the company arrived on the scene last March.
As virtualized software has become the primary layer over now-buried hardware that architects and engineers must deal with, we should expect more tools and “bridging” technologies like Elastra to emerge to help grease the skids for what can (and should?) be deployed in clouds. The software then becomes agile services that can be provisioned and consumed via innovative and highly efficient business models and use-based metering schemes.

. . . the database-driven product can help bring applications rapidly to a pay-as-you-use model. Enterprises may be able to provide more applications as services, charging internal consumers as a managed service provider.
I said at the time that the segue to the cloud could come sooner than many people might think. It looks like that prediction was on the mark.

Monday, August 4, 2008

SOA places broad demands on IT leadership beyond traditional enterprise architect role, says Open Group panel

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: The Open Group.

Read a full transcript of the discussion.

Defining the emerging new role of enterprise architects (EAs) in the context of service-oriented architecture (SOA) is more than an esoteric exercise. It also helps define the major shifts now under way due to SOA activities in enterprises and large organizations. These murky shifts come down to defining how IT is to be managed anew in the modern organization.

To help better understand the shifting requirements for personnel and leadership due to SOA, The Open Group presented a panel on July 22 at the 19th Annual Open Group Enterprise Architect's Practitioners Conference in Chicago. I had the pleasure to moderate the discussion, on the role and impact of skills and experience for EAs in both the public sector and the private sector.

Our panel of experts and guests included Tony Baer, senior analyst at Ovum; Eric Knorr, editor-in-chief of InfoWorld; Joe McKendrick, SOA blogger and IT industry analyst; Andras Szakal, the chief architect at IBM's Federal Software Group; and David Cotterill, head of innovation at the U.K. Government Department for Work and Pensions.

Here are some excerpts:
Within the government [sector], enterprise architecture is, I would say, trending more toward the political aspect, to the executive-level practitioner, than it is, say, an integrator who has hired a set of well-trained SOA practitioners.

The enterprise architect is really more focused on trying to bring the organization together around a business strategy or mission. And, certainly, they understand the tooling and how to translate the mission, vision, and strategy into an SOA architecture -- at least in the government.

I think the technical background can be taken as a given for an enterprise architect. We expect that they have a certain level of depth and breadth about them, in terms of the broadest kind of technology platforms. But what we are really looking for are people who can then move into the business space, who have a lot more of the softer skills, things like influencing … How do you build and maintain a relationship with a senior business executive?

Those are kind of the skills sets that we're looking for, and they are hard to find. Typically, you find them in consultants who charge you a lot of money. But they're also skills that can be learned -- provided you have a certain level of experience.

We try to find people who have a good solid technical background and who show the aptitude for being able to develop those softer skills. Then, we place them in an environment and give them the challenge to actually develop those skills and maintain those relationships with the business. When it works, it's a really great thing, because they can become the trusted partner of our business people and unmask all the difficulties of the IT that lies behind.

We are software architects, but we are really trying to solve the business problem. ... I would look for people [to hire] who have deep technical skills, and have had experience in implementing systems successfully. I have seen plenty of database administrators come to me and try to call themselves an architect, and you can't be one-dimensional on the information side, although I am an information bigot too.

So you're looking for a broad experience and somebody who has methodology background in design, but also in enterprise architecture. In that way, they can go from understanding the role of the enterprise architect, and how you take the business problem and slice that out into business processes, and then map the technology layer on to it.
Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: The Open Group.

Read a full transcript of the discussion.

Tuesday, July 29, 2008

IBM's 'grammar checker' catches code gotchas and errors in the early development process

It's no secret that the earlier you find a software bug the easier and cheaper it is to correct it, and an upcoming IDC report claims that fixing bugs can cost companies anywhere from $5.2 million to $22 million annually, depending on the size of the organization.

Lone Ranger IBM is riding to the rescue with a new application that catches bugs and other defects at the coding stage, rather than in testing or, in the worst-case scenario ... in the field.

The latest IBM Rational Software Analyzer, announced today, works sort of like a grammar checker for word processing, scanning software for quality and defects before the application is built. IBM says it can reduce errors in the field by 15 to 20 percent. Whoo-hoo!

Catching bugs earlier in the process speeds up time to market by reducing the amount of time development teams spend on manual testing and the effort of fixing bugs at the error-prone testing stage, when 90 percent of the code has already been written. Makes Monday mornings a lot nicer, too.

Built as a plug-in for Eclipse version 3.3, the software analyzer finds software errors, flags them, and makes suggestions for fixing them. The software can automatically scan each line of code up to 700 times, "grammar checking" the code before it goes into deeper testing and/or production.
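Static analysis of this kind is straightforward to illustrate. The toy rule below, written against Python's standard ast module, shows the general shape of such a check: parse the source, walk the tree, and flag a known gotcha before the code ever runs. It is an example of the technique, not IBM's product:

```python
import ast

def find_bare_excepts(source: str) -> list:
    """Flag `except:` clauses with no exception type, a classic code gotcha
    that silently swallows every error, including KeyboardInterrupt."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except:' hides errors")
    return findings

code = """try:
    risky()
except:
    pass
"""
for warning in find_bare_excepts(code):
    print(warning)   # prints: line 3: bare 'except:' hides errors
```

A production analyzer is a battery of hundreds of such rules, which is why catching problems this way is so much cheaper than finding them in testing or in the field.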

This approach sure makes sense for dynamic languages, too, where testing is not always, shall we say ... standard operating procedure. IBM's latest grammar jammer supports Java and C++, with "more [language support] available through extensions," says IBM. How about helping out the webby apps and rich Internet apps makers directly, too, eh?

At least Eclipse-oriented software developers can use the new IBM software to provide insight to their management team through detailed reporting on software code status and to offer direction into governance and compliance of corporate coding guidelines and requirements. IBM partners and customers can also build an adapter to use the code-scanning technology to improve the code quality and development of their custom-built offerings.

The classic reference for the cost of software bugs to industry is a 2002 report from the National Institute of Standards and Technology (NIST), which estimated that defects cost businesses $59.9 billion. With the world and business moving at "Internet speed," that six-year-old estimate may be a little low today. Companies are leaning more heavily on agility and the necessity of responding quickly to shifting competitive positions in a global economy.

Other studies have indicated that the cost of fixing a bug during testing is 7 times higher than during the coding phase, and the cost of fixing it in the field is 14 times higher than during coding.

SC Magazine, which reported on the upcoming IDC study, says the study blames Web 2.0 and service-oriented architecture (SOA) for the financial impact of bug fixes:
Increased software complexity from multicore, Web 2.0, and SOA is said to be increasing code problems and hiking up costs for companies that develop both in-house and through third parties, such as offshore firms.

“The increased complexity of software development environments and the cost of fixing defects in the field (rather than early in the software cycle) combine in exorbitant ways to drain income and to hamstring businesses as a result of critical software downtime,” said a statement issued by IDC.
IBM Rational Software Analyzer is currently available. Pricing for the Developer Edition is $3,500 per user while pricing for the Enterprise Edition is $50,000 per server with unlimited users. More information is available at the Rational Web site.

Monday, July 28, 2008

Cast Iron rolls out updated appliance to ease SaaS-to-enterprise integration management

No one has said that the road to software as a service (SaaS) was going to be smooth, but Cast Iron Systems is trying to soften some of the bigger bumps with the latest offering in its "configuration not coding approach" to integration.

The Mountain View, Calif. company last week announced the Cast Iron iA4000 series, an appliance designed to expedite integrating on-demand services with other on-demand services or with on-premise applications. Capabilities include data conversion and profiling tools, an extensive library of pre-configured integration templates to enable out-of-the-box synchronization of one-to-one and many-to-many application endpoints, and an advanced process flow designer.

The latest offering replaces the company's iA3000 version and can be deployed on-site or hosted in the cloud.

The pre-configured components, called Template Integration Processes (TIPs), help solve common integration problems out of the box. In cases where company-specific configuration is needed, configuration wizards are available to help customize the TIPs.

The process flow designer allows users to visualize business processes, the flow of information, and the movement of data on screen. The user interface maps data flows to actual business processes and creates integration workflows between on-demand and on-premise applications.

Other features include:
  • Data profiling, which assesses the quality of the data before beginning migrations.

  • Intelligent data cleansing, which removes duplicate values and includes "fuzzy lookup" rules to highlight and fix such errors as common misspellings and abbreviations.

  • Data enrichment, which performs lookups with third-party data providers to add value to data.
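The "fuzzy lookup" idea behind that cleansing step is easy to sketch with Python's standard difflib. The similarity threshold below is an arbitrary illustration, not Cast Iron's actual matching rules:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1]; 1.0 means identical (case-insensitive)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def flag_near_duplicates(values, threshold=0.85):
    """Return pairs of entries that look like variants of each other,
    e.g. misspellings or differing abbreviations of the same name."""
    pairs = []
    for i, a in enumerate(values):
        for b in values[i + 1:]:
            if a != b and similarity(a, b) >= threshold:
                pairs.append((a, b))
    return pairs

names = ["Acme Corp", "Acme Corp.", "ACME Corporation", "Globex"]
print(flag_near_duplicates(names))
```

Real cleansing tools layer domain rules (known abbreviation tables, phonetic matching) on top of raw string similarity, but the flag-then-fix flow is the same.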
Fellow ZDNet blogger Phil Wainewright has some good insights into the power and promise of these new means to manage the transition and boundaries between on-premises and SaaS or cloud-based services and applications.

A lingering question is whether integration means such as Cast Iron's will remain a third-party addition, be supplied by the enterprise or SMB that needs integration, or become a required service that the SaaS providers themselves will pony up. At least for some time, we'll see all three.

But as Internet scale has its effect, I expect that the major SaaS providers will also need to increasingly become integration services and platform providers too. The easier the integration, the more enticing the move to SaaS and the more sticky the relationship ... easy on, and perhaps not so easy off.

And the SaaS leaders will be -- or already are -- seeking out best-of-breed approaches to build, buy, or partner on to make integration a seamless and tidy value-add to the rest of the budding portfolio of enterprise-caliber services. Indeed, convenient integration could be the killer app of SaaS.

StrikeIron adds data delivery punch to Serena Software's mashup exchange

In what could be described as a win-win situation, StrikeIron, which provides solutions for delivering data over the Internet, has joined the Serena Mashup Exchange, an online marketplace for businesses and their partners to connect through mashups.

The move will allow StrikeIron, based in Cary, N.C., to make its Web services available to a wider audience. At the same time, it will help Redwood City, Calif.-based Serena Software extend its mashup footprint. The combined forces will provide business users with access to key external data, while allowing them to focus on creating mashups, without having to worry whether the data has SOA hooks.

The Serena Mashup Exchange provides packaged mashups, template workflows and professional service offerings. The StrikeIron Web Services Marketplace provides over 100 data services from leading technology suppliers including Cortera, Dun & Bradstreet (D&B), Gale, MapQuest, Midnight Trader, NASDAQ, Tax Data Systems, Wall Street Horizon, and Zacks.

One example of a successful Business Mashup is the Serena Salesforce.com Credit Approval Mashup. The Mashup automates the interaction with D&B credit information via a StrikeIron Web service and allows sales personnel to request approval to extend credit to prospects. It can be downloaded for free along with a demo from the Mashup Exchange. A subscription to the data can be obtained from StrikeIron.

Friday, July 18, 2008

WSO2 adds data services and security features in Mashup Server 1.5

Open-source SOA provider WSO2 on Monday will unveil version 1.5 of its WSO2 Mashup Server, which allows enterprises to consume, aggregate, and publish information in a variety of forms and from a variety of sources.

The latest version from the Mountain View, Calif., and Sri Lanka company adds WSO2 Data Services and security features for enterprise-class service composition. Mashup Server 1.5 is built on the WSO2 Web Services Application Server, which is based on Apache Axis2. It can be used as an individual service development and deployment tool or can scale up to support team, enterprise, or Internet communities. [Disclosure: WSO2 has been a sponsor of BriefingsDirect podcasts.]

The original mashup server debuted in January of this year. Key features of the new version include:
  • Integrated WSO2 Data Services, providing mashup-ready Web service interfaces to relational databases and other data sources, such as Excel spreadsheets and comma-separated value (CSV) files.

  • An integrated user interface (UI) for easily managing secured mashup services based on WS-Security. Users can choose among some 20 popular enterprise scenarios providing various types and combinations of authentication, encryption, and signing.

  • Application programming interface (API) and configuration extensions, facilitating the consumption of secured services based on WS-Security—from enabling communications between secured services to defining who is allowed to access a service and in what ways.

  • User login using OpenID, which complements existing login options based on username/password and Microsoft InfoCard-based electronic IDs.

  • Google Gadget support, offering stubs, templates, and try-it pages for Google Gadgets that can be hosted within the WSO2 Mashup Server or externally, such as in a user’s iGoogle page. It also includes a beta dashboard add-in for hosting Google Gadgets within the mashup server itself.
Version 1.5 will be available for download Monday. It carries no software licensing or subscription fees, although WSO2 offers a range of service and support options, including training, consulting, and custom development.