Wednesday, August 27, 2008

Databases leverage MapReduce technology to radically juice data scale, performance, analytics

In what could best be termed a photo finish, Greenplum and Aster Data Systems have both announced that they have integrated MapReduce into their massively parallel processing (MPP) database engines.

MapReduce, pioneered by Google for analyzing the Web, now becomes available to enterprises and service providers, giving them more access and visibility into more data from more sources. Originally created to analyze massive amounts of unstructured data, the approach has been extended to analyze structured data as well.

Greenplum, San Mateo, Calif., says that MapReduce will be part of its Greenplum Database beginning in September. Aster Data, Redwood Shores, Calif., says that MapReduce will be included in its Aster nCluster. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Curt Monash, president of Monash Research, editor of DBMS2, and a leading authority on MapReduce, sees this as a major leap forward. He reports that both companies had completed adding MapReduce to their existing products and had been racing to the finish line to get their news out first. As it turned out, both made their announcements within hours of each other.

Curt lists some points on his blog about what this new technology marriage means.
  • Google’s internal use of MapReduce is impressive. So is Hadoop’s success. Now commercial implementations of MapReduce are getting their shots too.

  • The hardest part of data analysis is often the recognition of entities or semantic equivalences. The rest is arithmetic, Boolean logic, sorting, and so forth. MapReduce is already proven in use cases encompassing all of those areas.

  • MapReduce isn’t needed for tabular data management. That’s been efficiently parallelized in other ways. But, if you want to build non-tabular structures such as text indexes or graphs, MapReduce turns out to be a big help. (A minimal code sketch follows this list.)

  • In principle, any alphanumeric data at all can be stuffed into tables. But in high-dimensional scenarios, those tables are super-sparse. That’s when MapReduce can offer big advantages by bypassing relational databases. Examples of such scenarios are found in CRM and relationship analytics.
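
To make these points concrete, here is a minimal, generic sketch of the MapReduce pattern in Python, building the kind of non-tabular structure (an inverted text index) mentioned above. It illustrates the idea only -- it is not Greenplum's or Aster's actual programming interface -- and real engines run the map and reduce phases in parallel across many nodes rather than in a single process.

```python
# Generic illustration of the MapReduce pattern -- not Greenplum's or
# Aster's actual API -- building an inverted text index from
# (doc_id, text) records.
from itertools import groupby
from operator import itemgetter

def map_phase(records):
    """Map: emit one (word, doc_id) pair per word occurrence."""
    for doc_id, text in records:
        for word in text.lower().split():
            yield word, doc_id

def reduce_phase(pairs):
    """Reduce: group pairs by word and collect each word's posting list."""
    for word, group in groupby(sorted(pairs), key=itemgetter(0)):
        yield word, sorted({doc_id for _, doc_id in group})

docs = [(1, "map reduce at scale"), (2, "reduce cost at scale")]
index = dict(reduce_phase(map_phase(docs)))
# {'at': [1, 2], 'cost': [2], 'map': [1], 'reduce': [1, 2], 'scale': [1, 2]}
```

Because the map phase is independent per record and the reduce phase is independent per key, both parallelize naturally, which is what lets two small functions like these scale across the hundreds or thousands of cores the vendors describe.
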
Greenplum customers have been involved in an early-access program using Greenplum MapReduce for advanced analytics. For example, LinkedIn is using Greenplum Database for new, innovative social networking features such as “People You May Know” and sees it as a way to develop compelling analytics products faster. A primary benefit of the new capability is that customers can combine SQL queries and MapReduce programs into unified tasks that are executed in parallel across hundreds or thousands of cores.

Part of the appeal of business intelligence, and of its huge ramp-up over the past five years, is that IT assets play an ever larger role in providing unprecedented strategic guidance and insights to leaders of enterprises, governments, telcos and cloud providers. IT has gone from automating business functions to serving as an essential crystal ball of the highest order. By gaining access to larger data sets -- which can be mined and analyzed for deeper process and business refinements than ever before -- IT has earned a seat on the board.

With better data reach and inclusion come better results. BI lets leaders spot early the trends that will determine their future success or failure. In a fast-paced, global, hypercompetitive business landscape, these insights are the currency of future success. The better you do BI, the better you do business ... current, near-term and long-term. There's no better way to know your customers, competitors, employees and the variables that buffet and stir markets than effective BI.

Now, by expanding the role and reach of MapReduce technologies and methods, these vendors add a powerful new tool to the BI arsenal. More data, more data types, more data sources -- all rolled into an analytical framework that can be directly targeted by developers, scripters, business analysts, executives, and investors.

These new MapReduce announcements mark a significant advance, one that raises IT another notch in its utility and indispensability to business. And they come at a time when more data, metadata, complex events, transactions and Internet-scale inferences demand tools that can do for enterprise BI what Google has done for Web search and indexing.

Comprehensive, deep analytics across massive data sets offers a new mantra: The database is dead, long live the data. Structured data and the containers that hold it are simply not enough to organize and access the intelligence lurking on modern networks, at Internet scale and on Internet time.

Tuesday, August 26, 2008

Citrix makes virtualization splash with new version of XenApp to speed desktop applications delivery

Citrix Systems has overhauled its flagship presentation server product, promising IT operators higher performance and lower costs, while improving the end-user experience. The company this week announced Citrix XenApp 5, the next generation of its application virtualization solution.

The new version of XenApp, formerly the Citrix Presentation Server, combines with Citrix XenServer to create an "end-to-end" solution that spans servers, applications, and desktops. Companies using the new combined product can centralize applications in their datacenter and deliver them as on-demand services to both physical and virtual desktops.

Virtualization, while not a new technology, has been gaining a huge head of steam as companies realize the deployment, maintenance, and security benefits of central control across nearly all applications, along with the agility and flexibility it offers the business.

In my thinking, virtualization combines the best of the old (central command and control) with the new (user flexibility and ease of innovation). Virtualizing broadly places more emphasis on the datacenter and less on the client, without the end user even knowing it.

What's more, from a productivity standpoint, end users gain by having application and OS updates and fixes applied more easily and quickly (fewer help desk calls and waits), while operators can exercise the security constraints they need (data stays on the server), and developers need only target the server deployments (local processing is over-rated).

And, of course, virtualization far better aligns IT resource supply with demand, removing wasted capacity while allowing more flexibility in ramping specific applications or data services up or down. Works for me.

Currently, most IT operations are faced with managing myriad Windows-based applications, and are hampered by the demands of installing, patching, updating, and removing those applications. Many users have simplified the task and lowered cost by using server-based deployment. We'll see a lot more of this, and that includes more uptake in the use of desktop virtualization, but that's another topic for another day.

According to Fort Lauderdale, Fla.-based Citrix, version 5 of XenApp, which includes more than 50 major enhancements, can improve application start-up time by a factor of 10 and reduce application preparation and maintenance by 25 percent.

Of the major new features, I like the support for more Windows apps and compatibility with Microsoft App-V (formerly SoftGrid), the HTTP streaming support, the IPv6 support, as well as the improved performance monitoring and load balancing. Also very nice is the "inter-isolation communication," which allows each app to be isolated and yet aggregated as if installed locally. Add to that the ability of the apps to communicate locally, such as for cut and paste. Think of it as OLE for the virtualized app set (finally).

I've been watching Citrix since it took the bold step of acquiring XenSource just a little over a year ago. At that time, I saw the potential for its move to gobble a piece of the virtualization pie:
The acquisition also sets the stage for Citrix to move boldly into the desktop as a service business, from the applications serving side of things. We’ve already seen the provider space for desktops as a service heat up with the recent arrival of venture-backed Desktone. One has to wonder whether Citrix will protect Windows by virtualizing the desktop competition, or threaten Windows by the reverse.
The new XenApp 5 release is being featured on Sept. 9 as part of a global, online launch event called Citrix Delivery Center Live! This virtual event is the first in a series taking place in the second half of 2008 to highlight the entire Citrix Delivery Center product family. The debut event features presentations, chat sessions and online demos from Citrix, as well as participation from key partners such as Microsoft and Intel. I'm also looking forward to attending Citrix's annual analyst conference in Phoenix on Sept. 9.

XenApp 5, which runs on the Microsoft Windows Server platform, leverages all the enhancements in Windows Server 2008 and fully supports Windows Server 2003. This enables existing Windows Server 2003 customers to immediately deploy Windows Server 2008 into their existing XenApp environments in any mix.

XenApp 5 will be available Sept. 10. For North America, suggested retail pricing is per concurrent user (CCU) and includes one year of Subscription Advantage, the Citrix program that provides updates during the term of the contract:
  • Advanced Edition – $350

  • Enterprise Edition – $450

  • Platinum Edition – $600
Standalone pricing for client-side application streaming and virtualization begins as low as $60 per CCU. TCO for virtualized apps should continue to fall over time, a nice effect for all concerned.

Thursday, August 21, 2008

Pulse provides novel training and tools configuration resource to aid in developer education, preparedness

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.

Java training and education has never been easy. Not only are the language and its third-party and community offerings constantly moving targets, but each developer also has his or her own preferences, plug-in inventory, and habits. What's more, the "book knowledge" gained in many course settings can vary wildly from what happens in the "real world" of communities and teams.

MyEclipse maker Genuitec developed Pulse last year to monitor and update the most popular Eclipse plug-ins, but Pulse also has a powerful role in making Java training and tools-preferences configuration management more streamlined, automated, and extensible. Unlike with commercial software, in open source, community-driven environments like Eclipse there is no central vendor to manage plug-ins and updates. Pulse does that for the Eclipse community, monitoring for updates while managing individual developers' configuration data -- and at the same time gathering metadata about how to better serve Eclipse and Java developers.

I recently moderated a sponsored podcast to explore how Pulse, and best practices around its use, help organize and automate tools-configuration profiles for better ongoing Java training and education. I spoke with Michael Cote, an analyst with RedMonk; Ken Kousen, an independent technical trainer, president of Kousen IT, Inc., and adjunct professor at Rensselaer Polytechnic Institute; and Todd Williams, vice president of technology at Genuitec.

Here are some excerpts:
The gap between what's taught in academia and what's taught in the real world is very large, actually. ... Academia will talk about abstractions of data structures, algorithms, and different techniques for doing things. Then, when people get into the real world, they have no idea what Spring, Hibernate, or any of the other issues really are.

It's also interesting that a lot of developments in this field tend to flow from the working professionals toward academia, rather than the other way around, which is what you would find in engineering.

Part of what I see as being difficult, especially in the Java and Enterprise Java market, is the huge number of technologies that are being employed at different levels. Each company picks its own type of stack. ... Finding employees that fit with what you are trying to do today, with an eye toward being able to mature them into where you are going tomorrow, is probably going to always be the concern.

You look at the employment patterns that most developers find themselves in, and they are not really working at some place three, five, 10, even 20 years. It's not realistic. So, specializing in some technology that essentially binds you to a job isn't really an effective way to make sure you can pay your bills for the rest of your life.

You have to be able to pick up quickly any given technology or any stack, whether it’s new or old. Every company has their own stack that they are developing. You also have to remember that there is plenty of old existing software out there that no one really talks about anymore. People need to maintain and take care of it.

So, whether you are learning a new technology or an old technology, the role of the developer now, much more so in the past, is to be more of a generalist who can quickly learn anything without support from their employer.

Obviously, in open source, whether it’s something like the Eclipse Foundation, Apache, or what have you, they make a very explicit effort to communicate what they are doing through bug reports, mailing lists, and discussion groups. So, it's an easy way to get involved as just a monitor of what's going on. I think you could learn quite a bit from just seeing how the interactions play out.

That's not exactly the same type of environment they would see inside closed-wall corporate development, simply because the goals are different. Less emphasis is put on external communications and more emphasis is put on getting quality software out the door extremely quickly. But, there are a lot of very good techniques and communication patterns to be learned in the open-source communities.

[With Pulse] we built a general-purpose software provisioning system that right now we are targeting at the Eclipse market, specifically Eclipse developers. For our initial release last November, we focused on providing a simple, intuitive way that you could install, update, and share custom configurations with Eclipse-based tools.

In Pulse 2, which is our current release, we have extended those capabilities to address what we like to call team-synchronization problems. That includes not only customized tool stacks, but also things like workspace project configurations and common preference settings. Now you can have a team that stays effectively in lock step with both their tools and their workspaces and preferences.

With Pulse, we put these very popular, well-researched plug-ins into a catalog, so that you can configure these types of tool stacks with drag-and-drop. So, it's very easy to try new things. We also bring in some of the social aspects; pulling in the rankings and descriptions from other sources like Eclipse Plug-in Central and those types of things.

So, within Pulse, you have a very easy way to start out with some base technology stacks for certain kinds of development and you can easily augment them over time and then share them with others.
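
To illustrate that team-synchronization idea in miniature, here is a hypothetical Python sketch. The profile structure and plug-in names are invented for illustration -- this is not Pulse's actual format or API. The point is simply that once a team's tool stack is captured as a shared profile, reconciling any developer's installation against it becomes a mechanical diff.

```python
# Hypothetical sketch of profile-based team synchronization -- the
# structures and names are illustrative, not Pulse's real format.

# The plug-in versions a team has agreed to stay in lock step on.
team_profile = {
    "org.eclipse.jdt": "3.4.0",
    "org.eclipse.mylyn": "3.0.1",
    "com.example.coverage": "1.2.0",   # invented plug-in id
}

# What one developer actually has installed locally.
local_install = {
    "org.eclipse.jdt": "3.4.0",
    "org.eclipse.mylyn": "2.3.0",      # stale version
}

# Plug-ins missing locally, and plug-ins present but out of date.
to_install = sorted(set(team_profile) - set(local_install))
to_update = sorted(p for p in team_profile
                   if p in local_install
                   and local_install[p] != team_profile[p])

print("install:", to_install)   # install: ['com.example.coverage']
print("update:", to_update)     # update: ['org.eclipse.mylyn']
```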

The Pulse website is www.poweredbypulse.com. There is a little 5 MB installer that you download and start running. If anyone is out in academia, and they want to use Pulse in a setting for a course, please fill out the contact page on the Website. Let us know, and we will be glad to help you with that. We really want to see usage in academia grow. We think it’s very useful. It's a free service, so please let us know, and we will be glad to help.

I did try it in a classroom, and it's rather interesting, because one of the students that I had recently this year was coming from the Microsoft environment. I get a very common experience with Microsoft people, in that they are always overwhelmed by the fact, as Todd said, there are so many choices for everything. For Microsoft, there is always exactly one choice, and that choice costs $400.

I tried to tell them that here we have many, many choices, and the correct choice, or the most popular choice changes all the time. It can be very time consuming and overwhelming for them to try to decide which ones to use in which circumstances.

So, I set up a couple of configurations that I was able to share with the students. Once they were able to register and download them, they were able to get everything in a self-contained environment. We found that pretty helpful. ...

It was pretty straightforward for everybody to use. ... whenever you get students downloading configurations, they have this inevitable urge to start experimenting, trying to add in plug-ins, and replacing things. I did have one case where the configuration got pretty corrupted, not due to anything that they did in Pulse, but because of plug-ins they added externally. We just basically scrapped that one and started over and it came out very nicely. So, that was very helpful in that case.

We have a very large product plan for Pulse. We've only had it out since November, but you're right. We do have a lot of profile information, so if we chose to mine that data, we could find some correlations between the tools that people use, like some of the buying websites do.

People who buy this product also like this one, and we could make ad hoc recommendations, for example. It seems like most people that use Subversion also use Ruby or something, and you just point them to new things in the catalog. It's kind of a low-level way to add some value. So there are certainly some things under consideration.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.

Tuesday, August 19, 2008

Morph Labs and FiveRuns combine efforts to support, test and monitor Rails apps

Ruby on Rails in the cloud got another leg up this week with the announcement that Morph Labs and FiveRuns have joined forces to provide a managed hosting platform for application testing and monitoring.

Morph Labs of Portland, Oregon, a platform-as-a-service (PaaS) provider, and FiveRuns, an Austin, Texas company that provides Ruby on Rails monitoring and analysis, will use Morph Labs' AppSpace to allow developers to get insight into application performance. The goal is to make enterprise-class application developers and deployers an offer they can't refuse.

Developers can subscribe to Morph AppSpace for free and select the version that automatically integrates FiveRuns TuneUp. The apps can be deployed without any special modifications or hassles.

The Morph AppSpace automated deployment system activates FiveRuns TuneUp, giving developers insight into potential bottlenecks and performance issues early on. By running the application in an enterprise-caliber environment, developers gain a clear picture of how it will perform in real production.

Users may also choose to participate in a secure group or an open community where they can share performance information, as well as tips, tricks, and advice, with and from other Ruby and Rails enthusiasts. The companies are providing this ramp-up to FiveRuns TuneUp, along with a developer subscription to Morph AppSpace, to encourage developers to better test and analyze their applications.

In the second phase of the partnership, Morph Labs will integrate FiveRuns Manage as a production option for each Morph AppSpace. By adding FiveRuns Manage, developers will be able to continually monitor each application running within a Morph AppSpace subscription.

FiveRuns Manage also provides additional information to enable developers to quickly diagnose problems and proactively maintain application performance.

The integrated Morph AppSpace and FiveRuns Manage solution is expected to be available in the fourth quarter of 2008.

This certainly ups the ante for other cloud providers that target the Ruby and Rails communities, and makes the fast-track development and deployment environment all the more appealing to enterprises.

Thursday, August 14, 2008

Survey says: Aligning IT operations with business goals increases agility, cuts costs

Aligning IT with business goals -- and the benefits that brings to a business -- has long been a recurrent theme of the podcasts and discussions we've done over the last few years. So it's gratifying to see a worldwide study showing that businesses are not only pursuing this strategy, but are reaping significant benefits from it.

A survey of nearly 1,000 IT professionals, from the C-level down to frontline workers, indicates that 27 percent of companies responding are in the process of a business transformation, with another 27 percent having just completed one, and another 30 percent considering changing their processes.

Conducted by the Economist Intelligence Unit, London, and sponsored by Cisco Systems, the survey also revealed that improving IT responsiveness to new business requirements was the top IT objective for 57 percent of the companies. Of the companies that have completed a transformation, 43 percent said that cost savings were the top benefit they realized. Another 40 percent reported smoother, more flexible operations.

While other companies reported different effects, the most striking result was that only 2 percent of companies reported no tangible benefit. This would seem to indicate that transforming your business model carries a 98 percent chance of yielding at least some tangible benefit, which is pretty impressive.

One interesting result in the survey was the revelation that companies in India lead the pack when it comes to aligning IT operations with business goals:
For example, respondents in India are by far the most likely to have goals associated with interacting with business counterparts. Those goals include implementing new projects based on corporate—not information technology (IT)—objectives, actively seeking opportunities to propose technology-based approaches to improving business practices and gaining more support from senior business managers for things like budgeting, change management and technology adoption.

The willingness among Indian IT groups to “go where the business is going” and take concrete steps to pursue highly collaborative working environments is perhaps one explanation for why Indian respondents were most likely to identify their companies’ organizational structures as “very effective."
The report attributes this to the fact that technology executives in India seem to have greater power in the organization. A higher percentage of Indian chief information officers (CIOs) report directly to the CEO than is typically the case in the U.S., Europe, or the Middle East.

One drawback in many companies is a lack of clear communication of business and IT goals. The survey showed that 49 percent of CIOs saw contributing to business goals as one of their top objectives, but this view was shared by only 30 percent of frontline workers. At the same time, 59 percent of IT architects saw cost cutting as an objective, while only 45 percent of CIOs did.

The entire survey report, which isn't very long, is well worth a read for anyone involved in IT, or the business of IT. It is available for download (PDF file) from the EIU site. There is also a Webcast that explains some of the finer points.

Here are the key conclusions and recommendations from the report.
Addressing corporate cultural issues is key to any successful IT transformation project. Senior IT executives must work doggedly to communicate goals, and build bridges up and down the chain of command throughout the organization, both in business strategy sessions and regular meetings with technology employees.

IT transformation is not a cure-all. Changing processes and organizational structures may make IT departments more agile, but will do little good if IT professionals do not adapt their thinking around how better to align their efforts with that of the business on a regular basis.

Walk before you run. Before embarking on a large-scale IT transformation initiative, assess the length of time it will take to complete the effort, as well as the costs, risks and eventual benefit to the business.

Track—and publicize—success. Make sure to assess the return on investment of any IT transformation project. Not only will it strengthen IT’s reputation among business partners, it could help to build momentum for future IT initiatives.
Can't argue with all of that. Of course, it is all clearly easier said than done.

Wednesday, August 13, 2008

Borland's own ‘journey' to Agile forms foundation for new software delivery management solutions

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Borland Software.

Read a full transcript of the discussion.

Borland Software has been in the developer productivity field for decades, but when it came time to build out a new software delivery management suite for application lifecycle management, Agile Development methods became both the means and the ends.

Over the past few years, the development process at Borland shifted dramatically to Agile and Scrum practices, which in turn deeply influenced how a set of recent new products was crafted and refined. [See product and solution rundowns. See demo and see launch video.]

To learn more about Agile development and how Borland used the methods to build its own software management products and solutions, I recently spoke with Peter Morowski, the senior vice president of research and development at Borland. He described the journey, the productivity and business benefits, and the potential pitfalls and errors to avoid. He also wrote a great article on the experience.

As part of the podcast, I asked Peter what surprised him most about this Agile journey at Borland. "The thing that surprised me," he said, "is how powerful it is each morning when everybody gets around the table and actually goes through what they've done, basically saying, 'Have I lived up to my commitments? What am I committing to the team today? And then is there anything blocking?'"

"Generally speaking," said Morowski, "a lot of developers tend to be quiet and not the most social. What this did is that it just didn't allow the few people who were social to have input on what was going on. This daily stand-up had people, everybody on the team, contributing, and it really changed the relationships in the team. It was just a very, very powerful thing."

Here are some excerpts from the discussion:
Look at the late delivery cycles that traditional waterfall methodologies have brought upon us. Products have been delivered and end up on the shelf. The principles behind Agile right now allow teams to deliver on a much more frequent cycle and also to deliver more focused releases.

What I have observed over my career was the fact that there really existed two worlds. There is what I call the "management plane," and this is a plane of milestones, of phase transitions and a very orderly process and progress through the software development lifecycle.

Underneath it though, in reality, is a world of chaos. It's a world of rework, a world of discovery, in which the engineers, testers and frontline managers live.

What [Agile] is really about is that teams begin to take ownership for delivering the product. What happens is that, by allowing these teams to become self-directed, they own the schedule for delivery.

Hiring is important, regardless of what methodology you use, and I tend to stress that. I do contend there are different kinds of personalities and skill sets you are going to be looking for when you are building Agile teams, and it's important to highlight those.

It's very important that whoever comes onboard an Agile team is collaborative in nature. In traditional software environments, there are two roles that you may traditionally struggle with, and you have to look at them closely. One is the manager. If a manager is a micromanager type, that's not going to work in an Agile environment.

What happens is that you see some things like traditional breakdowns of roles, where they are looking at what work needs to be finished in a sprint, versus "Well, I am a developer. I don't do testing," or "I am a doc writer, and I can't contribute on requirements," and those types of things. It really builds a team, which makes it a much more efficient use of resources and processes, and you end up with better results than you do in a traditional methodology.

When Borland first started developing in Agile, we had multiple locations, and each site was, in essence, developing its own culture around Agile. What I found was that we were getting into discussions about whose Agile was more pure and things like that, and so I decided to develop a Borland Agile culture.

As we've grown and our projects have become more complex, the fact that each site evolved from the same process and the same terminology has allowed us to adopt more complex Agile techniques, like Scrum of Scrums, to work across organizations, and to keep a common vocabulary and a common way of working.

One thing we clearly did, once we saw the benefits, was put a lot of executive sponsorship behind this. I made it one of the goals for the year to expand our use of Agile within the organization, so that teams knew it was safe to go ahead and begin to look at it. In addition, because we had a reference implementation, teams had a starting point for their experimentation. We also paid for our teams to undergo training and the like. We created an environment that encouraged transformation.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Borland Software.

Read a full transcript of the discussion.

Tuesday, August 12, 2008

Discussion features insights into using RESTful services plus a cool iPhone app

For a quick update on creating RESTful services, check out IONA Technologies' webinar moderated by yours truly. The recent webinar was part of IONA's "Open Source in the Enterprise" series, and is archived on their Website. (Note: free registration required.)

Included in the discussion is my introduction on how to increase value payback, followed by a presentation from Adrian Trenaman, distinguished IONA consultant, on creating RESTful services using FUSE.

Finally, Roland Tritsch, IONA director of services in EMEA, demos a really cool method for accessing RESTful FUSE services from the iPhone. [Disclosure: IONA is a sponsor of BriefingsDirect podcasts.]

IONA has been making the news recently, especially the announcement just a few weeks ago that it was being acquired by Progress Software. A few weeks earlier, IONA released a set of enhancements to Artix Data Services designed to reduce risk exposure and operational costs.

Just over a year ago, I produced a podcast with Dan Kulp, principal engineer at IONA, and Debbie Moynihan, director of open source programs at IONA, in which we discussed the incubating Apache CXF project.

Monday, August 11, 2008

WSO2 data services provide catalyst for SOA, set stage for cloud-based data hosting models

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: WSO2.

Read a full transcript of the discussion.

In the past, data was structured, secure, and tightly controlled. The bad news is that the data was also walled in by personnel, technologies, and process rigidity. Today, however, the demand is for just-in-time and inclusive data, moving away from a monolithic data-system mentality toward multiple sources of data that provide real-time inferences on consumers, activities, events, and transactions.

The move is toward giving ownership of data value to the very people who really need it, who help define its analysis, and who can best use it for business and consumption advantage. Analysis and productivity values rule the future of data as services.

But how to jibe the best of the old with the needs of the new? How to use data services as onramps to SOA? How to bring data together for federated analysis? And how to use the power of open source licenses and community to encourage the further federation of data as standardized consumable services?

To answer these questions and to learn more about the quickly evolving data services landscape, I recently spoke with Paul Fremantle, the chief technology officer at WSO2; Brad Svee, the IT manager of development at travel management leader Concur Technologies; and James Governor, a principal analyst and founder at RedMonk.

Here are some excerpts from the resulting podcast:
The live, connected world needs to be managed properly and it's very, very difficult to build that as a single monolithic system. ... The [new] model is of keeping the data where it belongs and yet making it available to the rest of the world.

Our data is trapped in these silos, where each department owns the data and there is a manual paper process to request a report. Requesting a customer report takes a long time. What we have been able to do is expose that data through Web services, using mashup-type UI technology and data services to keep the data in the place it belongs -- without a flat file flying between FTP servers, as you talked about -- and start to show people data they haven't seen before in an instant, consumable way.

It seems clear that the status quo is not sustainable. There is inherent risk in the current system, and simply retrofitting existing data and turning it on as a service is not sufficient.

We have been trying to free up our data, as well as rethink the way all our current systems are integrated. We are growing fairly rapidly and as we expand globally it is becoming more and more difficult to expose that data to the teams across the globe. So we have to jump in and rethink the complete architecture of our internal systems.

The browser becomes the ubiquitous consumption point for this data, and we are able to mash up the data, providing a view into several different systems. Before, that was not possible. And for the additional piece of moving files between financial systems, for example, we no longer have to pull whole files, but can use Web services to send only the data that has changed, as opposed to a complete dump of the data, which really decreases our network bandwidth usage.

[Yet] most of the data that we are dealing with is fairly sensitive, and therefore almost all of it needs at least per-user access control. And when we are transporting data, we have to make sure that it's encrypted or at least digitally signed.

What we have built is what we call WSO2 Data Services, which is a component of our application server. The WSO2 Data Services component allows you to take any data source that is accessible through JDBC -- MySQL, Oracle, or DB2 databases -- though, in addition, we also have support for Excel, CSV files, and various other formats, and very simply expose it as XML.

Now this isn't exposed only as, for example, Web services. In fact, it can also be exposed by REST interfaces. It can be exposed through XML over HTTP, and can even be exposed as JSON. JavaScript Object Notation (JSON) makes it very easy to build Ajax interfaces. It can also be supported over JMS and messaging systems.

So the fundamental idea here is that the database can be exposed through a simple mapping file into multiple formats and multiple different protocols, without having to write new code or without having to build new systems to do that.

What we're really replacing is ... where you might take your database and build an object relational map and then you use multiple different programming toolkits -- one Web services toolkit, one REST toolkit, one JMS toolkit -- to then expose those objects. We take all that pain away, and say, "All you have to do is a single definition of what your data looks like in a very simple way, and then we can expose that to the rest of the world through multiple formats."
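
The "define once, expose in many formats" idea is easy to see in miniature. The sketch below is illustrative Python against an in-memory SQLite table -- it is not WSO2's actual mapping-file syntax -- but it shows how a single data definition can be rendered as either JSON or XML without new code per format.

```python
# Illustrative sketch of "one data definition, many wire formats" --
# not WSO2's actual mapping-file syntax. One query definition is
# rendered as JSON or XML depending on what the caller asks for.
import json
import sqlite3
from xml.etree.ElementTree import Element, SubElement, tostring

# The single definition: a name and the query that backs it.
definition = {"name": "customers", "sql": "SELECT id, name FROM customers"}

def run(conn, definition):
    """Run the defined query and return rows as plain dicts."""
    cur = conn.execute(definition["sql"])
    cols = [d[0] for d in cur.description]
    return [dict(zip(cols, row)) for row in cur.fetchall()]

def as_json(rows):
    return json.dumps({"customers": rows})

def as_xml(rows):
    root = Element("customers")
    for row in rows:
        item = SubElement(root, "customer")
        for key, value in row.items():
            SubElement(item, key).text = str(value)
    return tostring(root, encoding="unicode")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Acme')")

rows = run(conn, definition)
print(as_json(rows))  # {"customers": [{"id": 1, "name": "Acme"}]}
print(as_xml(rows))   # <customers><customer><id>1</id><name>Acme</name></customer></customers>
```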

I think that open source is absolutely vital to making this work, because fundamentally what we're talking about is breaking down the barriers between different systems. As you say, every time you're pushing the proprietary software solution that isn't based on open standards, doesn't have open APIs, and doesn't have the ability to improve it and contribute back, you're putting in another barrier.

Everyone has woken up to this idea of collaboration through Web 2.0 websites, whether through Flickr or FaceParty or whatever. What the rest of the world is waking up to is what open-source developers have been discovering over the last five to 10 years. Open source is Web 2.0 for developers.

Open source drives this market of components, and it's exactly the same thing that happened in the PC market. As soon as there was an open bus that wasn't owned by a single company, the world opened up to people being able to make those components, work in peace and harmony, and compete on a level playing field. That's exactly where the open-source market is today.

That's exactly the sweet part in my opinion. I can shake and bake, I can code up a bunch of stuff, I can prototype stuff rapidly, and then my boss can sleep well at night, when he knows that he can also buy some support, in case whatever I cook up doesn't quite come out of the oven. I see there's a kind of new model in open source that I think is going to be successful because of that.
Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: WSO2.

Read a full transcript of the discussion.

Tuesday, August 5, 2008

Sybase rides growing database business into mobility innovation, says Chen at user conference

Sybase Chairman, CEO and President John Chen opened the TechWave Sybase user conference today with a slew of product announcements and a proud pointer to a database business posting double-digit revenue growth (even if you're not IBM, Microsoft, or Oracle).

Chen boasted 38 percent database revenue growth in Sybase's latest quarter, outstripping his formidable competitors. He told the crowd of IT users gathered in Las Vegas that the underlying database business is strong.

At the 10th annual event, Sybase announced tools, analytics, and mobility products that target its global customer base of developers, database administrators, and operators, as well as its burgeoning roster of enterprise mobility infrastructure providers.

"We're making a run at the major data warehouse providers ... and we think we can compete very well," said Chen. "Things are working very well for the company."

For mobility, in a year where the Apple iPhone gained the lion's share of attention (despite Symbian's dominance), Sybase is driving toward the "unwired enterprise" to bring the analytics world together with the mobile tier and handheld delivery world. "The more devices and the more operating systems ... the better Sybase will be," said Chen.

By any metric, data continues to explode across the IT landscape, said Chen, and the emphasis on real-time, deep, and wide analytics is only accelerating. Chen calls it "decision-ready information."

Chen asked -- practically implored -- the users to dig into their mobility strategies and unwire their enterprises.

The theme of Sybase Senior Vice President Raj Nathan's portion of today's keynote was that mobile computing is overtaking older forms of communications and computing -- but that current IT infrastructure cannot adequately support this trend toward mobility and information delivery out to the mobile edge.

"If you look at who is accessing the data, it's no longer just the employees ... and the demands of the non-employee for accessing from outside the firewall ... is much different. Today's architectures will not meet this demand," said Nathan.

Applications also need to be designed to be more than transaction-centric, said Nathan. What developers have to deal with has changed. "It's not just transactional applications, it's analytics, mobile, and messaging applications," he said. "These applications come from outside the firewall and through a mobile device in an unstructured, ad hoc form."

This all requires a shift in IT architectures, said Nathan. We need message-oriented interfaces. Data, applications, and tools all need to adjust. You need to handle complex analytics as a part of the process, not an afterthought, he said.

"The demands of information are changing, and you need a different set of architecture paradigms to make this happen," said Nathan. "Information delivery is even more important. It's time to evolve this and not go through a full set of replacements."

Amazon invests in cloud deployment venture as Elastra raises another $12 million

Elastra Corp., a cloud computing startup with a focus on ease of deployment, today announced a second round of funding, including participation from Amazon, which continues its investment ramp-up in cloud-based ventures.

The Series B funding for San Francisco-based Elastra totals $12 million. Other participants were Bay Partners and Hummer Winblad Venture Partners, which took part in the first round of funding last year.

The cloud topic continues to heat up, with today's announcement that AT&T is jumping in. More from ZDNet's Larry Dignan. It's a no-brainer for telcos to be in on this, and it sets the stage for more tension between software vendors and service providers.

As for Elastra's Cloud Server, it provides point-and-click configuration, push-button deployment, and automated management and dynamic monitoring of application infrastructure software and systems. The company's elastic computing markup language (ECML) and elastic deployment markup language (EDML) allow for extensibility and portability of applications across public and private clouds.

With this approach, businesses and IT organizations don't have to script, monitor, and scale their application infrastructure by hand, nor are they locked into “cloud silos” from a single provider.

For Amazon, which has been providing cloud infrastructure services for over two years, this is its second foray into funding cloud ventures in the last three weeks. In July, Amazon chipped in when Engine Yard raised $15 million.

I saw the potential for Elastra's approach, when the company arrived on the scene last March.
As virtualized software has become the primary layer over now-buried hardware that architects and engineers must deal with, we should expect more tools and “bridging” technologies like Elastra to emerge to help grease the skids for what can (and should?) be deployed in clouds. The software then becomes agile services that can be provisioned and consumed via innovative and highly efficient business models and use-based metering schemes.

. . . the database-driven product can help bring applications rapidly to a pay-as-you-use model. Enterprises may be able to provide more applications as services, charging internal consumers as a managed service provider.
I said at the time that the segue to the cloud could come sooner than many people might think. It looks like that prediction was on the mark.

Monday, August 4, 2008

SOA places broad demands on IT leadership beyond traditional enterprise architect role, says Open Group panel

Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: The Open Group.

Read a full transcript of the discussion.

Defining the emerging new role of enterprise architects (EAs) in the context of service-oriented architecture (SOA) is more than an esoteric exercise. It also helps define the major shifts now under way in SOA activities at enterprises and large organizations. These murky shifts come down to how IT is to be managed anew in the modern organization.

To help better understand the shifting requirements for personnel and leadership due to SOA, The Open Group presented a panel on July 22 at the 19th Annual Open Group Enterprise Architect's Practitioners Conference in Chicago. I had the pleasure to moderate the discussion, on the role and impact of skills and experience for EAs in both the public sector and the private sector.

Our panel of experts and guests included Tony Baer, senior analyst at Ovum; Eric Knorr, editor-in-chief of InfoWorld; Joe McKendrick, SOA blogger and IT industry analyst; Andras Szakal, the chief architect at IBM's Federal Software Group; and David Cotterill, head of innovation at the U.K. Government Department for Work and Pensions.

Here are some excerpts:
Within the government [sector], enterprise architecture is, I would say, trending more toward the political aspect, to the executive-level practitioner, than it is, say, an integrator who has hired a set of well-trained SOA practitioners.

The enterprise architect is really more focused on trying to bring the organization together around a business strategy or mission. And, certainly, they understand the tooling and how to translate the mission, vision, and strategy into an SOA architecture -- at least in the government.

I think the technical background can be taken as a given for an enterprise architect. We expect that they have a certain level of depth and breadth about them, in terms of the broadest kind of technology platforms. But what we are really looking for are people who can then move into the business space, who have a lot more of the softer skills, things like influencing … How do you build and maintain a relationship with a senior business executive?

Those are kind of the skills sets that we're looking for, and they are hard to find. Typically, you find them in consultants who charge you a lot of money. But they're also skills that can be learned -- provided you have a certain level of experience.

We try to find people who have a good solid technical background and who show the aptitude for being able to develop those softer skills. Then, we place them in an environment and give them the challenge to actually develop those skills and maintain those relationships with the business. When it works, it's a really great thing, because they can become the trusted partner of our business people and unmask all the difficulties of the IT that lies behind.

We are software architects, but we are really trying to solve the business problem. ... I would look for people [to hire] who have deep technical skills, and have had experience in implementing systems successfully. I have seen plenty of database administrators come to me and try to call themselves an architect, and you can't be one-dimensional on the information side, although I am an information bigot too.

So you're looking for a broad experience and somebody who has methodology background in design, but also in enterprise architecture. In that way, they can go from understanding the role of the enterprise architect, and how you take the business problem and slice that out into business processes, and then map the technology layer on to it.
Listen to the podcast. Download the podcast. Listen on iTunes/iPod. Sponsor: The Open Group.

Read a full transcript of the discussion.