Showing posts with label Java. Show all posts

Wednesday, September 19, 2012

Heroku provides single-click provisioning for Java applications in the cloud

Heroku, a cloud platform-as-a-service (PaaS) and a Salesforce.com company, today announced Heroku Enterprise for Java, a new service for companies and IT organizations to build and run Java applications in the cloud.

Enterprise for Java is designed to enable quick creation and deployment of enterprise Java applications. It also greases the skids to move development processes to a continuous delivery model, all without traditional, on-premises software or IT infrastructure. Enterprise for Java is part of the Salesforce Platform, which is being updated and expanded this week at Dreamforce in San Francisco.



Traditionally, creating Java applications has required piecing together a range of development and runtime infrastructure tools -- such as source-code control systems, continuous integration servers, testing and staging environments, load balancers, application server clusters, databases and in-memory caching systems.

This often drags out application building and deployment by months. With Heroku's new offering, enterprise developers can get a complete Java solution in a single package, provisioned with a single click, says Heroku.

Heroku began as a PaaS for dynamic languages like Ruby, but has since gone "polyglot," with support for Java, Node.js, Scala, Clojure, Python and PHP. The Java push is designed to expand Heroku's appeal beyond start-ups and SMBs building Web apps into the fuller enterprise development lifecycle.

And moving to a polyglot PaaS and continuous delivery model for applications is an essential ingredient of IT transformation to a fuller hybrid services delivery capability, said Oren Teich, COO, Heroku. The Heroku PaaS approach not only streamlines development, it modernizes the very processes behind delivering IT better as a service, he said.

“Enterprise developers have been looking for a better way to easily create innovative applications without the hassle of building out a back-end infrastructure,” said Teich. “With Heroku Enterprise for Java, developers get all the benefits of developing in Java along with the ease of using an open, cloud platform in a single click.”


Heroku aims to simplify the Java process by automating data connections, session management and other plumbing requirements, while keeping up-to-date on reference platform and JDK advancements. These tasks have mostly been the labor of skilled Java developers, and hence costly and time-consuming (when you can find and keep the skills).

Heroku is therefore providing a "curated" and full Java stack that allows developers to use standard tools like Eclipse and the Spring Framework to build and deploy on a common and integrated PaaS, built around Tomcat 7. This is designed to improve applications' compliance with the runtime environment, largely automating the process of deployment to spec.

"We can bring 80 steps down to four," said Teich, of Java deployment with full compliance.

And let's face it, Salesforce is not just targeting the productivity of developers. They are targeting the cost and complexity of the Java runtime targets in the enterprise: Oracle's WebLogic legacy and IBM's WebSphere. "Total cost of ownership for Java apps needs to come down." You hear this a lot in enterprise IT environs.

To me, the costs-benefits analysis of creating new apps -- especially quickly and in volume to support the voracious need for mobile apps -- and being able to deploy without hassles is a pure accelerant to PaaS adoption in general, with even greater economic and agility benefits when applied to Java.

And so Heroku Enterprise for Java also comes with a new and potentially disruptive payment plan of $1,000 per application deployed per month, with no costs incurred until production deployment. Think of that in comparison with the total costs of a mission-critical traditional Java app across its lifecycle. The math is compelling.

Heroku also Wednesday announced integration with products from Atlassian, which provides enterprise collaboration software for product development teams. A new Heroku plug-in for Atlassian’s Bamboo continuous integration service lets developers automate application delivery across all lifecycle stages.

Product features

Heroku Enterprise for Java includes:
  • Full-Stack Java: Enterprise for Java provides a full stack of pre-configured systems needed to build scalable, high-performance, highly available applications. This also includes memcache for session management and horizontal scaling, and Postgres for relational data management.
  • Heroku Runtime: In addition to providing runtime and management of the full stack of components, the service includes separate environments for development and staging. These environments can be provisioned instantaneously, providing a way for IT organizations to adopt rapid development methodologies. These applications can be scaled to serve massive volume with a simple control change.
  • Continuous Delivery Framework: When combined with Atlassian’s integration service, Bamboo, Enterprise for Java automates the application delivery process. From code check-in to test builds, staging deploys and production promotion, developers get an out-of-the-box experience with no server set-up needed. All components are automatically provisioned and configured.
  • Native Java Tools: The offering also includes native support for Eclipse Java IDE. Developers can create and deploy Java applications directly within their IDE. In addition, Heroku now supports direct deployment of Java WAR files, providing a simple way to migrate existing Java applications to the cloud.
Pricing starts at $1,000 per month per application, and it is available starting today.
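To make the "single package" claim concrete, Heroku's documented pattern for running a standard Java web app is a one-line Procfile that launches the application with Tomcat's webapp-runner. A minimal, illustrative sketch -- the artifact paths and app name are placeholders, not from the announcement:

```
web: java -jar target/dependency/webapp-runner.jar --port $PORT target/myapp.war
```

The "simple control change" for scaling is then a single CLI command, along the lines of `heroku ps:scale web=4`.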


Friday, August 13, 2010

Google needs to know: What does Oracle really want with Android?

The bombshell that Oracle is suing Google over Java intellectual property in mobile platform powerhouse Android came as a surprise, but in hindsight it shouldn't have.

We must look at the world through the lens that all guns are pointed at Google, and that means that any means to temper its interests and blunt its potential influence are in play and will be used.

By going for Google's second of only two fiscal jugular veins in Android (the other being paid search ads), Oracle has mightily disrupted the entire mobile world -- and potentially the full computing client market. By asking for an injunction against Android based on Java patent and copyright violations, Oracle has caused a huge and immediate customer, carrier and handset channel storm for Google. Talk about FUD!

Could Oracle extend its injunctions requests to handset makers and more disruptively for mobile carriers, developers, or even end users? Don't know, but the uncertainty means a ticking bomb for the entire Android community. Oracle's suits therefore can't linger. Time is on Oracle's side right now. Even Google counter-suing does not stop the market pain and uncertainty from escalating.

We saw how that pain works when RIM suffered intellectual property claims against its BlackBerrys, when RIM was up against a court-ordered injunction wall. Fair or not, right or not, they had to settle and pay to keep the product and their market cap in the right motion. And speed was essential because investors are watching, wondering, worrying. Indeed, RIM should have caved sooner. That's the market-driven, short-term "time is not on our side" of Google's dilemma with Oracle's Java.

When Microsoft had to settle with Sun Microsystems over similar Java purity and license complaints a decade back, it was a long and drawn out affair, but the legal tide seemed to be turning against Microsoft. So Microsoft settled. That's the legal-driven, long-term "time is not on our side" of Google's dilemma with Oracle's Java.

Google is clearly in a tough spot. And so we need to know: What does Oracle really want with Android?

Not about the money

RIM's aggressors wanted money and got it. Sun needed money too (snarky smugness aside), and so took the loot from Microsoft and made it through yet another fiscal quarter. But Oracle doesn't need the money. Oracle will want quite something else in order for the legal Java cloud over Android to go away.

Oracle will probably want a piece of the action. But will Oracle be an Android spoiler ... and just work to sabotage Android for license fees as HP's WebOS and Apple's iOS and Microsoft's mobile efforts continue to gain in the next huge global computing market, that is for mobile and thin PC clients?

Or, will Oracle instead fall deeply, compulsively in love with Android ... Sort of a Phantom of the Opera (you can see Larry with the little mask already, no?), swooping down on the sweet music Google has been making with Android, intent on making that music its own, controlled from its own nether chambers, albeit with a darker enterprise pitch and tone. Bring in heavy organ music, please.

Chances are that Oracle covets Android, believes its teachings through Java technology (the angel of class libraries) entitles it to a significant if not controlling interest, and will hold dear Christine ... err, Android, hostage unless the opera goes on the way Oracle wants it to (with license payments all along the way). Bring in organ music again, please.

Trouble is, this phantom will not let his love interest be swept safely back into the arms of Verizon, HTC, Motorola and Samsung. Google will probably have to find a way to make music with Oracle on Android for a long time. And they will need to do the deal quickly and quietly, just like Salesforce.com and Microsoft recently did.

What, me worry?

How did Google let this happen? It's not just a talented young girl dreaming of nightly rose-strewn encores, is it?

Google's mistake is that it has acted like a runaway dog in a nighttime meat factory, with its fangs into everything but with very little fully ingested (apologies to Steve Mills for usurping his analogy). In stepping on every conceivable competitors' (and partners') toes with hubristic zeal -- yet only having solid success and market domination in a very few areas -- Google has made itself vulnerable with its newest and extremely important success with Android.

Did Google do all the legal blocking and tackling? Maybe it was a beta legal review? Did the Oracle buy of Sun catch it off-guard? Will that matter when market perceptions and disruption are the real leverage? And who are Google's friends now when it needs them? They are probably enjoying the opera from the 5th box.

Android is clearly Google's next new big business, with prospects of app stores, and legions of devoted developers, myriad partners on the software and devices side, globally pervasive channels through the mobile carriers, and the potential to extend same into the tablets and even "fit" PCs arena. Wow, sounds a lot like what Java could have been, what iOS is, and what WebOS wants to be.

And so this tragic and ironic double-cross -- Java coming back to stab Google in the heart -- delivers like an aria, one that is sweet music mostly to HP, Apple, and Microsoft. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

[UPDATE: The stakes may spread far beyond the mobile market into the very future of Java. Or so says Forrester analyst Jeffrey Hammond, who argues that, in light of Oracle’s plans to sue Google over Android, “…this lawsuit casts the die on Java’s future."

"Java will be a slow-evolving legacy technology. Oracle’s lawsuit links deep innovation in Java with license fees. That will kill deep innovation in Java by anyone outside of Oracle or startups hoping to sell out to Oracle. Software innovation just doesn’t do well in the kind of environment Oracle just created," said Hammond.]


Wednesday, February 3, 2010

CERN’s evolution toward cloud computing could portend next revolution in extreme IT productivity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Platform Computing.

What are the likely directions for cloud computing? Based on the exploration of expected cloud benefits at a cutting edge global IT organization, the future looks extremely productive.

In this podcast we focus on the thinking on how cloud computing -- both the private and public varieties -- might be used at CERN, the European Organization for Nuclear Research in Geneva.

CERN has long been an influential bellwether on how extreme IT problems can be solved. Indeed, the World Wide Web owes a lot of its usefulness to early work done at CERN. Now the focus is on cloud computing. How real is it, and how might an organization like CERN approach cloud?

In many ways CERN is quite possibly the New York of cloud computing. If cloud can make it there, it can probably make it anywhere. That's because CERN deals with fantastically large data sets, massive throughput requirements, a global workforce, finite budgets, and an emphasis on standards and openness.

So please join us, as we track the evolution of high-performance computing (HPC) from clusters to grid to cloud models through the eyes of CERN, and with analysis and perspective from IDC, as well as technical thought leadership from Platform Computing.

Join me in welcoming our panel today: Tony Cass, Group Leader for Fabric Infrastructure and Operations at CERN; Steve Conway, Vice President in the High Performance Computing Group at IDC, and Randy Clark, Chief Marketing Officer at Platform Computing. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Conway: Private cloud computing is already here, and quite a few companies are exploring it. We already have some early adopters. CERN is one of them. Public clouds are coming. We see a lot of activity there, but it's a little bit further out on the horizon than private or enterprise cloud computing.

Just to give you an example, we at IDC just did a piece of research for one of the major oil and gas companies, and they're actively looking at moving part of their workload out to cloud computing in the next 6-12 months. So, this is really coming up quickly.

CERN is clearly serious about it in their environment. As I said, we're also starting to see activity pick up with cloud computing in the private sector with adoption starting somewhere between six months from now and, for some, more like 12-24 months out.

Clark: At Platform Computing we have formally interviewed over 200 customers out of our installed base of 2,000. A significant portion -- I wouldn’t put an exact number on that, but it's higher than we initially anticipated -- are looking at private-cloud computing and considering how they can leverage external resources such as Amazon, Rackspace and others. So, it's easily one-third and possibly more [evaluating cloud].

Cass: At CERN we're a laboratory that exists to enable, initially Europe’s and now the world’s, physicists to study fundamental questions. Where does mass come from? Why don’t we see anti-matter in large quantities? What's the missing mass in the universe? They're really fundamental questions about where we are and what the universe is.

We do that by operating an accelerator, the Large Hadron Collider, which collides protons thousands of times a second. These collisions take place in certain areas around the accelerator, where huge detectors analyze the collisions and take something like a digital photograph of the collision to understand what's happening. These detectors generate huge amounts of data, which have to be stored and processed at CERN and the collaborating institutes around the world.

We have something like 100,000 processors around the world, 50 petabytes of disk, and over 60 petabytes of tape. The tape is in just a small number of the centers, not all of the hundred centers that we have. We call it "computing at the terra-scale," that's terra with two R's. We’ve developed a worldwide computing grid to coordinate all the resources that we have with the jobs of the many physicists that are working on these detectors.

If you look at the past, in the 1990s, we had people collaborating, but there was no central management. Everybody was based at different institutes and people had to submit the workloads, the analysis, or the Monte Carlo simulations of the experiments they needed.

We realized in 2000-2001 that this wasn’t going to work and also that the scale of resources that we needed was so vast that it couldn’t all be installed at CERN. It had to be shared between CERN, a small number of very reliable centers we call the Tier One centers and then 100 or so Tier Two centers at the universities. We were developing this thinking around the same time as the grid model was becoming popular. So, this is what we’ve done.

Grid sets stage for seeking greater efficiencies

[Our grid] pushes the envelope in terms of the scale to make sure that it works for the users. We connect the sites. We run tens of thousands of jobs a day across this and gradually we’ve run through a number of exercises to distribute the data at gigabytes a second and tens of thousands of jobs a day.

We've progressively deployed grid technology, not developed it. We've looked at things that are going on elsewhere and made them work in our environment.

The grid solves the problem in which we have data distributed around the world and it will send jobs to the data. But, there are two issues around that. One is that if the grid sends my job to site A, it does so because it thinks that a batch slot will become available at site A first. But, maybe a batch slot becomes available at site B first while my job is waiting at site A. Somebody else who comes along later actually gets to run their job first.

Today, the experiment team submits a skeleton job to all of the sites in order to detect which site becomes available first. Then, they pull down my job to this site. You have lots of schedulers involved in this -- in the experiment, the grid, and the site -- and we're looking at simplifying that.
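The "skeleton job" technique Cass describes is what the grid community calls pilot-job, or late-binding, scheduling: a lightweight placeholder is submitted everywhere, and whichever site frees a batch slot first pulls down the real workload. A minimal sketch of the idea in Python -- the site names and slot timings are invented for illustration, not CERN's actual middleware:

```python
import queue
import threading

# The real workload, waiting to be claimed by whichever site frees up first.
work = queue.Queue()
work.put("analysis-job-42")

results = []
done = threading.Lock()

def pilot(site, slot_delay):
    """A skeleton ('pilot') job submitted to every site. It waits for a
    batch slot to open, then pulls the real job from the central queue."""
    import time
    time.sleep(slot_delay)  # time until this site's batch slot becomes free
    try:
        job = work.get_nowait()  # first pilot to run claims the workload
    except queue.Empty:
        return  # another site already took the job; this pilot exits idle
    with done:
        results.append((site, job))

# Site B happens to free a slot well before site A, so it runs the job --
# regardless of which site a central scheduler might have guessed first.
threads = [
    threading.Thread(target=pilot, args=("site-A", 0.2)),
    threading.Thread(target=pilot, args=("site-B", 0.05)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(results)  # e.g. [('site-B', 'analysis-job-42')]
```

The late binding is the point: the job is matched to a site only when a slot actually opens, which sidesteps the stale-estimate problem described above, at the cost of layering pilot, grid, and site schedulers on top of each other.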

We’re now looking at virtualizing the batch workers and dynamically reconfiguring them to meet the changing workload. This is essentially what Amazon does with EC2. When they don’t need the resources, they reconfigure them and sell the cycles to other people. This is how we want to work in virtualization and cloud with the grid, which knows where the data is.

... We’re definitely concentrating for the moment on how we exploit effective resources here. The wider benefits we'll have to discuss with our community.

Conway: CERN's scientists have earned multiple Nobel prizes over the years for their work in particle physics. CERN is where Tim Berners-Lee and his colleagues invented the World Wide Web in the 1980s.

More generally, CERN is a recognized world leader in technology innovation. What’s been driving this, as Tony said, are the massive volumes of data that CERN generates along with the need to make the data available to scientists, not only across Europe, but across the world.

For example, CERN has two major particle detectors. They're called CMS and ATLAS. ATLAS alone generates a petabyte of data per second, when it’s running. Not all that data needs to be distributed, but it gives you an idea of the scale or the challenge that CERN is working with.

In the case of CERN’s and Platform’s collaboration, the idea is not just to distribute the data but also the applications and the capability to run the scientific problem.

Showing a clear path to cloud

CERN is definitely a leader there, and cloud computing is really confined today to early adopters like CERN. Right now, cloud computing services constitute about $16 billion as a market.

That’s just about four percent of mainstream IT spending. By 2012, which is not so far away, we project that spending for cloud computing is going to grow nearly threefold to about $42 billion. That would make it about 9 percent of IT spending. So, we predict it’s going to move along pretty quickly.

... [Being able to manage workloads in a dynamic environment] is the single biggest challenge we see for not only cloud computing, but it has affected the whole idea of managing these increasingly complex environments -- first clusters, then grids, and now clouds. Software has been at the center of that.

That’s one of the reasons we're here today with Platform and CERN, because that’s been Platform’s business from the beginning, creating software to manage clusters, then grids, and now clouds, first for very demanding, HPC sites like CERN and, more recently, also for enterprise clients.

Clark: Historically, clusters and grids have been relatively static, and the workloads have been managed across those. Now, with cloud, we have the ability to have a dynamic set of resources.

The trick is to marry and manage the workloads and the resources in conjunction with each other. Last year, we announced our cloud products -- Platform LSF and Platform ISF Adaptive Cluster -- to address that challenge and to help this evolution.

[Cloud adoption] is being driven by the top of the organization. Tony and Steve laid it out well. They look at the public/private cloud economically, and say, "Architecturally, what does this mean for our business?" Without any particular application in mind they're asking how to evolve to this new model. So, we're seeing it very horizontally in both enterprise and HPC applications.

What Platform sees is the interaction of distributed computing and new technologies like virtualization requiring management. What I mean by that is the ability, in a large farm or shared environment, to share resources and then make those resources dynamic. It's the ability to add virtualization into those on the resource side, and then, on the server side, to make it Internet accessible, have a service catalog, and move from providing IT support to truly IT as a competitive service.

The state of the art is that you can get the best of Amazon -- ease of use, cost, accessibility -- combined with the configuration, scale, and dependability of the enterprise grid environment.

There isn't one particular technology or implementation that I would point to, to say "That is state of the art," but if you look across the installations we see in our installed base, you can see best practices in different dimensions with each of those customers.

Conway: People who have already stepped through the earlier stages of this evolution, who have gone from clusters to grid computing, are now for the most part contemplating the next move to cloud computing. It's an evolutionary move. It could have some revolutionary implications, but, from a technological standpoint, sometimes evolutionary is much safer and better than revolutionary.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Platform Computing.

Wednesday, January 27, 2010

Oracle's Sun Java strategy: Business as usual

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In an otherwise pretty packed news day, we’d like to echo @mdl4’s sentiments about the respective importance of Apple’s and Oracle’s announcements: “Oracle finalized its purchase of Sun. Best thing to happen to Sun since Java. Also: I don’t give a sh#t about the iPad. I said it.”

There’s little new in observing that, on the platform side, Oracle’s acquisition of Sun is a means for turning the clock back to the days of turnkey systems in a post-appliance era. History truly has come full circle, as Oracle in its original database incarnation was one of the prime forces that helped decouple software from hardware.

Fast forward to the present, and customers are tired of complexity and just want things that work. Actually, that idea was responsible for the emergence of specialized appliances over the past decade for performing tasks ranging from SSL encryption/decryption to XML processing, firewalls, email, or specialized web databases.

The implication here is that the concept is elevated to enterprise level; instead of a specialized appliance, it’s your core instance of Oracle databases, middleware, or applications. And even there, it’s but a logical step forward from Oracle’s past practice of certifying specific configurations of its database on Sun (Sun was, and now has become again, Oracle’s reference development platform).

That’s in essence the argument for Oracle to latch onto a processor architecture that is overmatched in investment by Intel’s x86 line. The argument could be raised that, in an era of growing interest in cloud, Oracle is fighting the last war. That would be the case, except for the certainty that your data center has just as much chance of dying as your mainframe did.

Question of second source

At the end of the day, it’s inevitably a question of second source. Dana Gardner opines that Oracle will replace Microsoft as the hedge to IBM. Gordon Haff contends that alternate platform sources are balkanizing as Cisco/EMC/VMware butts their virtualized x86 head into the picture and customers look to private clouds the way they once idealized grids.

The highlight for us was what happens to Sun’s Java portfolio, and as it turns out, the results are not far from what we anticipated last spring: Oracle’s products remain the flagship offerings. From looking at respective market shares, it would be pretty crazy for Oracle to have done otherwise.

The general theme was that – yes – Sun’s portfolio will remain the “reference” technologies for the JCP standards, but that these are really only toys that developers should play with. When they get serious, they’re going to keep using WebLogic, not Glassfish. Ditto for:

• Java software development. You can play around with NetBeans, which Oracle’s middleware chief Thomas Kurian characterized as a “lightweight development environment,” but again, if you really want to develop enterprise-ready apps for the Oracle platform, you will still use JDeveloper, which of course is written for Oracle’s umbrella ADF framework that underlies its database, middleware, and applications offerings. That’s identical to Oracle’s existing posture with the old (mostly) BEA portfolio of Eclipse developer tools. Actually, the only thing that surprised us was that Oracle didn’t simply take NetBeans and set it free – as in donating it to Apache or some more obscure open source body.

• SOA, where Oracle’s SOA Suite remains front and center while Sun’s offerings go on maintenance.

We’re also not surprised at the prominent role of JavaFX in Oracle’s RIA plans; it fills a vacuum created when Oracle terminated BEA’s former arrangement to bundle Adobe Flash/Flex development tooling. In actuality, Oracle has become RIA agnostic, as ADF could support any of the frameworks for client display, but JavaFX provides a technology that Oracle can call its own.

There were some interesting distinctions with identity management and access, where Sun inherited some formidable technologies that, believe it or not, originated with Netscape. Oracle Identity management will grab some provisioning technology from the Sun stack, but otherwise Oracle’s suite will remain the core attraction. But Sun’s identity and access management won’t be put out to pasture, as it will be promoted for midsized web installations.

There are much bigger pieces to Oracle’s announcements, but we’ll finish with what becomes of MySQL. In short, there’s nothing surprising in the announcement that MySQL will be maintained in a separate open source business unit -- the EU would not have allowed otherwise. But we’ve never bought into the story that Oracle would kill MySQL. Both databases aim at different markets. Just about the only difference that Oracle’s ownership of MySQL makes -- besides reuniting it under the same corporate umbrella as the InnoDB data store -- is that, well, like yeah, MySQL won’t morph into an enterprise database. Then again, even if MySQL had remained independent, it arguably was never going to evolve into the same class as Oracle, since the product would lose its vaunted simplicity.

The more relevant question for MySQL is whether Oracle will fork development to favor Solaris on SPARC. This being open source, there would be nothing stopping the community from taking the law into its own hands.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Monday, November 30, 2009

The more Oracle says MySQL not worth much, the more its actions say otherwise

As the purgatory of Oracle's under-review bid to buy Sun Microsystems for $7.4 billion drags on, it's worth basking in the darn-near sublime predicament Oracle has woven for itself.

Oracle has uncharacteristically found itself maneuvered (by its own actions) into a rare hubristic place where it's:
  • Footing the bill for the publicity advancement of its quarry ... MySQL is more famous than ever, along with its low-cost and open attributes.
  • Watching the value of its larger quarry, Sun Microsystems, dwindle by the day as users flee the SPARC universe in search of greener (and leaner) binary pastures.
  • Giving open source middleware a boost in general too, as Oracle seems to be saying that MySQL is worth hundreds of millions of dollars (dead or alive); the equivalent of what it's losing by not spinning MySQL out of the total Sun package.
  • Both denigrating and revering the fine attributes of the awesome MySQL code and community, leaving the other database makers happy to let Oracle pay for and do their dirty work of keeping MySQL under control.
This last point takes the cake. IBM, Microsoft and Sybase really don't want MySQL to take over the world, err ... Web, any time soon, either. But they also want to coddle the developers who may begin with MySQL and then hand off to the IT operators who may be inclined, err ... seduced, to specify a commercial RDB ... theirs ... for the life of the app.

So it's a delicate dance to profess love for MySQL while setting the snare to eventually tie those new apps to the costly RDBs and associated Java middleware (and hardware, if you can). Let's not also forget the budding lust for all things appliance by certain larger vendors (Oracle included).

If Oracle, by its admission to the EU antitrust mandarins, thinks MySQL has little market value and is not a direct competitor to its heavy-duty Oracle RDB arsenal, then why doesn't it just drop MySQL, by vowing to spin it out or sell it? Then the Sun deal would get the big rubber stamp.

It's not because of what MySQL is worth now, but what it may become. Oracle wants to prune the potential of MySQL while not seeming to do anything of the sort.

The irony is that Oracle has advanced MySQL, lost money in the process, and helped its competitors -- all at the same time. When Oracle buys Sun and controls MySQL the gift (other than to Microsoft SQL Server) keeps on giving as the existential threat to RDBs is managed by Redwood Shores.

And we thought Larry Ellison wasn't overly charitable.

Wednesday, November 18, 2009

IBM feels cozy on sidelines as Oracle-Sun deal languishes in anti-trust purgatory

You have to know when to hold them, and when to fold them. That's the not just slightly smug assessment by IBM executives as they reflect -- with twinkles in their eyes -- on the months-stalled Oracle acquisition of Sun Microsystems, a deal that IBM initially sought but then declined earlier this year.

Chatting over drinks at the end of day one of the Software Analyst Connect 2009 conference in Stamford, Conn., IBM Senior Vice President and IBM Software Group Executive Steve Mills told me last night he thinks the Oracle-Sun deal will go through, but it won't necessarily be worth $9.50 a share to Oracle when it does.

"He (Oracle Chairman Larry Ellison) didn't understand the hardware business. It's a very different business from software," said Mills.

Mills seemed very much at ease with IBM's late-date jilt of Sun (Sun was apparently playing hard to get in order to get more than $9.40/share from Big Blue's coffers). IBM's stock price these days is homing in on $130, quite a nice turn of events given the global economy.

Sun is trading at $8.70, a significant discount to Oracle's $9.50 bid, reflecting investor worries about the fate of the deal now under scrutiny by European regulators, Mills's views notwithstanding.

IBM Software Group Vice President of Emerging Technology Rod Smith noted the irony -- perhaps ancient Greek tragedy-caliber irony -- that a low market share open source product is holding up the biggest commercial transaction of Sun's history. "That open source stuff is tricky on who actually makes money and how much," Smith chorused.

Should Mills's prediction that Oracle will successfully maintain its bid for Sun prove incorrect, it could mean bankruptcy for Sun. And that may mean many of Sun's considerable intellectual property assets would go at fire-sale prices to ... perhaps a few piecemeal bidders, including IBM. Smith just smiled, easily shrugging off the chill (socks intact) from the towering "IBM" logo ice sculpture a few steps away.

And wouldn't this hold-up go away if Sun and/or Oracle jettisoned MySQL? Is it pride or hubris that makes a deal sour for one mere grape? Was the deal (and $7.4 billion) all about MySQL? Hardly.

Many observers think that Sun's Java technology -- and not its MySQL open source database franchise -- should be of primary concern to European (and U.S.) anti-trust mandarins. I have to agree. But Mills isn't too concerned with Oracle's probable iron-grip on Java ..., err licensing. IBM has a long-term license on the technology, the renewal of which is many years out. "We have plenty of time," said Mills.

Yes, plenty of time to make Apache Harmony a Java doppelganger -- not to mention the Java market-soothing effects of OSGi and Eclipse RCP. [Hey, IBM invented Java for the server for Sun, it can re-invent it for something else ... SAP?]

Unlike some software titans, Mills is clearly not living in a "reality distortion field" when it comes to Oracle's situation.

"We're in this for the long haul," said Mills, noting that he and IBM have been competing with Oracle since August 1993 when IBM launched its distributed DB2 product. "All of our market share comes at the expense of Oracle's," said Mills. "And we love to do benchmarks against Oracle."

Even as the Fates seem to be on IBM's side nowadays, the stakes remain high for the users of these high-end database technologies and products. It's my contention that we're only now entering the true data-driven decade. And all that data needs to run somewhere. And it's not going to be in MySQL, no matter who ends up owning it.

Wednesday, April 8, 2009

Google Apps charges ahead with improved data security and long-awaited Java support

Cast Iron Systems and Google have teamed up to overcome one of the biggest hurdles to cloud computing and software as a service (SaaS) in the enterprise -- concerns over data security.

Cast Iron for Google Apps, which was announced today, includes the Google Secure Data Connection, enabling the encrypted exchange of data between a company's enterprise applications and Google's cloud offerings. This makes it easier for companies to integrate their Google Apps and Google App Engine applications with on-premises and cloud apps.

Cast Iron, Mountain View, Calif., is a SaaS and cloud applications provider, and offers pre-configured connectivity with hundreds of other applications, as well as a library of integration templates with pre-configured gadget data maps. Cast Iron for Google Apps offers a portfolio of deployment options, including integration-as-a-service through Cast Iron Cloud, and on-premise physical and virtual appliances.

In a recent survey, IT executives displayed considerable hesitancy in switching to cloud-based applications. A main reason for holding back, cited by many of these executives, was the concern over data security.

Not everyone is squeamish about using cloud apps. Schumacher Group, a $250-million U.S. emergency medicine practice management firm, has created a web portal for its medical providers using a set of custom gadgets and a Google site. The company manages 2,500 physicians who care for 2.5 million patients each year in over 150 emergency rooms across 20 states.

Cast Iron for Google Apps helps enable the extraction and secure exchange of data from Schumacher Group’s MS SQL Server data warehouse to Google Enterprise Gadgets in real-time. Providers and doctors in the Schumacher network now have more secure visibility into emergency room data from anyplace, anytime.

In other Google Apps news, the long-awaited Java support for App Engine has been announced, and the first 10,000 developers to sign up will be given a first look and a chance to comment.

With the new support, developers can build web applications using standard Java technologies and run them on Google's scalable infrastructure. The Java environment provides a Java 6 JVM, a Java Servlets interface, and support for standard interfaces to the App Engine scalable datastore and services, such as JDO, JPA, JavaMail, and JCache.
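On App Engine itself, the entry point is a standard javax.servlet.http.HttpServlet mapped in web.xml. Since the servlet API isn't part of the JDK, the self-contained sketch below uses the JDK's built-in com.sun.net.httpserver classes to illustrate the same request/response pattern; the class name and greeting text are invented for illustration, not taken from Google's documentation.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;

// Hypothetical sketch: on App Engine you would extend
// javax.servlet.http.HttpServlet instead; the JDK's built-in
// HttpServer is used here only so the example is self-contained.
public class HelloHandler implements HttpHandler {

    // Pure helper so the response text can be exercised in isolation.
    static String greeting(String name) {
        return "Hello, " + name + "!";
    }

    @Override
    public void handle(HttpExchange exchange) throws IOException {
        byte[] body = greeting("App Engine").getBytes(StandardCharsets.UTF_8);
        exchange.getResponseHeaders().set("Content-Type", "text/plain");
        exchange.sendResponseHeaders(200, body.length);
        try (OutputStream os = exchange.getResponseBody()) {
            os.write(body);
        }
    }

    public static void main(String[] args) throws IOException {
        // Bind a local server and route all requests to the handler.
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", new HelloHandler());
        server.start();
        System.out.println("Serving on http://localhost:8080/");
    }
}
```

On App Engine the equivalent logic would live in a doGet() override, with the runtime handling the server lifecycle and scaling.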

Also included is a secure sandbox, which will allow developers to run code safely on Google servers, while being flexible enough to allow them to break abstractions at will. More information is available at http://code.google.com/appengine/docs/java/overview.html.

These two developments continue the march toward enterprise-ready cloud activities. Can we still really call cloud just a fad or hype?

Friday, March 20, 2009

If you’re an enterprise, developer or economist, IBM is not the right buyer for Sun

From the perspective of IT users, developer communities and global industry as a whole, IBM may be the worst place for beleaguered Sun Microsystems to land.

Sure, a merger as rumored is good -- but not urgently or obviously so -- for IBM. Big Blue gains a modest improvement in share of some servers, mostly Unix-based. It would actually gain just enough share of high-end servers to justly draw anti-trust scrutiny nearly worldwide.

Yet these types of servers are not today's growth engines for IT vendors, they are the blunt trailing edge. Users have been dumping them in droves, with their sights set on far lower-cost alternatives and newer utility models of deployment and payment. IBM may want the next generation of data centers to be built of mainframes, but not too many others do.

In any event, server hardware is not a meaningful differentiator in today's IT markets. Sun, if anyone, has proven that. For IBM to claim it as the rationale for the buyout is fishy. A lot of other analysts are holding their noses too. UPDATE: Good analysis from Redmonk's Stephen O'Grady.

The rumored IBM-Sun deal for $6.4 billion is incremental improvement for IBM on several fronts: open source software (low earnings), tape storage (modest albeit dependable revenue), Java (already mostly open), engineering talent (easier to get these days given Sun layoffs), new intellectual property (targeted by design by Sun on undercutting IBM's cash cows). In short, there are no obvious game changers or compelling synergies in IBM buying Sun other than setting the sun on Sun.

I initially thought the rumored deal, which drove up Sun's stock, JAVA, by nearly 80 percent on rumor day one, didn't make sense. But it does make sense. Unfortunately it only makes sense for IBM in a fairly ugly way. As Tom Foremski said, it smacks of a spoiler role.

If you were IBM, would you spend what may end up being $4 billion in actual cost to slow or stifle the deterioration of a $100 billion data center market, and, at the same time, take the means of accelerating the move to cloud computing off the table from your competitors? As Mister Rogers would say, "Sure, sure you would."

Most likely, though the denials are in the works, IBM will plunder and snuff, plunder and snuff its way across the Sun portfolio -- from large account to large account, developer community to developer community, employee project to project. The tidy market share and technology gems will be absorbed quietly, the rest canceled or allowed to wither on the vine.

Certain open source communities and projects that Sun has fostered will be cultivated, or not. IBM is the very best at knowing how to play the open source cards, and that does not mean playing them all.

Listen, this would be a VERY different acquisition than any IBM has done in recent memory. It’s really about taking a major competitor out when they are down. It’s bold and aggressive, and it’s ignoble. But these are hard times and many people are distracted.

The deal is not good for Sun and its customers (unless they already decided to move from being a Sun shop to an IBM shop), and may put in jeopardy the momentum of open source use up into middleware, SOA, databases and cloud infrastructure. That's because, even at the price of $6.4 billion (twice Sun's market value before the deal talk), IBM will gain far more from the deal over the long term by eradicating Sun than by joining Sun's vector.

This deal is all about control. Control of Java, of markets, developers, cost of IT -- even about the very pace of change across the industry. For much of its history IBM has had its hand on the tiller of the IT progression. It was a comfortable position, except for a historically exceptional past 17 years for IBM. It's time to get back in the saddle.

Clearly, Sun has little choice in the matter, other than to jockey for the best price and perhaps some near-term concessions for its employees. It's a freaking yard sale. Sun is being run by -- gasp -- investment bankers. Here's a rare bonus bonanza in an M&A desert, for sure.

But let's be clear, this is no merger of partners or equals. This is assimilation. It’s Borg-like, and resistance may be futile. It is important to know when you're being assimilated, however.

Scott McNealy, Sun’s chairman, former CEO and co-founder, famously called the 2001 proposed merger of HP and Compaq a collision between two "garbage trucks." Well, IBM’s proposed/rumored purchase of Sun is equivalent to a garbage truck being airlifted out of sight and over the horizon by a C-17 cargo transport plane. Just open the door and drive it in. The plane was probably designed on Sun hardware, too. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Sun’s fate has been shaky for a long time now. The reasons are fodder for Harvard case studies.

But what of the general good of enterprise IT departments, of communities of idealistic developers, or of open and robust competition in the new age of cloud computing? In the new age, incidentally, you may no longer need an army of consultants and a C-17 full of hardware and software at each and every enterprise. As Nick Carr correctly points out, this changes everything. That kind of change may not be what IBM has in mind.

It's not easy resting with IBM in control of vast portions of the open source future, and the legacy installed past. Linux and Apache Web servers might have made sense for IBM, but do open source cloud databases, middleware, SOA, and the next generations of on- and off-premises utility and virtualization fabric infrastructure?

IBM today is making the lion's share of its earnings from the software and services that run yesterday's data centers. Even the professional services around the newer cloud models (and subscription fees of actual, not low-utilization, use) do not make up for lost software license revenues. In many ways, cloud is more a threat than an opportunity to Big Blue. It ultimately means lower revenues, lower margins, less control, and feisty competitors that make money from ads and productivity, not sales and service.

Cloud models will take a long time to become common and mainstream, but any sense of inevitability must make IBM (and others) nervous. Controlling the pace of the change is essential.

The hastening shift to virtualization, application modernization, SaaS, mobile, cloud, and increased use of open source for legacy infrastructure could seriously disrupt the business models of IBM, HP, Cisco, Microsoft, Oracle and others. Moving from legacy-and-license to cloud-and-subscription (on OSS or commercial code) poses a huge risk to IBM, especially if it happens fast -- something this unexpected economic crisis could accelerate.

Enterprises could soon gain the equivalent of the powerful and efficient IT engines that run a Google or Amazon, either for themselves, or rented off the wire, or both. IBM probably won't have 60 percent of the cloud services market in five years like it does the high-end Unix market (if it gets Sun). In fact, what has happened to Sun in terms of disruption may be a harbinger of what could happen to IBM during the next red-shift in the market.

Sun should have gotten to these compelling cloud values first, made a business of it before Amazon. Sun was on the way, had the vision, but ran out of time and out of gas.

Sun has let a lot of us down by letting it come to this. The private equity firms that control Sun now don't give a crap about open source, or innovation, clouds or whether the network is the computer, or my dog's pajamas are the computer. They need to get their money back ASAP.

As a result, they and Sun could well be handing over to IBM the very keys to being able to time the market to IBM's strategic needs above all else. All for $6.4 billion in cash, minus the profits from chopping off Sun's remaining limbs and keeping the ones that make a good Borg fit.

There should be a better outcome. Should the deal emerge, regulators should insist on what IBM itself called for more than 10 years ago. Something as important as Java and other critical open software specifications (OpenSolaris?) should be in the control and ownership of a neutral standards body, not in the control of the global market dominant legacy vendor.

It’s sort of like letting General Motors decide when to build the next generation of fuel efficient and alternative energy cars. And we know how that worked out.

IBM has the deep pockets now to buy strategic advantage during an economic crisis that helps it in coming years. It's during this coming period when the cloud vision begins to stick, when the madness of how enterprise IT has evolved in cost and complexity is shaken off for something much better, faster and cheaper.

And that’s what IT has always been about.

Wednesday, March 18, 2009

IBM buying Sun Microsystems makes no sense, it's a red herring

Someone has floated a trial balloon, through a leak to the Wall Street Journal, that IBM is in "talks" to buy Sun Microsystems for $6.5 billion. The only party that would leak this information is Sun itself, and it smacks of desperation in trying to thwart an unwanted acquisition, or to positively impact another deal that Sun is weak in.

If IBM wanted to buy Sun it would have done so years ago, at least on the merits of synergy and technology. If IBM wanted to buy Sun simply to trash the company, plunder the spoils and do it on the cheap -- the time for that was last fall.

So more likely, given that Sun has reportedly been shopping itself around (nice severance packages for the top brass, no doubt), is that Sun has been too successful at selling itself -- just to the wrong party at too low of a price. This may even be in the form of a chop shop takeover. The only thing holding up a hostile takeover of Sun to sell for spare parts over the past six months was the credit crunch, and the fact that private equity firms have had some distractions.

By buying Sun IBM gains little other than some intellectual property and MySQL. IBM could have bought MySQL or open sourced DB2 or a subset of DB2 any time, if it wanted to go that route. IBM has basically already played its open source hand, which it did masterfully at just the right time. Sun, on the other hand, played (or forced) its open source hand poorly, and at the wrong time. What's the value to Sun for having "gone open source"? Zip. Owning Java is not a business model, or not enough of one to help Sun meaningfully.

So, does IBM need chip architectures from Sun? Nope, it has its own. Access to markets from Sun's long-underperforming sales force? Nope. Unix? IBM has one. Linux? IBM was there first. Engineering skills? Nope. Storage technology? Nope. Head-start on cloud implementations? Nope. Java license access or synergy? Nope, too late. Sun's deep and wide professional services presence worldwide? Nope. Ha!

Let's see ... hardware, software, technology, sales, cloud, labor, market reach ... none makes sense for IBM to buy Sun -- at any price. IBM does just fine by continuing to watch the sun set on Sun. Same for Oracle, SAP, Microsoft, HP.

With due respect to Larry Dignan on ZDNet, none of his reasons add up in dollars and cents. No way. Sun has fallen too far over the years for these rationales to stand up.

Only in playing some offense via data center product consolidation against HP and Dell would buying Sun help IBM. And the math doesn't add up there. The cost of getting Sun is more than the benefits of taking money from enterprise accounts from others. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The cost of Sun is not cheap, or at least not cheap like a free puppy. Taking over Sun for technology and market spoils ignores the long-term losses to be absorbed, the decimated workforce, the fact that Cisco will now eat Sun's lunch as have the other server makers for more than five years.

So who might buy Sun on the cheap, before Sun's next financial report to Wall Street? Cisco, Dell, EMC, Red Hat. That's about it for vendors. And it would be a big risk for them, unless the price tag were cheap, cheap, cheap. Anything under $4 billion might make sense. Might.

Other buyers could come in the form of carriers, cloud providers or other infrastructure service provider types. This is a stretch, because even cheap Sun would come with a lot of baggage for their needs. Another scenario is a multi-party deal, of breaking up Sun among several different kinds of firms. This also is hugely risky.

So my theory -- and it's just a guess -- is that today's trial balloon on an IBM deal is a last-ditch effort by Sun to find, solidify, or up the price on some other acquisition or exit strategy by Sun. The risk of such market shenanigans only underscores the depths of Sun's malaise. The management at Sun probably sees its valuation sinking yet again to below tangible assets and cash value when it releases its next quarterly performance results. ... Soon.

The economic crisis has come at a worse time for Sun than for just about any other larger IT vendor. Sun, no matter what happens, will go for a fire sale deal -- not a deal of strength among healthy synergistic partners. No way.

Thursday, August 21, 2008

Pulse provides novel training and tools configuration resource to aid in developer education, preparedness

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.

Java training and education has never been easy. Not only is the language and its third-party and community offerings constantly moving targets, each developer has his or her own preferences, plug-ins inventory and habits. What's more, the "book knowledge" gained in many course settings can vary wildly from what happens in the "real world" of communities and teams.

MyEclipse maker Genuitec developed Pulse last year to monitor and update the most popular Eclipse plug-ins, but Pulse also has a powerful role in making Java training and tools preferences configuration management more streamlined, automated and extensible. Unlike commercial software, in the open source, community-driven environments like Eclipse, there is no central vendor to manage plug-ins and updates. For the Eclipse community Pulse does that, monitoring for updates while managing individual developers' configuration data -- and at the same time gathering metadata about how to better serve Eclipse and Java developers.

I recently moderated a sponsored podcast to explore how Pulse, and best practices around its use, helps organize and automate tools configuration profiles for better ongoing Java training and education. I spoke with Michael Cote, an analyst with RedMonk; Ken Kousen, an independent technical trainer, president of Kousen IT, Inc., and adjunct professor at Rensselaer Polytechnic Institute; and Todd Williams, vice president of technology at Genuitec.

Here are some excerpts:
The gap between what's taught in academia and what's taught in the real world is very large, actually. ... Academia will talk about abstractions of data structures, algorithms, and different techniques for doing things. Then, when people get into the real world, they have no idea what Spring, Hibernate, or any of the other issues really are.

It's also interesting that a lot of developments in this field tend to flow from the working professionals toward academia, rather than the other way around, which is what you would find in engineering.

Part of what I see as being difficult, especially in the Java and Enterprise Java market, is the huge number of technologies that are being employed at different levels. Each company picks its own type of stack. ... Finding employees that fit with what you are trying to do today, with an eye toward being able to mature them into where you are going tomorrow, is probably going to always be the concern.

You look at the employment patterns that most developers find themselves in, and they are not really working at some place three, five, 10, even 20 years. It's not realistic. So, specializing in some technology that essentially binds you to a job isn't really an effective way to make sure you can pay your bills for the rest of your life.

You have to be able to pick up quickly any given technology or any stack, whether it’s new or old. Every company has their own stack that they are developing. You also have to remember that there is plenty of old existing software out there that no one really talks about anymore. People need to maintain and take care of it.

So, whether you are learning a new technology or an old technology, the role of the developer now, much more so in the past, is to be more of a generalist who can quickly learn anything without support from their employer.

Obviously, in open source, whether it’s something like the Eclipse Foundation, Apache, or what have you, they make a very explicit effort to communicate what they are doing through either bug reports, mail lists, and discussion groups. So, it's an easy way to get involved as just a monitor of what's going on. I think you could learn quite a bit from just seeing how the interactions play out.

That's not exactly the same type of environment they would see inside closed-wall corporate development, simply because the goals are different. Less emphasis is put on external communications and more emphasis is put on getting quality software out the door extremely quickly. But, there are a lot of very good techniques and communication patterns to be learned in the open-source communities.

[With Pulse] we built a general-purpose software provisioning system that right now we are targeting at the Eclipse market, specifically Eclipse developers. For our initial release last November, we focused on providing a simple, intuitive way that you could install, update, and share custom configurations with Eclipse-based tools.

In Pulse 2, which is our current release, we have extended those capabilities to address what we like to call team-synchronization problems. That includes not only customized tool stacks, but also things like workspace project configurations and common preference settings. Now you can have a team that stays effectively in lock step with both their tools and their workspaces and preferences.

With Pulse, we put these very popular, well-researched plug-ins into a catalog, so that you can configure these types of tool stacks with drag-and-drop. So, it's very easy to try new things. We also bring in some of the social aspects; pulling in the rankings and descriptions from other sources like Eclipse Plug-in Central and those types of things.

So, within Pulse, you have a very easy way to start out with some base technology stacks for certain kinds of development and you can easily augment them over time and then share them with others.

The Pulse website is www.poweredbypulse.com. There is a little 5 MB installer that you download and start running. If anyone is out in academia, and they want to use Pulse in a setting for a course, please fill out the contact page on the Website. Let us know, and we will be glad to help you with that. We really want to see usage in academia grow. We think it’s very useful. It's a free service, so please let us know, and we will be glad to help.

I did try it in a classroom, and it's rather interesting, because one of the students that I had recently this year was coming from the Microsoft environment. I get a very common experience with Microsoft people, in that they are always overwhelmed by the fact, as Todd said, there are so many choices for everything. For Microsoft, there is always exactly one choice, and that choice costs $400.

I tried to tell them that here we have many, many choices, and the correct choice, or the most popular choice changes all the time. It can be very time consuming and overwhelming for them to try to decide which ones to use in which circumstances.

So, I set up a couple of configurations that I was able to share with the students. Once they were able to register and download them, they were able to get everything in a self-contained environment. We found that pretty helpful. ...

It was pretty straightforward for everybody to use. ... whenever you get students downloading configurations, they have this inevitable urge to start experimenting, trying to add in plug-ins, and replacing things. I did have one case where the configuration got pretty corrupted, not due to anything that they did in Pulse, but because of plug-ins they added externally. We just basically scrapped that one and started over and it came out very nicely. So, that was very helpful in that case.

We have a very large product plan for Pulse. We've only had it out since November, but you're right. We do have a lot of profile information, so if we chose to mine that data, we could find some correlations between the tools that people use, like some of the buying websites do.

People who buy this product also like this one, and we could make ad hoc recommendations, for example. It seems like most people that use Subversion also use Ruby or something, and you just point them to new things in the catalog. It's kind of a low-level way to add some value. So there are certainly some things under consideration.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.

Friday, June 27, 2008

Eclipse Foundation delivers Ganymede train with 23 cars, but where are the cloud on-ramps?

Not all trains run on time, but The Eclipse Foundation has kept to its schedule with its annual release train, this year named Ganymede.

For the third year in a row, the Eclipse community has delivered, on the same day as in previous years, numerous software updates across a wide range of projects.

This year's iteration includes software that spans 23 projects and represents over 18 million lines of code. Highlights of the release include the new p2 provisioning platform, new Equinox security features, new Ecore modeling tools, and support for service-oriented architecture (SOA).

Now that the Eclipse Foundation has proven its mettle with delivery of consistent and complete packages of downloads -- now's the time to take this puppy to the cloud. I'd like to see more integration between Eclipse products and cloud-based development, integration and deployment services. And I'm not alone on these wants, no siree.

Amazon Web Services has proven the demand and acceptance. A modern IDE needs the cloud hand-offs and the test and real-world performance proofing that cloud and platform as a service (PaaS) are now offering. How about a hybrid model where the IDE remains local but more application lifecycle management and test and debug features come as services?

How about integration between such a hybrid model and then associated ease to choose among a variety of cloud deployment partners and models? Build, test, and deploy across many providers and models, all close in the bosom of Eclipse. All supported by the community. We could call it Eclipse Cloud Services (ECS). I'm in.

Well, until IBM figures that out, here are the latest and greatest on earth-bound Eclipse. Key features of the release for SOA support include:
  • SCA Designer, which provides a graphical interface for developers who wish to create composite applications using the SCA 1.0 standard.

  • Policy Editor, a collection of editors and validators that makes it easy for developers to construct and manipulate XML expressions that conform to the WS-Policy W3C standard.

  • Business process modeling notation (BPMN) Editor that allows consumers to construct and extend the BPMN 1.1 standard notation to illustrate business processes.
For Equinox and runtime projects:
  • A new provisioning system, called p2, makes it easier for Eclipse users to install and update Eclipse.

  • New security features, including a preferences-like storage for sensitive data such as passwords and login credentials and the ability to easily use the Java authentication service (JAAS) in Equinox.

  • Rich Ajax Platform (RAP) 1.1, with new features, including the ability to customize the look and feel with Presentation Factories and CSS and the ability to store application state information on a per user basis.

  • The Eclipse Communication Framework (ECF), with real-time shared editing and other communications features to allow developers to communicate and collaborate from within Eclipse.
Developer tools include:
  • A new JavaScript IDE, called JSDT, provides the same level of support for JavaScript as the JDT provides for Java. New features include code completion, quick fix, formatting and validation.

  • The Business Intelligence and Reporting Tools (BIRT) project now provides an improved JavaScript editor and a new JavaScript debugger for debugging report event handlers.

  • DTP has added a new graphical SQL query editor, called the SQL Query Builder, and improved usability of connection profile creation and management for users and adopters/extenders.
More information on all of the new features can be found at the Ganymede Web site.

The idea behind the yearly release train, according to the Eclipse Foundation, is to provide predictability and reliability for developers in an effort to promote commercial adoption of the Eclipse community's projects.

ZDNet blogger Ed Burnette details how Eclipse maneuvered pieces of the new release out to mirror sites in an attempt to avoid the type of logjam created when the new Firefox went live recently. Apparently, it was only partly successful, although, according to Ed, things have since smoothed out.

Ganymede, named after one of the moons of Jupiter, follows the previous releases, Callisto and Europa, also named for moons of Jupiter. Last year's Europa release train encompassed 21 projects, while Callisto, the 2006 release, included only 10.

Ganymede is available for download, in one of seven packages, on the Eclipse Web site.

Saturday, June 14, 2008

Kapow takes a jab at the challenge of creating mashups from JavaScript and AJAX sites

Kapow Technologies, whose solutions help companies assemble mashups by harvesting and managing data from across the Web, has enhanced its approach to overcome the obstacle many businesses encounter when targeting sources with dynamic JavaScript and AJAX.

The Palo Alto, Calif. company's Kapow Mashup Server 6.4, which it unveiled this week, features extended JavaScript handling, a response to the burgeoning number of AJAX-based Web sites. [Disclosure: Kapow Technologies is a sponsor of BriefingsDirect podcasts.]

The Web 2.0 Edition, one of four editions of the new Mashup Server, now includes support for Web Application Description Language (WADL), making it easier for applications and mashup-building tools to discover and consume REST services. The WADL support also helps developers leverage the Kapow Excel Connector, an Excel plug-in provided by StrikeIron.
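For readers unfamiliar with WADL, it is an XML vocabulary for describing REST services so that tools can discover and consume them mechanically. As a hedged illustration (the base URL and resource path below are invented, and the namespace reflects the WADL draft current at the time), a minimal description of a single GET resource might look like this:

```xml
<application xmlns="http://research.sun.com/wadl/2006/10"
             xmlns:xsd="http://www.w3.org/2001/XMLSchema">
  <resources base="http://example.com/services/">
    <!-- One addressable resource with a templated id parameter -->
    <resource path="customers/{id}">
      <param name="id" style="template" type="xsd:string"/>
      <method name="GET">
        <response>
          <!-- The representation a consuming tool can expect back -->
          <representation mediaType="application/xml"/>
        </response>
      </method>
    </resource>
  </resources>
</application>
```

A mashup-building tool that understands this format can enumerate the resources, methods and parameters without any out-of-band documentation, which is the discovery benefit Kapow is targeting.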

The Portal Content Edition, which enables companies to refurbish existing portal assets, adds several enhancements to the Web clipping technology for developing and deploying JSR-168 standards-based portlets. It now allows on-the-fly changes to clipping portlets that enhance portal functionality, and adds a portlet deployment mechanism for major portal platforms such as IBM WebSphere, Oracle Portal and BEA WebLogic.

Last January, I did a podcast with Stefan Andreasen, founder and CTO of Kapow. Andreasen described the mashup landscape. You can listen to the podcast here or read the full transcript here. I also blogged last April about Kapow's Web-to-spreadsheet service. At that time, I said:

Despite a huge and growing amount of “webby” online data and content, capturing and defining that data and then making it available to users and processes has proven difficult, due to differing formats and data structures. The usual recourse is manual intervention, and oftentimes cut-and-paste chores. IT departments are not too keen on such chores.

But Kapow’s OnDemand approach provides access to the underlying data sources and services to be mashed up and uses a Robot Designer to construct custom Web harvesting feeds and services in a flexible role-based execution runtime. Additionally, associated tools allow for monitoring and managing a portfolio of services and feeds, all as a service.

In addition to the Web 2.0 Edition and the Portal Content Edition, the Kapow Mashup Server is also available in the Data Collection Edition and the OnDemand Edition.

All editions are available now. More information can be found on the Kapow Web site. Product pricing is based on a flexible subscription offering.