Thursday, August 26, 2010

Trio of cloud companies collaborate on new private cloud platform offerings

A trio of cloud ecosystem companies have collaborated to offer an integrated technology platform that aims to deliver a swift on-ramp to private and hybrid cloud computing models in the enterprise.

newScale, rPath and Eucalyptus Systems are combining their individual technology strengths in a one-two-three punch that promises to help businesses pump up their IT agility through cloud computing. [Disclosure: rPath is a sponsor of BriefingsDirect podcasts.]

The companies will work with integration services provider MomentumSI to deliver this enterprise-ready platform, which integrates private and hybrid cloud infrastructure with enterprise IT self-service and systems automation.

No cloud-in-a-box

From my perspective, cloud solutions won’t come in a box, nor are traditional internal IT technologies and skills apt to seamlessly spin up mission-ready cloud services. Neither are cloud providers so far able to provide custom or "shrink-wrapped" offerings that conform to a specific enterprise’s situation and needs. That leaves a practical void, and therefore an opportunity, in the market.

This trio of companies is betting that demand for self-service private and hybrid cloud computing will continue to surge as companies press IT departments to deliver the kind of on-demand infrastructure services readily available from public clouds like Amazon EC2. But many IT organizations aren’t ready to make the leap; they don’t have the infrastructure or process maturity to transition to the public cloud. That’s where the new solution comes in.

Incidentally, you should soon expect similar cloud starter packages of technology and services, including SaaS management capabilities, from a variety of vendors and partnerships. Indeed, next week's VMworld conference should be rife with such news.

The short list of packaged private cloud providers includes VMware, Citrix, TIBCO, Microsoft, HP, IBM, Red Hat, WSO2, RightScale, Rackspace, Progress Software, Platform Computing and Oracle/Sun. Who else would you add to the list? [Disclosure: HP, Progress, Platform Computing and WSO2 are sponsors of BriefingsDirect podcasts].

Well, Red Hat, for one, this week preempted VMworld with news of its own path to private cloud offerings, saying that only Red Hat and Microsoft can offer the full cloud lifecycle of parts and maintenance. That may be a stretch, but Red Hat likes to be bold in its marketing.

Behind the scenes

Here’s how the newScale, rPath and Eucalyptus Systems collaboration looks under the hood. newScale, which provides self-service IT storefronts, brings its e-commerce ordering experience to the table. newScale’s software lets IT run on-demand provisioning, enforce policy-based controls, manage lifecycle workloads and track usage for billing.

rPath will chip in its technologies for automating system development and maintenance. With rPath in the mix, the platform can automate system construction, maintenance, and on-demand image generation for deployment across physical, virtual and cloud environments.

For its part, Eucalyptus Systems, an open source private cloud software developer, will offer infrastructure software that helps organizations deploy massively scalable private and hybrid cloud computing environments securely. MomentumSI comes in on the back end to deliver the solution.

It's hard to imagine that full private and/or hybrid clouds are fully ready from any single vendor. And who would want that, given the inherent risk of lock-in a one-stop cloud shop would entail? Best-of-breed and open source components work just as well for cloud as for traditional IT infrastructure approaches. Server, storage and network virtualization may make the ecosystem approach even more practical and cost-efficient for private clouds. Pervasive and complete management and governance are the real keys.

My take is that ecosystem-based solutions are the first, best way that many organizations will actually use and deploy cloud services. The technology value triumvirate of newScale, rPath and Eucalyptus—with the solutions practice experience of MomentumSI—is an excellent example of the ecosystem approach most likely to become the way private cloud models actually work for enterprises over the next few years.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, August 17, 2010

Modern data centers require efficiency-oriented changes in networking with eye on simplicity, automation

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Special Offer: Gain insight into best practices for transforming your data center by downloading three whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

As data center planners seek to improve performance and future-proof their investments, the networking leg on the infrastructure stool can no longer stand apart. Advances such as widespread virtualization, increased modularity, converged infrastructure, and cloud computing are all forcing a rethinking of data center design.

And so the old rules of networking need to change, because specialized, labor-intensive and homogeneous networking systems need to be brought into the total modern data center architecture. Networking's increasingly essential role in data center transformation (DCT) means it must stop being a speed bump and instead cut complexity while spurring adaptability and flexibility.

Networking must be better architected within -- and not bolted onto -- the DCT future. The networking-inclusive total architecture needs to accommodate the total usage patterns and requirements of both today and tomorrow -- and with an emphasis on openness, security, flexibility, and sustainability.

To learn more about how networking is changing, and how organizations can better architect networking into their data centers' future, BriefingsDirect assembled two executives from HP: Helen Tang, Worldwide Data Center Transformation Solutions Lead, and Jay Mellman, Senior Director of Product Marketing in the HP Networking Unit. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Tang: As we all know, in 2010 most IT organizations are wrestling with the three Cs -- reducing cost, reducing complexity, and tackling the problem of hitting the capacity wall from a space and energy perspective.

The reason it's happening is that IT is really stuck between two different forces. One is the decades of aging architecture, infrastructure, and facilities they have inherited. The other is that the business is demanding ever-faster services and better improvements in IT's ability to meet requirements.

The confluence of those forces has really driven IT to ... a series of integrated data center projects and technology initiatives that can take them from this old, inherited architecture to an architecture that’s suited for tomorrow’s growth.

DCT ... includes four things: consolidation, whether of infrastructure, facilities or applications; virtualization and automation; continuity and sustainability, which address the energy efficiency aspect, as well as business continuity and disaster recovery; and last, but not least, converged infrastructure.

Networking involves common problems, solutions

Networking actually plays in all these areas, because it is the connective tissue that enables IT to deliver services to the business. It's very critical. In the past this market has been largely dominated by perhaps one vendor. That’s led to a challenge for customers, as they address the cost and complexity of this piece.

[With DCT] we've seen just tremendous cost reduction across the board. At HP, when we did our own DCT, we were able to save over a billion dollars a year. For some of our other customers, France Telecom for example, it was €22 million in savings over three years -- and it just goes on and on, both from an energy cost reduction, as well as the overall IT operational cost reductions.

Mellman: Today’s architecture is very rigid in the networking space. It's very complex with lots of specialized people and specialized knowledge. It's very costly and, most importantly, it really doesn’t adapt to change.

The kind of change we see, as customers are able to move virtual machines around, is exactly the kind of thing we need in networking and don’t have. So there has been a dramatic change in what's demanded of networking in a data center context.

Within the last couple of years ... customers were telling us that there were so many changes happening in their environments, both at the edge of the network, but also in the data center, that they felt like they needed a new approach.

Look at the changes that have happened in the data center just in the last couple of years -- the rise of virtualization and being able to actually take advantage of that effectively, the pressures on time to market in alignment with the business, and the increasing risk from security and the increasing need for compliance.

Rapid rise in network connections


For example, there's the sheer number of connections, as we went from single large servers to multiple racks of servers, and to multiple virtual machines for services -- all of which need connectivity. We have different management constructs between servers, storage, and networking ... that have been very difficult to deal with.

Tie all these together, and HP felt this is the right time [for a change]. The other thing is that these are problems that are being raised in the networking space, but they have direct linkage to how you would best solve the problem.

We've been in the business for 25 to 30 years and we are successfully the number two vendor in the industry selling primarily at the edge. ... We can now do a better job because we can actually bring the right engineering talent together and solve [networking bottlenecks] in an appropriate way. That balances the networking needs with what we can do with servers, what we can do with storage, with software, with security and with power and cooling, because often times, the solution may be 90 percent networking, but it involves other pieces as well.

There are opportunities where we go from more than 210 different networking components required to serve a certain problem down to two modules. You can kind of see that's a combination of consolidation, convergence, cost reduction, and simplicity, all coming together.

We saw a real requirement from customers to come in and help them create more flexibility, drive risk down, improve time to service and take cost out of the system, so that we are not spending so much on maintenance and operation, and we can put that to more innovation and driving the business forward.

Need for simplicity that begets automation


A couple of key rules drive this. The first is simplicity: the job of a network admin needs to be made as simple, and as automated and orchestrated, as the jobs of SysAdmins or SAN Admins today.

The second is that we want to align networking more fully with the rest of the infrastructure, so that we can help customers deliver the service they need when they need it, to users in the way that they need it. That alignment is just a new model in the networking space.

Finally, we want to drive open systems, first of all because customers really appreciate that. They want standards and they want to have the ability to negotiate appropriately, and have the vendors compete on features, not on lock-in.

Open standards also allow customers to pick and choose different pieces of the architecture that work for them at different points in time. That allows them, even if they are going to work completely with HP, the flexibility and the feeling that we are not locking them in. What happens when we focus on open systems is that we increase innovation and we drive cost out of the system.

What we see are pressures in the data center, because of virtualization, business pressures, and rigidity, giving us an opportunity to come in with a value proposition that really mirrors what we’ve done for 25 years, which is to think about agility, to think about alignment with the rest of IT, and to think about openness and really bringing that to the networking arena for the first time.

For example, we have a product called Virtual Connect, which has a management concept called Virtual Connect Enterprise Manager. It allows the networking team and the server teams to work off the same pool of data. Once the networking team allocates connectivity, the server team can work within that pool, without having to always go back to the networking team and ask for the latest new IP address and new configurations.

HP is really focused on how we bring the power of that orchestration, and the power of what we know about management, to allow these teams to work together without requiring them, in a sense, to speak the same language, when that’s often the most difficult thing that they have to do.

When we look at agility and the ability to improve time-to-service, we are often seeing an order of magnitude or even two orders of magnitude [improvement] -- taking a rollout process that might take months and turning it into hours or days.

With that kind of flexibility, you avoid the silos, not necessarily just in technology, but in the departments, as requests flow from the server and storage teams to the networking team. So, there are huge improvements there, if we look at automation and risk. I also include security here.

It's very critical, as part of these, that security be embedded in what we're doing, and the network is a great agent for that. In terms of the kinds of automation, we can offer single panes of glass to understand the service delivery and very quickly be able to look at not only what's going on in a silo, but look at actual flows that are happening, so that we can actually reduce the risk associated with delivering the services.

Cost cuts justify the shift


Finally, in terms of cost, we're seeing -- at the networking level specifically -- reductions on the order of 30 percent to as high as 65 percent by moving to these new types of architectures and new types of approaches, specifically at the server edge, where we deal with virtualization.

HP has been recognizing that customers are increasingly not being judged on the quality of an individual silo. They're being judged on their ability to deliver service, do that at a healthy cost point, and do that as the business needs it. That means that we've had to take an approach that is much more flexible. It's under our banner of FlexFabric.

Tang: The traditional silos between servers and storage and networking are finally coming down. Technology has come to an inflection point. We're able to deliver a single integrated system, where everything can be managed as a whole that delivers incredible simplicity and automation as well as significant reduction in the cost of ownership.

[To learn more] a good place to go is www.hp.com/go/dct. That’s got all kinds of case studies, video testimonials, and all those resources for you to see what other customers are doing. The Data Center Transformation Experience Workshop is a very valuable experience.

Mellman: There are quite a few vendors out there who are saying that the future is all about cloud and the future is all about virtualization. That ignores the fact that the lion's share of what's in a data center still needs to be kept.

You want an architecture that supports that level of heterogeneity and may support different kinds of architectural precepts, depending on the type of business, the types of applications, and the type of pressures on that particular piece.

What HP has done is try to get a handle on what that future is going to look like without prescribing that it has to be a particular way. We want to understand where these points of heterogeneity will be and what will be able to be delivered by a private cloud, public cloud, or by more traditional methods, bring those together, and then net it down to architectural things that make sense.

We realize that there will be a high degree of virtualization happening at the server edge, but there will also be a high degree of physical servers, especially for some big apps that may not be virtualized for a long time -- Oracle, SAP, some of the Microsoft things. Even when they are, they are going to be done with potentially different virtualization technologies.

Physical and virtual

Even with a product like Virtual Connect, we want to make sure that we are supporting both physical and virtual server capabilities. With our Converged Network Adapters, we want to support all potential networking connectivity, whether it’s Fibre Channel, iSCSI, Fibre Channel over Ethernet or other server and data technologies, so that we don’t have to lock customers into a particular point of view.

We recognize that most data centers are going to be fairly heterogeneous for quite a long time. So, the building blocks that we have, built on openness and built on being managed and secure, are designed to be flexible in terms of how a customer wants to architect.

It’s best to have the customer just step back and say, "Where is my biggest pain point?" The nice thing with open systems is that you can generally address one of those, try it out, and start on that path. Start with a small workable project and get a good migration path toward full transformation.
Special Offer: Gain insight into best practices for transforming your data center by downloading three whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


HP buys Fortify, and it's about time!

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

What took HP so long? Store that thought.

As we’ve stated previously, security is one of those things that has become everybody’s business. It was traditionally the role of security professionals, who focused more on perimeter security. But the exposure of enterprise apps, processes, and services to the Internet opens huge back doors that developers unwittingly leave open to buffer overflows, SQL injection, cross-site scripting, and you name it. Security was never part of the computer science curriculum.

But as we noted when IBM Rational acquired Ounce Labs, developers need help. They will need to become more aware of security issues but realistically cannot be expected to become experts. Otherwise, developers are caught between a rock and a hard place – the pressures of software delivery require skills like speed and agility, and a discipline of continuous integration, while security requires the mental processes of chess players.

At this point, most development/ALM tools vendors have not actively pursued this additional aspect of quality assurance (QA); there are a number of point tools in the wild that may not necessarily be integrated. The exceptions are IBM Rational and HP, which have been in an arms race to incorporate this discipline into QA. Both have so-called “black box” testing capabilities via acquisition – where you throw ethical hacks at the problem and then figure out where the soft spots are. It’s the security equivalent of functionality testing.

Raising the ante

Last year IBM Rational raised the ante with its acquisition of Ounce Labs, providing “white box” static scans of code – in essence, applying debugger-type approaches. Ideally, the two should be complementary – just as you debug and then dynamically test code for bugs, do the same for security: white box static scan, then black box hacking test.
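
To make the white box idea concrete, here is the classic kind of defect a static scanner traces from tainted input to a database sink. This is a hypothetical Python snippet for illustration only -- not Fortify's actual rules or output:

    import sqlite3

    def find_user_unsafe(conn, username):
        # Vulnerable: attacker-controlled input is concatenated into SQL.
        # Passing username = "x' OR '1'='1" returns every row -- the kind
        # of tainted data flow a white box scanner flags.
        query = "SELECT id, email FROM users WHERE name = '" + username + "'"
        return conn.execute(query).fetchall()

    def find_user_safe(conn, username):
        # Fixed: a parameterized query keeps user data out of the SQL parse.
        return conn.execute(
            "SELECT id, email FROM users WHERE name = ?", (username,)
        ).fetchall()

A black box tool, by contrast, would probe the running app with inputs like the one above and watch how it responds.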

Over the past year, HP and Fortify have been in a mating dance as HP pulled its DevInspect product (an also-ran to Fortify’s offering) and began jointly marketing Fortify’s SCA product as HP’s white box security testing offering. In addition to generating the tests, Fortify's SCA manages this stage as a workflow, and with integration to HP Quality Center, autopopulates defect tracking. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We’ll save discussion of Fortify’s methodology for some other time, but suffice it to say that it was previously part of HP’s plans to integrate security issue tracking as part of its Assessment Management Platform (AMP), which provides a higher level dashboard focused on managing policy and compliance, vulnerability and risk management, distributed scanning operations, and alerting thresholds.

In our mind, we wondered what took HP so long to consummate this deal. Admittedly, while the software business unit has grown under now-departed CEO Mark Hurd, it remains a small fraction of the company’s overall business. And with the company’s direction of “Converged Infrastructure”, its resources are heavily preoccupied with digesting Palm and 3Com (not to mention EDS).

The software group therefore didn’t have a blank check, and given Fortify’s 750-strong global client base, we don’t think that the company was going to come cheap (the acquisition price was not disclosed). With the mating ritual having predated IBM’s Ounce acquisition last year, buying Fortify was just a matter of time. At least a management interregnum didn’t stall it.

Finally!

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.


Friday, August 13, 2010

Google needs to know: What does Oracle really want with Android?

The bombshell that Oracle is suing Google over Java intellectual property in mobile platform powerhouse Android came as a surprise, but in hindsight it shouldn't have been.

We must look at the world through the lens that all guns are pointed at Google, and that means that any means to temper its interests and blunt its potential influence are in play and will be used.

By going for Android -- the second of Google's only two fiscal jugular veins (the other being paid search ads) -- Oracle has mightily disrupted the entire mobile world, and potentially the full computing client market. By asking for an injunction against Android based on Java patent and copyright violations, Oracle has caused a huge and immediate customer, carrier and handset channel storm for Google. Talk about FUD!

Could Oracle extend its injunction requests to handset makers and, more disruptively, to mobile carriers, developers, or even end users? Don't know, but the uncertainty means a ticking bomb for the entire Android community. Oracle's suits therefore can't linger. Time is on Oracle's side right now. Even Google counter-suing does not stop the market pain and uncertainty from escalating.

We saw how that pain works when RIM suffered intellectual property claims against its BlackBerrys and was up against a court-ordered injunction wall. Fair or not, right or not, RIM had to settle and pay to keep the product and its market cap in the right motion. And speed was essential, because investors are watching, wondering, worrying. Indeed, RIM should have caved sooner. That's the market-driven, short-term "time is not on our side" of Google's dilemma with Oracle's Java.

When Microsoft had to settle with Sun Microsystems over similar Java purity and license complaints a decade back, it was a long and drawn out affair, but the legal tide seemed to be turning against Microsoft. So Microsoft settled. That's the legal-driven, long-term "time is not on our side" of Google's dilemma with Oracle's Java.

Google is clearly in a tough spot. And so we need to know: What does Oracle really want with Android?

Not about the money

RIM's aggressors wanted money and got it. Sun needed money too (snarky smugness aside), and so took the loot from Microsoft and made it through yet another fiscal quarter. But Oracle doesn't need the money. Oracle will want quite something else in order for the legal Java cloud over Android to go away.

Oracle will probably want a piece of the action. But will Oracle be an Android spoiler ... and just work to sabotage Android for license fees as HP's WebOS, Apple's iOS and Microsoft's mobile efforts continue to gain in the next huge global computing market -- that is, mobile and thin PC clients?

Or, will Oracle instead fall deeply, compulsively in love with Android ... Sort of a Phantom of the Opera (you can see Larry with the little mask already, no?), swooping down on the sweet music Google has been making with Android, intent on making that music its own, controlled from its own nether chambers, albeit with a darker enterprise pitch and tone. Bring in heavy organ music, please.

Chances are that Oracle covets Android, believes its teachings through Java technology (the angel of class libraries) entitle it to a significant if not controlling interest, and will hold dear Christine ... err, Android, hostage unless the opera goes on the way Oracle wants it to (with license payments all along the way). Bring in organ music again, please.

Trouble is, this phantom will not let his love interest be swept safely back into the arms of Verizon, HTC, Motorola and Samsung. Google will probably have to find a way to make music with Oracle on Android for a long time. And they will need to do the deal quickly and quietly, just like Salesforce.com and Microsoft recently did.

What, me worry?

How did Google let this happen? It's not just a talented young girl dreaming of nightly rose-strewn encores, is it?

Google's mistake is that it has acted like a runaway dog in a nighttime meat factory, with its fangs into everything but with very little fully ingested (apologies to Steve Mills for usurping his analogy). In stepping on every conceivable competitor's (and partner's) toes with hubristic zeal -- yet only having solid success and market domination in a very few areas -- Google has made itself vulnerable with its newest and extremely important success with Android.

Did Google do all the legal blocking and tackling? Maybe it was a beta legal review? Did the Oracle buy of Sun catch it off-guard? Will that matter when market perceptions and disruption are the real leverage? And who are Google's friends now when it needs them? They are probably enjoying the opera from the 5th box.

Android is clearly Google's next new big business, with prospects of app stores, legions of devoted developers, myriad partners on the software and devices side, globally pervasive channels through the mobile carriers, and the potential to extend the same into the tablets and even "fit" PCs arena. Wow, sounds a lot like what Java could have been, what iOS is, and what WebOS wants to be.

And so this tragic and ironic double-cross -- Java coming back to stab Google in the heart -- delivers like an aria, one that is sweet music mostly to HP, Apple, and Microsoft. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

[UPDATE: The stakes may spread far beyond the mobile market into the very future of Java. Or so says Forrester analyst Jeffrey Hammond, who argues that, in light of Oracle’s plans to sue Google over Android, “…this lawsuit casts the die on Java’s future."

"Java will be a slow-evolving legacy technology. Oracle’s lawsuit links deep innovation in Java with license fees. That will kill deep innovation in Java by anyone outside of Oracle or startups hoping to sell out to Oracle. Software innovation just doesn’t do well in the kind of environment Oracle just created," said Hammond.]


Wednesday, August 11, 2010

Metastorm seeks to accelerate the strategic socialization of the enterprise for process improvement

Metastorm, the business process management (BPM) software provider that recently released two cloud-based business collaboration products, is betting on what it calls the "socialization of the enterprise."

We're seeing more social media techniques and approaches entering the enterprise, from Salesforce.com's Chatter to the forthcoming beta of HP's 48Upper. The trend is undeniable. A recent Trend Micro survey reveals social media use in the workplace has risen from 19 percent to 24 percent in the last two years.

Strategies to resist the socialization of the enterprise may be futile. So Metastorm is suggesting enterprises embrace it, using tools that foster rather than squash social productivity in the workplace.

Part of that process is moving away from standalone products like Yammer and Socialtext and integrating social capabilities, profiles and collaboration with a richer enterprise experience, according to Laura Mooney, vice president of corporate communications at Metastorm, maker of Smart Business Workspace, a rich internet application that aims to empower knowledge workers to become more engaged and productive.

BriefingsDirect caught up with Mooney to discuss the issues around social enterprises.

BriefingsDirect: What’s your perspective on the business trend toward social enterprises?

Mooney: Companies don’t necessarily want to move away from stand-alone tools, but stand-alone tools are not necessarily well-integrated into the day-to-day operations and activities that employees are engaged in from a decision-making perspective.

As people got used to the instant ability to collaborate in their social lives using social networking capabilities, we discovered they wanted that same experience in the office environment in a way that would add business value. By tying social capabilities into the BPM foundation their work is already running on, employees can initiate that collaboration where it makes sense.

Metastorm focuses on helping organizations -- the people within the company -- map out their strategy, understand the way different components of their business interoperate and overlap, and then automate and execute business processes and try to improve those processes on a day-to-day basis.

BriefingsDirect: Do tools like Facebook have a place in the enterprise from a productivity perspective?

Mooney: At work, Facebook is really not applicable to what I’m doing. But within this business process modeling tool, I have the ability to invite people that I can see online to participate in a process review session online, so we can all look at the same model and we can annotate, draw on it, and share it and get feedback. In that way, this is very meaningful to my day-to-day job.

Rather than getting on the phone or scheduling a conference call, trying to create a WebEx, and then trying to keep track of what it was we talked about, all of that would be captured.

It becomes useful also for audit purposes because a lot of companies can’t just change core business processes without some sort of audit trail. Having that audit ability is important from a business perspective versus random social networking. Social media is not necessarily trackable.

BriefingsDirect: Do you have any insight into the customer demand that’s driving these traditional software vendors into the social enterprise world?

Mooney: It has to do with companies being so virtualized these days, especially the large organizations. Not only do they have multiple offices in different locations and most likely different countries, but there’s a shift toward telecommuting so everyone is not necessarily in the office at the same time. Knowing that these technologies exist, there is this effort to figure out how to adapt this for a distributed business environment to increase the productivity and effectiveness of employees.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, August 10, 2010

CollabNet rolls out trio of cloud ALM offerings with focus on Agile and governance benefits

In an aggressive move to drive Agile software deeper into the enterprise, CollabNet rolled out a trio of new offerings today at the Agile 2010 Conference.

CollabNet introduced version 5.4 of the CollabNet TeamForge application lifecycle management (ALM) platform, a TeamForge licensing option, and CollabNet Subversion Edge 1.1. Together with the recently released CollabNet TeamForge to ScrumWorks Pro Integration, the company is promising enterprises more flexibility to adopt Agile software development methods in the cloud.

“The products we’re introducing today enable organizations of any size, with developers located anywhere around the world, to realize breakthrough governance and innovation benefits while adopting Agile development methods at a pace that suits their business cycles, technical objectives, and team requirements,” says CollabNet CEO Bill Portelli.


Flagship product enhancements


Portelli says the tools and processes -- using any development methodology and technology -- can boost productivity by up to 50 percent and reduce the cost of software development by 80 percent.

Part of the promise depends on the latest version of CollabNet’s flagship product, the TeamForge ALM Platform. Version 5.4 is optimized for Agile teams and continuous integration. Some of the new features include dynamic planning improvements, such as drag-and-drop sequencing of backlog items and direct links between planning folders and file releases. The company says this makes it easier to implement Agile projects.

TeamForge ALM version 5.4 also offers new personalization features that let users manipulate data in ways that best suit their needs and save their settings as their default view. And reporting enhancements, like the ability to embed dynamic charts directly within project pages, aim to make it easier to see release status at a glance.

CollabNet TeamForge ALM is $4,995 for the first 25 users and $749 per additional user, per year.

New licensing option

CollabNet also offers more flexibility with a TeamForge SCM licensing option. The new option promises the collaboration, enterprise-wide governance, and centralized management capabilities of the TeamForge platform to organizations that use Subversion for source code management.

According to the company, the new licensing option saves money for organizations that don’t need features like artifact tracking, task management, and document sharing. The new licensing option also adds centralized role-based access control, project workspaces, tools like wikis and discussion forums, and the secure delegation of repository administration to project teams. CollabNet TeamForge SCM is $2,995 for the first 25 users and $289 per additional user, per year.

Finally, CollabNet Subversion Edge is coming out of beta as a free, open-source download. Subversion Edge is a certified stack that combines Subversion, the Apache Web server, and ViewVC with a Web-based management interface that streamlines installation, administration, use, and governance of the entire software stack. Subversion Edge also offers an auto-update feature.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, August 6, 2010

Cloud computing's ultimate value depends on open PaaS models to avoid applications and data lock-in

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WSO2.

As enterprises examine the use of cloud computing for core IT functions, how can they protect themselves against service provider lock-in, ensure openness and portability of applications and data, and foster a true marketplace among cloud providers?

Indeed, this burning question about the value and utility of cloud computing centers on whether applications and data can move with relative ease from cloud to cloud -- that is, across so-called public- and private-cloud divides, and among and between various public cloud providers.

Get the free
"Cloud Lock-In Prevention Checklist"
here.

For enterprises to determine the true value of cloud models -- and to ascertain if their cost and productivity improvements will be sufficient to overcome the disruptive shift to cloud computing -- they really must know the actual degree of what I call "application fungibility."

Fungible means being able to move in and out of like systems or processes. But what of modern IT applications? Fungible applications could avoid the prospect of swapping on-premises platform lock-in for some sort of cloud-based service provider lock-in and, perhaps over time, prevent being held hostage to arbitrary and rising cloud prices.

Application fungibility would, I believe, create a real marketplace for cloud services, something very much in the best interest of enterprises, small and medium businesses (SMBs), independent software vendors (ISVs), and developers.

In this latest BriefingsDirect podcast discussion, we examine how enterprises and developers should be considering the concept of application fungibility, both in terms of technical enablers and standards for cloud computing, and also consider how to craft the proper service-level agreements (SLAs) to promote fungibility of their applications.

Here to explore how application fungibility can bring efficiency and ensure freedom of successful cloud computing, we're joined by Paul Fremantle, Chief Technology Officer and Co-Founder at WSO2, and Miko Matsumura, author of SOA Adoption for Dummies and an influential blogger and thought leader on cloud computing subjects. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Matsumura: Fungibility is very, very critical, and one thing I want to emphasize is that the fungibility level of current solutions is very low.

... The economics of upscaling and downscaling as a utility is very attractive. Obviously, there are a lot of reasons why people would start moving into the cloud, but the thing that we're talking about today with this fungibility factor is not so much why would you start using cloud, but really what is the endgame for successful applications.

The area where we are specifically concerned is when the application is more successful than in your wildest dreams. Now, in some ways what it creates is almost an unprecedented leverage point for the supplier. If you're locked in to a very high-transactional, high-value application, at that point, if you have no flexibility or fungibility, you're pretty much stuck. The history of the pricing power of the vendor could be replicated in cloud and potentially could be even more significant.

... The things to look at in the cloud world are who are the emergent dominant players and will Amazon and Google or one of these players start to behave as an economic bully? Right now, since we're in the early days of cloud, I don't think that people are feeling the potential for domination.

But people who are thinking ahead to the endgame are pretty clear that that power will emerge because any rational, publicly traded company will maximize its shareholder value by applying any available leverage. If you have leverage against the customer, that produces very benevolent looking quarterly returns.

Fremantle: People are building apps in a month, a week, or even a day, and they need to be hosted. The enterprise infrastructure team, unfortunately, hasn’t been able to keep up with those productivity gains.

Now, people are saying, "I just want to host it." So, they go to Amazon, Rackspace, ElasticHosts, Joyent, whoever their provider is, and they just jump on that and say, "Here is my credit card, and there is a host to deploy my app on."

The problem comes when, exactly as Miko said, that app is now going to grow. And in some cases, they're going to end up with very large bills to that provider and no obvious way out of that.

You could say that the answer to that is that we need cloud standards, and there have been a number of initiatives to come up with standard cloud management application programming interfaces (APIs) that would, in theory, solve this. Unfortunately, there are some challenges to that, one of which is that not every cloud has the same underlying infrastructure.

Take Amazon, for example. It has its own interesting storage models. It has a whole set of APIs that are particularly specific to Amazon. Now, there are a few people who are providing those same APIs -- people like Eucalyptus and Ubuntu -- but it doesn’t mean you can just take your app off of Amazon and put it onto Rackspace, unfortunately, without a significant amount of work.
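
To see what that EC2 API compatibility does and doesn't buy, consider a minimal sketch using the Python boto library of that era. The Eucalyptus endpoint, port, and credentials here are hypothetical placeholders, not a tested recipe:

    from boto.ec2.connection import EC2Connection

    # Same client library, same calls; only the endpoint changes.
    ec2 = EC2Connection('ACCESS_KEY', 'SECRET_KEY')  # defaults to Amazon EC2

    euca = EC2Connection('ACCESS_KEY', 'SECRET_KEY',
                         host='cloud.example.internal',  # private Eucalyptus front end
                         port=8773,
                         path='/services/Eucalyptus',
                         is_secure=False)

    # Management operations carry over unchanged...
    for image in euca.get_all_images():
        print(image.id)

    # ...but the application itself may not: storage services, image
    # formats and machine image IDs (ami-* vs. emi-*) still differ.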

No way out

As we go up the scale into what's now being termed as platform as a service (PaaS), where people are starting to build higher level abstractions on top of those virtual machines (VMs) and infrastructure, you can get even more locked in.

When people come up with a PaaS, it provides extra functionality, but now it means that instead of just relying on a virtualized hardware, you're now relying on a virtualized middleware, and it becomes absolutely vital that you consider lock-in and don’t just end up trapped on a particular platform.

One of the things that naturally evolved, as a result of the emergence of a common foe, is this principle of unification, openness, and alliance.



Matsumura: From my perspective, to some extent, there already is a cloud marketplace -- but the marketplace radically lacks transparency and efficiency. It's a highly inefficient market.

The thing that's great is, if you look at rational optimization of strategic competitive advantage, [moving to the cloud makes perfect sense.] "My company that makes parts for airplanes is not an expert in keeping PC servers cool and having a raised floor, security, biometric identification, and all kinds of hosting things." So, maybe they outsource that, because that's not any advantage to them.

That's perfectly logical behavior. I want to take this now to a slightly different level, which is, organizations have emergent behavior that's completely irrational. It's comical and in some ways very unfortunate to observe.

In the history of large-scale enterprise computing, there has long been this tension between the business units and the IT department, which is more centralized. The business department is actually the frustrated party, because they have developed the applications in a very short time. The lagging party is actually the IT department.

There is this unfortunate emergent property that the enterprise goes after something that, in the long run, turns out to be very disappointing. But, by the time the disappointment sets in, the business executives who approved this entry point into the cloud are long gone. They've gotten promotions, because their projects worked and they got their business results faster than they would have if they had actually done it the right way and gone through IT.

Hard for IT to compete in short-term

So, it puts central IT into a very uncomfortable position, where they have to provide services that are equal to or better than professionals like Amazon. At the same time, they also have to make sure that, in the long-term interest of the company, these services have the fungibility, protection, reliability, and cost control demanded by procurement.

The question becomes how do you keep your organization from being totally taken advantage of in this kind of situation.

Fremantle: What we are trying to do at WSO2 is exactly to solve that problem through a technical approach, and there are also business approaches that apply to it as well.

The technical approach is that we have a PaaS, and what’s unique about it is that it's offering standard enterprise development models that are truly independent of the underlying cloud infrastructure.

What I mean is that there is this layer, which we call WSO2 Stratos, that can take web applications, web application archive (WAR) files, enterprise service bus (ESB) flows, business process automation (BPA) processes, and things like governance and identity management, and do all of those in standard ways. It runs those in multi-tenant, elastic, cloud-like ways on top of infrastructures like Amazon, as well as private cloud installations like Ubuntu, Eucalyptus and, coming very soon, VMware.

Get the free
"Cloud Lock-In Prevention Checklist"
here.

What we're trying to do is to say that there is a set of open standards, both de facto and de jure standards, for building enterprise applications, and those can be built in such a way that they can be run on this platform -- in public cloud, private cloud, virtual private cloud, hybrid, and so forth.

What we're trying to do there is exactly what we've been talking about. There is a set of ways of building code that don’t tie you into a particular stack very tightly. They don’t tie you into a particular cloud deployment model very tightly, with the result that you really can take this environment, take your code, and deploy it in multiple different cloud situations and really start to build this fungibility. That’s the technical aspect.

One of the things that’s very important in cloud is how you license software like this. As an open source company, we naturally think that open source has a huge benefit here, because it's not just about saying you can run it any way. You need to then be able to take that and not be locked into it.

Our Stratos platform is completely open source under the Apache license, which means that you are free to deploy it on any platform, of any size, and you can choose whether or not to come to WSO2 for support.

We think we're the best people to support you, but we try and prove that every day by winning your business, not by tying you in through the lawyers and through legal and licensing approaches.



Matsumura: As a consumer of cloud, you need to be clear that the will of the partner is always essentially this concept of, "I am going to maximize my future revenue." It applies to all companies.

... The thing that’s fascinating about it is that, when a vendor says "Believe me," you look to the fine print. The fine print in the WSO2 case is the Apache license, which has incredible transparency.

It becomes believable, as a function, being able to look all the way through the code, to be able to look all the way through the license, and to realize, all of a sudden, that you're free. If someone is not being satisfactory in how they're behaving in the relationship, you're free to go.

If you look at APIs, where there is something that isn’t that opaque or isn’t really given to you, then you realize that you are making a long-term commitment, akin to a marriage. That’s when you start to wonder if the other person is able to do you harm and whether that’s their intention in the long run.

Fremantle: What Miko has been trying to politely say is that every vendor, whether it’s WSO2 or not, wants to lock in their customers and get that continued revenue stream.

Now, what’s WSO2's lock-in?

Our lock-in is that we have no lock-in. Our lock-in is that we believe that it's such an enticing, attractive idea, that it's going to keep our customers there for many years to come. We think that’s what entices customers to stay with us, and that’s a really exciting idea.

It's even more exciting in the cloud era. It was interesting in open source, and it was interesting with Java, but what we are seeing with cloud is the potential for lock-in has actually grown. The potential to get locked-in to your provider has gotten significantly higher, because you may be building applications and putting everything in the hands of a single provider; both software and hardware.

There are three layers of lock-in. You can get locked into the hardware. You can get locked into the virtualization. And, you can get locked into the platform. Our value proposition has become twice as valuable, because the lock-in potential has become twice as big.

... You're bound to see in the cloud market a consolidation, because it is all going to become price sensitive, and in price sensitive markets you typically see consolidation.

Two forms of consolidation

What I hope to see is two forms of consolidation. One is people buying up each other, which is the sort of old form. It would be really nice instead to see consolidation in the form of cloud providers banding together to share the same models, the same platforms, the same interfaces, so that there really is fungibility across multiple providers, and that being the alternative to acquisition.

That would be very exciting, because we could see people banding together to provide a portable run-time.

Matsumura: Smart organizations need to understand that it's not any individual's decision to just run off and do the cloud thing, but that it really has to combine enterprise architecture and ... cautionary procurement, in order to harness cloud and to keep the business units from running away in a way that is bad.

The thing that's really critical, though, is when this is going to happen. There is a very tired saying that those who do not understand history are doomed to repeat it. We could spend almost decades in the IT industry just repeating the things of the past by reestablishing these kinds of dominant-vendor, lock-in models.

A lot of it depends on what I call the emergent intelligence of the consumer. The reason I call it emergent intelligence is that it isn’t individual behavior, but organizational behavior. People have this natural tendency to view a company as a human being, and they expect rational behavior from individuals.

Aggregate behavior

But, in the endgame, you start to look at the aggregate behaviors of these very large organizations, and the aggregate behaviors can be extremely foolish. Programs like this help educate the market and optimize the market in such ways that people can think about the future and can look out for their own organizations.

The thing that’s really funny is that people have historically been very bad at understanding exponential growth, exponential curves, exponential costs, and the kind of leverage that they provide to suppliers.

People need to get smart on this fungibility topic. If we're smart, we're going to move to an open and transparent model. That’s going to create a big positive impact for the whole cloud ecosystem, including the suppliers.

Fremantle: It's up to the consumers of cloud to really understand the scenarios and the long-term future of this marketplace, and that’s what's going to drive people to make the right decisions. Those right decisions are going to lead to a fungible commodity marketplace that’s really valuable and enhances our world.

The challenge here is to make sure that people are making the right, educated decisions. I'd really like people, when they choose a cloud solution or build their cloud strategy, to specifically approach and attack the lock-in factor as one of their key decision points. To me, that is one of the key challenges. If people do that, then we're going to get a fair chance.

I don’t care if they find someone else or if they go with us. What I care most about is whether people are making the right decision on the right criteria. Putting lock-in into your criteria is a key measure of how quickly we're going to get into the right world, versus a situation where people end up where vendors and providers have too much leverage over customers.

Get the free
"Cloud Lock-In Prevention Checklist"
here.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WSO2.


Wednesday, August 4, 2010

Revolution Analytics targets R language, platform at growing need to handle 'big data' crunching challenges

Revolution Analytics is working to revolutionize big data analysis with better crunching tools and an updated platform that brings the open source R statistics language to some of the largest data sets.

The company is betting its new big data scalability platform will help R transition from a research and prototyping tool to a production-ready platform for such enterprise applications as quantitative finance and risk management, social media, bioinformatics, and telecommunications data analysis.

The latest version of Revolution R Enterprise comes complete with an add-on package called RevoScaleR, a framework for multi-core processing of large data sets. With RevoScaleR, the company says, it is targeting some of the highest levels of capacity and performance for analyzing big data.

“With RevoScaleR, we’ve focused on making analytical models not just scale to the big data sets, but run the analysis in a fraction of the time compared to traditional systems,” says David Smith, vice president of Community and Marketing at Revolution Analytics. “For example, the FAA publishes a data set that contains every commercial airline take off and landing between 1987 and 2008. That’s more than 13 gigabytes of data. By analyzing that data, we can figure out the likelihood of airline delays in one second.”

A rows-and-columns approach

One second to analyze 13 GB of data should turn some heads, because it takes 300 seconds with traditional methods. Under the hood of RevoScaleR is rapid-fire access to data. For example, RevoScaleR uses XDF, a new binary big data file format with an interface to the R language that offers high-speed access to arbitrary rows, blocks and columns of data.

“The NoSQL movement was all about going from relational databases to a flat file on disk that offers fast access by columns. A lot of the technology behind things like Twitter and Facebook takes this approach,” Smith said. “We’ve taken that one step further to develop a system that accesses the database by rows and columns at the same time, which is really well-attuned to doing these statistical computations.”
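
The row-and-column idea is easy to sketch outside of R. Here is a toy NumPy illustration of out-of-core access to an on-disk binary file -- purely to show the access pattern, not Revolution's actual XDF format:

    import numpy as np

    rows, cols = 100000, 20

    # Write a flat binary file of doubles to disk (row-major layout).
    data = np.memmap('blocks.bin', dtype='float64', mode='w+',
                     shape=(rows, cols))
    data[:] = np.random.rand(rows, cols)
    data.flush()

    # Reopen read-only: neither access below pulls the whole file into RAM.
    mm = np.memmap('blocks.bin', dtype='float64', mode='r',
                   shape=(rows, cols))

    col = mm[:, 3]             # one column, strided reads across all rows
    block = mm[5000:6000, :]   # one contiguous block of rows

    print(col.mean(), block.shape)

An XDF-style format goes further by chunking the file so that both row-block reads and column reads land on contiguous bytes.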

RevoScaleR also relies on a collection of the most common statistical algorithms optimized for big data, including high-performance implementations of summary statistics, linear regression, binomial logistic regression and crosstabs. Data reading and transformation tools let users interactively explore and prepare large data sets for analysis. And extensibility lets expert R users develop and extend their own statistical algorithms.
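
The standard trick behind such out-of-core algorithms is to stream the data in blocks and accumulate sufficient statistics. Here is a minimal Python sketch of that idea for linear regression -- an illustration of the general technique, not RevoScaleR's implementation:

    import numpy as np

    def streaming_linear_regression(chunks):
        # Ordinary least squares over data that never fits in memory.
        # Each chunk is an (X, y) pair; only k-by-k cross-products are
        # kept, so memory use is independent of the total row count.
        XtX, Xty = None, None
        for X, y in chunks:
            if XtX is None:
                k = X.shape[1]
                XtX, Xty = np.zeros((k, k)), np.zeros(k)
            XtX += X.T @ X   # accumulate sufficient statistics
            Xty += X.T @ y
        return np.linalg.solve(XtX, Xty)  # same coefficients as one big fit

    # Usage: any generator of chunks works, e.g. blocks read from disk.
    def make_chunks(n_chunks=100, rows=10000):
        rng = np.random.default_rng(0)
        beta = np.array([2.0, -1.0, 0.5])
        for _ in range(n_chunks):
            X = rng.standard_normal((rows, 3))
            yield X, X @ beta + 0.01 * rng.standard_normal(rows)

    print(streaming_linear_regression(make_chunks()))  # ~ [2.0, -1.0, 0.5]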

Integrating Hadoop

Based on the open-source R technologies, Revolution R Enterprise accordingly plays well with other modern big data architectures. Revolution R Enterprise leverages sources such as Hadoop, NoSQL or key value databases, relational databases, and data warehouses. These products can be used to store, regularize, and do basic manipulation on very large data sets—while Revolution R Enterprise now provides advanced analytics.

“Together, Hadoop and R can store and analyze massive, complex data,” says Saptarshi Guha, developer of the popular RHIPE R package that integrates the Hadoop framework with R in an automatically distributed computing environment. “Employing the new capabilities of Revolution R Enterprise, we will be able to go even further and compute big data regressions and more.”

The new RevoScaleR package will be delivered as part of Revolution R Enterprise 4.0, which will be available for 32- and 64-bit Microsoft Windows in the next 30 days. Support for Red Hat Enterprise Linux (RHEL 5) is planned for later this year.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.