Friday, August 6, 2010

Cloud computing's ultimate value depends on open PaaS models to avoid applications and data lock-in

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: WSO2.

As enterprises examine the use of cloud computing for core IT functions, how can they protect themselves against service provider lock-in, ensure openness and portability of applications and data, and foster a true marketplace among cloud providers?

Indeed, this burning question about the value and utility of cloud computing centers on whether applications and data can move with relative ease from cloud to cloud -- that is, across so-called public- and private-cloud divides, and among and between various public cloud providers.

Get the free "Cloud Lock-In Prevention Checklist" here.

For enterprises to determine the true value of cloud models -- and to ascertain if their cost and productivity improvements will be sufficient to overcome the disruptive shift to cloud computing -- they really must know the actual degree of what I call "application fungibility."

Fungible means being able to move in and out of like systems or processes. But what of modern IT applications? Fungible applications could avoid the prospect of swapping on-premises platform lock-in for some sort of cloud-based service provider lock-in and, perhaps over time, prevent being held hostage to arbitrary and rising cloud prices.

Application fungibility would, I believe, create a real marketplace for cloud services, something very much in the best interest of enterprises, small and medium businesses (SMBs), independent software vendors (ISVs), and developers.

In this latest BriefingsDirect podcast discussion, we examine how enterprises and developers should be considering the concept of application fungibility, both in terms of technical enablers and standards for cloud computing, and also consider how to craft the proper service-level agreements (SLAs) to promote fungibility of their applications.

Here to explore how application fungibility can bring efficiency and ensure freedom of successful cloud computing, we're joined by Paul Fremantle, Chief Technology Officer and Co-Founder at WSO2, and Miko Matsumura, author of SOA Adoption for Dummies and an influential blogger and thought leader on cloud computing subjects. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Matsumura: Fungibility is very, very critical, and one thing I want to emphasize is that the fungibility level of current solutions is very low.

... The economics of upscaling and downscaling as a utility is very attractive. Obviously, there are a lot of reasons why people would start moving into the cloud, but the thing that we're talking about today with this fungibility factor is not so much why would you start using cloud, but really what is the endgame for successful applications.

The area where we are specifically concerned is when the application is more successful than in your wildest dreams. Now, in some ways what it creates is almost an unprecedented leverage point for the supplier. If you're locked in to a very high-transactional, high-value application, at that point, if you have no flexibility or fungibility, you're pretty much stuck. The history of the pricing power of the vendor could be replicated in cloud and potentially could be even more significant.

... The things to look at in the cloud world are who are the emergent dominant players and will Amazon and Google or one of these players start to behave as an economic bully? Right now, since we're in the early days of cloud, I don't think that people are feeling the potential for domination.

But people who are thinking ahead to the endgame are pretty clear that that power will emerge because any rational, publicly traded company will maximize its shareholder value by applying any available leverage. If you have leverage against the customer, that produces very benevolent looking quarterly returns.

Fremantle: People are building apps in a month, a week, or even a day, and they need to be hosted. The enterprise infrastructure team, unfortunately, hasn’t been able to keep up with those productivity gains.

Now, people are saying, "I just want to host it." So, they go to Amazon, Rackspace, ElasticHosts, Joyent, whoever their provider is, and they just jump on that and say, "Here is my credit card, and there is a host to deploy my app on."

The problem comes when, exactly as Miko said, that app is now going to grow. And in some cases, they're going to end up with very large bills to that provider and no obvious way out of that.

You could say that the answer to that is that we need cloud standards, and there have been a number of initiatives to come up with standard cloud management application programming interfaces (APIs) that would, in theory, solve this. Unfortunately, there are some challenges to that, one of which is that not every cloud has the same underlying infrastructure.

Take Amazon, for example. It has its own interesting storage models. It has a whole set of APIs that are particularly specific to Amazon. Now, there are a few people who are providing those same APIs -- people like Eucalyptus and Ubuntu -- but it doesn’t mean you can just take your app off of Amazon and put it onto Rackspace, unfortunately, without a significant amount of work.

No way out

As we go up the scale into what's now being termed as platform as a service (PaaS), where people are starting to build higher level abstractions on top of those virtual machines (VMs) and infrastructure, you can get even more locked in.

When people come up with a PaaS, it provides extra functionality, but now it means that instead of just relying on a virtualized hardware, you're now relying on a virtualized middleware, and it becomes absolutely vital that you consider lock-in and don’t just end up trapped on a particular platform.

One of the things that naturally evolved, as a result of the emergence of a common foe, is this principle of unification, openness, and alliance.



Matsumura: From my perspective, to some extent, there already is a cloud marketplace -- but the marketplace radically lacks transparency and efficiency. It's a highly inefficient market.

The thing that's great is, if you look at rational optimization of strategic competitive advantage, [moving to the cloud makes perfect sense.] "My company that makes parts for airplanes is not an expert in keeping PC servers cool and having a raised floor, security, biometric identification, and all kinds of hosting things." So, maybe they outsource that, because that's not any advantage to them.

That's perfectly logical behavior. I want to take this now to a slightly different level, which is, organizations have emergent behavior that's completely irrational. It's comical and in some ways very unfortunate to observe.

In the history of large-scale enterprise computing, there has long been this tension between the business units and the IT department, which is more centralized. The business department is actually the frustrated party, because they have developed the applications in a very short time. The lagging party is actually the IT department.

There is this unfortunate emergent property that the enterprise goes after something that, in the long run, turns out to be very disappointing. But, by the time the disappointment sets in, the business executives who approved this entry point into the cloud are long gone. They've gotten promotions, because their projects worked and they got their business results faster than they would have if they had actually done it the right way and gone through IT.

Hard for IT to compete in short-term

So, it puts central IT into a very uncomfortable position, where they have to provide services that are equal to or better than professionals like Amazon. At the same time, they also have to make sure that, in the long-term interest of the company, these services have the fungibility, protection, reliability, and cost control demanded by procurement.

The question becomes how do you keep your organization from being totally taken advantage of in this kind of situation.

Fremantle: What we are trying to do at WSO2 is exactly to solve that problem through a technical approach, and there are also business approaches that apply to it as well.

The technical approach is that we have a PaaS, and what’s unique about it is that it's offering standard enterprise development models that are truly independent of the underlying cloud infrastructure.

What I mean is that there is this layer, which we call WSO2 Stratos, that can take web applications, web application archive (WAR) files, enterprise service bus (ESB) flows, business process automation (BPA) processes, and things like governance and identity management and do all of those in standard ways. It runs those in multi-tenant, elastic, cloud-like ways on top of infrastructures like Amazon, as well as private cloud installations like Ubuntu, Eucalyptus, and, coming very soon, VMware.


What we're trying to do is to say that there is a set of open standards, both de facto and de jure standards, for building enterprise applications, and those can be built in such a way that they can be run on this platform -- in public cloud, private cloud, virtual private cloud, hybrid, and so forth.

What we're trying to do there is exactly what we've been talking about. There is a set of ways of building code that don’t tie you into a particular stack very tightly. They don’t tie you into a particular cloud deployment model very tightly, with the result that you really can take this environment, take your code, and deploy it in multiple different cloud situations and really start to build this fungibility. That’s the technical aspect.

One of the things that’s very important in the cloud is how you license software like this. As an open source company, we naturally think that open source has a huge benefit here, because it's not just about being able to run the software anywhere. You need to be able to take it with you and not be locked into it.

Our Stratos platform is completely open source under the Apache license, which means that you are free to deploy it on any platform, of any size, and you can choose whether or not to come to WSO2 for support.

We think we're the best people to support you, but we try and prove that every day by winning your business, not by tying you in through the lawyers and through legal and licensing approaches.



Matsumura: As a consumer of cloud, you need to be clear that the will of the partner is always essentially this concept of, "I am going to maximize my future revenue." It applies to all companies.

... The thing that’s fascinating about it is that, when a vendor says "Believe me," you look to the fine print. The fine print in the WSO2 case is the Apache license, which has incredible transparency.

It becomes believable, as a function, being able to look all the way through the code, to be able to look all the way through the license, and to realize, all of a sudden, that you're free. If someone is not being satisfactory in how they're behaving in the relationship, you're free to go.

If you look at APIs that are opaque, that aren’t really given to you, then you realize that you are making a long-term commitment, akin to a marriage. That’s when you start to wonder whether the other party is able to do you harm and whether that’s their intention in the long run.

Fremantle: What Miko has been trying to politely say is that every vendor, whether it’s WSO2 or not, wants to lock in their customers and get that continued revenue stream.


Now, what’s WSO2's lock-in?

Our lock-in is that we have no lock-in. Our lock-in is that we believe that it's such an enticing, attractive idea, that it's going to keep our customers there for many years to come. We think that’s what entices customers to stay with us, and that’s a really exciting idea.

It's even more exciting in the cloud era. It was interesting in open source, and it was interesting with Java, but what we are seeing with cloud is that the potential for lock-in has actually grown. The potential to get locked in to your provider has gotten significantly higher, because you may be building applications and putting everything in the hands of a single provider: both software and hardware.

There are three layers of lock-in. You can get locked into the hardware. You can get locked into the virtualization. And, you can get locked into the platform. Our value proposition has become twice as valuable, because the lock-in potential has become twice as big.

... You're bound to see in the cloud market a consolidation, because it is all going to become price sensitive, and in price sensitive markets you typically see consolidation.

Two forms of consolidation

What I hope to see is two forms of consolidation. One is people buying up each other, which is the sort of old form. It would be really nice instead to see consolidation in the form of cloud providers banding together to share the same models, the same platforms, the same interfaces, so that there really is fungibility across multiple providers, and that being the alternative to acquisition.

That would be very exciting, because we could see people banding together to provide a portable run-time.

Matsumura: Smart organizations need to understand that it's not any individual's decision to just run off and do the cloud thing, but that it really has to combine enterprise architecture and ... cautionary procurement, in order to harness cloud and to keep the business units from running away in a way that is bad.

The thing that’s really critical, though, is when this is going to happen. There is a very tired saying that those who do not understand history are doomed to repeat it. We could spend decades in the IT industry just repeating the past by reestablishing these kinds of dominant-vendor, lock-in models.

A lot of it depends on what I call the emergent intelligence of the consumer. The reason I call it emergent intelligence is that it isn’t individual behavior, but organizational behavior. People have this natural tendency to view a company as a human being, and they expect rational behavior from individuals.

Aggregate behavior

But, in the endgame, you start to look at the aggregate behaviors of these very large organizations, and the aggregate behaviors can be extremely foolish. Programs like this help educate the market and optimize the market in such ways that people can think about the future and can look out for their own organizations.

The thing that’s really funny is that people have historically been very bad at understanding exponential growth, exponential curves, exponential costs, and the kind of leverage they provide to suppliers.

People need to get smart on this fungibility topic. If we're smart, we're going to move to an open and transparent model. That’s going to create a big positive impact for the whole cloud ecosystem, including the suppliers.

Fremantle: It's up to the consumers of cloud to really understand the scenarios and the long-term future of this marketplace, and that’s what's going to drive people to make the right decisions. Those right decisions are going to lead to a fungible commodity marketplace that’s really valuable and enhances our world.

The challenge here is to make sure that people are making the right, educated decisions. I'd really like people to make informed decisions, when they choose a cloud solution or build their cloud strategy, that they specifically approach and attack the lock-in factor as one of their key decision points. To me, that is one of the key challenges. If people do that, then we're going to get a fair chance.

I don’t care if they find someone else or if they go with us. What I care most about is whether people are making the right decision on the right criteria. Putting lock-in into your criteria is a key measure of how quickly we're going to get to the right world, versus a situation where vendors and providers end up with too much leverage over customers.




Wednesday, August 4, 2010

Revolution Analytics targets R language, platform at growing need to handle 'big data' crunching challenges

Revolution Analytics is working to revolutionize big data analysis with better crunching tools and an updated platform that brings the open source R statistics language to some of the largest data sets.

The company is betting its new big data scalability platform will help R transition from a research and prototyping tool to a production-ready platform for such enterprise applications as quantitative finance and risk management, social media, bioinformatics, and telecommunications data analysis.

The latest version of Revolution R Enterprise comes complete with an add-on package called RevoScaleR, a framework for multi-core processing of large data sets. With RevoScaleR, Revolution Analytics targets some of the highest levels of capacity and performance for analyzing big data, the company said.

“With RevoScaleR, we’ve focused on making analytical models not just scale to the big data sets, but run the analysis in a fraction of the time compared to traditional systems,” says David Smith, vice president of Community and Marketing at Revolution Analytics. “For example, the FAA publishes a data set that contains every commercial airline take off and landing between 1987 and 2008. That’s more than 13 gigabytes of data. By analyzing that data, we can figure out the likelihood of airline delays in one second.”

A rows-and-columns approach

One second to analyze 13 GB of data should turn some heads, because it takes 300 seconds with traditional methods. Under the hood of RevoScaleR is rapid-fire access to data. For example, RevoScaleR uses the XDF file format, a new binary big data file format with an interface to the R language that offers high-speed access to arbitrary rows, blocks, and columns of data.




“The NoSQL movement was all about going from relational databases to a flat file on disk that offers fast access by columns. A lot of the technology behind things like Twitter and Facebook takes this approach,” Smith said. “We’ve taken that one step further to develop a system that accesses the database by rows and columns at the same time, which is really well-attuned to doing these statistical computations.”
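The XDF internals aren't public, but the general idea of block-oriented storage is easy to illustrate. The sketch below is a hypothetical toy in Python rather than R (the functions and file layout are inventions for illustration, not the XDF format): a table is written as fixed-size row-blocks per column, with an offset index, so any block of rows for any column can be fetched with one seek instead of a full scan.

```python
import struct

def write_blocked(path, columns, block_rows):
    """Write float64 columns to disk in row-blocks, keeping an index of
    (block, column) -> (file offset, row count) so any block of any
    column can later be read with a single seek, no scanning."""
    index = {}
    n_rows = len(columns[0])
    with open(path, "wb") as f:
        for start in range(0, n_rows, block_rows):
            for col_id, col in enumerate(columns):
                block = col[start:start + block_rows]
                index[(start // block_rows, col_id)] = (f.tell(), len(block))
                f.write(struct.pack(f"{len(block)}d", *block))
    return index

def read_block(path, index, block_id, col_id):
    """Fetch one row-block of one column directly by its recorded offset."""
    offset, count = index[(block_id, col_id)]
    with open(path, "rb") as f:
        f.seek(offset)
        return list(struct.unpack(f"{count}d", f.read(8 * count)))
```

A real format would persist the index in a file header and likely add compression, but the access pattern, direct addressing of row/column blocks, is what makes mixed row-and-column reads fast.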

RevoScaleR also relies on a collection of the most-common statistical algorithms optimized for big data, including high-performance implementations of summary statistics, linear regression, binomial logistic regression and crosstabs. Data reading and transformation tools let users interactively explore and prepare large data sets for analysis. And, extensibility lets expert R users develop and extend their own statistical algorithms.
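RevoScaleR's implementations aren't shown here, but the standard trick for fitting, say, a linear regression to data that won't fit in memory is to stream it in chunks and accumulate sufficient statistics. A minimal sketch in Python (the function name and chunk interface are assumptions for illustration, not Revolution's API):

```python
import numpy as np

def chunked_linear_regression(chunks):
    """Fit ordinary least squares over data streamed in chunks.

    Each chunk is an (X, y) pair. We accumulate the normal-equation
    statistics X'X and X'y across chunks, so memory use is bounded by
    the number of features, not the number of rows."""
    xtx = None
    xty = None
    for X, y in chunks:
        X = np.column_stack([np.ones(len(X)), X])  # add intercept column
        if xtx is None:
            xtx = X.T @ X
            xty = X.T @ y
        else:
            xtx += X.T @ X
            xty += X.T @ y
    # Solve (X'X) beta = X'y for [intercept, slopes...]
    return np.linalg.solve(xtx, xty)
```

Each chunk contributes only its small X'X and X'y matrices, so data size is limited by disk rather than RAM; the same per-chunk sums could also be computed on separate cores and added together, which is the kind of multi-core parallelism described above.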

Integrating Hadoop

Based on the open-source R technologies, Revolution R Enterprise accordingly plays well with other modern big data architectures. Revolution R Enterprise leverages sources such as Hadoop, NoSQL or key value databases, relational databases, and data warehouses. These products can be used to store, regularize, and do basic manipulation on very large data sets—while Revolution R Enterprise now provides advanced analytics.

“Together, Hadoop and R can store and analyze massive, complex data,” says Saptarshi Guha, developer of the popular RHIPE R package that integrates the Hadoop framework with R in an automatically distributed computing environment. “Employing the new capabilities of Revolution R Enterprise, we will be able to go even further and compute big data regressions and more.”

The new RevoScaleR package will be delivered as part of Revolution R Enterprise 4.0, which will be available for 32- and 64-bit Microsoft Windows in the next 30 days. Support for Red Hat Enterprise Linux (RHEL 5) is planned for later this year.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, August 3, 2010

Harvard Medical School use of cloud computing provides harbinger for new IT business value, Open Group panel finds

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

We've assembled a panel to examine the business impact of cloud computing, to explore practical implementations of cloud models, and to move beyond the hype and into gaining business paybacks from successful cloud adoption.

Coming to you from The Open Group Conference in Boston on July 21, the panel tackles such issues as what stands in the way of cloud use, safe and low-risk cloud computing, and working around inhibitors to cloud use. We also delve into a compelling example of successful cloud practices at the Harvard Medical School.

Learn more about cloud best practices and produced practical business improvements from guests Pam Isom, Senior Certified Executive IT Architect at IBM; Mark Skilton, Global Director, Applications Outsourcing at Capgemini; Dr. Marcos Athanasoulis, Director of Research Information Technology for Harvard Medical School, and Henry Peyret, Principal Analyst at Forrester Research. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Athanasoulis: The business of Harvard Medical School is research. ... Similar to many industries, there is a culture that requires that, for IT to be successful, it has to be meeting the needs of the users.

We have a particularly interesting situation. I call Harvard Medical School the land of a thousand CIOs because, in essence, we cannot mandate that anyone use central IT services, cloud services, or other things. So that sets a higher standard for us, because people have to want to use it. It has to be cost-effective, and it has to meet their business and research objectives.

We set out about five years ago to start thinking about how to provide infrastructure. Over time, we've evolved into creating a cloud that's a private cloud at the medical school.

User participation

We've been able to put in place a cloud that, number one, has user participation. This means that the faculty and the researchers have skin in the game.

They can use the resources that are made available and subsidized by the school, but if they need additional resources, additional computing power, they're able to buy it. They actually purchase nodes that go into the cloud and they own those nodes, but when those nodes are idle, other people's work can run on them. So they buy into the cloud.

These folks are not very trusting of central IT organizations. Many of them want to do their own thing. In order to get them to be convinced that they ought to participate, we told them, "You buy equipment and, if it doesn't work out for you, you can take that equipment and put it under the bench in your lab and set it up how you want." That made them more comfortable. But, not a single time has anyone ever actually come back and said they were going to take back the equipment.

In essence, it's building the trust of the researchers or the business clients, if you're in more of a business environment, getting them engaged in their requirements, and making sure it will meet their needs.

... Personal relationship is a part of what it's about. We had to make sure that we weren't seen as just a black box that they had absolutely no control over. That was step number one.

Then we also had to make sure that it was very much an iterative process. We would start with one group's needs and then realize there were certain other needs.

... We started out with a relatively small cloud initially. Once people saw the value, they began to adopt it more, and it's really starting to have a snowball effect, where we are growing by orders of magnitude.

... People are moving from the giant project, two- to three-year implementation cycles to, "Let's take a chunk, see how it works, and then iterate and moderate along the way."

Skilton: What's illustrated [at Harvard Medical School] is this need to move to more continuous-release or continuous-improvement type of life cycle. This is a transformation for IT, which may be typically more project-cycle based. It's a subtle difference, but it's one that is fundamentally changing the way you would offer an incrementalized service as opposed to more of a clunky, project-based, traditional waterfall approach.

We're seeing software as a service (SaaS), due to the economic conditions, taken quite seriously now, particularly targeted at specific business processes, but also starting to become potentially more mainstream. Clearly, with Salesforce.com and others like that, we are seeing that starting to accelerate.

... We're starting to see utility computing becoming much more common mainstream, so that it’s no longer a fad or an alternative to mainstream. We're seeing that sort of consistency.

Demonstrate success early

Athanasoulis: It's always easier to show someone something that's already working and say, "Do you want to hop onto this bus?" than to say, "We're going to build this great new giant infrastructure, and just trust us, it's going to work great. So, hop on board now, before anyone has even seen it or tried it out." It's having the ability to let people walk before they run. Come on and try it out. If it doesn’t work for you, so be it, but you also have demonstrated successes that people can point to.

... The CIO at Harvard Medical School, John Halamka, had the vision to start this. It started with his initial vision and going to bat to move everyone from doing their own thing and setting up their own infrastructure to creating a cloud that would actually work for people.

He had the foresight to say, "Let's try this out." He went to his leadership, the dean and others and said, "Yes, we're taking a chance. We're going to spend some money. We're not going to spend a huge amount of money until we prove the model, but we're going to have to put some money in and see how this works." It was a very interesting communication game.

Peyret: From an enterprise architect (EA) point of view, we should ... determine which elements can migrate to the cloud, and to which types of cloud. Then, we should try to evangelize. The EA should sit in between business and IT. That’s a good place to make the right choices and mitigate risks.

... The EA should participate in establishing and negotiating what I call the business service catalog, something that will be an extension of the ITIL service catalog, which is very IT-based and IT-defined.

Something that is missing currently within ITIL V3 is how to deal with the business to define the service and define also the contract in terms of cost and of service level agreement (SLA). But, it's not only the SLA. It's broader than that. That's something that's missing at the moment. Most of the EAs are not participating in that.

... The business service catalog is the next step. In enterprise architecture, we have heard about business capabilities. We have talked about using business capabilities to help develop business architecture.

A missing link

We have also heard about SOA. There is a missing link in between -- the business service catalog. It's a way we will contractualize. I very much like the fact that you said we are contractualizing, but with flexibility. We should manage that flexibility. We should predict what that flexibility means in terms of impact. Perhaps a service is not valuable for other parts of the company.

That's where I think that EA and the next step for EA will take place. SOA is not an end, and the next step will be the business service catalog, which we will develop to link to the business capabilities.

Isom: The catalog of services would be great. I think we need to be careful about that catalog of services, so that it doesn’t become too standardized.




As I mentioned earlier today in one of my presentations, you want to be careful with that standardization, because you do want to give people some flexibility, but you need to manage that flexibility. We need to be careful with the catalog of services that we offer, but I definitely think that it is a new way of thinking, when it comes to the role and capacity of IT.

It’s a new way of thinking, because along with that comes service management. You can't just think about offering the services. Can you really back up what you offer? So, it does introduce more thinking along those lines.

... The enterprise architect would be the one who would provide that enterprise view and make sure that anything that we do is thought out from a holistic perspective, even though we may actually start practicing on a smaller scale or for a smaller domain.

A good practice would be to involve the enterprise architect, even though we may start with a specific domain for implementing the cloud, because you've got to keep your eye on the strategic vision of the company.

... What’s driving cloud as a solutions strategy is the need to improve business performance. If we can get solutions that help drive business performance and business sustainability, the cloud is a good place for that.

... You can’t produce cloud solutions in a vacuum. You won’t get any consumers. So, it’s a great venue for cloud providers to work with business stakeholders to explain and explore opportunities for valuable services.

Athanasoulis: Defining the service with the users is the first clear step, and obviously getting the requirements from the users, particularly in an organization like our medical school, where they have choices and they don’t have to use the systems.

We have people who want to just come in and put in systems, buy a rack of stuff and put it under the lab bench, and then they are surprised when the power and cooling isn’t there to meet the requirement.



... As IT leaders, we all know that there is now a marketplace. The public cloud is available to folks. People can get on Amazon EC2. They can get onto these various clouds and start to use them. That forces us to have compelling cloud offerings that are more cost-effective than what they can go get in the public cloud.

... We view the public cloud as an extension of the private cloud to the degree that there is consistency of virtual machine definitions and to the degree that we can make a node on the public cloud look exactly like a node on the private cloud and make the same databases available there.

If someone has the money and wants the capability, say 10,000 or 100,000 processor hours between now and a deadline three weeks from now, and they're willing to spend the money, wouldn’t it be great if, transparently to them, they could spend up to their budget of $100,000 or $200,000 and let the work flow from our private cloud out to the public cloud? What a great solution that would be for folks.
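That private-to-public "bursting" under a hard budget cap can be sketched as a simple placement policy. The sketch below is entirely hypothetical (real schedulers track live usage, queues, and prices); it just shows the decision order described: run on free private capacity first, spill to the public cloud while the budget holds, and defer otherwise.

```python
def schedule(jobs, private_capacity_hours, budget_dollars, public_rate=0.10):
    """Place compute jobs on private capacity first, bursting to a
    public cloud until a hard budget cap is reached.

    jobs: list of (name, hours) tuples, in submission order.
    public_rate: assumed public-cloud price in dollars per hour.
    Returns (placements, dollars_spent); jobs over the cap are deferred."""
    placements = {}
    spent = 0.0
    free = private_capacity_hours
    for name, hours in jobs:
        if hours <= free:
            free -= hours                     # fits on the private cloud
            placements[name] = "private"
        elif spent + hours * public_rate <= budget_dollars:
            spent += hours * public_rate      # burst: charge the budget
            placements[name] = "public"
        else:
            placements[name] = "deferred"     # budget exhausted
    return placements, spent
```

The point of the cap is exactly the transparency the speaker describes: the researcher sees one pool of capacity, while procurement sees spending that can never exceed the agreed budget.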

... So, having this balance of bringing in an IT specialist, the enterprise architect, to define the requirements in joint-step -- back to the dance with the customers -- was really what allowed us to be successful.

A new question

Skilton: The portfolio needs to be put in place, but it also needs another set of service management investment tools to control data distribution, compliance, or access and security control, and things like that.

I detect a worry about whether I can outsource that. Do I need to do something in-house? What do I need to spend money on? Because that's a block, and people need to understand that.

... What we are seeing with clients now is that they are past the initial infrastructure as a service (IaaS), platform as a service (PaaS), SaaS, and business process as a service sort of conversation. They're now asking, "What cloud services do you do?"

What they mean by that is that they need to see your cloud security reference model. They need to see your cloud services model. They need to understand the types of services that you can offer into a portfolio, and then the types of service catalogs that you can use to interact with them.

They then make a decision. Does that need to be on-premise, can it be out in the cloud, or is there something as a hybrid? They're on that page now, and there is a strategic planning process starting to evolve around that.

Flexible vision


Athanasoulis: You want to iterate and you have to have a vision of where you are going.

If you're taking a car trip and you're going to drive from here to Ohio tomorrow, we know where we're going, we have our map, and we start to drive, but along the way we might find that the highway is clogged with traffic. So, we're going to go around over here, or we're going to take a detour.

Perhaps, somewhere along the way you say, "You know what, now that we have been learning more, Ohio isn't really where we wanted to go. We actually want to keep on going. We're heading right out to Colorado, wherever it may be." But, you have to have a vision of where you are going.

Then, to keep things from spinning out of control along the way, it's really important to know the potential factors that might lead to things starting to fall apart or fray at the edges. How do you monitor that you have the right capacity in place? You don't want to sell something to everyone and then find six months into it that you're way oversubscribed and everyone is bitter and unhappy, because there isn't the capability that they expected.

Isom: The IT department should be more focused now on providing information technology as a service. It’s not just a cloud figure of speech. They are truly looking at providing their capabilities as a service and looking at it from an end-to-end perspective.

That includes the service catalog and some of the things you were talking about: how to make it easier for consumers to actually consume the services, and also making sure that the services they provide will perform, knowing that business consumers will go somewhere else if we don't. The services are just that available now. You really have to think about that. It shouldn't be the sole driving force for providing IT as a service, but it should be a consideration.

Peyret: What I want to recommend is that you encourage your IT organization to act as an IT service provider. What does that mean? It means recommending that they contractualize their services: express and establish them through a business service catalog, including some pricing aspects. Even within the enterprise, where funding is internal and not a problem, you should contractualize. That's absolutely key to making the adoption of any type of cloud easier and more or less transparent.

Risk mitigation

Isom: The cloud can be a risk mitigator. ... We talked about how we can help mitigate the risk of losses in products, sales, and services, because capabilities are now delivered faster. There is also that infrastructure to try things out. If you don't like it, try something else; that infrastructure is more readily adaptable with cloud.

Also, there's the mitigation of the proliferation of licenses and the excess inventory that you carry in products, software, and the like. We can help mitigate that with the cloud, through the pooling of licensing, so the cloud helps in that respect as well.

Skilton: From the business side, I would recommend to go out and look at best practices. Go and look at examples of where SaaS is already being used.

The number of case studies is growing by the month. So, for businesses, go out and learn about what's out there, because it is real. It's not a cloud.



It constantly amazes me how many blue-chip Fortune 500 companies are already doing this.

From an IT point of view, as we have heard from Marcos, go and learn. Try it, pilot it in your organization. I'll go further and say, practice what you preach. Test it out on one of your own business processes.

From my own experience in my own company, we do use what we preach in the cloud. That way, you learn what it means internally to yourself to transform, and you can take that learning and build on it. You can't get it in a book. You can’t just read it. You have to do it.

Athanasoulis: I think of four words that begin with P to describe where I would put the emphasis. One, pilot, as we have already been saying. Two, participation: you have to get buy-in and participation across the entire group. Three, obviously, produce results. If you don't produce results, it's not going anywhere. And then, promotion. At the end of the day, you also have to be out there promoting the service, being an advocate and an evangelist for it. Once the snowball gets going, there is no stopping it.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

WSO2 offerings add zest to Carbon 3.0 platform for BPM, cloud construction

WSO2 kicked its Carbon 3.0 Apache-based middleware platform up a notch today with the announcement of five new releases that take advantage of Carbon 3.0's process-oriented components and building blocks for cloud computing.

Among the new offerings from the Mountain View, Calif. company are Business Process Server (BPS) 2.0, Data Services Server (DSS) 2.5, Business Activity Monitor (BAM) 1.1, Gadget Server 1.1, and Mashup Server 2.1. All are designed to aid users in customizing IT application and Web service deployments across servers -- and in private and public clouds. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

Based on the componentized, OSGi-compliant Carbon platform, all five inherit the functionality that was added to Carbon 3.0 in June. This includes:
  • Component Manager, which provides an interface that lets developers simply point-and-click to extend the capabilities of the middleware. It then acquires, installs, and provisions the runtime automatically

  • Web Services Dynamic Discovery (WS-Discovery) support to automate the detection and configuration of Web service endpoints

  • Enhanced integration with the WSO2 Governance Registry, facilitating large clustered deployments and cloud implementations.
“The lean approach of our WSO2 Carbon platform means enterprise IT teams can quickly deliver projects using just the functionality they need, and over the long term they benefit from a clean, interoperable and effective enterprise architecture,” said Paul Fremantle, co-founder and CTO. “Our newest products based on Carbon 3.0 continue that commitment with a wealth of new functionality that can be customized to an IT project’s needs.”

Business processes

BPS enables developers to easily compose and orchestrate business processes using WS-BPEL. Version 2.0 adds support for two emerging open-source human-centric process specifications, which are currently under OASIS standardization review. Additional new features include scheduled instance cleanup, Java Message Service (JMS) API support, and XPath extension support.

DSS enables database administrators and database programmers to create and manage WS-* style Web services and REST-style Web resources using enterprise data. Version 2.5 adds several features to offer greater flexibility and efficiency in creating and managing data services, including:
  • Contract-first data service creation in which developers start with XML schema and WSDL definitions to create their data services.

  • Batch mode for insert, update and delete operations

  • Boxcarring support, letting developers "boxcar" a number of service requests into a single database transaction

  • Data validation logic

  • Support for additional data types including array, binary input/output, and Carbon data sources
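
The "boxcarring" bullet describes a general pattern: queue several operations and commit them as one database transaction, so they succeed or fail together. A minimal sketch of that pattern using Python's standard-library sqlite3 module; this illustrates the underlying idea only, not the WSO2 DSS API (which is configured through service descriptors), and all table and function names are invented:

```python
import sqlite3

# Generic "boxcarring" sketch: a batch of queued operations runs inside
# one database transaction, so a failure anywhere rolls back the whole
# batch. Not the WSO2 DSS API; names here are illustrative only.

def run_boxcar(conn, operations):
    """Execute a batch of (sql, params) pairs atomically."""
    try:
        cur = conn.cursor()
        for sql, params in operations:
            cur.execute(sql, params)
        conn.commit()          # all operations succeed together...
    except sqlite3.Error:
        conn.rollback()        # ...or fail together
        raise

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, qty INTEGER)")

batch = [
    ("INSERT INTO orders (qty) VALUES (?)", (5,)),
    ("INSERT INTO orders (qty) VALUES (?)", (7,)),
    ("UPDATE orders SET qty = qty + 1 WHERE id = ?", (1,)),
]
run_boxcar(conn, batch)
print(conn.execute("SELECT COUNT(*), SUM(qty) FROM orders").fetchone())
```

Besides atomicity, grouping requests this way cuts round trips between the service layer and the database, which is the efficiency gain the release notes allude to.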
BAM provides real-time visibility into service-oriented architecture (SOA) processes, transactions and workflows. Version 1.1 adds support for the widely adopted Oracle relational database management system (RDBMS), as well as support for deployment on the JBoss, Apache Tomcat and WebLogic application servers.

Gadget Server lets users implement and modify a true Web-based portal that can be accessed anywhere via a browser. Enhancements in version 1.1 include inter-gadget communication support, a gadget editor, and internationalization (i18n) support.

Mashup Server provides the reusability, security, reliability, and governance required for an SOA. Version 2.1 makes it easier to share mashups by letting developers upload a mashup together with all its required resources in a single ZIP archive.

BPS 2.0, DSS 2.5, BAM 1.1, Gadget Server 1.1, and Mashup Server 2.1 are available today as software downloads and as WSO2 Cloud Virtual Machines running on the Amazon Elastic Compute Cloud (EC2), Linux Kernel Virtual Machine (KVM), or VMware ESX. As fully open source solutions released under the Apache License 2.0, the products do not carry any licensing fees. WSO2 offers a range of additional service and support options.

You may also be interested in:

Friday, July 30, 2010

FACE initiative takes aim at improved interoperability and standards among future military avionics platforms

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Coming to you from The Open Group Conference in Boston, we've assembled a panel to explore a new military aircraft systems interoperability consortium and effort, the Future Airborne Capability Environment (FACE).

FACE aims to promote and better support interoperability and standardization among future military avionics platforms across several branches of the U.S. Armed Forces. We define FACE, how it came about, and examine the consortium's basic goals under the tutelage of The Open Group.

Here to help better understand the promise and potential for FACE to improve costs, spur upgrades, flexibility, and accelerate the avionics components' development agility are our panelists, David Lounsbury, Vice President for Collaboration Services at The Open Group, and Mike Williamson, Deputy Program Manager for Mission Systems with the Navy's Air Combat Electronics Program Office. The conversation is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts from last week's discussion:
Williamson: FACE started out as a Navy program. As we started looking around to see what other services were doing, we found that the Army and the Air Force were also doing similar things and trying to go down that same path.

The Army had a program called Integrated Data Modem (IDM). The Air Force was doing a program called Universal Network Interface (UNI). We got together with them and are now teaming up to put this consortium together and go forward to define standards for what FACE will be.

We're really addressing all of the capabilities and all of the systems onboard the aircraft. In the past, we identified a requirement and usually developed a system to meet that requirement.

Sometimes it’s easier to describe a program by saying what it’s not. FACE is not a program. FACE is not a computer. FACE is not a software package. FACE is an environment, and it’s specifically set up as an environment. It was an idea that came about to try to reduce costs, improve interoperability across naval aircraft, and get capabilities out to the fleet as quickly as we can.

What we are trying to do with FACE now is to develop a computer environment that’s on the aircraft already. As we define new capabilities and new things that we want to put out into the fleet, we can host software in the computer environment that’s already there, rather than building a brand new box, software, or program for every single capability that we put out there.

Lounsbury: We really need modularity, within the necessary structures of testing for things that are going to be used, so that we can get those new capabilities into the cycle quickly and out to the war fighters.

The testing and deployment schedule is a real issue for agility for our forces. The one thing we know is that threats change all of the time, and we need the ability to field new capabilities quickly, both as the mission changes and also as the technology evolves.

The Open Group has a couple of areas similar to this. We've got our Real-time and Embedded Systems Forum for some of the fundamental standards.

We've been running a consortium called DirectNet, which is very similar to FACE in the sense that it is principally focused on a defense need, but in the context of open systems. Through connections developed there, Mike found us and we talked about what we can do to organize.

We have a number of activities that are on this government-industry boundary, where some of the lessons that industry learned about how open standards can bring agility and help control your cost can benefit military systems like this.

Williamson: The idea for FACE really started about a year ago in the Navy... . We started looking at what we could do and what we needed to do.

What we're going to be doing principally is marshaling, as always, the expertise of the members to address various parts of the problem.



The timeline is very, very tight. We're looking at having some kind of standards defined by first quarter of calendar year 2011 -- next year. By the end of March, we're looking to have defined a set of standards on what the FACE environment will look like, because we have procurements coming out at that time that we intend to have FACE be part of those requests for proposals (RFPs) that are going to be coming out.

One of the things that we have looked at is the fact that commercial industry is doing this. Commercial aviation is already doing a lot of this. We've not been able to do that within naval aviation to date, and primarily that's been driven by safety-of-flight issues, interoperability issues, and issues with how we contract for things. We need to get beyond that.

We're actually using the model of what commercial aviation has done, with open systems, open source software, licensed software, and those kinds of things, to ask how we can bring that into our platforms. We need an environment in which we can have a library of software applications that can be used across multiple platforms in the same environment.

That solves two problems for us. One, it gets capabilities to the fleet cheaper and faster. And two, it solves the interoperability issues that we have today, where even sometimes when we have the same standards, two different platforms implement the same standards in two different ways and they can't talk to each other. They are not interoperable. Those are the things that we are trying to solve with this.

Lounsbury: One of the explicit goals of FACE, and we formed a business work group to address these, is to talk about the business-model issues. What does open licensing mean in a government context? What would be appropriate ways of sharing intellectual property rights (IPR) in the run-up to this?

These are all things that commercial people are familiar with through years of standards activity, but it's kind of new to some of the players in the government space. So, we're going to make sure that those things are explicitly addressed. It’s not just the technological solution, though that’s the critical part, but the fact that people can actually buy -- and that we will have a marketplace of -- components that can be licensed and reused.

The government is a complex place, and there are a lot of programs, so principal growth will come from different programs inside the government. But we do envision that some of the things developed here may be applicable to other systems.



Typically, what The Open Group does is provide a structure. Members come in, they bring their business expertise, their subject matter expertise, and operate. What we provide is the framework, where we can have an open consortium that has a balance of interest between the suppliers of components, all government agency programs doing procurement, and the integrators who put it all together. We have the proven process at The Open Group to make sure that we have that openness that's important for protecting all of the parties.

Williamson: There have been a lot of things that I've learned, having The Open Group come along and take a lead on all of this and developing the standards. The Navy and the Department of Defense (DoD) aren't real good at developing standards ourselves. We've tried to do it in the past and we've failed miserably with some of the attempts that we have had. Having The Open Group come and join us, and then bringing industry in, was the right thing to do.

Having this consortium with industry, Navy, Army, and Air Force acquisition teams, and fleet participation, has been the right way to go. It’s the only way we can really define the standards and get in place the standards that we really need to get at, with all those inputs coming together.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Tuesday, July 27, 2010

Analysts define business value and imperatives for cloud-based B2B ecommerce trading communities

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Ariba.

As more services, applications, and data are developed for -- and delivered via -- cloud models, how do business to business (B2B) commerce and procurement adapt?

Or, perhaps we have the cart in front of the horse. Are the new requirements and expectations of modern, global business processes, in fact, driving the demand for IT solutions that can be best delivered via cloud models?

Either way, the promise of cloud aligns very well with the sophistication of modern B2B ecommerce and the pressing need for speed, agility, discovery, efficiency, and adaptability. Ecosystems of services are swiftly organizing around cloud models. How then should businesses best respond?

To answer these questions, BriefingsDirect assembled a group of IT industry analysts and executives at the recent Ariba LIVE 2010 conference in Orlando, Fla. to explore the business implications for ecommerce in the cloud-computing era.

Panelists include Robert Mahowald, Research Vice President at IDC; Mickey North Rizza, Research Director at AMR Research, a Gartner company; Tim Minahan, Chief Marketing Officer at Ariba, and Chris Sawchuk, Managing Director at The Hackett Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Minahan: What we're seeing now is that we’ve really entered the state of new normal. We’ve just gone through a major recession. Companies have taken a lot of cost out of their operations. It's cost reduction in the form of laying off employees, and reducing infrastructure cost, including IT cost.

If you look at most of the studies out there, the CEOs, CFOs, and CIOs are saying, "We're not hiring that back. We're looking for a new level of productivity and more agility. To do so, we're going to rely much more on external trading partners, which means we're going to need to collaborate with them much more.

"We're also going to look at alternative IT models to help support that collaboration outside of our enterprise, because our ERP investments do very well at automating information and process within the four walls. It stops at the edge of the enterprise and, at the end of the day, we do business. We buy, sell, and manage cash with our external trading partners and we need to automate and streamline those processes as well."

SaaS was all about a new delivery model for an existing business process. When you move into cloud, when you move into some of these collaborative processes around supply chain, and procurement, and the financial supply chain, it really involves multiple parties. It's really about business process transformation, a business process that's shared among multiple trading partners.

To do that, it’s not just the ability for everyone to share a common technology platform upon which they can collaborate around the process, but rather everyone needs to be digitally connected in a community, so that they can add new trading partners or remove old trading partners, as their needs change.

North Rizza: We’re actually finding companies are spending more time looking at the cloud. What happens is that you have your trading partners specifically around the sale side and the supply side of the organization. If you start looking just across your own businesses and internal stakeholders, you realize they can actually work together, get the information they need, and spend a lot of time on their business process, using just basic technology and automation components.

But, when they start looking at that extended network, into their trading partners, they realize we’re not getting everything we need. We need to pull everything together and we need to do it more quickly than what we’re doing. We can’t wait for on-premise, behind-the-firewall type applications. We need something that’s going to give us both the service and the technology and allow us to work in that trading-partner community in a collaborative environment.

In a recent study we did, we found that 96 percent of the companies are, or will be, using cloud applications. Within that, 46 percent are using a hybrid cloud solution: cloud technology that optimizes across their on-premise IT investment, typically around enterprise resource planning (ERP) but in many other instances as well, and then ties back into cloud services that extend their IT capabilities. That's 46 percent of the 96 percent.

... We think there are some great opportunities here for companies to move forward.

Mahowald: There is a lot more possibility now for collaborative commerce, when business applications have built a scenario in which much of your data and application functionality exists outside your organization. In that situation, it becomes far easier to source new partners and customers, leverage and trust data that lives in the cloud, and invite authenticated partners into that kind of exchange.

It’s easy to see the way that the cloud has grown up and become more capable to support some of the business requirements that we have. At the same time, many of our business requirements are changing to adapt to a growing wealth of solutions in the cloud.

North Rizza: We've also seen the applications come out even from the ERP standpoint in the different pieces that come together to marry that entire ERP system. What you see happen is that every function has a piece of that. You see the various markets that have developed supplier relationship management (SRM), customer relationship management (CRM), and what not, out there in the marketplace.

What's now evolved is that those business processes really go end to end into that trading partner network. What you are finding is that you can use those applications, but you don't necessarily have to use those applications. You can use the services that go with it.

The point is that you're actually making some cost-value trade-offs, lowering your overall cost and extending some of this into your partnerships and your trading partner community. What you're doing is driving value. At the end of the day, all you want to do is deliver a value, and that's what's happening.

Sawchuk: One benefit that we didn’t touch on during our discussion here is a benefit I call the democratization of collaboration. When you think about the past, it has always been the big companies who could collaborate. They had the tools, they had the investments, they had the dollars.

What you're now seeing is an environment where anybody can participate. Small, large, and everything in between all become connected in this world. That just takes things to a different level than what we've experienced. Economically, everyone is now connected across the board on a much more equal and level playing field.

Focus on process agility

We now have the opportunity, the focus on agility, and the focus on where we’re going. It's a much more volatile world. We’ve got to build more agility and more variabilization into our business models, not only our staffing, our people, the way we do business, and our technology tools, but also the more extended value chain. Where we draw the lines between what we do becomes much more transparent and it's easier to make those decisions than we have in the past.

Minahan: There is a massive movement afoot in the enterprise space that's beginning to blur the line between enterprise applications and the community. What got in the way of business-to-business collaboration before was that there was no transparency. There was no efficient way to discover, qualify, and connect with your trading partners, before you could even collaborate with them.

There was a level of distrust, and higher transaction costs that artificially inflated prices. The ability to get rid of all the paper, connect digitally with everyone, and then open this up in a community environment, where you can collaborate in a host of different ways and not just around the transaction, really is transformative.

As companies begin to look at particularly "extraprise" type applications, the community is going to become more and more important, whether that's the community of you and your trading partners, or a community of you and your peers, that can help you design the better process.

Sawchuk: What's going to be key over time is to think about the lives we live today and the information overload that we have. As you create these communities, all kinds of information and intelligence are going to be created. How do we dissect that and make it smart, relevant, timely, and in bite-sized chunks that we can deal with?

So the question is whether we're going to create all this community, all of this collaboration, all of this information in services, and then be able to dissect that and make it relevant for what we are trying to achieve. It's going to be a key differentiator.

Overload of information

We've always been in a time where we try to get access to more information, more knowledge, and more intelligence. We're quickly moving into a period where there will be an overload of that kind of information.

Minahan: An important component, which Chris is talking about, is taking that intelligence and putting it in the context of the business process. One reason we have information overload today is that the information is scattered: I'm doing a business process over here and, oh God, I have to go over there to get the information. The key is the ability to aggregate information and put it right in context with the business process.

So, I've gone out and aggregated my spend. I know where my spend leverage is. Guess what! I now have market intelligence on what's going on with pricing in the market I'm sourcing from, and what other buyers are experiencing in the market.

It might not be such a good time to go out and source that, so maybe I go to my second-largest category of spend and source that first. That's the type of analytics you need, in context with the business process.

Mahowald: As we start to put more and more business activities into these communities -- and more and more of our data and transactions happen outside the organization on SaaS services -- it's important that we understand exactly what that means for organizations: where customer data and our own data actually reside, and how we can find them during an audit in a way that guarantees we've met our business requirements.

We don’t want to restrict ourselves and say don’t participate in this community. I think it's healthy and it ultimately drives tremendous value for us. What we do want to say is that we have to apply the same kind of governance and rules that help us manage our processes that are now onsite in this new world, where we are participating in communities and SaaS services. The same thing should apply.

North Rizza: Basically, what we see the best companies doing [around cloud computing] is that they start to understand what their overall business objectives are. Then, they peel that back and say, "What am I looking at in my different functions across the business, and what does that mean if I want to improve the process and get those end results?"

As they start peeling that back, they soon discover that it's usually around revenue and cost savings. It's also about improving the business process and reducing cycle time. When you put all those together and look at a recent study that we just did, you recognize that there are very large gaps between those that have already deployed cloud-based technologies and solutions.

Then, you step back to those that are even considering or using them as part of their overall extended enterprise. What we’re finding is that the gap is so large and its benefits are so great that there is no reason you wouldn’t want to take all that and put it in there.

The bottom line is that if you don't do it, you leave a ton of money on the table. You're not able to take out the cost that you want to take out. You can't get the products in there, teach the individuals the business process, and cut down the cycle time that you're going for. And most importantly, you're not getting your revenue. You're leaving it on the table.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Ariba.

You may also be interested in:

Three new Open Group white papers help make for a peaceful leap to cloud computing

This guest blog comes courtesy of Dr. Chris Harding, who leads the Cloud Computing Working Group at The Open Group. He can be reached at c.harding@opengroup.org.

By Chris Harding


History has many examples of invaders wielding steel swords, repeating rifles, or whatever the latest weapon may be, driving out people who are less well-equipped. Corporate IT departments are starting to go the same way, at the hands of people equipped with cloud computing.

Last week I was at The Open Group Conference in Boston. The Open Group is neutral territory with a good view of the IT landscape: An ideal place to watch the conflict develop.

The Open Group Cloud Computing Work Group has been focused on the business reasons why companies should use cloud computing. The Work Group released three free white papers at the Boston conference, which I think are worth a closer look: “Strengthening your Business Case for Using Cloud,” “Cloud Buyers' Requirements Questionnaire,” and “Cloud Buyers' Decision Tree.” Three Work Group members, Penelope Gordon of 1Plug, Pam Isom of IBM, and Mark Skilton of Capgemini, presented the ideas from these papers in the conference’s Cloud Computing stream.

"Strengthening your Business Case for Using Cloud" features business use cases, based on real-world experience, that exemplify situations where companies are turning to cloud computing to meet their needs. It is followed by an analysis intended to equip you with the business insights needed to justify your own path to using cloud.

The “Cloud Buyers' Decision Tree” can help you discover where cloud opportunities and solutions might fit in your organization. And the "Cloud Buyers' Requirements Questionnaire" will help you identify your requirements for cloud computing in a structured way, so that you can more easily reach the best solution. These two papers contain ideas that will help you assess the potential the cloud has for your organization, and they will be refined as practical decision tools through use out in the field.

Deciding whether, and where, to use cloud computing can be difficult. Trying it out is easy. You can set up a small-scale trial quickly, and the cost is low. You can probably pay by credit card.

Assessing the financial implications for a particular application is relatively straightforward, although there can be unseen pitfalls. But, assessing the risks is more of a problem, particularly because cloud is so new, and the dangers -- where they are known -- may not be understood. And, integrating cloud solutions with each other and with in-house systems can present significant problems. Best practices in these areas are still evolving.

The white papers will help you reach these decisions and understand where cloud is a good fit for your business. Today, it is often a good fit, but there are many situations where it is not the best solution. Those situations will become less common as cloud computing matures and enterprise architectures evolve to be more cloud-compatible. But there will always be cases where computing capacity should be retained in-house.

So perhaps the data center isn't quite dead, but cloud computing is certainly making headway. My prediction: Over time, cloud will be able to occupy the fertile valleys, and corporate IT will be forced to take to the hills.

