Monday, August 13, 2012

Ocean Observatories Initiative: Cloud and Big Data come together to give scientists unprecedented access to essential climate insights

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

A fascinating global ocean studies initiative helps best define some of the IT superlatives around big data, cloud computing, and middleware integration capabilities.

The Ocean Observatories Initiative (OOI) and its accompanying Cyberinfrastructure Program aim to provide an unprecedented ability to study the Earth's oceans and climate, using myriad distributed data centers and literally oceans' worth of data.

The scale and impact of the science are closely matched by the magnitude of the computer science needed to make that data accessible and actionable by scientists. In a sense, the OOI and its infrastructure program, a major undertaking by the National Science Foundation, are constructing a big data-scale programmable and integratable cloud fabric for oceanography.

We’ve gathered three leaders to explain the OOI and how the Cyberinfrastructure Program may not only solve this set of data and compute problems, but perhaps establish a path to how future massive data and analysis problems are solved.

Here to share their story on OOI are:
  • Matthew Arrott, Project Manager at the OOI Cyberinfrastructure. Matthew's career spans more than 20 years in design leadership and engineering management for software and network systems. He’s held leadership positions at Currenex, DreamWorks SKG, Autodesk, and the National Center for Supercomputing Applications. His most recent work has been with the University of California as e-Science Program Manager while focusing on delivering the OOI Cyberinfrastructure capabilities.
  • Michael Meisinger, Managing Systems Architect for the Ocean Observatories Initiative Cyberinfrastructure. Since 2007, Michael has been employed by the University of California, San Diego. He leads a team of systems architects on the OOI Project. Prior to UC San Diego, Michael was a lead developer in an Internet startup, developing a platform for automated customer interactions and data analysis. Michael holds a master's degree in computer science from the Technical University of Munich and will soon complete a PhD in formal services-oriented computing and distributed systems architecture.
  • Alexis Richardson of VMware, a co-founder of the company behind the RabbitMQ messaging technology, who now works on VMware's cloud application platform.
The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Meisinger: The Ocean Observatories Initiative is a large, US National Science Foundation project intended to build a platform for ocean sciences with an operational life span of 30 years.

It comprises a construction period of five years and will integrate a large number of resources and assets. These range from typical oceanographic assets, like instruments that are mounted on buoys deployed in the ocean, to networking infrastructure on the cyberinfrastructure side. It also includes a large number of sophisticated software systems.

I'm the managing architect for the cyberinfrastructure, so I'm primarily concerned with the interfaces to the oceanographic infrastructure, including data interfaces and networking interfaces, and then primarily with the design of the networked hardware and software system that comprises the cyberinfrastructure.

OOI's goals include serving the science and education communities with their needs for receiving, analyzing, and manipulating ocean sciences and environmental data. This will have a large impact on the science community and on the public as a whole, because ocean sciences data is very important in understanding the changes and processes of the earth, the environment, and the climate.

Ocean sciences, as a discipline, hasn't yet received as much infrastructure and central attention as other communities, so the OOI is a very important initiative to bring this to the community. It has an almost $400 million construction budget and an annual operations budget of $70 million for a planned lifetime of 25 to 30 years.

Gardner: What are the big hurdles here in terms of compute requirements? What makes this so challenging?

Arrott: It has a number of key aspects that we had to address. It's best to start at the top of the functional requirements, which is to provide interactive mission planning and control of the overall instrumentation on the 65 independent platforms that are deployed throughout the ocean.

The issue there is how to provide a standard command-and-control infrastructure over a core set of 800 instruments, about 50 different classes of instrumentation, as well as be able to deploy -- over the 30-year lifecycle -- new instrumentation brought to us by different scientific communities for experimentation.

The next is that the mission planning and control is meant to be interactive and respond to emergent changes. So we needed an event-response infrastructure that allowed us to operate on scales from microseconds to hours in being able to detect and respond to the changes. We needed an ability to move computing throughout the network to deal with the different latency requirements that were needed for the event-response analysis.

Finally, we have computational nodes all the way down in the ocean, as well as on the shore stations, that are accepting or acquiring the data coming off the network. And we're distributing that data in real time to anyone who wants to listen to the signals to develop their own sense-and-response mechanisms, whether they're in the cloud, in their local institutions, or on their laptops.

Domain of control

The fundamental challenge was the ability to create a domain of control over instrumentation that is deployed by operators and for processing and data distribution to be agile in its deployment anywhere in the global network.

Gardner: Why is this a good time to try to solve this from a software distribution and data distribution perspective?

Richardson: It's the scale that's changed the architecture and deployment patterns that people have been using for these applications.

We can see that the OOI project is essentially providing the science needed for collaboration between vast numbers of sensors and signals and a comparatively smaller number of scientists, research institutions, and scientific applications doing analytics -- in a similar way to how Facebook combines what people say, what pictures they post, and what music they listen to with everybody's friends, and then allows an application to be attached to that.

So it’s a huge technology challenge that would have been simply infeasible 12 years ago in the year 2000, when we thought things were big, but they were not. Now, when we talk about big data being masses of terabytes and petabytes that need to be analyzed all the time, then we’re starting to glimpse what's possible with the technology that’s been created in the last 10 years.

If we had been talking about this 12 years ago, in the year 2000, we would have been talking about companies like Google and Yahoo, which seemed large then but would be considered of only moderate scale today.

Since then, many companies have appeared. For example, Facebook has many hundreds of millions of users connecting throughout the world and sharing vast amounts of data all the time.

In addition to that, many of these companies have brought out essentially a platform capability, whereby others, such as Zynga, in the case of Facebook, can create applications that run inside these networks -- social networks in the case of Facebook.

Arrott: The challenge goes beyond just the big data challenge. It also now introduces, as Alexis said, the concept of the instrument as an equal partner with the human in the participation in the network.

So you now have to think about what it means to have a device that's acting like a human in the network, and the notion that the instrument is, in fact, owned by someone and must be governed by someone, which is not the case with a human, because humans govern themselves. So it represents the notion of an autonomous agent in the network, as well as the notion that control of that agent has to stay on the network.

Gardner: I’d like to try to explain for our audience a bit more about what is going on here. We understand that we have a tremendous diversity of sensors gathering in real-time a tremendous scale of data. But we’re also talking about automating the gathering and distribution of that data to a variety of applications.

Numerical framework

We’re talking about having applications within this fabric, so that the output is not necessarily data, but is a computational numerical framework that’s then distributed. So there's a lot of data, a lot of logic, and a lot of scale. Can one of you help step me through it all a bit more to understand the architecture of what’s being conducted here?

Meisinger: The challenge, as you mentioned, is very heterogeneous. We deal with various classes of sensors, classes of data, classes of users, or even communities of users, and with classes of technological problems and solution spaces.

So the architecture is based on a tiered, or layered, model, with the most invariant things at the bottom -- things that shouldn't change over the 30-year lifetime and that therefore deserve the highest level of attention.

Then, we go into our more specialized layered architecture where we try to find optimal solutions using today’s technologies for high-speed messaging, big data, and so on. Then, we go into specialized solutions for specific groups of users and specific sensors that are there as last-mile technologies to integrate them into the system.

So you basically see an onion-layer model of the architecture, with the externalization on the outside. Then, as you go toward the core, you approach the invariants of the system.

This architecture is based on defining a common interaction format. It’s based on defining a common data format. Our architecture is strongly communication-oriented, service-oriented, message-oriented, and federated.

As Matthew mentioned, it's an important means of having the individual resources and agents provide their own policies, rather than having a central bottleneck or a central governing entity in the system that defines policies.

Strongly federated


Arrott: Think of it as four core layers. There is the underlying network resource management layer. We talk about agents; they supply that capability to any process in the system, and we treat devices as processes.

The next layer up is the data layer, and the data layer consists of two core parts. One is the distribution system that allows data to be moved in real time from the source to the interested parties. It's fundamentally a publish-subscribe (pub-sub) model. We're currently using point-to-point as well as topic-based subscriptions, but we're quickly moving toward content-based routing, which is based on a selector provided by the consumer to direct traffic toward them.

The other part of the data layer is the traditional harvesting or retrieval of data from historical repositories.

The next layer up is the analytic layer. It looks a lot like the device layer, but is focused on the management of processes that are using the big data and responding to new arrival of data in the network or change in data in the network. Finally, there is the fourth layer, which is the mission planning and control layer, which we’ll talk about later.
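
To make the topic-based subscription pattern in the data distribution layer concrete, here is a minimal sketch, assuming RabbitMQ and its Python client, pika (1.x); the exchange name, routing keys, and payload are hypothetical illustrations, not OOI code. Content-based routing goes a step further by letting the consumer supply a selector over message content or headers instead of a routing-key pattern.

```python
# Minimal pub-sub sketch with a RabbitMQ topic exchange (illustrative only).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Producers publish instrument readings under hierarchical routing keys.
channel.exchange_declare(exchange="instrument.data", exchange_type="topic")
channel.basic_publish(
    exchange="instrument.data",
    routing_key="pacific.buoy42.ctd.temperature",   # hypothetical platform/instrument
    body=b'{"value": 11.7, "units": "degC"}',
)

# A consumer subscribes only to the subset of the stream it cares about.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(
    exchange="instrument.data",
    queue=result.method.queue,
    routing_key="pacific.*.ctd.#",  # all CTD readings from Pacific platforms
)

def on_message(ch, method, properties, body):
    print(method.routing_key, body)

channel.basic_consume(queue=result.method.queue,
                      on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()
```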

Gardner: Alexis, when you saw the problem that needed to be solved here, you had a lot of experience with the Advanced Message Queuing Protocol (AMQP). Why did this problem seem to be the right fit for that particular technology, RabbitMQ, and a messaging infrastructure in general?

Richardson: What Matthew and Michael have described can be broken down into three fundamental pieces of technology.

Lot of chatter

Number one, you have a lot of chatter coming from these devices -- machines, people, and other kinds of processes -- and that needs to get to the right place. It's being chattered or twittered away, possibly at high rates and high frequencies, and it needs to get to just the set of receivers following that stream, very similar to how we understand distribution to our computers. So you need what's called pub-sub, which is a fundamental technology.

In addition, that data needs to be stored somewhere. People need to go back and audit it, to pull it out of the archive and replay it, or view it again. So you need some form of storage and reliability built into your messaging network.

Finally, you need the ability to attach applications that will be written by autonomous groups, scientists, and other people who don't necessarily talk to one another, may choose different programming languages, and may be deploying their applications, as Matthew said, on their own servers or on multiple different clouds of their choosing, through what you would like to be a common platform. So you need this to be done in a standard way.

AMQP is unique in bringing together pub-sub and reliable messaging with standards, so that this can happen. That is precisely why AMQP is important. It's like HTTP or email's SMTP, but aimed at messaging -- publish-subscribe and reliable message delivery -- in a standard way. And RabbitMQ is one of the first implementations, and that's how we ended up working with the OOI team -- because RabbitMQ provides these capabilities and does so well.
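
As a small illustration of the storage-and-reliability piece Richardson describes, here is a minimal sketch, again assuming RabbitMQ and pika; the queue name and payload are hypothetical, not OOI definitions. Durable queues and persistent messages are what let data survive a broker restart so it can be audited or replayed later.

```python
# Minimal reliable-messaging sketch: durable queue, persistent message (illustrative only).
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()

# Durable queue: its definition survives a broker restart.
channel.queue_declare(queue="sensor.archive", durable=True)

# Persistent message: written to disk so it can be replayed or audited later.
channel.basic_publish(
    exchange="",                      # default exchange routes by queue name
    routing_key="sensor.archive",
    body=b'{"platform": "buoy42", "reading": 11.7}',
    properties=pika.BasicProperties(delivery_mode=2),
)
connection.close()
```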

Gardner: I'd also like to go back to the project itself, and give our listeners a sense of what this can accomplish. I've heard it described as "the Hubble Telescope of oceans" -- the notion that we're providing capabilities that do not currently exist for oceanographers.

Let’s go back to the oceanography and the climate science. What can we accomplish with this, when this data is delivered in the fashion we’ve been discussing, where the programmability is there, where certain scientists can interact with these sensors and data, ask it to do things, and then get that information back in a format that’s not raw, but is in fact actionable intelligence?

Matthew, what could possibly happen in terms of the change in our understanding of the oceans from this type of undertaking?

Meisinger: The primary mission of our project is to provide this platform, the space telescope in the ocean. And it's not a single telescope. In our case, it's a set of 65 buoys and locations in the ocean, and even a cable that runs 1,000 miles along the seafloor of the Pacific Northwest and provides 10-gigabit Ethernet connectivity and high power to the instruments.

It’s a model where scientists have to compete. They have to compete for a slot on that infrastructure. They'll have to apply for grants and they'll have to reserve the spot, so that they can accomplish the best scientific discoveries out of that system.

It's kind of the analogy of the space telescope that will bring ocean scientists to the next level. This is our large platform, our large infrastructure, that lets the best scientists develop and research to the best results. That's the fascination that I see as part of this project.

Arrott: The way to think about this can be summed up as continual presence in the oceans at multiple scales through multiple perspectives.

The scope of the OOI is such that it is considered to be observing the ocean at multiple scales -- coastal, regional, and global. It is an expandable model. One of the largest classes of applications that we'll attach to the network is modeling, in particular nowcast and forecast modeling.

Happening at scale

Once you have that ability to actually model the oceans and predict where it’s going, you can use that to refocus the instrumentation on emergent events. It's this ability to have long-term presence in the ocean, and the ability to refocus the instrumentation on emergent events, that really represents the revolutionary change in the formation of this infrastructure.

Gardner: Is this in some ways taking the weather of the oceans?

Arrott: There's a movement to instrument the Earth, so that we can understand from observation, as opposed to speculation, what the Earth is actually doing, and from a notion of climate and climate change, what we might be doing to the Earth as participants on it.

The weather community, because of the demand for commercial need for that weather data, has been well in advance of the other environmental sciences in this regard. What you'll find is that OOI is just one of several ongoing initiatives to do exactly what weather has done.

Science more mature


Gardner: How is it that cloud computing is being brought to bear, making this productive, and perhaps even ahead of where the whole weather and predicting weather has been?

Richardson: Happily, that's an easy one. Imagine if a person or scientist wanted to process very quickly a large amount of data that's come from the oceans to build a picture of the climate, the ocean, or anything to do with the coastal properties of the North American coast. They might need to borrow 10,000 or 20,000 machines for an hour, and they might need to have a vast amount of data readily accessible to those machines.

In the cloud, you can do that, and with big data technologies today, that is a realistic proposition. It was not five to 10 years ago. It’s that simple.

Obviously, you need to have the technologies, like the messaging that we talked about, to get that data to those machines so it can be processed. But the cloud is really there to bring it all together and to make it seem to the application owner like something that's just ready for them to acquire, and when they don't need it anymore, they can put it back and someone else can use it.

Gardner: How are cloud models enabling this at an unprecedented scale, but also at an efficient cost?

Meisinger: It does enable computing at unprecedented scale. A lot of the earth's environment is changing. Assume that you’re interested in tracking the effect of a hurricane somewhere in the ocean and you’re interested in computing a very complex numerical model that provides certain predictions about currents and other variables of the ocean. You want to do that when the hurricane occurs and you want to do it quickly. Part of the strategy is to enable quick computation on demand.

The OOI architecture, in particular its common execution infrastructure subsystem, is built in order to enable this access to computation and big data very quickly. You want to be able to make use of an execution provider's infrastructure as a service very quickly to run your own models with the infrastructure that the OOI provides.

Then, there are other users that want to do things more regularly, and they might have their own hardware. They might run their own clusters, but in order to be interoperable, and in order to have excess overflow capabilities, it's very important to have cloud infrastructure as a means of making the system more homogeneous.

So the cloud is a way of abstracting compute resources of the various participants of the system, be they commercial or academic cloud computing providers or institutions that provide their own clusters as cloud systems, and they all form a large compute network, a compute fabric, so that they can run the computation in a predictable way, but also then in a very episodic way.
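
The flex-up, flex-down pattern Meisinger describes can be sketched with any infrastructure-as-a-service API. The fragment below uses the AWS SDK for Python (boto3) purely as an illustration; it is not the OOI execution engine, and the image ID, instance type, and tag are hypothetical placeholders.

```python
# Illustrative "flex up, flex down" sketch with boto3 (not OOI code).
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")

# Flex up: request a batch of worker instances when an event, such as a storm,
# demands on-demand model runs.
reservation = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical pre-built model-runner image
    InstanceType="c5.4xlarge",
    MinCount=1,
    MaxCount=20,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "nowcast-model"}],
    }],
)
instance_ids = [i["InstanceId"] for i in reservation["Instances"]]

# ... submit model-run jobs to the workers, e.g. over the messaging fabric ...

# Flex down: release the capacity as soon as the episodic computation is done.
ec2.terminate_instances(InstanceIds=instance_ids)
```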

Cloud as enabler


I really see that the cloud paradigm is one of the enablers of doing this very efficiently, and it enables us as a software infrastructure project to develop the systems, the architecture, to actually manage this computation from a system’s point of view in a central way.

Gardner: Alexis, because of AMQP and the VMware cloud application platform, it seems to me that you’ve been able to shop around for cloud resources, using the marketplace, because you’ve allowed for interoperability among and between platforms, applications, tools, and frameworks.

Is it the case that leveraging AMQP has given you the opportunity to go to where the compute resources are available at the lowest cost when that’s in your best interest?

Richardson: The dividend of interoperability for the end-user and the end-customer in this platform environment is ultimately portability -- portability through being able to choose where your application will run.

Michael described it very well. A hurricane is coming. Do you want to use the machines provided by the cloud provider here for this price? Do you want to use your own servers? Maybe your neighboring data center has servers available to you, provided those are visible and provided there is this fundamental interoperability through cloud platforms of the type that we are investing in. Then, you will be able to have that choice. And that lets you make these decisions in a way that you could not do before.

Gardner: It’s been mentioned by Alexis and others that this has some common features to Twitter or Facebook.

We think of the social environment because of the scale, complexity, and the use of cloud models. But we’re doing far more advanced computational activities here. This is simply not a display of 140 characters, based on a very rudimentary search, for example. These are at the high performance computing (HPC) level, supercomputer-level types of requests and analysis.

So are we combining the best of a social fabric approach and the architecture behind that to what we’ve been traditionally exposed to in high-performance computing and supercomputing, and what does that mean for the future?

Meisinger: This is the direction in which the future will evolve, and it’s the combination of proven patterns of interaction that are emerging out of how humans interact applied to high-performance computing. Providing a strong platform or a strong technological footprint that’s not specific to any technology is a great benefit to the community out there.

Providing a reference architecture and a reference implementation that can solve these problems, that social network for sensor networks and for device computation will be a pattern that can be leveraged by other interested participants, either by participating in the system directly or indirectly, where it’s just taking that pattern and the technologies that come with it and basically bringing it to the next level in the future. Developing it as one large project in a coherent set really yields a technology stack and architecture that will carry us far into the future.

Arrott: The incremental change that we're introducing takes the concepts of Facebook and Twitter and the notion of Dropbox, which is the ability to move a file to a shared place so someone else can pick it up later -- something that, not long ago, required putting up an FTP server or an HTTP server to accomplish.

Sharing processes

What we are now adding to the mix is not sharing just artifacts, but we're actually sharing processes with one another, and then specifically sharing instrumentation. I can say to you, "Here, have a look through my telescope." You can move it around and focus it.

Basically, we already had the concept of artifacts, or information resources; what we're now adding to the set of things that can be shared is the concept of a taskable resource.

Meisinger: This pattern is very applicable, and it’s not that frequent that a research and construction project of that size has an ability to provide an end-to-end technology solution to this challenge of big data combined with real-time analysis and real-time command and control of the infrastructure.

What I see that's evolving is, first of all, that you can take the solutions built in this project and apply them to other communities that are in need of such a solution. But then it could go further. Why not combine these communities into a larger system? Why not federate or connect all these communities into a larger infrastructure that is all based on common ideas and common standards, and that still enables open participation?

It’s a platform where you can plug in your own system or subsystem that you can then make available to whoever is connected to that platform, whoever you trust. So it can evolve into a large ecosystem, and that does not have to happen under the umbrella of one organization such as OOI.

Larger ecosystem

It can happen in a larger ecosystem of connected computing based on your own policies, your own technologies, your own standards, but where everyone shares a common piece of the same idea and can take whatever they want and not consume what they're not interested in.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, August 8, 2012

Infosys unveils Cloud Ecosystem Hub as unified enterprise gateway to hybrid cloud environments

Infosys today launched the Infosys Cloud Ecosystem Hub so enterprises can better create, adopt and govern cloud services across a business ecosystem.

The move shows the demand for managing "cloud of cloud" services, and the rapidly growing need for gaining better control over hybrid services delivery -- for both businesses and cloud services providers. I think Infosys's move also shows that one-size-fits-all public clouds will become behind-the-scenes utilities, and that managing services in a business ecosystem context is where the real value will be in cloud adoption.

Infosys says that businesses can accelerate time to market of cloud services by up to 40 percent, improve productivity by up to 20 percent, and achieve cost savings of up to 30 percent by using its Cloud Ecosystem Hub.

A unified self-service catalog feature allows users to quickly subscribe to relevant IT and business services across multiple environments. It also helps dynamically provision IT infrastructure and platforms across a hybrid cloud environment in minutes.

The smart brokerage feature of the hub provides an enterprise-wide decision support mechanism to select, compare, and deploy cloud services from across providers. Decisions can be based on evaluation of over 20 parameters, such as quality of service, technology compatibility, regulatory compliance needs, and total cost of ownership (TCO) of application workloads.
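
As a purely hypothetical illustration (not Infosys code), the kind of multi-parameter comparison described above can be thought of as a weighted score across normalized criteria; the parameter names, weights, and values below are invented for the sketch.

```python
# Illustrative weighted multi-criteria comparison of cloud providers.
WEIGHTS = {"quality_of_service": 0.35, "compliance": 0.25, "compatibility": 0.20, "tco": 0.20}

def score(provider: dict) -> float:
    """Weighted sum of normalized scores in [0, 1]; higher is better."""
    return sum(WEIGHTS[p] * provider[p] for p in WEIGHTS)

providers = {
    "provider_a": {"quality_of_service": 0.9, "compliance": 0.7, "compatibility": 0.8, "tco": 0.6},
    "provider_b": {"quality_of_service": 0.7, "compliance": 0.9, "compatibility": 0.6, "tco": 0.9},
}

best = max(providers, key=lambda name: score(providers[name]))
print(best, round(score(providers[best]), 3))
```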

The hub provides a single-window view of the enterprise cloud ecosystem and brings cohesion into what could otherwise be a fragmented IT environment across private and public clouds. It also enables easy monitoring of cloud resource usage, optimizes utilization, and provides consolidated metering and billing, enabling service chargebacks.

According to Vishnu Bhat, Vice-President and Global Head - Cloud, Infosys, "Our clients are dealing with complexities of a fragmented cloud environment. The Infosys Cloud Ecosystem Hub provides organizations a unified gateway to build, manage, and govern their hybrid cloud ecosystem. This solution allows clients to fully realize the benefits from the long-standing promise of the cloud.”


Tuesday, July 31, 2012

For Steria, cloud not so much a technology as catalyst to responsive and agile business

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance podcast series brings together a top HP cloud evangelist and a leading-edge adopter of improved IT service delivery for a major European business services provider, Steria.

We're joined by our co-host, Chief Evangelist at HP, Paul Muller, and Jean-Michel Gatelais, IT Service Management (ITSM) Solution Manager at Steria, based near Paris.

In this series, we're focusing on how IT leaders are improving performance of their services to deliver better experiences and payoffs for businesses and end-users alike. The discussion is co-hosted and moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: We have a fascinating show today, because we are going to learn about how a prominent European IT-enabled business services provider, Steria, is leveraging cloud services to manage complexity and deliver better services to customers.

Paul, is that what you are finding -- that the cloud model is starting to impact this whole notion of effective performance across services in total?

Muller: This is a conversation I've been having a lot lately. The word "cloud" gets thrown around a lot, but when I drill into the topic, I find that customers are really talking about services and integrating different services, whether they are on-premises, in the public cloud arena, or even that gray land, which is called outsourcing. [Follow Paul on Twitter.]

It's the ability to integrate those different supply models -- internal, external, publicly sourced cloud services -- that really differentiates some of the more forward-leaning organizations from those who are still trying to come to grips with what it means to adopt a cloud service.

Business opportunity

We've all come to realize that cloud isn't so much a technology issue as it is a business opportunity. It's an opportunity to improve agility and responsiveness, while also increasing flexibility of cost models, which is incredibly important, especially given the uncertain economic outlook that not only different countries have, but even different segments within different countries.

Take something like the minerals and resources areas within my own country, which are booming right now. Whereas, if you look at other areas of business, perhaps media, or particularly print media, right now, they're going through the opposite type of revolution. They're trying to work out how to adjust their cost to declining demand.

Gardner: With that, let’s move on to our guest. He's been a leading edge adopter for improving IT service delivery for many years, most recently as the IT Service Management (ITSM) Solution Manager at Steria, based near Paris. Please join me in welcoming Jean-Michel Gatelais.

Gatelais: Thank you very much. At Steria, I'm in charge of the Central ITSM Solution we provide for our customers, and I am in charge of the Global ITSM Program Roadmap, including the ongoing migration from ServiceCenter 6 to Service Manager 9. I'm also responsible for the quality of service that we deliver with this solution, and for the transition of new customers onto this platform.

Steria is an IT service provider. We are a little more than 40 years old. Our business is mainly in system integration, application management, business process outsourcing, and infrastructure management services.

We have big customers in all sectors of industry and services, such as the public sector, banking, industry, and telecom. We have customers mainly in France and the UK, but also across the whole of Europe. For example, we have British Telecom, Orange, and the public sector in the UK, with the police, and so on.

Gardner: What’s different now about IT service delivery than just say few years ago?

Gatelais: It has changed a lot. In fact, a few years ago it was something that was very atomic, with different processes and with people running the service with different tools. About three to five years ago, people began to homogenize the processes to run the service, and we saw that at Steria.

At Steria, we bought some companies and we grew. We needed to establish common processes supported by a common platform, and that's what we did with Service Manager. Now, the way we deliver service is much more mature for all the processes and for the ITSM processes.

Muller: The desire to standardize processes is a really big driver for organizations as they look to improve efficiency and effectiveness, so it's very similar to what we're seeing. In fact, I was going to ask Jean-Michel a question. When you talk about homogenizing processes or improving consistency, how does that help the organization? How does that help Steria and its customers perform better?

IT provider

Gatelais: This allows us to deliver the service whatever the location or organization, because we're an IT provider. We provide services for our customers that can be offshore, nearshore, in Steria's local premises, and even in the client's premises. The common processes and the solution allow us to do this independently of the customer. Today, with this process, we're able to run services for more than 200 customers.

Gardner: I see among your services that you are delivering cloud Workplace on Command, for example, Infrastructure On Command. Is this a bigger part of your business now? Do you find that servicing your cloud customers is dominating some of your strategic thinking?

Gatelais: Yes. Actually, it’s growing day after day. We launched our cloud offering about 18 months ago. Now we can say that we have an industrialized solution, allowing our customers to order infrastructure in a couple of minutes. And this is really integrated with the whole service management solution and the underlying infrastructure.

Gardner: I suppose this gets to this self-service mentality that we are seeing, Paul. End users are seeking a self-service type of approach. They know that they can get services quite easily through a variety of consumer-based means. They're looking for similar choice and enablement in their business dealings.

It seems that an organization like Steria is at the forefront of attracting that sense of enablement and empowerment and then delivering it through a cloud infrastructure. They're interesting on two levels: one, they're delivering cloud and enablement, but they are also using cloud to power their own ability to do so.

Muller: We see almost a contradiction within enterprise users of cloud. We see groups that will quite readily go out and adopt cloud services. The so-called consumerization trend is quite prevalent, especially with what I would describe as simple services. For example, office automation tools, collaboration tools, etc.

Yet, simultaneously, we see reluctance sometimes, particularly for the IT organization, to let go and cloud source services and applications. I sometimes refer to them as "application huggers" or "server huggers."

Relinquish control

In other words, if they can’t see it or touch it, they're reluctant to relinquish control. The most fascinating part for me is that you can often find those two behaviors inside the very same organization. Sometimes, the same person can have diametrically opposed views about the respective merits of those two approaches.

Gardner: Are you selling and delivering cloud services to the IT department or others? Maybe we could call that shadow IT, Jean-Michel?

Gatelais: We do both. In fact, the cloud today is used both for internal organizations and for our customers. Setting up a cloud offering requires studying a business model, studying the way we will sell such a service. For us, at the central level at Steria, there is no difference between internal delivery and delivery for our customers.

In fact, what we're trying to do is to standardize, as much as possible, the basic offering we propose. On top of that, we have additional requests from our customers. Then, we try to adapt our offering to the specific request.

Providing infrastructure services is not so difficult, but providing platform-as-a-service (PaaS) features can be. Even software as a service (SaaS) can be simpler than PaaS, because you provide packaged services, startup services, instead of platform services. It's very customer-specific.

Gardner: So you have the opportunity to go with a fairly standardized approach, but then you can customize on top of that. I'd like to hear some more about your different services. I understand that there’s something called Steria Advanced Remote Services or STARS. How does that fit into the mix, Jean-Michel?

Gatelais: STARS is the ITSM platform Steria rolled out about five years ago, and today this is a framework. It's mainly based on HP products, because it's running on HP Service Manager online, Business Service Manager (BSM), and Operations Orchestration.

We see this platform as both a service-support platform and a service-enabler, because we use it to manage and activate the services we propose to our customers, including cloud services, security services, and our new Workplace On Command offering.

STARS is the solution to manage value-added services Steria is offering to its customers.

Muller: When a customer thinks about taking services that maybe they used to run internally and moving those services to Steria, how important is it for them to maintain visibility and control, as they are thinking about moving to cloud?

Depends on the customers

Gatelais: It depends on the customers. You have some customers that are ready to use the services you provide on a common environment, but you also have customers requiring more specific solutions that we can give to them. Steria is developing some facilities to roll out and to instantiate the platforms for dedicated environments.

For example, we can deploy and instantiate the STARS solution, with Service Manager included, when the customer requires it.

Muller: Just following on from that, there's a perception that when you move to cloud services, people don't really care about visibility, metrics, and service-level reports, because that's all part of the service-level agreement (SLA). Do you find that customers actually want to see how their service is performing -- what's the availability and level of security? Do they look for that level of reporting from you?

Gatelais: It depends on the customers. Some are really outsourcing the services. They would only complain if they met some problems on the services.

But other customers want to have the visibility on the quality of service that is delivered by Steria. That means that we need to be able to publish the SLA we have for our offering, but also to publish monthly, for example, the key performance indicators (KPIs) of this platform.

Muller: And that is certainly a perfect question, because, Dana, it’s the KPI discussion that is of such great interest to enterprises today.

Gardner: Right, and I'm impressed that Steria can manage this variety and provide each of these customers what they want on their own terms, which, as you point out, is really what they're calling for.

For you as a provider, that must really amount to quite a bit of complexity. How do you get a handle on that ability to maintain your own profitability while dealing with this level of variability and the different KPIs and giving the visibility to them?

Gatelais: One of the advantages of the cloud structure is that you have to ask these questions in advance. That means that when Steria is designing a new offering, we first design the business model. In fact, that will allow us either to propose some shared services, or for the client that has requested it, some visibility to the services, but based on standard platforms. We try to remain standard in what we propose, and the flexibility is in the configuration of what we propose.

We provide the KPIs that are published for the service offering. This will include such information as service availability rates, outage problems, change management, and also activity reporting.

Strategic decisions

Gardner: Do you have any examples?

Gatelais: Yes. The example I can give is the flexibility the service offering can give to the customers in the software development area.

For example, it allows you to set up development platforms for a limited period of time, allowing product development. With the service we offer, when the project is finished and you enter into application management mode, the client is able to say, "Stop the server." It's backed up, and if six months later the customer wants to develop a new release of this software, then we restore the environment. In the meantime, they won't have the use of the platform, but they'll be able to continue their development. This is very flexible.

Muller: The interesting part is that development and test is such a resource-intensive process while you're in the middle of it. But the minute you're done with it, you go from being almost 100 percent busy and consuming 100 percent of the resources to, in some cases, doing nothing, as Jean-Michel said, for months, possibly even years, depending on the nature of the project.

The notion of tying all of that capital equipment up and leaving it idle for that period of time is simply not tenable. The idea of moving all of that into a flex up-flex down model is probably one of the single most commonly pursued use cases for both public and private cloud today.

The other one, as Jean-Michel has already spoken to, is that the idea of more discrete services, particularly that of helpdesk, is just going crazy in terms of adoption by customers.

Gardner: One of the things I'm seeing is that some of the vision for cloud a few years ago was that one size would fit all, that it would be cookie cutter, and that there wouldn't be a need for high variability. But I think what we're actually seeing in practice, and Jean-Michel is certainly highlighting this, is that the KPIs are going to be different for different organizations.

There are going to be different requirements for public and private, large and small, jurisdiction by jurisdiction, regulation and compliance. You really need to be able to have the flexibility, not just at the level of infrastructure, but at the level of the types of services, the way that they're built, invoiced, and measured and delivered.

Gatelais: The way we propose the services makes them interesting for small organizations, because they don't have to invest heavily in solutions, and we're able to propose shared solutions. This is SaaS, this is cloud, and for them it's very interesting, because it is much cheaper.

Gardner: What do you advise others who would be pursuing a similar objective?

Gatelais: With such offerings, you have to design and think much more than before -- to think before rolling out your solution. You need to be clear on what you want to propose, to what kind of customers, and where the market is, and then design your offering according to this. Then, build your business model according to those assumptions.

KPIs that matter

Muller: Right now, I've got a couple of metrics, a couple of KPIs, that matter to me really deeply. From your perspective, are there one or two KPIs that you're looking at at the moment that either make you really happy or that are a cause for concern, as you think about the business and delivering your services? What are the KPIs that matter to you?

Gatelais: What is very difficult for new services is to evaluate the actual return on investment (ROI). You can establish a business model, a business plan, to see whether you will make some profit with what you do, but it's much more difficult to evaluate the ROI.

If I don't buy this service, it would cost me a certain amount; if I buy this service, okay, it will cost the service fee, but what would I spend on top of that? This is very difficult to measure.

It may be basic, but take the configuration management process. That is very important, even in cloud offerings. It's very difficult to demonstrate that if you do some configuration management, you will have a higher ROI than if you don't do it.

Today, even internally at Steria, it's much more difficult to get approval to develop and improve configuration management, because people don't see the benefit, as you don't sell it directly. It's just a means to improve your service.

Muller: That’s such a good point. And Dana, it's one of the great benefits. This is going to sound a little bit like an infomercial, but it's worth stating. One of the reasons we've been moving so much of our own management software to the cloud is because it's behind the scenes. It's often seen as plumbing, and people are reluctant to invest often in infrastructure and plumbing, until it has proven its benefit.

It's one of the reasons we've moved to a more variable cost model, or at least have made it available for organizations who might want to dip their toe in the water and show some benefits before they invest more heavily over time.

Distinct line


Gardner: You're really starting to put in place the mechanisms for determining quite distinctly what the payoffs are from investments in IT at that critical business payoff level. So I think that’s a very interesting development in the market.

Muller: The transparency improves, and because you have a variable cost model, it lowers the pain threshold in terms of people being willing to experiment with an idea, see if it works, see if it has that payoff, that ROI. If it doesn’t, stop doing it, and if it does, do more of it. It's really, really very simple.

Gardner: Our audience can carry on this dialogue with Paul Muller through the Discover Performance Group on LinkedIn.

You can also gain more insights and gather more information on the best of IT performance management at www.hp.com/go/discoverperformance.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, July 24, 2012

Summer in the Capital -- Looking back at The Open Group Conference in Washington, D.C.

This guest post comes courtesy of Jim Hietala, Vice President of Security at The Open Group.

By Jim Hietala

This past week in Washington D.C., The Open Group held our Q3 conference. The theme for the event was "Cybersecurity – Defend Critical Assets and Secure the Global Supply Chain," and the conference featured a number of thought-provoking speakers and presentations.

Cybersecurity is at a critical juncture, and conference speakers highlighted the threat and attack reality and described industry efforts to move forward in important areas. The conference also featured a new capability, as several of the events were livestreamed to the Internet.

For those who did not make the event, here's a summary of a few of the key presentations, as well as what The Open Group is doing in these areas. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Joel Brenner, attorney with Cooley, was our first keynote. Joel's presentation was titled, “Turning Us Inside-Out: Crime and Economic Espionage on our Networks.” The talk mirrored his recent book, “America the Vulnerable: Inside the New Threat Matrix of Digital Espionage, Crime, and Warfare,” and Joel talked about current threats to critical infrastructure, attack trends, and challenges in securing information.

Joel's presentation was a wakeup call to the very real issues of IP theft and identity theft. Beyond describing the threat and attack landscape, Joel discussed some of the management challenges related to ownership of the problem, namely that the different stakeholders in addressing cybersecurity in companies, including legal, technical, management, and HR, all tend to think that this is someone else's problem. Joel stated the need for policy spanning the entire organization to fully address the problem.

Kristin Baldwin, principal deputy, systems engineering, Office of the Assistant Secretary of Defense, Research and Engineering, described the U.S. Department of Defense (DoD) Trusted Defense Systems Strategy and challenges, including requirements to secure their multi-tiered supply chain. She also talked about how the acquisition landscape has changed over the past few years.

In addition, for all programs, the DoD now requires the creation of a program protection plan, which is the single focal point for security activities on the program. Kristin's takeaways included needing a holistic approach to security, focusing attention on the threat, and avoiding risk exposure from gaps and seams.

Overarching framework

DoD’s Trusted Defense Systems Strategy provides an overarching framework for trusted systems. Stakeholder integration with acquisition, intelligence, engineering, industry, and research communities is key to success. Systems engineering brings these stakeholders, risk trades, policy, and design decisions together. Kristin also stressed the importance of informing leadership early and providing programs with risk-based options.

Dr. Ron Ross of NIST described a perfect storm: the proliferation of information systems and networks and the increasing sophistication of threats, resulting in a growing number of penetrations of information systems in the public and private sectors that potentially affect security and privacy. He proposed the need for an integrated project team approach to information security.

Dr. Ross also provided an overview of the changes coming in NIST SP 800-53, version 4, which is presently available in draft form. He also advocated a dual protection strategy involving traditional controls at network perimeters, which assume attackers are outside of organizational networks, as well as agile defenses, which assume attackers are already inside the perimeter.

The objective of agile defenses is to enable operation while under attack and to minimize response times to ongoing attacks. This new approach mirrors thinking from the Jericho Forum and others on de-perimeterization and security and is very welcome.

The Open Group Trusted Technology Forum provided a panel discussion on supply chain security issues and the approach that the forum is taking towards addressing issues relating to taint and counterfeit in products.

The panel included Andras Szakal of IBM, Edna Conway of Cisco and Dan Reddy of EMC, as well as Dave Lounsbury, CTO of The Open Group. OTTF continues to make great progress in the area of supply chain security, having published a snapshot of the Open Trusted Technology Provider Framework, working to create a conformance program, and in working to harmonize with other standards activities.

Dave Hornford, partner at Conexiam and chair of The Open Group Architecture Forum, provided a thought-provoking presentation titled, "Secure Business Architecture, or just Security Architecture?" Dave's talk described the problems in approaches that are purely focused on securing against threats and brought forth the idea that focusing on secure business architecture is a better methodology for ensuring that stakeholders have visibility into risks and benefits.

Positive and negative

Geoff Besko, CEO of Seccuris and co-leader of the security integration project for the next version of TOGAF, delivered a presentation that looked at risk from both a positive and a negative view. He recognized that senior management frequently view risk as something to embrace, taking risks with an eye on business gains in revenue, market share, or profitability, while security practitioners tend to focus on risk as something to be mitigated. Finding common ground is key here.

Katie Lewin, who is responsible for the GSA FedRAMP program, provided an overview of the program, and how it is helping raise the bar for federal agency use of secure cloud computing.

The conference also featured a workshop on security automation, which featured presentations on a number of standards efforts in this area, including on SCAP, O-ACEML from The Open Group, MILE, NEA, AVOS and SACM. One conclusion from the workshop was that there's presently a gap and a need for a higher level security automation architecture encompassing the many lower level protocols and standards that exist in the security automation area.

In addition to the public conference, a number of forums of The Open Group met in working sessions to advance their work in the Capitol. These included:
All in all, the conference clarified the magnitude of the cybersecurity threat, and the importance of initiatives from The Open Group and elsewhere to make progress on real solutions.

Join us at our next conference in Barcelona on October 22-25!

This guest post comes courtesy of Jim Hietala, Vice President of Security at The Open Group. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.


Monday, July 23, 2012

With CMS 10, HP puts workload configuration data newly in hands of those who can best use it to manage services delivery

HP today introduced HP Configuration Management System (CMS) 10, a broad update designed to give more types of IT leaders better insight and control over everything from discrete IT devices to complete services-enabled business processes.

Especially important for the operational control of hybrid services delivery and converged cloud implementations, CMS 10 gathers and shares the configuration patterns and characteristics of highly virtualized workloads. The update helps manage dynamic virtualized applications both inside enterprise data centers as well as leading clouds.

"CMS 10 improves control of converged clouds," said Jimmy Augustine, product marketing manager at HP Software. "It sees the virtual machines and updates the Universal Configuration Management Data Base (UCMDB) with the dynamic information from public and private clouds."

With the new software, HP says clients can reduce costs and risks associated with service disruptions while reducing the time spent on manual discovery by more than 50 percent thanks to automated discovery capabilities. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

With the growing adoption of cloud computing, organizations are under increased pressure to deliver new services and scale existing ones. The complexities of cloud-based infrastructures coupled with a lack of visibility have hampered organizations’ ability to efficiently and predictably manage IT performance.

“Service disruptions within complex cloud and virtualized environments are difficult to identify and resolve,” said Shane Pearson, vice president, Product Marketing, Operations, Software, HP. “With the new enhancements to HP Configuration Management System, IT executives now have the configuration intelligence they need at their fingertips to make rapid decisions to ensure consistent business service availability.”

CMS 10 also introduces new capabilities specifically for service lifecycle design and operations, notably within both business service management (BSM) and IT service management (ITSM).

I was especially impressed by the ability of CMS 10 users to extend the view of operations to business process analysts, enterprise architects and DevOps managers -- all provided by a new browser-based access and query capability. These business-function-focused leaders can seek out the information they need to cut through the complexity of systems data to measure and react to how an entire application or processes are behaving systemically.

What's more, CMS 10-level insights can be extended to security professionals and business architects to gather data on compliance and performance, and even to better architect the next process or hybrid services mix. The fact that CMS 10 already supports many VM and cloud types shows the importance of ensuring configuration conformity as a baseline capability for hybrid cloud use.

The CMS update supports virtual machines more broadly, adds multi-tenancy support to appeal to service providers, and delivers its outputs via web browsers and search interfaces. "You can see the full applications support infrastructure, and discover out of the box the whole workload support," says Augustine.

More specifically, the new HP CMS 10 includes HP Universal Discovery with Content Pack 11, HP Universal Configuration Management Database (UCMDB), HP UCMDB Configuration Manager, and HP UCMDB Browser. With the new solution, enterprises, governments and managed service providers (MSPs) can now:
  • Quickly discover software and hardware inventory, as well as associated dependencies in a single unified discovery solution

  • Speed time to value with the product’s simplified user interface and enhanced scalability, allowing all IT teams to consume and act on the rich intelligence hosted in the HP CMS
  • More easily manage multiple client environments within a single UCMDB with improved security, automation and scalability
  • Automatically locate and catalog new technologies related to network hardware, open source middleware, storage, ERP, and infrastructure software providers
  • Introduce new server compliance thresholds
HP CMS 10 is a key component of the HP IT Performance Suite, an enterprise performance software platform that delivers operational intelligence for many types of users and uses.

HP CMS, currently available worldwide in 10 languages, is also available through HP channel partners. More information about CMS 10 is available at www.hp.com/go/CMS.
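To make the idea of configuration intelligence a bit more concrete, here is a minimal, generic sketch of the kind of data a configuration management system keeps: configuration items (CIs) linked by dependencies, plus an impact-style query over that graph. The class and item names are hypothetical, and this is not HP's UCMDB API; it only illustrates why knowing workload dependencies matters when a device or VM misbehaves.

```python
# Generic illustration of a configuration management data model.
# Not HP's UCMDB API -- all names here are hypothetical.
from collections import defaultdict

class ConfigStore:
    def __init__(self):
        # Maps each configuration item (CI) to the CIs that depend on it.
        self.dependents = defaultdict(set)

    def add_dependency(self, ci, depends_on):
        """Record that `ci` depends on `depends_on`."""
        self.dependents[depends_on].add(ci)

    def impacted_by(self, ci):
        """Every CI directly or transitively affected if `ci` fails."""
        impacted, stack = set(), [ci]
        while stack:
            for dependent in self.dependents[stack.pop()]:
                if dependent not in impacted:
                    impacted.add(dependent)
                    stack.append(dependent)
        return impacted

# A hypothetical hybrid workload: a CRM service spanning a private-cloud VM
# and a database instance running in a public cloud.
store = ConfigStore()
store.add_dependency("crm-service", "crm-app")
store.add_dependency("crm-app", "app-vm-01")
store.add_dependency("crm-app", "customer-db")
store.add_dependency("customer-db", "db-instance-cloud")

print(sorted(store.impacted_by("db-instance-cloud")))
# ['crm-app', 'crm-service', 'customer-db']
```

A real CMS layers automated discovery, reconciliation, and multi-tenancy on top of this kind of graph, but the query above captures the question an operator asks when deciding which business services are at risk from a failing component.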

Wednesday, July 18, 2012

User behavior data open to misuse without privacy and identification standards, says Open Group tweet jam community

The uncharted territory of user behavior data based on what users do in such web walled gardens as Facebook was the focus of a "tweet jam" last week organized by The Open Group.

Some of the many notable participants in the tweet jam, held around the hashtag #ogChat on July 11, worried about the prospect of misuse of user identity and behavior data, but were more mixed on what to do about it. I was the moderator of the tweet jam. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

With hundreds of tweets flying at breakneck pace, the #ogChat saw a very spirited discussion on the Internet's movement toward a walled garden model. In case you missed the conversation, you're now in luck! Here's a recap:

The full list of participants included:
Here is a high-level snapshot of the #ogChat:

Shift from open Internet

Q1 In the context of #WWW, why has there been a shift from the open Internet to portals, apps and walled environs? #ogChat

Participants generally agreed that the walled garden trend has been driven by two factors: companies and developers wanting more control, and users' desire to feel "safer."

  • @charleneli: Q1 Peeps & developers like order, structure, certainty. Control can provide that. But too much and they leave. #ogChat.
  • @Technodad: User info & contributions are raw material of walled sites-"If you're not paying for the service, the product being sold is you". #ogChat
  • @Dana_Gardner: @JohnFontana What about the meta data that they can own by registering you? #ogChat

    • In response to: @JohnFontana Q1 Eyeballs proved worthless; souls can make you some real money. #ogChat

    • @charleneli: @Dana_Gardner re: Meta data -- once you join a community, there has to be a level of trust. If they respect data, people will trust. #ogChat
  • @AlanWebber #ogChat Q1 - People feel safer inside the "Walls" but don't realize what they are losing
Privacy/control

Q2 How has this trend affected privacy/control? Do users have enough control over their IDs/content within #walledgarden networks? #ogChat


This was a hot topic as participants debated the trade-offs between great content and privacy controls. Questions also emerged about where data is used and leaked, since walled gardens are known to have back doors.
  • @AlanWebber: But do people understand what they are giving up inside the walls? #ogChat
  • @TheTonyBradley: Q2 -- Yes and no. Users have more control than they're aware of, but for many its too complex and cumbersome to manage properly.#ogchat
  • @jim_hietala: #ogChat Q2 privacy and control trade offs need to be made more obvious, visible

  • @zdFYRashid: Q2 users assume that #walledgarden means nothing leaves, so they think privacy is implied. They don't realize that isn't the case#ogchat
  • @JohnFontana: Q2 Notion is wall and gate is at the front of garden where users enter. It's the back that is open and leaking their data #ogchat
  • @subreyes94: #ogchat .@DanaGardner More walls coming down through integration. FB and Twitter are becoming de facto login credentials for other sites
Social and mobile

Q3 What has been the role of social and #mobile in developing #walledgardens? Have they accelerated this trend? #ogChat


Everyone agreed that social and mobile catalyzed the formation of walled garden networks. Many also gave a nod to location as a nascent driver.
  • @jaycross: Q3 Mobile adds your location to potential violations of privacy. It's like being under surveillance. Not very far along yet. #ogChat
  • @charleneli: Q3: Mobile apps make it easier to access, reinforcing behavior. But also enables new connections a la Zynga that can escape #ogChat

  • @subreyes94: #ogChatQ3 They have accelerated the always-inside the club. The walls have risen to keep info inside not keep people out.

    • @Technodad: @subreyes94 Humans are social, want to belong to community & be in touch with others "in the group". Will pay admission fee of info. #ogChat

Current web

Q4 Can people use the internet today without joining a walled garden network? What does this say about the current web? #ogChat


Participants drew many parallels between the real and virtual worlds. Walled gardens provide a sense of exclusivity that humans seek out by nature. A generational gap also emerged, as many participants cited their parents as not being part of any walled garden network.
  • @TheTonyBradley: Q4 -- You can, the question is "would you want to?" You can still shop Amazon or get directions from Mapquest. #ogchat
  • @zdFYRashid: Q4 people can use the internet without joining a walled garden, but they don't want to play where no one is. #ogchat

  • @JohnFontana: Q4 I believe we are headed to a time when people will buy back their anonymity. That is the next social biz. #ogchat
Owning information

Q5 Is there any way to reconcile the ideals of the early web with the need for companies to own information about users? #ogChat


While walled gardens have emerged, the consumerization of the Internet and social media has driven user participation and empowered users to create content within them.
  • @JohnFontana: Q5 - It is going to take identity, personal data lockers, etc. to reconcile the two. Wall-garden greed heads can't police themselves#ogchat
  • @charleneli:Q5: Early Web optimism was less about being open more about participation. B4 you needed to know HTML. Now it's fill in a box. #ogChat

  • @Dana_Gardner: Q5 Early web was more a one-way street, info to a user. Now it's a mix-master of social goo. No one knows what the goo is, tho. #ogChat
  • @AlanWebber: Q5, Once there are too many walls, people will begin to look on to the next (virtual) world. Happening already #ogChat
Next iteration

Q6 What #Web2.0 lessons learned should be implemented into the next iteration of the web? How to fix this? #ogChat


Identity was the most common topic for the sixth and final question. Single sign-on, personal identities on mobile phones/passports, and privacy seemed to be the biggest issues facing the next iteration of the web.
  • @Technodad: Q6 Common identity is a key - need portable, mutually-recognized IDs that can be used for access control of shared info. #ogChat
  • @JohnFontana: Q6 Users want to be digital. Give them ways to do that safely and privately if so desired. #ogChat

  • @TheTonyBradley: Q6 -- Single ID has pros and cons. Convenient to login everywhere with FB credentials, but also a security Achilles heel.#ogchat

Thank you to all the participants who made this such a great discussion!

Incidentally, the model of a tweet jam or tweet-up on IT subjects of interest is a great way to gather insights and make a social splash too. This #ogChat was a top trending subject on Twitter during and after the online event. I'd be happy to do more of these as a moderator or participant on a subject near and dear to you and your community.

Counting the cost of cloud

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.

By Chris Harding

IT costs were always a worry, but only an occasional one. Cloud computing has changed that.

Here's how it used to be. The New System was proposed. Costs were estimated, more or less accurately, for computing resources, staff increases, maintenance contracts, consultants and outsourcing. The battle was fought, the New System was approved, the checks were signed, and everyone could forget about costs for a while and concentrate on other issues, such as making the New System actually work.

One of the essential characteristics of cloud computing is "measured service." Resource usage is measured by the byte transmitted, the byte stored, and the millisecond of processing time. Charges are broken down by the hour, and billed by the month. This can change the way people take decisions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

"The New System is really popular. It's being used much more than expected."

"Hey, that's great!"

Then, you might have heard,

"But this means we are running out of capacity. Performance is degrading. Users are starting to complain."

"There's no budget for an upgrade. The users will have to lump it."


Now the conversation goes down a slightly different path.

"Our monthly compute costs are twice what we budgeted."

"We can't afford that. You must do something!"


Possible and necessary

And something will be done, either to tune the running of the system, or to pass the costs on to the users. Cloud computing is making professional day-to-day cost control of IT resource use both possible and necessary.

This starts at the planning stage. For a new cloud system, estimates should include models of how costs and revenue relate to usage. Approval is then based on an understanding of the returns on investment in likely usage scenarios. And the models form the basis of day-to-day cost control during the system's life.
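
As one way of reading that requirement, the "model" can be as simple as cost and revenue expressed as functions of a single usage variable, evaluated for the likely scenarios and solved for break-even. A minimal sketch follows; every figure in it is invented for illustration.

```python
# Planning-stage sketch: cost and revenue as functions of usage.
# All figures are invented for illustration.
FIXED_COST_PER_MONTH = 4_000.0   # staff, licences, support
COST_PER_USER        = 1.50      # metered cloud resources consumed per active user
REVENUE_PER_USER     = 2.25      # subscription or chargeback income per active user

def monthly_cost(active_users):
    return FIXED_COST_PER_MONTH + COST_PER_USER * active_users

def monthly_revenue(active_users):
    return REVENUE_PER_USER * active_users

def monthly_margin(active_users):
    return monthly_revenue(active_users) - monthly_cost(active_users)

# Break-even: revenue == cost  =>  users = fixed cost / (revenue per user - cost per user)
break_even_users = FIXED_COST_PER_MONTH / (REVENUE_PER_USER - COST_PER_USER)

for users in (2_000, 6_000, 12_000):   # pessimistic, expected, optimistic scenarios
    print(f"{users:>6} users: monthly margin ${monthly_margin(users):>10,.2f}")
print(f"break-even at roughly {break_even_users:,.0f} active users per month")
```

The same functions that support the approval decision can then be re-checked each month against the metered usage figures, which is the day-to-day cost control described above.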

Last year's Open Group “State of the Industry” cloud survey found that 55 percent of respondents thought that the return on investment (ROI) of cloud computing against their organizations' business requirements would be easy to evaluate and justify, but only 35 percent of respondents' organizations had mechanisms in place to do so. Clearly, the need for cost control based on an understanding of the return was not widely appreciated in the industry at that time.

We are repeating the survey this year. It will be very interesting to see whether the picture has changed.

Participation in the survey is still open. To add your experience and help improve industry understanding of the use of cloud computing, visit: http://www.surveymonkey.com/s/TheOpenGroup_2012CloudROI

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.
