Tuesday, July 9, 2013

As Platform 3.0 ripens, expect agile access and distribution of actionable intelligence across enterprises, says The Open Group panel

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

This latest BriefingsDirect discussion, leading into The Open Group Conference on July 15 in Philadelphia, brings together a panel of experts to explore the business implications of the current shift to so-called Platform 3.0.

Known as the new model through which big data, cloud, mobile, and social -- in combination -- allow for advanced intelligence and automation in business, Platform 3.0 has so far lacked standards or even clear definitions.

The Open Group and its community are poised to change that, and we're here now to learn more about how to leverage Platform 3.0 as more than an IT shift -- as a business game-changer. It will be a big topic at next week's conference.

The panel: Dave Lounsbury, Chief Technical Officer at The Open Group; Chris Harding, Director of Interoperability at The Open Group; and Mark Skilton, Global Director in the Strategy Office at Capgemini. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, which is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: A lot of people are still wrapping their minds around this notion of Platform 3.0, something that is a whole greater than the sum of the parts. Why is this more than an IT conversation or a shift in how things are delivered? Why are the business implications momentous?

Lounsbury: Well, Dana, there are a lot of IT changes or technical changes going on that are bringing together a lot of factors. They're turning into this sort of super-saturated solution of ideas and possibilities, and into this emerging idea that this represents a new platform. I think it's a pretty fundamental change.

If you look at history, not just the history of IT, but all of human history, you see that step changes in societies and organizations are frequently driven by communication or connectedness. Think about the evolution of speech or the invention of the alphabet or movable-type printing. These technical innovations that we’re seeing are bringing together these vast sources of data about the world around us and doing it in real time.

Further, we're starting to see a lot of rapid evolution in how you turn data into information and present that information in a way that people can make decisions on. Given all that, we're starting to realize we're on the cusp of another step in connectedness and awareness.

Fundamental changes

This really is going to drive some fundamental changes in the way we organize ourselves. Part of what The Open Group is doing in bringing Platform 3.0 together is to get ahead of this and make sure that we understand not just what technical standards are needed, but how businesses will need to adapt and evolve, and what business processes they need to put in place, in order to take maximum advantage of this change in the way that we look at information.

Harding: Enterprises have to keep up with the way that things are moving in order to keep their positions in their industries. Enterprises can't afford to be working with yesterday's technology. It's a case of being able to understand the information that they're presented with, and to make the best decisions.

We've always talked about computers being about input, process, and output. Years ago, the input might have been through a teletype, the processing on a computer in the back office, and the output on print-out paper.

Now, we're talking about the input being through a range of sensors and social media, the processing is done on the cloud, and the output goes to your mobile device, so you have it wherever you are when you need it. Enterprises that stick in the past are probably going to suffer.

Gardner: Mark Skilton, the ability to manage data at greater speed and scale -- the whole three Vs of velocity, volume, and value -- could on its own be a game-changing shift in the market. The drive of mobile devices into the lives of both consumers and workers is also a very big deal.

Of course, cloud has been an ongoing evolution of emphasis towards agility and efficiency in how workloads are supported. But is there something about the combination of how these are coming together at this particular time that, in your opinion, substantiates The Open Group’s emphasis on this as a literal platform shift?

Skilton: It is exactly that in terms of the workloads. The world we're now into is the multi-workload environment, where you have mobile workloads, storage and compute workloads, and social networking workloads. There are many different types of data and traffic today in different cloud platforms and devices.

It has to do with not just one solution, not one subscription model, because we're now into this subscription-model era -- the subscription economy, as one group tends to describe it. Now, we're looking at not just providing the security and the infrastructure to deliver this kind of capability to a mobile device, as Chris was saying. The question is, how can you do this horizontally across other platforms? How can you integrate these things? This is something that is critical to the new order.

So Platform 3.0 is addressing this point by bringing these together. Just look at the numbers, at the scale we're dealing with: 1.7 billion mobile devices sold in 2012, and an estimated 6.8 billion mobile subscriptions according to the International Telecommunication Union (ITU), equivalent to 96 percent of the world population.

Massive growth

We've had massive growth in the scale of mobile data traffic and internet data expansion. Mobile data traffic is forecast to increase 18-fold from 2011 to 2016, reaching 130 exabytes annually. We passed 1 zettabyte of global online data storage back in 2010, and IP data traffic is predicted to pass 1.3 zettabytes by 2016, with internet video accounting for 61 percent of total internet data, according to Cisco studies.

These studies also predict that data-center traffic, combining network- and internet-based storage, will reach 6.6 zettabytes annually by 2016, with nearly two-thirds of it cloud-based. This is only going to grow, as social networking reaches nearly one in four people around the world, with 1.7 billion using at least one form of social networking in 2013, rising to one in three people -- a 2.55 billion global audience -- by 2017, according to an eMarketing.com study.

It is not surprising that many industry analysts see growth in the converging technologies of mobility, social computing, big data, and cloud at 30 to 40 percent, and that the shift to B2C commerce, which passed $1 trillion in 2012, is just the start of a wider digital transformation.

These numbers speak volumes in terms of the integration, interoperability, and connection of the new types of business and social realities that we have today.

Gardner: Why should IT be thinking about this as a fundamental shift, rather than a modest change?

Lounsbury: A lot depends on how you define your IT organization. It's useful to separate the plumbing from the water. If we think of the water as the information that’s flowing, it's how we make sure that the water is pure and getting to the places where you need to have the taps, where you need to have the water, etc.

But the plumbing also has to be up to the job. It needs to have the capacity. It needs to have new tools to filter out the impurities from the water. There's no point giving someone data if it's not been properly managed or if there's incorrect information.

What's going to happen in IT is that not only do we have to focus on the mechanics of the plumbing -- where we see things like the big-data databases coming out of the open-source world and things of that nature -- but there are also the analytics and the data-stewardship aspects of it.

We need to bring in mechanisms, so the data is valid and kept up to date. We need to indicate its freshness to the decision makers. Furthermore, IT is going to be called upon -- whether as part of enterprise IT, or where end users drive the selection -- to provide the analytic tools and recommendation tools that take the data and turn it into information. One of the things you can't do with business decision makers is overwhelm them with big rafts of data and expect them to figure it out.

You really need to present the information in a way that they can use to quickly make business decisions. That is an addition to the role of IT that may not have been there traditionally -- how you think about the data and the role of what, in the beginning, was called data scientist and things of that nature.

Shift in constituency

Skilton: I'd just like to add to Dave's excellent points: the shape of data has changed, but so has the reason IT should get involved. We're seeing a shift in the constituency of who is using this data.

We have the Chief Marketing Officer, the Chief Procurement Officer, and other key line-of-business managers taking more direct control over the uses of information technology that enable their channels and interactions through mobile, social, and data analytics. We've got processes that were previously managed just by IT now being consumed by significant stakeholders and investors in the organization.

We have to recognize in IT that we are the masters of our own destiny. The information needs to be delivered to new types of mobile devices, with new types of data intelligence and new ways of delivering this kind of service.

I recently read an article in MIT Sloan Management Review that asked what the role of the CIO is. There is still the critical role of managing the security, compliance, and performance of these systems. But there's also a socialization of IT, and this is where positioning architectures that work across platforms is key to delivering real value to the business users in the IT community.

Gardner: How do we prevent this from going off the rails?

Harding: This is a very important point. And to add to the difficulties, it's not only that a whole set of different people are getting involved with different kinds of information; there's also a step change in the speed with which all this is delivered. It's no longer the case that you can say, "Oh well, we need some kind of information system to manage this information. We'll procure it and get a program written," and a year later it would be in place, delivering reports.

Now, people are looking to make sense of this information on the fly if possible. It's really a case of having the standard technology platform, and also the systems and business processes for using it, understood and in place.

Then you can do all these things quickly, build on what people have learned in the past, and not go off into all sorts of new experimental things that might not lead anywhere. It's a case of building up the standard platform and industry best practice. This is where The Open Group can really help things along, by being a recipient and a reflector of best practice and standards.

Skilton: Capgemini has been doing work in this area. I break it down into four levels of scalability. The first is platform scalability: understanding what you can do with your current legacy systems when introducing cloud computing or big data, and the infrastructure that gives you what we call multiplexing of resources. We're very much seeing this idea of introducing scalable platform resource management, and you see that a lot with the heritage of virtualization.

The second is network scalability. A lot of customers who inherited old telecommunications networks are looking to introduce new MPLS-type scalable networks. The reason for this is that it's all about connectivity in the field. I meet a number of clients who are saying, "We've got this cloud service," or "This service is in a certain area of my country. If I move to another part of the country, or I'm traveling, I can't get connectivity." That's the big issue of scaling.

The third is application programming interfaces (APIs). What we're seeing now is an explosion of integration and application services using API connectivity, and these are creating huge opportunities for what Chris Anderson of Wired called the "long tail" effect. It is now a reality in terms of building the kind of social connectivity and data exchange that Dave was talking about.

Finally, there are the marketplaces. Companies need to think about what online marketplaces they need for digital branding, social branding, social networks, and awareness of their customers, suppliers, and employees. These four levels are where customers need to start thinking in their IT strategy, and Platform 3.0 is right on target in trying to work out the strategies for each of these new levels of scalability.

Gardner: We're coming up on The Open Group Conference in Philadelphia very shortly. What should we expect from that? What is The Open Group doing vis-à-vis Platform 3.0, and how can organizations benefit from a more methodical or standardized approach to rationalizing all of this complexity? [Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Lounsbury: We're still in the formative stages of the "third platform," or Platform 3.0, for The Open Group and the industry as a whole. To some extent, we're starting pretty much at the ground floor with that in the Platform 3.0 Forum. We're leveraging a lot of the components that have been done previously by the members of The Open Group in cloud, service-oriented architecture (SOA), and some of the work on the Internet of Things.

First step

Our first step is to bring those things together to make sure that we've got a foundation to build from. The next thing is that, through our Platform 3.0 Forum and its Steering Committee, we can ask people to talk about what their scenarios are for adopting Platform 3.0.

That can range from the technological aspects and what standards are needed, to -- taking a cue from our previous cloud working group -- the best business practices for understanding and then adopting some of these Platform 3.0 concepts to get your business using them.

What we're really working toward in Philadelphia is to set up an exchange of ideas among the people who can, from the buy side, bring in their use cases and, from the supply side, bring in their ideas about what the technology possibilities are -- and then to bring those together and start to shape a set of tracks where we can create business and technical artifacts that will help businesses adopt the Platform 3.0 concept.

Harding: We certainly also need to understand the business environment within which Platform 3.0 will be used. We've heard already about new players, new roles of various kinds that are appearing, and the fact that the technology is there and the business is adapting to this to use technology in new ways.

For example, we've heard about the data scientist. The data scientist is a new kind of role, a new kind of person, who is playing a particular part in all this within enterprises. We're also hearing about marketplaces for services -- new ways in which services are being made available and combined.

We really need to understand the actors in this new kind of business scenario. What are the pain points that people are having? What are the problems that need to be resolved in order to understand what kind of shape the new platform will have? That is one of the key things that the Platform 3.0 Forum members will be getting their teeth into.

Gardner: Looking to the future, we think about how powerful the data can be when processed properly, with recommendations delivered to the right place at the right time. But we also recognize that there are limits to a manual, or even human-level, approach to that -- scientist by scientist, analysis by analysis.

When we think about the implications of automation, there are already some early examples of where bringing cloud, data, social, mobile, and the granularity of interactions together has begun to show how a recommendation engine could be brought to bear. I'm thinking about the Siri capability at Apple, and even some of the examples of the Watson technology at IBM.

So to our panel: are there unknown unknowns about where this will lead in terms of having extraordinary intelligence -- a supercomputer, or a data center of supercomputers -- brought to bear on almost any problem instantly, with the result delivered directly to a center, a smartphone, any number of end points?

It seems that the potential here is mind boggling. Mark Skilton, any thoughts?

Skilton: What we're talking about is the next generation of the Internet. The advent of IPv6 and the explosion in multimedia services will start to drive it.

I think that in the future, we'll be talking about a multiplicity of information that is not just about services at your location or your personal lifestyle or your working preferences. We'll see a convergence of information and services across multiple devices and new types of “co-presence services” that interact with your needs and social networks to provide predictive augmented information value.

When you start to get much more information about the context of where you are, insight into what's happening, and the predictive nature of these services, it becomes something much more embedded in everyday life, in real time, in the context of what you're doing.

I expect to see much more intelligent applications coming forward on mobile devices in the next 5 to 10 years, driven by this interconnected explosion of real-time processing, data, traffic, devices, and social networking that we describe in the scope of Platform 3.0. This will add augmented intelligence, and it's something that's really exciting -- a complete game changer. I would call it the next killer app.

First-mover benefits

Gardner: There's this notion of intelligence brought to bear rapidly, in context, at a manageable cost. This seems to me a big change for businesses. We could, of course, go into the social implications as well, but for businesses, that alone would be an incentive to get thinking and acting on this. So any thoughts about where businesses that do this well would be able to gain significant advantage and first-mover benefits?

Harding: Businesses always are taking stock. They understand their environments. They understand how the world that they live in is changing and they understand what part they play in it. It will be down to individual businesses to look at this new technical possibility and say, "So now this is where we could make a change to our business." It's the vision moment where you see a combination of technical possibility and business advantage that will work for your organization.

It's going to be different for every business, and I'm very happy to say this, it's something that computers aren’t going to be able to do for a very long time yet. It's going to really be down to business people to do this as they have been doing for centuries and millennia, to understand how they can take advantage of these things.

So it's a very exciting time, and we'll see businesses understanding and developing their individual business visions as the starting point for a cycle of business transformation, which is what we'll be very much talking about in Philadelphia. So yes, there will be businesses that gain advantage, but I wouldn’t point to any particular business, or any particular sector and say, "It's going to be them" or "It's going to be them."

Gardner: Dave Lounsbury, a last word to you. In terms of some of the future implications and vision, where could this lead in the not-too-distant future?

Lounsbury: I'd disagree a bit with my colleagues on this, and this could probably be a podcast on its own, Dana. You mentioned Siri, and I believe IBM just announced the commercial version of its Watson recommendation and analysis engine for use in some customer-facing applications.

I definitely see these as the thin end of the wedge in filling the gap between the growth of data and the analysis of data. I can imagine, not in the next couple of years, but in the next couple of technology cycles, that we'll see the concept of recommendations and analysis as a service -- to bring it full circle to cloud. And keep in mind that all of case law is data, and all of the medical textbooks ever written are data. Pick your industry, and there is a huge knowledge base that humans must currently keep on top of.

This approach, and these advances in recommendation engines driven by the availability of big data, are going to produce profound changes in the way knowledge workers do their jobs. That's something that businesses, including their IT functions, absolutely need to stay in front of to remain competitive in the next decade or so.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Monday, July 8, 2013

The Open Group July conference seeks to better contain cybersecurity risks with FAIR structure

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

We recently assembled a panel of experts to explore new trends and solutions in the area of anticipating business risk, to help organizations gain a foothold on better-managed processes and structures for staying clear of identifiable weaknesses.

The goal: To help enterprises better deliver risk assessment and, one hopes, defenses, in the current climate of challenging cybersecurity and against other looming business threats. By predicting risks and potential losses accurately, IT organizations can gain agility via thoughtful priorities and thereby repeatably reduce the odds of losses.

The panel consists of Jack Freund, Information Security Risk Assessment Manager at TIAA-CREF; Jack Jones, Principal at CXOWARE and an inventor of the FAIR risk analysis framework; and Jim Hietala, Vice President, Security, at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

This special BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference, to be held beginning July 15 in Philadelphia. The conference is focused on enterprise transformation in the finance, government, and healthcare sectors. Registration to the conference remains open. Follow the conference on Twitter at #ogPHL. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:
Freund: We're entering a phase where there is going to be increased regulatory oversight over very nearly everything. When that happens, all eyes are going to turn to IT and IT risk management functions to answer the question of whether we're handling the right things.

Without quantifying risk, you're going to have a very hard time saying to your board of directors that you're handling the right things the way a reasonable company should.

As those regulators start to see and compare among other companies, they'll find that these companies over "here" are doing risk quantification, and you're not. You're putting yourself at a competitive disadvantage by not being able to provide those same sorts of services.

Gardner: So you're saying that the market itself hasn’t been enough to drive this, and that regulation is required?

Freund: It’s probably a stronger driver than market forces at this point. But especially in information security, if you're not experiencing primary losses as a result of these sorts of things, then you have to look to economic externalities, which are largely put in play by regulatory forces here in the United States.

Jones: To support Jack’s statement that regulators are becoming more interested in this too, just in the last 60 days, I've spent time training people at two regulatory agencies on FAIR. So they're becoming more aware of these quantitative methods, and their level of interest is rising.

Hietala: Certainly, in the cybersecurity world in the past six or nine months, we've seen more and more discussion of the threats that are out there. We’ve got nation-state types of threats that are very concerning, very serious, and that organizations have to consider.

With what's happening, you've seen President Obama and the US Administration direct the National Institute of Standards and Technology (NIST) to develop a new cybersecurity framework. Certainly on the government side of things, there is an increased focus on what can be done to raise the level of cybersecurity in critical infrastructure throughout the country. So my short answer would be yes, there is more interest in coming up with ways to accurately measure and assess risk so that we can then deal with it.

Gardner: Please give us a high-level overview of FAIR, also known as Factor Analysis of Information Risk.

Jones: First and foremost, FAIR is a model for what risk is and how it works. It’s a decomposition of the factors that make up risk. If you can measure or estimate the value of those factors, you can derive risk quantitatively in dollars and cents.

Risk quantification

You see a lot of “risk quantification” based on ordinal scales -- 1, 2, 3, 4, 5 scales, that sort of thing. But that’s actually not quantitative. If you dig into it, there's no way you could defend a mathematical analysis based on those ordinal approaches. So FAIR is this model for risk that enables true quantitative analysis in a very pragmatic way.
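To make the contrast with ordinal 1-to-5 scoring concrete, here is a minimal sketch of the kind of quantitative derivation FAIR enables. This is not the official FAIR calculation; the uniform ranges and scenario numbers are invented for illustration, standing in for the calibrated estimates a real analysis would use:

```python
import random

def annualized_loss_exposure(freq_min, freq_max, loss_min, loss_max, trials=100_000):
    """Monte Carlo estimate of annualized loss exposure (ALE) in dollars.

    Loss-event frequency (events/year) and loss magnitude ($/event)
    are each drawn from a uniform range -- a simple stand-in for the
    calibrated estimates a real FAIR analysis would use.
    """
    random.seed(42)  # fixed seed so the example is reproducible
    total = 0.0
    for _ in range(trials):
        frequency = random.uniform(freq_min, freq_max)
        magnitude = random.uniform(loss_min, loss_max)
        total += frequency * magnitude  # one simulated year of loss
    return total / trials

# Hypothetical scenario: 2-6 loss events/year, $50k-$250k per event.
ale = annualized_loss_exposure(2, 6, 50_000, 250_000)
print(f"Estimated ALE: ${ale:,.0f}")
```

The output is a dollar figure that can be compared, added, and defended mathematically, which an ordinal "4 out of 5" score cannot be.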

For example, one organization I worked with recently had certain deficiencies from the security perspective that they were aware of, but that were going to be very problematic to fix. They had identified technology and process solutions that they thought would take them a long way toward a better risk position. But it was a very expensive proposition, and they didn't have money in the IT or information security budget for it.

So, we did a current-state analysis using FAIR to determine how much loss exposure they had on an annualized basis. Then, we said, "If you put this solution in place, given how it affects the frequency and magnitude of loss that you'd expect to experience, here's what your new annualized loss exposure would be." It turned out to be a multimillion-dollar reduction in annualized loss exposure for a cost of a few hundred thousand dollars.
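The arithmetic behind a business case like the one Jones describes can be sketched in a few lines. The figures below are invented for illustration (the actual numbers from the engagement aren't given); the point is that expressing risk in dollars lets the control's cost be weighed directly against the reduction in annualized loss exposure:

```python
# Hypothetical FAIR-style business case: all figures invented for illustration.
current_ale = 4_000_000      # current annualized loss exposure, $
projected_ale = 1_200_000    # projected ALE with the proposed controls, $
control_cost = 300_000       # annualized cost of the proposed controls, $

risk_reduction = current_ale - projected_ale   # dollars of risk removed per year
net_benefit = risk_reduction - control_cost    # value after paying for the controls

print(f"Risk reduction: ${risk_reduction:,}")            # $2,800,000
print(f"Net benefit:    ${net_benefit:,}")               # $2,500,000
print(f"Return per $1 spent: {risk_reduction / control_cost:.1f}x")
```

A "high risk becomes medium risk" statement carries no such ratio, which is why the dollar-based case is the one executives can act on quickly.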

When they took that business case to management, it was a no-brainer, and management signed the check in a hurry. So they ended up being in a much better position.

If they had gone to executive management saying, "Well, we’ve got a high risk and if we buy this set of stuff we’ll have low or medium risk," it would've been a much less convincing and understandable business case for the executives. There's reason to expect that it would have been challenging to get that sort of funding given how tight their corporate budgets were and that sort of thing. It can be incredibly effective in those business cases.

Gardner: There's a lot going on in the IT world. Perhaps IT's very nature -- its roles and responsibilities -- is shifting. Is doing such risk assessment and management becoming part and parcel of IT's core competency, and is that a fairly big departure from the past?

Hietala: It's becoming kind of a standard practice within IT. When you look at outsourcing your IT operations to a cloud-service provider, you have to consider the security risks in that environment. What do they look like and how do we measure them?

It's the same thing for things like mobile computing. You really have to look at the risks of folks carrying tablets and smartphones and understand the risks associated with them; the same goes for big data. For any of these large-scale changes to your IT infrastructure, you've got to understand what it means from a security and risk standpoint.

Freund: We have to find a way to better embed risk assessment [into businesses], which is really just a way to inform decision making and how we adapt all of these technological changes to increase market position and to make ourselves more competitive. That’s important.

Whether that’s an embedded function within IT or it’s an overarching function that exists across multiple business units, there are different models that work for different size companies and companies of different cultural types. But it has to be there. It’s absolutely critical.

Gardner: Jack Jones, how do you come down on this shifting role of IT in risk assessment -- something that is now their responsibility? Are they embracing it, or maybe wishing it away?

Jones: Some of them would certainly like to wish it away. I don't think IT’s role in this idea for risk assessment and such has really changed. What is changing is the level of visibility and interest within the organization, the business side of the organization, in the IT risk position.

Board-level interest

Previously, they were more or less tucked away in a dark corner. People just threw money at them and hoped bad things didn't happen. Now, you're getting a lot more board-level interest in IT risk, and with that visibility comes responsibility, but also a certain amount of danger. If they're doing it really badly -- if they're incredibly immature in how they approach risk -- they're going to look pretty foolish in front of the board.

Unfortunately, I've seen that play out. It's never pretty, and it's never good news for the IT folks. They're realizing that they need to come up to speed from a risk perspective, so that they won't look like fools when they're in front of these executives.

They're used to seeing quantitative measures of opportunities and operational issues of risk of various natures. If IT comes to the table with a red, yellow, green chart, the board is left to wonder, first how to interpret that, and second, whether these guys really get it. I'm not sure the role has changed, but I think the responsibilities and level of expectations are changing.

Gardner: Is there a synergistic relationship between a lot of the big-data and analytics investments that are being made for a variety of reasons, and also this ability to bring more science and discipline to risk analysis?

Are we seeing the dots being connected in these large organizations; that they can take more of what they garner from big data and business intelligence (BI) and apply that to these risk assessment activities? Is that happening yet?

Jones: It’s just beginning to. It’s very embryonic, and there are only probably a couple of organizations out there that I would argue are doing that with any sort of effectiveness. Imagine that -- they’re both using FAIR.

But when you think about BI or any sort of analytics, there are really two halves to the equation. One is data and the other is models. You can have all the data in the world, but if your models stink, then you can't be effective. And, of course, vice versa. If you've got a great model and zero data, then you've got challenges there as well.

Being able to combine the two, good data and effective models, puts you in a much better place. As an industry, we aren't there yet. We've got some really interesting things going on, and there's a lot of potential there, but people have to leverage that data effectively and make sure they're using a model that makes sense.

There are some models out there that frankly are just so badly broken that all the data in the world isn't going to help you. The models will grossly misinform you. So people have to be careful, because data is great, but if you're applying it to a bad model, then you're in trouble.
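The two halves Jones describes -- a model plus data -- can be illustrated with a toy quantitative calculation. FAIR decomposes risk into loss event frequency and loss magnitude; the sketch below is a deliberately crude Monte Carlo simplification of that idea (not The Open Group's actual standard), where the "model" is frequency times magnitude and the "data" is a pair of calibrated ranges:

```python
import random

def simulate_annual_loss(freq_range, magnitude_range, trials=10_000):
    """Estimate annualized loss exposure from calibrated ranges.

    freq_range: (min, max) loss events per year
    magnitude_range: (min, max) dollar loss per event
    Returns the mean and 90th-percentile simulated annual loss.
    """
    losses = []
    for _ in range(trials):
        events = random.uniform(*freq_range)      # sampled frequency
        per_event = random.uniform(*magnitude_range)  # sampled magnitude
        losses.append(events * per_event)         # the "model": freq x magnitude
    losses.sort()
    return {
        "mean": sum(losses) / trials,
        "p90": losses[int(trials * 0.90)],
    }

# Illustrative calibrated ranges: 0.1-2 events/year, $50k-$500k per event
result = simulate_annual_loss((0.1, 2.0), (50_000, 500_000))
print(f"Mean annualized exposure: ${result['mean']:,.0f}")
```

With a broken model, the same ranges would produce numbers that grossly misinform, which is Jones's point: the data only helps when the model underneath it makes sense.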


Gardner: We're coming up very rapidly on The Open Group Conference, beginning July 15. What should we expect? [ Registration to the conference remains open. Follow the conference on Twitter at #ogPHL.]

Jones: We're offering FAIR training as part of the conference. It's a two-day session, with an opportunity afterwards to take the certification exam.

If history is any indication, people who go through the training will be pleased. We get a lot of very positive remarks about a number of things. One, they never imagined that risk could be interesting. They're also surprised that it's not, as a friend of mine calls it, "rocket surgery." It's relatively straightforward and intuitive stuff. It's just that, as a profession, we haven't had this framework of reference, or some of the methods we apply to make it practical and defensible, before.

So we've gotten great feedback in the past, and I think people will be pleasantly surprised at what they experienced.

Freund: One of the things I always say about FAIR training is that it's a real red-pill/blue-pill moment -- in reference to the Matrix movies. I took FAIR training several years ago with Jack, and I always tease him that it has ruined me for other risk assessment methods. Once you learn how to do it right, it's very obvious which methods are wrong and why you can't use them to assess risk.

It's really great and valuable training, and now I use it every day. It really does open your eyes to the problems in the risk-assessment portion of IT today, and it gives you very practical and actionable things to do to fix that and to provide value to your organization.

Gardner: Are there any updates that we should be aware of in terms of activities within The Open Group and other organizations working on standards, taxonomy, and definitions when it comes to risk?

Hietala: At The Open Group we originally published a risk taxonomy standard based on FAIR four years ago. Over time, we've seen greater adoption by large companies and we've also seen the need to extend what we're doing there. So we're updating the risk taxonomy standard, and the new version of that should be published by the end of this summer.

We also saw within the industry the need for a certification program for risk analysts, so that they'd be trained in quantitative risk assessment using FAIR. We're working on that program, and we'll be talking more about it in Philadelphia.

Along the way, as we were building the certification program, we realized that there was a missing piece in terms of the body of knowledge. So we created a second standard that is a companion to the taxonomy. It will be called the Risk Analysis Standard, and it looks more at the process issues -- how to do risk analysis using FAIR. That standard will also be available by the end of the summer, and, combined, those two standards will form the body of knowledge that we'll be testing against in the certification program when it goes live later this year.

Gardner: For those organizations that are looking to get started, in addition to attending The Open Group Conference or watching some of the plenary sessions online, what tips do you have? Are there some basic building blocks that should be in place or ways in which to get the ball rolling when it comes to a better risk analysis?

Freund: Strong personality matters in this. Organizations have to have some sort of evangelist who cares enough about it to drive it through to completion. That's a stake in the ground that says, "Here is where we're going to start, and here is the path we're going to take."

Strong commitment

When you start doing that sort of thing, even if leadership changes and other things happen, you have a strong commitment from the organization to keep moving forward on these sorts of things.

I spend a lot of my time integrating FAIR with other methodologies. One of the messaging points I keep repeating is that what we're doing is implementing a discipline around how we choose our risk rankings. That's one of the great things about FAIR. It's universally compatible with other assessment methodologies, programs, standards, and legislation, which allows you to be consistent and precise about how you're connecting to everything else your organization cares about.

Concerns around operational-risk integration are important as well, but driving that through to completion in the organization has a lot to do with finding sponsorship and then building a program to completion. Even absent that high-level sponsorship, because FAIR allows you to build a discipline around how you choose rankings, you can also build it from the bottom up.

You can have groups of people who are FAIR-trained build risk analyses or simply pick ranges -- 1, 2, 3, 4 or high, medium, low. But then, when questioned, you have the ability to say, "We think this is a medium, because it met the frequency and magnitude criteria that we've established using FAIR."
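Freund's point -- that a qualitative label can be backed by FAIR-derived frequency and magnitude criteria -- can be sketched as a simple threshold mapping. The dollar cut-offs below are purely illustrative, organization-defined values, not part of the FAIR standard:

```python
def qualitative_rating(annualized_loss, thresholds=(100_000, 1_000_000)):
    """Map a quantitative annualized loss estimate (dollars) to a
    defensible qualitative label.

    thresholds: illustrative (low_cap, medium_cap) cut-offs an
    organization would establish up front, so the label is answerable
    when questioned rather than a gut call.
    """
    low_cap, medium_cap = thresholds
    if annualized_loss < low_cap:
        return "low"
    if annualized_loss < medium_cap:
        return "medium"
    return "high"

print(qualitative_rating(250_000))  # -> medium: between the two cut-offs
```

The discipline is in fixing the thresholds before the analysis, so "medium" always means the same thing across the organization.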

Different organizations culturally are going to have different ways to implement and to structure quantitative risk analysis. In the end it's an interesting and reasonable path to get to risk utopia.

Jones: A good place to start is with the materials that The Open Group has made available on the risk taxonomy and the soon-to-be-published risk analysis standard.

Another source I recommend to everybody I talk to about these sorts of things is a book called How to Measure Anything by Douglas Hubbard. If someone is even the least bit interested in actually measuring risk in quantitative terms, they owe it to themselves to read that book. It puts into layman's terms some very important concepts and approaches that are tremendously helpful. That's an important resource for people to consider, too.

As far as within organizations, some will have a relatively mature enterprise risk-management program at the corporate level, outside of IT. It can be hit-and-miss, but there can be some very good resources there in terms of people and processes the organization has already adopted. You have to be careful, though, because some of those enterprise risk-management programs, even though they've been in place for years -- and thus, one would think, have matured -- have only dug a really deep ditch of bad practices and misconceptions.

So it's worth having the conversation with those folks to gauge how clueful they are, but don't assume that just because they've been in place for a while, and hold some specific title, they really understand risk at that level.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Sunday, July 7, 2013

Managing transformation to Platform 3.0 a major focus of The Open Group Philadelphia conference on July 15

Taken as a whole, the converging IT and business mega trends of big data, cloud, mobile and social amount to more than a mere infrastructure or device shift.

Businesses and organizations often embrace some, but not all, of these activities. Their legacy and experience with them individually varies greatly. Each business and vertical industry has its own essential variables. And rarely are the trends embraced in unison, with a plan for how to cross-reference and exploit the others in concert.

Moreover, there are even more elements to the current upheaval: the Internet of things, aka machine-to-machine (M2M), and consumerization of IT (CoIT) implications, as well as the building interest in bring your own device (BYOD). There's clearly a lot of change afoot.

It's no wonder that the coordinated path to so-called Platform 3.0 that includes all these trends and their inter-relatedness is marked by uncertainty -- despite the opportunity for significant disruption.

So how should organizations factor standardization, planning, governance, measurement, and even leadership into the productive adoption of Platform 3.0? The topic was initially outlined in an earlier blog post by Dave Lounsbury, Chief Technical Officer at The Open Group.

These questions will certainly play a big part in the upcoming Open Group conference beginning July 15 in Philadelphia. While the theme of the conference is Enterprise Transformation, with an emphasis on the finance, government, and healthcare sectors, The Open Group is working with a number of IT experts, analysts, and thought leaders to better understand the opportunities available to businesses, and the steps they need to take to best transform amid the Platform 3.0 uptake. Follow the conference on Twitter at #ogPHL.

The Open Group vision of Boundaryless Information Flow™, to me, forms a large ingredient in helping enterprises take advantage of these convergent technologies. A working group within the consortium will analyze the use of cloud, social, mobile computing, and big data, and describe the business benefits that enterprises can gain from them. The forum will then describe the new IT platform in light of this analysis, with an eye to repeatable methods, patterns, and standards.

Registration open

Registration to the conference remains open to attend in person, and many parts of the event will be streamed or available to watch later. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

In a lead-up to the conference, The Open Group also organized a Tweet Jam last month around the hashtags #ogP3 and #ogChat to investigate how the early patterns of Platform 3.0 use and adoption are unfolding. I was happy to be the moderator.

Among the salient take-aways from the various discussions and the online Twitter chat:
  • Speed of technology and business innovation will rapidly change the focus from asset ownership to the usage of services, requiring more agile architecture models to adapt to the rate and impact of such change
  • New value networks will result from the interaction and growth of the "Internet of things" and multiple devices and the expected new connectivity that targets specific vertical industry sector needs
  • Expect exponential growth of data inside and outside organizations, converging with increased end-point usage in mobile devices, coupled with powerful analytics all amid hybrid-cloud-hosted environments
  • Leaders will need to incorporate new sources of data, including social media and sensors in the Internet of Things and rapidly turn the data into usable information through correlation, fusion, analysis and visualization
  • Performance and security implications will develop from cross-technology platforms across more federated environments
  • Social behavior and market channel changes will result in multiple ways to search and select IT and business services, engendering new market drivers and effects
And some Tweets of interest from the chat:
  • Vince Kuraitis ‏@VinceKuraitis -- Great term. RT @NadhanAtHP: @technodad #ogP3 principle of "Infonomics" introduced by @doug_laney #ogChat http://bit.ly/YnxXwe
  • jim_hietala ‏@jim_hietala -- RT @nadhanathp: @VinceKuraitis Agreed.  Introducing new definition for ROI - Return on Information http://bit.ly/VAsuAK  #ogP3 #ogChat
  • E.G.Nadhan ‏@NadhanAtHP -- Boundaryless Information Flow to be introduced into Healthcare @theopengroup conference in July' 13 http://blog.opengroup.org/2013/06/06/driving-boundaryless-information-flow-in-healthcare/ … #ogChat #ogP3
  • E.G.Nadhan ‏@NadhanAtHP -- Say hello to the Data Scientist - Sexiest job in the world of #bigdata in the 21st century http://bit.ly/V62TcG  #ogChat #ogP3
  •  Vince Kuraitis ‏@VinceKuraitis -- Business strategy and IT strategy converge @ Platform 3.0 #ogp3 #ogChat
Again, registration to the conference remains open to attend in person. I hope to see you there. We'll also be conducting some BriefingsDirect podcasts from the conference, so watch for those in future posts. Follow the conference on Twitter at #ogPHL.

You may also be interested in:

Tuesday, July 2, 2013

Cloud services help SHI redefine the buyer-seller dynamic for huge efficiency gains

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.

This BriefingsDirect discussion, from the recent 2013 Ariba LIVE Conference in Washington, D.C., explores how SHI International teamed with Ariba to streamline IT product discovery and purchasing processes for large agricultural machinery builder AGCO.

A global provider of IT products, procurement, and related services, with more than $4 billion in annual sales, SHI has tapped into the networked economy to improve its business productivity and sales. To learn more about how agile procurement works well, we're joined by John D'Aquila, Applications Support Manager at SHI International Corp. in Somerset, New Jersey.

The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba, an SAP company, is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What’s different now about buying and selling IT products and services than, say, three or four years ago?

D'Aquila: One thing that has really changed is that IT asset management is a hot topic right now. Customers want to track their purchases much more efficiently than in the past, so they can know exactly how much they have at all times. They want to know if they're over-licensed, under-licensed on the software side, or as far as hardware goes, they want to make sure that they have enough hardware in stock, but don’t have too much. You don’t want to have whole closets and warehouses full of equipment.

Gardner: You have to be very precise, and therefore, you need to have the data about what’s going on across your supply chain.

D'Aquila: Correct. That's where electronic commerce comes in for IT asset management. I always say that it starts with a great PO, because we want to make sure that when we receive that purchase order, it carries as much as possible of the information the customer is going to be looking for us to report on downstream.

Years later, if they come back to us and say, how many desktops did we purchase over the last three years and who are they for, the only way we could tell them who it was for is if they told us that information on the purchase order.

Streamlined solution

So the best way to get that is to have a streamlined solution that everyone is using when they're procuring their desktop PC, versus the situation where one PO came over handwritten, one PO came over via fax, and the level of information on each of those POs would be different.
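The "great PO" idea above -- that downstream asset reporting is only as good as the fields captured at order time -- can be sketched as a simple completeness check. The field names here are hypothetical, not SHI's or Ariba's actual schema:

```python
# Illustrative reporting fields a buyer might require on every PO
REQUIRED_FIELDS = {"po_number", "requester", "cost_center", "product_sku"}

def missing_po_fields(po: dict) -> set:
    """Return the reporting fields a purchase order lacks or leaves blank.

    A streamlined procurement portal would run a check like this at
    submission time, so that a later question such as "how many desktops
    did we buy, and for whom?" can actually be answered.
    """
    present = {field for field, value in po.items() if value}
    return REQUIRED_FIELDS - present

po = {
    "po_number": "PO-1001",
    "requester": "J. Smith",
    "cost_center": "",        # blank: would break downstream reporting
    "product_sku": "DT-42",
}
print(missing_po_fields(po))  # -> {'cost_center'}
```

A handwritten or faxed PO skips this gate entirely, which is exactly the inconsistency D'Aquila describes.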

At SHI, as part of every customer QBR or RFP demonstration, we definitely focus on the shi.com portal, which is a standalone website solution to provide them the ability to procure their products from a customized catalog solution.
Then we show them how we can leverage our check-out question process to collect the information, to make sure that every request and purchase order comes over with that same level of information. If a customer has a solution like Ariba, then we explain to them how we can work with that.

Gardner: Tell us about your organization, how it came about, what you're doing, and why this whole notion of being ultra-efficient across your purchasing processes is essential to your business.

D'Aquila: SHI is a global provider of IT products and solutions. We're headquartered in Somerset, New Jersey, and as you mentioned before, we had over $4 billion in revenue last year. This year we expect to surpass $5 billion.

The number of employees has doubled in four years. So there is definitely an investment internally to enhance the backbone of SHI, which is the sales force and the operations departments.

One thing I always like to talk about is that as I walk in in the morning -- as every employee does -- above the SHI logo it says "Innovative Solutions and World Class Support." This reminds every employee, as they walk in, that our customers are the reason we're successful, and that the way we retain those customers is by providing those innovative solutions and world-class support.

Gardner: How are we getting people to be more efficient and more data driven when it comes to procuring their IT services and products?

Customer driven

D'Aquila: The whole Ariba process is typically driven by the customer. In the early stages of evaluating a solution, we can tell them, if they ask, which ones we've worked with and what the benefits of each are, but typically the decision has already been made by the time they come to my team.

We'll explain to them our capabilities around that, and how we could seek benefits from little pieces of information on either the punch-out setup request or on the purchase order.

For example, AGCO has been a customer of SHI's for many years. Spend was growing, but it was really a slow trend upward. Eric Deese is the contractor working on the project of enabling Ariba throughout AGCO.
We had a conference call to discuss the requirements, his schedule, and his expectations of what we were going to do. From there, we put the resources in place. We did some testing with Eric -- a full test, from purchase order to invoice -- to make sure that everything worked properly. Then I handed it over to Tammy Wagner, who is the Account Executive for AGCO.

One thing that we really like to focus on with customers is, rather than show them everything we could sell, we show what they actually need and want. So we've tailored a catalog around the requirements that Eric provided to make it easier for his users to find products.

Since we've gone live, the number of products purchased from SHI and the different product lines has tripled. So it's been a great success story.

Gardner: How are these trends around more process-driven efficiency goals translated into actual savings or efficiencies?

D'Aquila: One thing is that they control their spend. In speaking to Eric, he explained that AGCO users were buying software from everywhere. Some people would buy a shrink-wrapped copy of software, which is really not the right way to buy software. They would use their P-Cards and then just file an expense report, so it wouldn't be captured properly within their cost centers and internal accounting.

Now, he said, all the employees of AGCO are going into the Ariba application and procuring their software from SHI. So maverick spend has been controlled.

Single-point purchasing

Also, we can show Eric the spend with SHI and how it has grown. We work with them: the overall spend has helped AGCO secure better pricing with the manufacturers and with SHI, which in the long term will turn into savings for AGCO.
Gardner: As IT organizations, in particular, are looking to move more toward an operations expenditure (OPEX) approach rather than the capital expenditure (CAPEX), they're looking for services, for leasing, and for outsourcing types of services. How is that impacting your business and how does that also impact the buying and selling process?

D'Aquila: There has definitely been a trend toward more operational expense versus capital. We notice that customers no longer treat a desktop as a commodity. It's more of a rental. You're going to use it for a few years; it's no longer expected to last the working life of an employee.

So the catalog refresh cycles have changed, as has the number of items in the catalog. There is definitely a move toward standardizing, making sure that everyone in the organization has the same type of product, so they can get better imaging and so forth.

There is also a trend toward bring your own device (BYOD) that has been coming our way. Organizations are telling their employees: here are your minimum specifications; you can buy any PC, but it's out of your own pocket. It's up to you to purchase it, but you can bring it to work, whether it's a mobile device or even a laptop.


Today, there may be a customer that only purchases software from SHI. We want to introduce them to the fact that although we were Software House International, we are SHI now, because we sell all IT-related products -- hardware, services, and solutions.

Gardner: And because we're here at Ariba LIVE, what are you hearing that excites you? Maybe the spot-buying news? Is that something that would be of interest to you?

D'Aquila: Yes. I've used Ariba Discovery in the past. There were a lot of empty requests that we would respond to, and then our responses wouldn't be viewed. I'm expecting that with Spot Buy, because requests will come directly out of the SAP application -- someone keying in a request and looking for bids -- we'll get better leads from the solution. I'm looking forward to seeing what comes of it.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.

You may also be interested in:


Thursday, June 20, 2013

Millennium Pharmacy takes SaaS model to new heights via policy-driven operations management and automation approach

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

Managing applications sprawl has long been a burr in the IT saddle, and the popularity of software-as-a-service (SaaS) applications hasn't exactly been a balm on the situation.

As with on-premises applications, the key to SaaS and hybrid apps is getting better visibility and operational data on the applications' health, and then automating the processes across standardized methods and controls.

Easier said than done. That's why the next BriefingsDirect IT innovator interview examines how an online pharmaceutical services provider, Millennium Pharmacy Systems, Inc., has successfully deployed mission critical SaaS applications, and then implemented advanced IT management and operational efficiency processes and systems to keep all the applications up to date, compliant, performant, and protected.

To learn more about how real-world automation and operational efficiencies help improve business results and customer retention, we sat down with Leon Ravenna, Vice President of IT and Operations and Information Security Officer at Millennium Pharmacy Systems, Inc., based in Cranberry Township, Pennsylvania. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You deliver your value via SaaS. What has become key about managing the applications well?

Ravenna: Depending on what the customer needs, we may set up the entire environment for them, networks, wireless, scanners, and printers, or they get to us through their own equipment and internet connections. But it's all SaaS. 

Our SaaS application has 250 separate SQL databases on seven SQL Servers, running in a VMware environment. That helps me dramatically cut my SQL Server licensing cost and helps us manage the databases in a high-availability way.

I've been here about 14 months. One of the things we looked at doing right when I came in was taking both of the data centers that we have -- one owned and one a co-located facility -- and eliminating a lot of the older hardware.

What we looked to do first was consolidate, get rid of the older hardware, and move to a much better state. We are now about 85 percent virtualized. Our primary data center is for our customer-facing SaaS application, built on SQL/.NET and Silverlight, for about 250 nursing-care facilities on the East Coast.
Gardner: What have you gotten, in addition to efficiency, perhaps in terms of reliability?

Ravenna: We had a couple of older Dell blade chassis, and inevitably you would lose a power supply or a server; I just don't have that now. From an operational standpoint, it helps us be more efficient and gives us the ability to turn new servers up faster. It's not something we do all the time, but it helps me be much more efficient. I have a fairly small staff, and my goal is to let them sleep at night.

Having more VMware in place -- as I said, we're about 85 percent virtualized -- allows me to do that. If a server fails, the applications move to a different server. I have the ability to upgrade the servers on the fly. It allows me, from an operational standpoint, to be more secure in what we're doing.

And it helps me lower my cost, because I am not as worried about my HVAC. I have less equipment to worry about. I have less break-fix to worry about. All in all, it helps me be remarkably more efficient.

Gardner: Let’s learn a bit more about Millennium Pharmacy.

Ravenna: We host a system for about 250 nursing-care facilities. This basically controls all of the medications that a patient would need. It does our medical reordering and passes that information in an entirely integrated fashion back to our in-house systems for billing and filling of prescriptions.

As a patient, you don't have much time with your nurse. The nurse is typically gathering your drugs. We have our own pharmacies that service those homes, and we deliver your medications in a cellophane-sealed package.

These packages say, "Mr. Smith, take this at dinner time." There's a barcode for every drug, and when the nurse gives the patient the drug, they use a wireless scanner to scan that barcode, which automatically reorders the next set of drugs. We give patients about a three- or four-day supply, as opposed to a 45- or 90-day supply, which cuts costs for the nursing-care facility itself. Then we manage all of that data back to our other systems, which handle the filling of new prescriptions and billing, and we deliver every day.
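The scan-to-reorder loop Ravenna describes can be sketched as a toy model: each administered dose decrements on-hand supply, and dropping to a threshold triggers a reorder. The function, threshold, and barcode format are hypothetical, not Millennium's actual system:

```python
def scan_dose(inventory: dict, barcode: str, reorder_threshold: int = 2) -> bool:
    """Record one administered dose and decide whether to reorder.

    inventory maps a drug barcode to doses on hand. Returns True when
    supply has fallen to the reorder threshold -- the point where a
    three- or four-day supply would be replenished by the next delivery.
    """
    inventory[barcode] -= 1  # the nurse's scan consumes one dose
    return inventory[barcode] <= reorder_threshold

stock = {"RX-123": 3}
needs_reorder = scan_dose(stock, "RX-123")
print(needs_reorder)  # -> True: stock dropped to 2, at the threshold
```

Keeping the on-hand supply this small is what lets the facility avoid 45- or 90-day stockpiles while the daily delivery closes the loop.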

The healthcare space is fairly stringent, and getting more so with the new HIPAA regulations. New ones just came out on March 26 of this year, and the enforcement and penalties are much greater. There are some significant items that have changed, but it's really the enforcement and penalties -- things around encryption and protecting customers' data.

We also have to protect confidential information and so we need to be very secure. We're working to implement the new HIPAA regulations so we can be even tighter in that space.

Gardner: This is all done through SaaS and cloud. There are no on-premises installations of your application. Is that right?

Ravenna: Only one facility out of our 250 has their own system. They are large, and one of their requirements was to have their own, but we support the rest of them, approximately 250, all cloud-based. They can get to it from their Internet connection.

All SaaS

Gardner: We're talking about being mission critical, people getting their medicine. We're also talking about being highly efficient. What were some of the requirements in terms of the infrastructure, particularly as we look now towards managing so many different instances and the ability to be agile and fire up new versions of VMware and to get those apps up and running? What were some of your requirements just from a management perspective?

Ravenna: It had to be easy. I have three system engineers. I only have a couple of network engineers. We support, on the network side, approximately 250 VPN tunnels out to customers, and as you said, it's mission critical. If people don’t get their drugs, it’s a bad day. We take that mission very seriously, making sure those systems are up and running.

From an operational or management standpoint, we really need to be monitoring to know what's happening and when. Having VMware in the mix gives us the ability to make things consistent, but it also helps reduce our cost from a licensing standpoint and helps us manage things better, because we can see what's happening at any given moment.

One of the nice things about VMware is that it's just rock solid. We're wary of jinxing it -- knock on wood -- but it has been rock solid for us. It gives us the ability to move applications on an as-needed basis. We can upgrade things on the fly. One data center is currently on 5.1, and we're moving the other data center to 5.1 as well.

Gardner: So as a mid-market organization, you're resource-constrained. You just don't have a huge staff, and you need automation -- the ability to manage things, perhaps remotely.

So it's this notion of a total approach to management, rather than silos or an integration of different management approaches and products. That just wouldn't fly. What have you done to improve management?

Ravenna: There are a couple of things. We're evaluating the vCenter Operations Management Suite. One of the things it has let us do is dramatically reduce the size of our virtual machines (VMs).

Typically, if you're moving from a physical environment, VMware is a lot more efficient, and it's really kind of surprising to see some of the reports that come back from vCenter Operations Management that tell you that, realistically, you're running this server with six gigabytes of memory but only really using one.

It's a little bit spooky to look at it and ask whether we really want to go that far. In some cases we would say, "Yes, let's go ahead and do that," and it has been, for the most part, dead-on. We've looked at a couple of cases where our gut didn't say it was the right thing, even though it probably was. There's still a little bit of that old-school mentality that says you need more resources, when in fact the server may not even need them.
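The right-sizing report Ravenna describes boils down to comparing allocated memory against observed usage, plus a safety margin. The sketch below is a hypothetical peak-plus-headroom heuristic, not vCenter's actual algorithm; the headroom factor and minimum are illustrative:

```python
def recommend_memory_gb(samples_gb, headroom=1.3, minimum_gb=1.0):
    """Suggest a VM memory allocation from observed usage samples (GB).

    Takes the peak observed usage, adds a headroom factor for spikes,
    and never recommends below a floor -- a conservative version of
    what a capacity-management tool might report.
    """
    peak = max(samples_gb)
    return max(minimum_gb, round(peak * headroom, 1))

# A server allocated 6 GB whose observed usage never exceeds 1 GB:
print(recommend_memory_gb([0.6, 0.8, 0.9, 1.0]))  # -> 1.3
```

Even with headroom, the recommendation lands far below the 6 GB allocation, which is exactly the "spooky" gap between gut feel and measured need.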

It lets us be a lot more efficient with what we are doing. It lets us manage more efficiently, because I can put more databases or more servers on each VM host.

Gardner: What was the ramp-up in terms of the skills and the running of the management system?

Ravenna: For vCenter Operations Management Suite, it wasn’t too bad at all. We were talking to VMware, and they said it would potentially be beneficial. We stood it up, ran it, and there really wasn’t much training necessary.

The harder part was when it came back and said we were over-provisioned. That meant making the rationalization that VMware is a lot more efficient than physical hardware, and taking some of our servers from 4 GB of RAM down to half that, because that’s where they needed to be. In some cases, you want to be a little bit safe. You ultimately find out that the tool was right, and you were being gun-shy.

Move quickly

Gardner: So when you look at the total picture, you need to be agile and able to move your resources quickly. What's next on your radar?

Ravenna: I have an overriding philosophy, after doing this for the last 20-plus years: the simpler I can make it, the more sleep I get. Sleep is a recurring theme, and realistically, that means fewer calls during the night.

We're looking to move to vCloud Suite, in particular Site Recovery Manager (SRM), and to use the vCenter Operations Management Suite to allow us to be more efficient. It just helps us work better and faster. Some of the key components will help me be as efficient as possible. I may eventually need to build out virtual data centers, and VMware vCloud Director helps me there.

My whole goal is to make things as simple as possible and as easy as possible to manage, and these tools let me do that and be more efficient. The vCenter Operations Management Suite and the vCloud Suite will help me get there.

Those are some of the key things I'm looking for in the future. For me, with multiple data centers, the ability to have VMware SRM is just a great thing. It’s getting ready to thunderstorm here, and having the ability to move my services to a different data center about 35 miles away is key.

Gardner: It’s pretty interesting that the notion a one-size-fits-all, plain vanilla, public cloud wouldn’t be attractive to you.

Ravenna: I'm very leery about putting my data just in a cloud with everybody else. It would have to be very specific to the healthcare space, because you end up signing a business associate agreement with me.

It would have to be what I would term carrier-class facilities that can prove they are in the healthcare space, are dedicated to being there, and abide by all the HIPAA rules. We have all of the requirements like PCI and SSAE 16. Those types of things really need to be there, and geared toward the healthcare space specifically, for me to be able to look at them.

No choice

I'm not a guy who wants to understand electricity or heating and ventilation, but unfortunately, in the world we live in today, in the mid-market space, you have your own data centers. You have no choice. You have to play in that game. Anything I can do that helps me address those issues, to run cooler or run with less equipment, is just all goodness.

Gardner: How do you convince the bean counters that this is the right thing to do?

Ravenna: It’s not necessarily a metric, but when you're spending less year over year on equipment, that’s evidence. Every server you buy is going to be in the roughly $5,000-$10,000 range. If I'm not buying those servers, I'm agile and nimble enough to say that I can accommodate a new request.

That's opposed to the old process, which was: get the capital approved, go to finance, wait six weeks to get a server, and then put it in. Inevitably something is constrained, so that six-week lead time becomes eight or ten weeks. It just helps me move faster and spend a lot less capital.

One of the things that I mentioned a little bit ago was licensing from a SQL standpoint, but things like backup software that are licensed per processor also drop my overall cost in a VM environment.

Something else that’s helpful is the dashboarding: being able to show what’s going on, what’s happening, and what the environment looks like. vCenter Operations Management Suite gives me that, and it's all goodness.

If you're comparing the cost of, say, a two-processor server, and you're going to go buy four, five, or six servers, take one of those servers and put that investment into VMware and vCenter Operations Management. You're going to be happier in the long term.
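As a back-of-the-envelope version of that cost argument: the sketch below uses the interview's rough $5,000-$10,000 server range and assumes, purely for illustration, that six workloads consolidate onto two virtualization hosts, with one server's worth of budget redirected to the software. The host count and midpoint price are assumptions, not figures from the interview.

```python
# Back-of-the-envelope version of the cost argument above.
# Figures are illustrative, based on the interview's rough ranges.

server_cost = 7_500     # assumed midpoint of the $5,000-$10,000 range
servers_needed = 6      # workloads that would each get a physical box

physical_total = servers_needed * server_cost

# Consolidate onto fewer hosts (assumed: two) and put roughly one
# server's worth of budget into VMware plus vCenter Operations instead.
hosts = 2
virtualization_budget = server_cost  # "take one of those servers..."
virtual_total = hosts * server_cost + virtualization_budget

print(f"All physical: ${physical_total:,}")
print(f"Virtualized:  ${virtual_total:,} (saves ${physical_total - virtual_total:,})")
```

Even with conservative assumptions, the capital gap is what funds the tooling, which is the trade-off Ravenna is describing.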
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in: