Thursday, June 23, 2011

Private Clouds: Debunking the myths that can slow adoption

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Platform Computing.

Get a complimentary copy of the Forrester Private Cloud Market Overview from Platform Computing.

The popularity of cloud concepts and the expected benefits from cloud computing have certainly raised expectations. Forrester now predicts that cloud spending in the global IT market will grow from $40 billion to $241 billion over the next 10 years, and yet there's still a lot of confusion about the true payoffs and risks associated with cloud adoption. IDC has its own numbers.

Some enterprises expect to use cloud and hybrid clouds to save on costs, improve productivity, refine their utilization rates, cut energy use and eliminate gross IT inefficiencies. At the same time, cloud use should improve their overall agility, ramp up their business-process innovation, and generate better overall business outcomes.

To others, this sounds a bit too good to be true, and a backlash against a silver-bullet, cloud-hype mentality is inevitable and probably healthy. Yet we find that there is also unfounded cynicism about cloud computing and undeserved doubt.

So, where is the golden mean, a proper context for real-world and likely cloud value? And, what are the roadblocks that enterprises may encounter that would prevent them from appreciating the true potential for cloud, while also avoiding the risks?

We assembled a panel to identify and debunk myths on the road to cloud-computing adoption. Such myths can cause confusion and hold IT back from embracing the cloud model sooner rather than later. We also define some clear ways to get the best out of cloud virtues without stumbling.

Joining our discussion about the right balance of cloud risk and reward are Ajay Patel, a Technology Leader at Agilysys; Rick Parker, IT Director at Fetch Technologies; and Jay Muelhoefer, Vice President of Enterprise Marketing at Platform Computing. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Let's begin to tackle some of the cloud computing myths.

There's an understanding that virtualization is private cloud and private cloud is virtualization. Clearly, that's not the case. Help me understand what you perceive in the market as a myth around virtualization and what should be the right path between virtualization and a private cloud?

Parker: Private cloud, to put a usable definition to it, is a web-manageable virtualized data center. What that means is that through any browser you can manage any component of the private cloud. That's opposed to virtualization, which could be just a single physical host with a couple of virtual machines (VMs) running on it, and which doesn't provide the redundancy, cost-effectiveness, or ease of management of an entire private cloud.

So there is a huge difference between virtualization, or the use of a hypervisor, and an entire private cloud. A private cloud comprises virtualized routers, firewalls, and switches in a true data center, not a server room. There are redundant environmental systems, like air-conditioning, and redundant Internet connections. It comprises an entire infrastructure, not just a single virtualized host.

Moving to a private cloud is inevitable, because the benefits so far outweigh the perceived risks, and the perceived risks are more toward public cloud services than private cloud services.

Gardner: We’ve heard about fear of loss of control by IT. Is there a counter-intuitive effect here that cloud will give you better control and higher degrees of security and reliability?

Redundancy and monitoring

Parker: I know that to be a fact, because private cloud management software and hypervisors provide redundancy and performance monitoring that a lot of companies don't have by default. You don't get performance monitoring across a wide range of systems just by installing a hypervisor; you get it by going with a private cloud management system and using VirtualCenter, which supports live migration between physical hosts.

It also provides uptime/downtime monitoring, reporting, and the kind of capacity planning that most companies don't even attempt, because these systems are generally out of their budget.

Gardner: Tell us about Fetch Technologies.

Parker: Fetch Technologies is a provider of data as a service, which is probably the best way to describe it. We have a software-as-a-service (SaaS) type of business that extracts, formats, and delivers Internet-scale data. For example, two of our clients are Dow Jones and Shopzilla.

Gardner: Let’s go next to Ajay. A myth that I encounter is that private clouds are just too hard. "This is such a departure from the siloed and monolithic approach to computing that we'd just as soon stick with one server, one app, and one database," we hear. "Moving toward a fabric or grid type of affair is just too hard to maintain, and I'm bound to stumble." Why would I be wrong in assuming that as my position, Ajay?

Patel: One of the main issues that the IT management of an organization encounters on a day-to-day basis is their current staff's ability to change the principles of how they manage day-to-day operations.

The training and the discipline need to change. The fear of operations changing is one of the key issues that IT management sees. They also see staff attrition as a key issue. By doing an actual cloud assessment, by understanding what the cloud means, it turns out to be closer to home to what the IT infrastructure team does today than the myth would suggest.

For example, virtualization is a key fundamental need of a private cloud -- virtualization at the server, network, and storage layers. All the enterprise providers of servers, networks, and storage are creating virtualized infrastructure for you to plug into your cloud-management software and deliver those services to an end-user without issues -- and in a single pane of glass.

If you look at some of the metrics used by managed-service companies, SIs, and outsourcing companies, they do what the end-user companies do, but they do it much cheaper, better, and faster.

More efficient manner

How they do it better is by creating the ability to manage several different infrastructure portfolio components in a much more efficient manner. That means managing storage as a virtualized infrastructure -- tiered storage, the network, the servers, not only the Windows environment but the Unix and Linux environments as well -- and putting all of that in the hands of the business owners.

Today, with money so hard to come by for a corporation, people need to look at not just return on investment (ROI), but return on invested capital.

You can deploy private cloud technologies on top of your virtualized infrastructure at a much lower cost of entry than if you were to just keep expanding islands of test and dev environments built application by application, project by project.

Gardner: I'd like to hear more about Agilysys. What is your organization, and what is your role there as a technology leader?

Patel: I am the technology leader for cloud services across the US and UK. Agilysys is a value-added reseller, as well as a systems integrator and professional services organization, that serves enterprises from Wall Street to manufacturing to retail to service providers and telecom companies.

Gardner: And do you agree, Ajay, with Forrester Research and IDC when they show such massive growth? Do you really expect that cloud, private cloud, and hybrid cloud are all going to grow so rapidly over the next several years?

Patel: Absolutely. The only difference between a private cloud and a public cloud, based on what I'm seeing out there, is the fear of bridging the gap between what the end-user attains via a private cloud inside their own four-walled data center, and the public cloud's ability to give the end-user security and the comfort level that their data is secure. So, absolutely, private to hybrid to public is definitely the way the industry is going to go.

Gardner: Jay at Platform, I'm thinking about myths that have to do with adoption, different business units getting involved, lack of control, and cohesive policy. This is probably what keeps a lot of CIOs up at night, thinking that it’s the Wild West and everyone is running off and doing their own thing with IT. How is that a myth and what does a private cloud infrastructure allow that would mitigate that sense of a lot of loose cannons?

Muelhoefer: That’s a key issue when we start thinking about how our customers look to private cloud. It comes back a little bit to the definition that Rick mentioned. Does virtualization equal private cloud -- yes or no? Our customers are asking for the end-user organizations to be able to access their IT services through a self-service portal.

But a private cloud isn't just virtualization, nor is it one virtualization vendor. It's a diverse set of services that need to be delivered in a highly automated fashion, because it's not just one virtualization technology; it's going to be VMware, KVM, Xen, etc.

A lot of our customers also have physical provisioning requirements, because not all applications are going to be virtualized. People do want to tap in to external cloud resources as they need to, when the costs and the security and compliance requirements are right. That's the concept of the hybrid cloud, as Ajay mentioned. We're definitely in agreement. You need to be able to support all of those, bring them together in a highly orchestrated fashion, and deliver them to the right people in a secure and compliant manner.

The challenge is that each business unit inside of the company typically doesn't want to give up control. They each have their own IT silos today that meet their needs, and they are highly over-provisioned.

Some of those can be at 5 to 10 percent utilization, when you measure it over time, because they have to provision everything for peak demands. And, because you have such a low utilization, people are looking at how to increase that utilization metric and also increase the number of servers that are managed by each administrator.

You need to find a way to get all the business units to consolidate all these underutilized resources. By pooling, you could actually get effects just like when you have a portfolio of stocks. You're going to have a different demand curve by each of the different business units and how they can all benefit. When one business unit needs a lot, they can access the pool when another business unit might be low.
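The portfolio effect Jay describes can be illustrated with a quick back-of-the-envelope simulation. The workload numbers below are invented for illustration, not drawn from any figures in the discussion; the point is only that when business units peak at different times, a shared pool sized for the combined peak is smaller than the sum of per-unit peaks:

```python
import random

random.seed(42)
HOURS = 24 * 30  # one month of hourly samples
UNITS = 4        # business units

# Hypothetical hourly demand per business unit; each unit's demand
# spikes on a different schedule, so peaks rarely coincide.
demand = [[random.randint(5, 40) + (30 if h % UNITS == u else 0)
           for h in range(HOURS)]
          for u in range(UNITS)]

# Siloed: each unit provisions for its own peak demand.
siloed_capacity = sum(max(d) for d in demand)

# Pooled: one shared pool sized for the combined peak across all units.
pooled_capacity = max(sum(d[h] for d in demand) for h in range(HOURS))

print(f"siloed: {siloed_capacity} units, pooled: {pooled_capacity} units")
print(f"capacity saved by pooling: {1 - pooled_capacity / siloed_capacity:.0%}")
```

Because the maximum of a sum can never exceed the sum of the maximums, pooling can only break even or win; how much it wins depends on how uncorrelated the demand curves are.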

But, the big issue is how you can do that without businesses feeling like they're giving up that control to some other external unit, whether it's a centralized IT within a company, or an external service provider? In our case, a lot of our customers, because of the compliance and security issues, very much want to keep it within their four walls at this stage in the evolution of the cloud marketplace.

So, it’s all about providing that flexibility and openness to allow business units to consolidate, but not giving up that control and providing a very flexible administrative capability. That’s something that we've spent the last several years building for our customers.

It's all about being able to support that heterogeneous environment, because every business unit is going to be a little different and have different needs. You allow them to have control, but within defined boundaries: you can have centralized cloud control, where you give them their resources and quotas for what they're initially provisioned, and you can support costing and charge-back and provide a lot more visibility into what's happening.

You get all of that centralized efficiency that Ajay mentioned, while also having a centralized organization that knows how to run a larger-scale environment. But then each of the business units can go in through its own customized self-service portal and get access to IT services -- whether it's a simple OS, a VM, or a way to provision a complex multi-tier application in minutes -- and have that be an automated process. That's how you get a lot of the cost efficiencies and the scale that you want out of a cloud environment.

Gardner: And those business units would also have to watch their costs and maybe have their own P&L. They might start seeing their IT costs as shared services or charge-backs, and get out of the capital-expense business, so it could actually help them in their business when it comes to cost.
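The charge-back idea raised here boils down to billing each unit for metered consumption rather than splitting capital costs evenly. A minimal sketch of that calculation follows; the rates, units, and usage figures are all hypothetical, not anything quoted by the panel:

```python
# Hypothetical internal rates (USD) for a charge-back report.
RATE_PER_VM_HOUR = 0.12
RATE_PER_GB_MONTH = 0.05

# Fictional monthly usage per business unit.
usage = {
    "marketing":   {"vm_hours": 1200, "storage_gb": 500},
    "engineering": {"vm_hours": 8600, "storage_gb": 4000},
}

def chargeback(u):
    """Bill a unit for actual consumption: compute hours plus storage."""
    return u["vm_hours"] * RATE_PER_VM_HOUR + u["storage_gb"] * RATE_PER_GB_MONTH

for unit, u in sorted(usage.items()):
    print(f"{unit}: ${chargeback(u):,.2f}")
```

The same metering data that drives the bill also provides the visibility into usage that the compliance discussion below touches on.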


Still in evolution

Muelhoefer: Correct. Most of our customers today are very much still in evolution. The whole trend towards more visibility is there, because you're going to need it for compliance, whether it’s Sarbanes-Oxley (SOX) or ITIL reporting.

Ultimately, the business units of IT are going to get sophisticated enough that they can move from being a cost center to a value-added service center. Then they can start doing that granular charge-back reporting and actually show, at a much finer level, the value that they're adding to the organization.

Parker: Different departments, by combining their IT budgets and going with a single private cloud infrastructure, can get a much more reliable infrastructure. By combining budgets, they can afford SAN storage and a virtual infrastructure that supports live VMotion.

They get a fast response, because by putting a cloud management application like Platform's on top of it, they have much more control; we provide the interface to the different departments. They can set up servers themselves and manage their own servers. They have a much faster "IT response time," so they don't really have to wait for IT's response through a help-desk system that might take days to add memory to a server.

IT gives end-users more control by providing a cloud management application and also gives them a much more reliable, manageable system. We've been running a private cloud here at Fetch for three years now, and we've seen this. This isn’t some pie-in-the-sky kind of thing. This is, in fact, what we have seen and proven over and over.

Gardner: I asked both Ajay and Rick to tell us about their companies. Jay, why don’t you give us the overview of Platform Computing?

Muelhoefer: Platform Computing is headquartered in Toronto, Canada, and is about an 18-year-old company. We have over 2,000 customers, spread out on a global basis.

We have a couple of different business units. One is enterprise analytics, the second is cloud, and the third is HPC grids and clusters. Within the cloud space, we offer a cloud management solution for medium and large enterprises to build and manage private and hybrid cloud environments.

The Platform cloud software is called Platform ISF. It's all about providing the self-service capability to end-users to access this diverse set of infrastructure as a service (IaaS), and providing the automation, so that you can get the efficiencies and the benefits out of a cloud environment.

Gardner: Rick, let's go back to you. I've heard the myth that private clouds are just for development, test, and quality assurance (QA). Was cloud really formed by developers, and is it getting too much notoriety, or is there something else going on, so that it's for test, dev, and a whole lot more?

Beginning of the myth

Parker: I believe that myth just came from the initial availability of VMware and that’s what it was primarily used for. That’s the beginning of that myth.

My experience is that our private cloud isn't a specific use case. A well-designed private cloud should and can support any use case. We have a private cloud infrastructure, and on top of this infrastructure we can deliver development resources, test resources, and QA resources, but they're all sitting on top of the base infrastructure of a private cloud.

But there isn't just a single use case. It's detrimental to define use cases for private cloud. I don't recommend setting up a private cloud for dev only, another separate private cloud for test, and another for QA. That's where a use-case mentality gets you: you start developing multiple private clouds.

If you combine those resources and develop a single private cloud, that lets you divide up the resources within the infrastructure to support the different requirements. So, it’s really backward thinking, counter-intuitive, to try to define use cases for private cloud.

We run everything on our private cloud. Our goal is 100 percent virtualization of all servers, running everything on our private cloud. That includes back-office corporate IT -- Microsoft Exchange, services like domain controllers, SharePoint -- and all of these systems run on top of our private cloud out of our data centers.

We don't have any of these systems running out of an office, because we want the reliability and the cost savings that our private cloud gives us by deploying these applications on servers in the data center, where these systems belong.

Muelhoefer: Some of that myth is maybe because the original evolution of clouds started out in the area of very transient workloads. By transient, I mean things like demonstration environments, or somebody that just needs a development environment for a day or two. But we've seen a transition across our customers, where they also have longer-running applications that they're putting in production types of environments, and they don't want to have to over-provision them.

At the end of the quarter, you may need a capacity of 10 units, but you don't want to carry those 10 units as resource hogs throughout the entire quarter. You want to be able to flex up and flex down according to the requirements and the demand. Flexing requires a different set of technology capabilities: having the right business policies and defining your applications so they can dynamically scale. I think that's one of the next frontiers in the world of cloud.
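The flex-up/flex-down behavior described here is, at its simplest, a threshold policy over measured utilization. The sketch below is a minimal illustration of that idea; the thresholds, step sizes, and function name are assumptions of this example, not Platform ISF's actual policy engine:

```python
def flex(current_vms, avg_utilization, min_vms=2, max_vms=20):
    """Return a new VM count for a tier, given its average utilization.

    Hypothetical policy: add capacity above 80% utilization, shed it
    below 30%, and otherwise leave the tier alone.
    """
    if avg_utilization > 0.80 and current_vms < max_vms:
        return current_vms + 1   # flex up under load
    if avg_utilization < 0.30 and current_vms > min_vms:
        return current_vms - 1   # flex down when idle
    return current_vms           # within the comfort band

print(flex(4, 0.92))  # heavy load: grows to 5
print(flex(4, 0.10))  # idle: shrinks to 3
print(flex(2, 0.10))  # already at the floor: stays at 2
```

A real controller would also need cool-down periods and application-aware drain logic so that flexing down doesn't kill in-flight work, which is where the "different set of technology capabilities" comes in.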

We've seen with our customers that there is a move toward different application architectures that can take advantage of that flexing capability in Web applications and Java applications. They're very much in that domain, and we see that the next round of benefits is going to come from the production environments. But it does require you to have a solid infrastructure that knows how to dynamically manage flexing over time.

It’s going to be a great opportunity for additional benefits, but as Rick said, you don't want to build cloud silos. You don't want to have one for dev, one for QA, one for help desk. You really need a platform that can support all of those, so you get the benefits of the pooling. It's more than just virtualization. We have customers that are heavily VMware-centric. They can be highly virtualized, 60 percent-plus virtualized, but the utilization isn’t where they need it to be. And it's all about how can you bring that automation and control into that environment.

Gardner: The next myth goes to Ajay. This is what I hear more than almost any other: "There is no cost justification. The cloud is going to cost the same or even more." Why is that cynicism unjustified?

Patel: One of the main things that proves to be untrue is the cost argument. When you build a private cloud, you're pooling the capabilities of the IT technology that was building individual islands of environments, and on top of that you're increasing utilization. Today, I believe overall virtualization in the industry is less than 40 percent, which leaves more than 60 percent unvirtualized.

Even if you take 30 percent as average utilization -- it's 15-20 percent in the Windows environment -- by putting it on a private cloud, you're increasing utilization to 60, 70, 80 percent. If you can hit 85 percent utilization of the resources, you're buying that much less of every piece of hardware, software, storage, and network.
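The arithmetic behind that claim is easy to make concrete. In this sketch the workload size is a made-up illustration; only the utilization percentages come from Ajay's remarks:

```python
import math

# Back-of-the-envelope: servers needed to carry a fixed workload at
# different average utilization levels. Assume each server offers
# 100 arbitrary units of capacity at 100% utilization.
workload = 300  # hypothetical units of sustained compute demand

for utilization in (0.15, 0.30, 0.85):
    servers = math.ceil(workload / (100 * utilization))
    print(f"at {utilization:.0%} average utilization: {servers} servers")
```

Going from 15 percent to 85 percent utilization cuts the server count by roughly a factor of five in this toy model, which is where the downstream savings in software licenses, storage, network ports, power, and cooling come from.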

When you pool all the different projects together, you build an environment and put the right infrastructure in place to service your business. You end up saving minimally 20 percent, even if you just keep the current service-level agreements (SLAs) and current deliverables the way you do today.

But if you retrain your staff to become cloud administrators -- to essentially become more agile in creating workloads that are virtual-capable rather than standalone-capable -- you get much more benefit. Your cost of entry is minimally 20-30 percent lower on day one, and going forward you can get to more than 50 percent lower cost.

[Private cloud] kills two birds with one stone, because not only can you reuse the elasticity of a 100,000 square-foot data-center facility, but you can now put in two to three times more compute capacity without breaking the barriers of power, cooling, and heating. And by having cloud within your data center, the disaster-recovery capability of cloud failover is inherent in the framework of the cloud.

You no longer have to worry about individual application-based failover. Now you're looking at failing over an infrastructure instead of applications. And, of course, the framework of the cloud itself gives you much higher availability, in terms of hardware uptime and SLAs, than you can obtain by individually building sets of servers for test, dev, QA, or production.

Days to hours

Operationally, beyond the initial setup of the private cloud environment, the cost to IT and the IT budget go down drastically. Based on our interactions with end-users and our cloud providers, deployment time drops from anywhere between 11 and 15 days down to three to four hours.

That 11 to 15 days starts with the hardware sitting on the dock in the old infrastructure deployment model. When you break it down into individual components, it takes one to three days just to build the server, rack it, power it, and connect it.

Installing the operating system, which used to take one to two days -- maybe two-and-a-half, depending on the patches and add-ons -- takes 10 minutes today within the private cloud environment. And setting up dev environments at the application layer, starting from a template available within the private cloud, goes down from days to 30 to 60 minutes.
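The headline figures Ajay cites (11-15 days down to three to four hours) can be tallied from the per-step times. The task breakdown below is an illustrative reconstruction; only the individual durations come from the discussion:

```python
# Rough old-model timeline, in working hours (8-hour days), using the
# upper end of the per-step ranges mentioned in the discussion.
old_model_hours = {
    "rack, power, and connect the server": 3 * 8,   # one to three days
    "install the operating system":        2 * 8,   # one to two days
    "set up the dev environment":          2 * 8,   # days at the app layer
}

# Private cloud model, in hours.
cloud_model_hours = {
    "provision OS from template":          10 / 60, # about 10 minutes
    "set up the dev environment":          1.0,     # 30 to 60 minutes
}

print(f"old model:   ~{sum(old_model_hours.values()):.0f} working hours")
print(f"cloud model: ~{sum(cloud_model_hours.values()):.1f} hours")
```

Even this conservative tally, which ignores procurement lead time entirely, shows the old model taking tens of working hours against roughly one hour in the cloud model.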

When you combine all that, the operational efficiency you gain definitely puts your IT staff at a much greater advantage than your competitor.

Gardner: Ajay just pointed out that there's perhaps a business-continuity benefit here. If your cloud is supporting infrastructure, rather than individual apps, you can have failover, reliability, redundancy, and disaster recovery at the infrastructure level, and therefore across the board.

What's the business continuity story and does that perhaps provide a stepping stone to hybrid types of computing models?

Parker: To backtrack just a little bit, at Fetch Technologies, we've cut our data-center cost in half by switching to a private cloud. That's just one of the cost benefits that we've experienced.

Going back to private cloud cost, one of the myths is that you have to buy a whole new set of cloud technology -- cloud hardware -- to create a private cloud. That's not true. In most cases, a number of the components of a private cloud are just redeployed existing hardware, because the cloud network is more of a configuration than specific cloud hardware.

In other words, you can reconfigure existing hardware into a private cloud. You don't necessarily need to buy anything, and there is really no such thing as specific cloud hardware. There are hardware systems and models that are more optimal in a private cloud environment, but that doesn't mean you need to buy them to start. You can use the initial cost savings from virtualization to pay for more optimal hardware later, but you don't have to start with the most optimal hardware to build a private cloud.

As far as the business continuity, what we've found is that the benefit is more for up-time maintenance than it is for reliability, because most systems are fairly reliable. You don't have servers failing on a day-to-day basis.

Zero downtime

We have systems -- at least one server -- that have been up for two years with zero downtime. To update firmware, we can VMotion virtual machines off to other hosts, upgrade the host, and then VMotion those virtual servers back onto the upgraded host, so we have zero-downtime maintenance. That's almost more important than reliability, because reliability is generally fairly good.

Gardner: Is there another underlying value here, that moving to private cloud puts you in a better position to start leveraging hybrid cloud -- that is to say, more SaaS, using third-party clouds for specific IaaS, or perhaps over time moving part of your cloud into their cloud?

Is there a benefit in terms of getting expertise around private cloud that sets you up to be in a better position to enjoy some of the benefits of the more expansive cloud models?

Muelhoefer: That's a really interesting question, because one of the main reasons that a lot of our early customers came to us was because there was uncontrolled use of external cloud resources. If you're a financial services company or somebody else who has compliance and security issues and you have people going out and using external clouds and you have no visibility into that, it's pretty scary.

We offer a way to provide a unified view of all your IT service usage, whether it's inside your company being serviced through your internal organization or potentially sourced through an external cloud that people may be using as part of their overall IT footprint. It's really the ability to synthesize and figure out -- if an end user is making a request, what's the most efficient way to service that request?

Is it to serve up something internally or externally, based upon the business policies? Is it using very specific customer data that can't go outside the organization? Does it have to use a certain type of application that goes with it where there's a latency issue about how it's served, and being able to provide a lot of business policy context about how to best serve that whether it's a cost, compliance, or security type of objective that you’re going against?

That's one key thing. Another important aspect we see in our customers is disaster recovery and reliability. We've been working with a lot of our larger customers to develop a unique ability to do Active/Active failover. We actually have customers with applications running in real time across multiple data centers.

So, in the case of not just the application going down, but an entire data center going down, they would have no loss of continuity of those resources. That’s a pretty extreme example, but it goes to the point of how important meeting some of those metrics are for businesses and making that cost justification.

Stepping stone

Gardner: We started out with some cynicism, risk, and myths, but it sounds like private clouds are a stepping stone, but at the same time, they are attainable. The cost structure sounds very attractive, certainly based on Rick and Ajay’s experiences.

Jay, where do you start with your customers for Platform ISF, when it comes to ease of deployment? Where do you start that conversation? I imagine that they are concerned about where to start. There is a big set of things to do when it comes to moving towards virtualization and then into private cloud. How do you get them on a path where it seems manageable?

Muelhoefer: We like to engage with the customer and understand what their objectives are and what's bringing them to look at private cloud. Is it the ability to be a lot more agile and deliver applications in minutes to end users, is it more on the cost side, or is it a mix of the two? It's engaging with them on a one-on-one basis, and/or working with partners like Agilysys, where we can build out that roadmap for success. That typically involves understanding their requirements and doing a proof of concept.

Something that’s very important to building the business case for private cloud is to actually get it installed and working within your own environment. Look at what types of processes you're going to be modifying in addition to the technologies that you’re going to be implementing, so that you can achieve the right set of pooling.

Maybe you're a very VMware-centric shop, but you don't want to be locked into VMware, so you want to look at KVM or Xen for non-production use cases and what you're doing there. Are you looking at how you can make yourself more flexible and leverage those external cloud resources? How can you bring physical into the cloud and do it at the right price point?

A lot of people are looking at the licensing issue of cloud, and there are a lot of different alternatives, whether it's per-VM, which is quite expensive, or alternatives like per-socket, and we help build out that value roadmap over time.

For us, we have a free trial on our website that people can use. They can also go to our website to learn more which is http://www.platform.com/privatecloud. We definitely encourage people to take a look at us. We were recently named the number one private cloud management vendor by Forrester Research. We are always happy to engage with companies that want to learn more about private cloud.



Tuesday, June 21, 2011

Discover Case Study: Genworth Financial looks to HP Executive Scorecard to improve applications management, reliability, costs

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference June 8 in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Genworth Financial, and how they use a number of different products to improve application delivery, performance testing, and also operational integrity. Then, we'll look at the transition to a more comprehensive role for those tools, working in concert, and eventually with the opportunity to have an Executive Scorecard view into operations vis-à-vis these products and solutions.

Join the discussion about Genworth Financial’s applications management experience with Tim Perry, Chief Technology Officer for the Retirement and Protection Division at Genworth Financial in Richmond, Virginia. The interview was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Perry: Let me start with a little bit of a roadmap. We brought in HP Quality Center, way back before HP ALM. We brought that in mainly for requirements management and for testing. That one has evolved over the years to the point where we really wanted to get traceability for developers, testers, business analysts, everything. That’s what we're hoping for in the ALM stack of things on its own.

PPM came in for a lot of different reasons. HP Project Portfolio Management was a piece of it. We had a very raw portfolio of what we were working on. Since then, it's become a service-request-management system within our division, much like what you do with a helpdesk, but for our division in applications: everything from account requests to marketing workflow approvals, things like that. So PPM has taken on a life of its own.

The newest one is performance engineering, and performance engineering to us means performance monitoring and performance testing. We’ve had performance testing for a while but we’ve not been great at monitoring and keeping track of our applications as they are living and breathing.

Those are the three big silos for us, and I just want to mention that’s the reason this HP Performance Suite that we are about to talk about is intriguing to us because it starts to glue all of this together.

Gardner: On June 1, HP announced its IT Performance Suite, and a number of people are taking a really deep look at it here at Discover. Tell me what your initial perceptions are and what your potential plans are?

Perry: Just like our own internal applications, it felt as if up until now a lot of these suites that HP provides stood on their own and didn't have a lot of integration with each other. What I am starting to see is a lot of synergy around good integrations. The Executive Scorecard is probably the epitome of it, the top of it, that talks to these executives about where things are, the health of the applications, how we're doing on projects, all those things that are the key performance indicators that we live and breathe.

That’s cool, but in order to get the scorecard, that implies data is available to the scorecard and integrations are there in place. That combination is the magic we're looking for.

Gardner: And how about the KPIs? That would bring some standardization and allow you to be able to start doing apples-to-apples comparisons and getting a stronger bead on what is the reality of your IT and therefore, how you can improve on it.

Important indicators

Perry: It appears that HP has looked at 170 or so KPIs that the industry, not just HP, but everybody, has said are important indicators. We can pick and choose which ones are important to us to put them on the scorecard. Those are the ones that we can focus on from an integration standpoint. It’s not like we have to conquer world hunger all at once.

Gardner: I’ve heard folks say that the scorecard is of interest, not just for IT, but to bring a view of what’s going on in IT to the business leadership and the financial leadership in the organization, and therefore, make IT more integral rather than mysterious.

Perry: I have to say this. Our IT organization is part of operations. Last year, at this same event, we had more operations folks here than IT. I think HP should take the IT moniker off and start talking more about "business operations." That’s just my personal view of this, and I agree, this helps us not just roll up information to IT executives, but to our actual operations folks.

Perry: Genworth Financial is an insurance company that covers many different areas like life insurance, long-term care insurance, mortgage insurance, wealth management, and things like that, and we're here for a number of reasons. We use HP for helping us just maintain and keep a lot of our applications alive.

Gardner: Could you give us a sense of your operations, the scope of your IT organization?

Perry: Our IT organization is, depending on the division, hundreds of employees, but then we also have contractors that work internationally on our behalf. So, throughout the world, we’ve got developers in different places.

Gardner: How about some metrics around the number or types of applications that you're using?

Perry: We have a gazillion applications, like every big company has, but for our division alone, we have around 50 applications that are financially important, and we track them more than any of the others. So that gives you a feel for the number of applications. There are a lot of small ones, but 50 big ones.

Gardner: Do you have any sense of what the integration and the continued evolution of a lifecycle approach to IT and quality has done for you? Do you have any metrics of success, either from a business value perspective or just good old speeds-and-feeds and cost perspective?

Perry: Without having actual numbers in front of me, it’s hard to quantify. But let’s just say this, with Quality Center in particular, it’s helped us a lot with traceability between the business requirements and the actual testing that we are doing. I don’t know how to measure it here, but it’s been a big thing for us. The piece that's missing right now is the developer integration, and we just saw a lot of that this week. I'm looking forward to evolving that even more. That’s been a big deal.

Gardner: Perhaps if I ask you that same question a year from now, at Discover 2012, you’ll have some hard numbers and metrics?

Perry: Oh, I’d love to be able to go and have a presentation at one of the sessions that we’ve had such great experience with Performance Suite. I’ll be here talking a lot about it. I’d love to do that.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

HP releases networking solutions, appliances that target specific woes of SMB market

HP today announced new solutions and technology, along with an expanded entrepreneur program, for small and midsize businesses (SMBs), aimed at driving growth, improving employee productivity, and protecting assets.

SMBs, a $234-billion market, face daily challenges that include controlling costs, managing employee productivity and gaining access to credit. HP says these new offerings are designed to address those challenges. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP also has expanded its investment with 40 new training centers in its global program, HP Learning Initiative for Entrepreneurs (HP LIFE), which empowers SMBs to create new revenue streams. Since 2007, HP has invested more than $20 million into the program, resulting in the creation of an estimated 20,000 jobs and approximately 6,500 new businesses.

New SMB-attuned products include:
  • HP ProLiant ML110 G7, an entry-level server that is simple to deploy and manage. It supports basic office applications such as web messaging, small databases, file and print, as well as small vertical applications.
  • HP ProLiant DL120 G7 server, an entry-level rack-optimized server running a wider range of dedicated applications, including IT infrastructure applications, web, messaging, file and print operations, small internet applications, as well as shared web access.
  • HP V1810-48G web-managed switch. This 48-port switch easily integrates into existing multivendor networks and accommodates increasing performance requirements as businesses grow and develop.
  • HP V1410 unmanaged Fast Ethernet switch series. The IEEE Energy-Efficient Ethernet-compliant switches are operational out of the box to offer SMBs an affordable entry-level networking solution.
  • HP P2000 G3 Modular Smart Array (MSA). An entry-level storage solution supporting the VMware API for Array Integration and VMware vCenter, the HP P2000 MSA allows SMB clients to get enterprise-class performance and VMware manageability. In addition, HP Insight Control Storage Module Manager for VMware vCenter enables management and monitoring of server, storage and networking resources for virtual machines within the vCenter console.
Business protection

The explosive growth of data and email is creating increased risk and complexity, as well as recurring disaster recovery expenses, for SMBs. Solutions designed for business protection include:
  • HP Branch Office Consolidation. This turnkey solution offers business plan and management software to help SMBs simplify, automate, and integrate infrastructure to increase efficiency, reduce operational risk and support branch offices.
  • HP Business Risk Mitigation, which includes configurations for servers, storage and network upgrades, as well management software for PCs, printers and other technology, to deliver high-availability data protection, security and offsite disaster recovery.
  • HP PC Backup Services (PDF), which reliably back up files on employee PCs ensuring rapid data restore in the event of an outage, corrupted files or a lost or stolen PC.
Revenue opportunities

Leveraging data to gain insight and make decisions is increasingly crucial to establishing a competitive advantage. Until now, this has been too complex and costly for many midsize businesses. New solutions designed to address that include:

To help companies finance the new solutions, HP Financial Services, the company’s leasing and life cycle asset management services division, is offering two financing options for businesses in the United States and Canada on HP equipment priced between $1,500 and $250,000. The new financing option offers a zero percent, 12-month lease with $1 purchase option or a zero percent, 36-month lease with fair market value purchase option.
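As a rough illustration of the zero-percent, 12-month lease with a $1 purchase option (the $10,000 equipment price is a hypothetical example, not an HP figure), the monthly payment is just the financed amount spread over the term:

```python
# Illustrative math for a zero-percent lease with an end-of-term
# buyout; the $10,000 equipment price is a hypothetical example.

def monthly_payment(price, months, buyout=0.0):
    """Monthly payment on a zero-interest lease; buyout is paid at term end."""
    if not 1_500 <= price <= 250_000:
        raise ValueError("financing covers equipment priced $1,500-$250,000")
    return (price - buyout) / months

# 12-month lease, $1 purchase option, on $10,000 of HP equipment.
print(monthly_payment(10_000, 12, buyout=1.0))  # 833.25
```

Because the rate is zero percent, the payments plus the buyout simply sum back to the equipment price; the financing cost to the SMB is nothing but the time value of money.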

I have to admit, I've been a bit confused about HP's SMB intentions over the past few years. There has been quite a bit of back and forth about how aggressively to pursue this opportunity. It looks like they have made up their mind in favor of hot pursuit, which makes sense given that this may be the leading adoption edge of the cloud and mobile era.

You may also be interested in:

Friday, June 17, 2011

Discover Case Study: Holistic ALM helps Blue Cross and Blue Shield of Florida break down application inefficiencies, redundancy

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Blue Cross and Blue Shield of Florida and how they’ve been able to improve their applications' performance -- and even change the culture of how they test, provide, and operate their applications.

Join Victor Miller, Senior Manager of Systems Management at Blue Cross and Blue Shield of Florida in Jacksonville, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Miller: The way we looked at applications was by their silos. It was a bunch of technology silos monitoring and managing their individual ecosystems. There was no real way of pulling information together, and it didn't represent what the customer was actually experiencing inside the applications.

One of the things we started looking at was that we have to focus on the customers, seeing exactly what they were doing in the application to bring the information back. We were looking at the performance of the end-user transactions or what the end-users were doing inside the app, versus what Oracle database is doing, for example.

When you start pulling that information together, it allows you to get full traceability of the performance of the entire application from a development, test, staging, performance testing, and then also production side. You can actually compare that information to understand exactly where you're at. Also, you're breaking down those technology silos, when you're doing that. You move more toward a proactive transactional monitoring perspective.

We're looking at how the users are using it and what they're doing inside the applications, like you said, instead of the technology around it. The technology can change. You can add more resources or remove resources, but really it's all up to the end-user, what they are doing in their performance of the apps.

Overcome hurdles

Blue Cross and Blue Shield of Florida is one of the 39 independent Blue Cross companies throughout the United States. We're based out of Florida, and we've been around since about 1944. We're an independent licensee of the Blue Cross Blue Shield Association. One of our main focuses is healthcare.

We do sell insurance, but we also have our retail environment, where we're bringing in more healthcare services. It’s really about the well-being of our Florida population. We do things to help Florida as a whole, to make everyone more healthy where possible.

When we started looking at things, we thought we were doing fine, until we actually started bringing the data together to understand exactly what was really going on, and our customers weren't happy with the performance and availability of their applications.

We started looking at the technology silos and bringing them together in one holistic perspective. We started seeing that, from an availability perspective, we weren’t looking very good. So, we had to figure out what we could do to resolve that. In doing that, we had to break down the technology silos, and really focus on the whole picture of the application, and not just the individual components of the applications.

Our previous directors reorganized our environment and brought in a systems management team. Its responsibility is to monitor and help manage the infrastructure, centralize the tool suites, and understand exactly what capabilities we're going to use. We created a vision of what we wanted to do, and we've been driving that vision for several years to make sure it stays on target and focused on solving this problem.

We were such early adopters that we actually chose best-of-breed. We had an agent-based monitoring environment, and we moved to agent-less. At the time, we adopted Mercury SiteScope. Then, we also brought in Mercury's BAC and a lot of Topaz technologies, with diagnostics and things like that. We had other capabilities like Bristol Technology's TransactionVision.

Umbrella of products

HP purchased all those companies and brought them into one umbrella of product suites. That allowed us to combine the best-of-breed. We bought technologies that didn't overlap, could solve a problem, and integrated well with each other. It allowed us to get more traceability inside of these spaces, so we can get really good information about the performance and availability of those applications that we're focusing on.

One of the major things was that it was people, process, and technology that we were focused on in making this happen. On the people side, we moved our command center from our downtown office to our corporate headquarters, where all the admins are, so they can be closer to the command center. If there's a problem, the command center can contact them directly and they can go down there.

We instituted what I guess I'd refer to as "butts in the seat." I can't come up with a better name for it, but when a person is on call, they work in the command center. They do their regular operational work, but they are in the command center, so if there's an incident, they're there to resolve it.

With the agent-based technologies, we were monitoring thousands of measurement points. But you have to be very reactive, because you come in after the fact trying to figure out which one triggered. Moving to the agent-less technology is a different perspective on getting the data: you focus on the key areas inside those systems that you want to pay attention to, versus the everything model.

In doing that, our admins were challenged to be a little bit more specific as to what they wanted us to pay attention to from a monitoring perspective to give them visibility into the health of their systems and applications.

[Now] there is a feedback loop and the big thing around that is actually moving monitoring further back into the process.

What we've found is that if we fix something in development, it may cost a dollar. If we fix it in testing, it might cost $10. In production staging, it may cost $1,000. It could be $10,000 or $100,000 when it's in production, because that goes back through the entire lifecycle again, and more people are involved. So the idea of moving things further back in the lifecycle has been a very big benefit.
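Perry's ballpark figures sketch out as a simple stage-cost table; the exact dollar values below are assumptions drawn from his rough numbers, not measured data:

```python
# Rough cost-of-defect escalation by lifecycle stage, based on
# Perry's ballpark figures; the exact values are assumptions.

STAGE_COST = {
    "development": 1,
    "testing": 10,
    "staging": 1_000,
    "production": 100_000,  # he cites a $10,000-$100,000 range
}

def savings_from_early_fix(found_stage, fixed_stage="development"):
    """Dollars saved by catching a defect at an earlier stage."""
    return STAGE_COST[found_stage] - STAGE_COST[fixed_stage]

print(savings_from_early_fix("production"))  # 99999
print(savings_from_early_fix("testing"))     # 9
```

The point of moving monitoring further back is exactly this multiplier: each stage a defect survives raises its cost by one to two orders of magnitude.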

Also, it involved working with the development and testing staffs to understand that you can't throw an application over the wall and say, "Monitor my app, because it's in production." We may have no idea what your application is, or we might say that it's monitored because we're monitoring the infrastructure around your application, but we may not be monitoring a specific component of the application.

Educating people

The challenge there is reeducating people and making sure that they understand that they have to develop their app with monitoring in mind. Then, we can make sure that we can actually give them visibility back into the application if there is a problem, so they can get to the root cause faster, if there's an incident.

We've created several different processes around this, and we focused on monitoring every single technology. We still monitor those from a siloed perspective, but we also added a few transactional monitors on top of that inside those silos, for example, transaction scripts that run the same database query over and over again to get information out of there.
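A transaction script of the kind Miller describes can be sketched in a few lines; this is a hedged illustration using SQLite in place of whatever database Blue Cross actually monitors, and the table, query, and threshold are invented for the example:

```python
# Minimal sketch of a synthetic transaction monitor: time the same
# query on every run and alert when latency crosses a threshold.
# The database, table, query, and threshold are all illustrative.
import sqlite3
import time

THRESHOLD_SECONDS = 2.0

def probe(conn, query):
    """Execute the monitored query once and return elapsed seconds."""
    start = time.monotonic()
    conn.execute(query).fetchall()
    return time.monotonic() - start

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER)")
elapsed = probe(conn, "SELECT COUNT(*) FROM claims")
if elapsed > THRESHOLD_SECONDS:
    print(f"ALERT: synthetic transaction took {elapsed:.2f}s")
```

In practice, a scheduler (cron, or the monitoring suite itself) runs the probe on an interval and feeds the timings into the transactional view layered on top of the technology silos.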

At the same time, we had to make some changes, where we started leveraging the Universal Configuration Management Database (UCMDB) or Run-time Service Model to bring it up and build business services out of this data to show how all these things relate to each other. The UCMDB behind the scenes is one of the cornerstones of the technology. It brings all that silo-based information together to create a much better picture of the apps.

We don’t necessarily call it the system of record. We have multiple systems of record. It’s more like the federation adapter for all these records to pull the information together. It guides us into those systems of record to pull that information out.

About eight years ago, when we first started this, we had incident meetings where 15 to 20 people went over 20-30 incidents per week. We had those every day of the week. On Friday, we would review all the ones from the first four days of the week. So, we were spending a lot of time doing that.

Out of those meetings, we came up with what I call "the monitor of the day." If we found something that was an incident that occurred in the infrastructure that was not caught by some type of monitoring technology, we would then have it monitored. We’d bring that back, and close that loop to make sure that it would never happen again.

Another thing we did was improve our availability. We were taking something like five or six hours to resolve some of these major incidents. We looked at the 80:20 rule: we solved 80 percent of the problems in a very short amount of time. Now, we have six or seven people resolving incidents. Our command center staff is in the command center 24 hours a day to do this type of work.

Additional resources

When they need additional resources, they just pick up the phone and call the resources in. So, it's a level 1 or level 2 type person working with one admin to solve a problem, versus having all hands on deck, where you have 50 admins in a room resolving incidents.

I'm not saying that we don't have those now. We do, but when we do, it's a major problem. It's not something very small. It could be the firmware on a blade enclosure failing, which takes an entire group of applications down. It's not something you can plan for, because you're not making changes to your systems. It's just old hardware or stuff like that that can cause an outage.

Another thing it has done for us is that those 20 or 30 incidents we had per week are down to one or two. Knock on wood on that one, but it is really a testament to a lot of the things that our IT department has done as a whole. They're putting a lot of effort into reducing the number of incidents that are occurring in the infrastructure. And, we're partnering with them to get the monitoring in place to give them visibility into the applications and to throw alerts on trends or symptoms, versus throwing the alert on the actual error that occurs in the infrastructure.

[Since the changes] customer satisfaction for IT is a lot higher than it used to be. IT is being called in to support and partner with the business, versus business saying, "I want this," and then IT does it in a vacuum. It’s more of a partnership between the two entities to be able to bring stuff together. Operations is creating dashboards and visibility into business applications for the business, so they can see exactly what they're doing in the performance of their one department, versus just from an IT perspective. We can get the data down to specific people now.

Some of the big things I'm looking at next are closed-loop processes. I've started working with our change management team to change the way we make changes in our environment, so that everything is configuration item (CI) based. Doing that allows for complete traceability of an asset or a CI through its entire lifecycle.

You understand every incident, request, and problem that ever occurred on that asset, but you can also see financial information, inventory information, and location information, and start bringing it all together to make smart decisions based on the data that you have in your environment.

The really big thing is to help reduce the cost of IT in our business and do whatever we can to cut our costs and keep a lean ship going.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.

You may also be interested in:

Thursday, June 16, 2011

Discover Case Study: Sprint Gains Better Control and Efficiency in IT Operations with Business Service Management Approach

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Sprint. We'll learn how Sprint is doing applications and IT in a better way using Business Service Management. It's an ongoing journey to simplify and automate, reduce redundancy, and develop more agility as a business solutions provider for their customers, and also their own employees.

Join two executives from the IT organization at Sprint, Joyce Rainey, Program Manager of Enterprise Services, and John Felton, Director of Applications Development and Operations, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Felton: The problems we originally had, as any large organization has, were many applications: many of them custom built, many of them purchased applications that are now so customized that the vendor doesn't even know what to do with them anymore.

We grew those over a long period of time. As a way to stabilize, we were trying to get to a centralized, single point of truth and quit the duplication, or redundancy, that we had built into all these applications.

The goal, as we set forth about a year-and-a-half ago, was to implement the ecosystem that HP provided, the five toolsets that followed our ITIL processes that we wanted to do. The key was that they were integrated to share information, and we'd be able to take down these customized applications and then have one ecosystem to manage our environment with. That's what we've done over the last 14 months.

[At Sprint] there are thousands of outlets, retail stores. We have our third-party customers as well, like Best Buy and RadioShack. We have about 12,000 servers, about five petabytes of storage. We serve about 39,000 customers internally at Sprint.

We host all that information, and we process about a million change records a month. The information we're capturing is configuration items (CIs). The actual content that goes into the system was, at one point, in the 24 million range. We dialed that back a little bit, because we were collecting a little too much information.

We have about 1,300 applications that were internally built. Many of those are hosted on other external vendor products that we've customized and put into Sprint. And, we have about 64,000 desktops. So, there is a lot going on in this environment. It's moving constantly and that goes back to a lot of the reasons why, if we didn’t put this in quickly, they'd pass us by.

Making it easier

Rainey: We had too many of the same. We had to make it easier for our internal support teams. We had to make it easier for our customers. We had to lessen the impacts on maintenance and cost. Simplification was the key of the entire journey.

Felton: We had to concentrate on making sure that not only the application base wasn't duplicated, but also the data. The data is where we ended up having issues. One person's copy may not be as accurate as another's, and we ended up spending an enormous amount of time arguing about whose was right.

What we did was provide one single point of truth, one copy of the truth. Instead of the data being hidden from everybody, we allowed everybody to see it all. They may not be able to manipulate or change it, but everybody has visibility into the same information. We were hoping they would stop trying to have their own version of it.

Our biggest culture problem was that everybody wanted to put their arms around their little piece, their little view. At the end of the day, having one view that is customized, where you can see what you want to see, but still keeping the content within a single system, really helped us.

It's the data that supports the application. It's the servers that host the applications. It's the third-party applications that deliver the web experience, the database experience, the back-end experience. It's the ability for us to associate fixed agents to that particular information, so that when I am calling out the fixed agent for an alarm, I'm getting the right person online first, versus having a variety of individuals coming on over time.

Rainey: The HP Excellence Award [at Discover 2011] was a very big milestone for everyone to remind us that it was well worth it, the time that was spent, the energy that was spent. I'm very glad that HP and our customers have been able to recognize that. I am very proud, very proud of Sprint. I'm very proud of the team. I'm very proud of the executive support that we received throughout this journey.

Felton: I'm also very proud of the team, as well, and we also won the CIO 100 Award. So, we’ve been able to take the same platform and the same kind of journey and show a much larger audience that it really was worth it. I think that’s pretty cool.

Importance of speed

What I might do differently is spread it out a little more, do smaller increments of implementation, versus all at one time. Don’t do the Big Bang Theory. Put in BSM, but always know that it's going to integrate with SM, and SM is going to integrate with CMS, and CMS is going to integrate with AM.

Then, build that plan, so that you integrate them. You get your customers involved in that particular application, and then, when you go at the very end and put SM in, this is the front door. They're already familiar with what you've already done. That is something we probably didn't do as well as we could have. It was more of a Big Bang approach. You put it in and you go.

But, at the end of the day, don't be afraid to re-look at the processes. Don't necessarily assume that you're going to copy what you do today. Don't assume that that is the best way to do it. Always ask the question: what business value does it address for your corporation? If you do that over, and over, and over, individuals will quit asking, because these platforms are very flexible.

You can do anything. But when you get them so customized that the vendor can't even help you, then every upgrade is painful, every movement that you make is painful. What we’ve done has given us the flexibility to clean up a lot of stuff that was left over from years ago, an approach that may have not been the best solution, and given us an avenue to now extend and subtract without putting a huge investment in place.

One other thing is that we had a really good idea of, "This is our business. Run it that way. You are a part of Sprint." We try to say, "We’re going to make investments that also benefit us, but don’t do them just to do them, because in this space as you look out on that floor and see all the techno wizards that are out there, shiny objects are pretty cool, but there are a lot of shiny objects."

We wanted to make sure that the shiny object we produced is something that was long lasting and gave value back to the company for a long period of time, not just a quick introduction.

Rainey: We continued to work on it. Adoption is a big key in any transformation project. One of the things that we definitely had to look at was making sure that facts could prove to people that their business requirements were either valid or invalid. That way, we stopped the argument of "what do I want" versus "what do I need."

A lot of education

We really had a lot of communication, a lot of education along the way. We continue to educate people about why we do this and why we're doing it this way. We engage them in the process by making them part of the decision-making, versus just allowing the tools to dictate whether you can do it.

With the tools, you can do whatever you want. You can customize the product however you want, but should we, and for what purpose? So, we had to introduce a lot of education along the way to make sure folks understood why we were going down this path.

Felton: We implemented in 12 months. It took another 14 months to get to the further enhancements of data quality and all the things we're working on right now. But as to the tipping point, I think the economy had a lot to do with it, the environment that was going on at the time.

You had a reduction in staff. You had downsizing of companies. It made it harder for individuals, to Joyce's point, to protect an application that really had no business value. It might have a lot of value to them, and in their little piece of the world it probably was very valuable, but how did it drive the overall organization?

The economy in any kind of transformational program is a key factor for investing in these kinds of products. You're going to make sure that if you're introducing something, it's because it's going to add value.

[Sprint CEO] Dan Hesse did a great job in coming in and putting us on a path of making sure that we're fiscally responsible. How are we improving our customer expectations, and how are we moving in this direction continuously, so that our customers come to us because we're the best provider there could be? And our systems on the back end needed to go that way.

So, to Joyce's point, when you brought them in, you asked, "Does this help that goal?" A lot of times, no. And, they were willing to give a little bit up. We said, "You're going to have to give a little bit up, because this is not a copy-and-paste exercise. This is an out-of-the-box solution. We want to keep it that way as much as possible, and we'll make modifications when we need to, to support the business." And, we've done that.

Rainey: It's important to recognize that data is data, but you really derive information to drive decision making. For us, the ability for executives to know how many assets they really have out there, for them to concentrate their initiatives for the future based on that information, became the reason we needed our data quality to really be good.

So, every time that somebody asked John why he went after this product suite, it was because of the integration. We wanted to make sure that the products could share the same information across them all. That way, we could maintain one version of the truth through that single source of information.

Felton: We started with [IT] asset management. Asset management was really the key for us to understand assets and software, and how much cost was involved. Then we associated that to Universal Configuration Management Database (UCMDB). How do we discover things in our environment? How many servers are there, how many desktops are there, where are they, and how do I associate them?

Then we looked at Business Service Management (BSM), which was the monitoring side. How do I monitor these critical apps and alarm them correctly? How do I look up the information and get the right fix agents out there and target it, versus calling out the soccer team, as I always say? Then, we followed that up with Release Control, which is a way for our change team to manage and see that information, as it goes through.

The final component, which was the most important, the last one we rolled out, was Service Manager (SM), which is the front door for everybody. We focus everybody on that front door, and then they can spin off of that front door by going into the other individual or underlying processes to actually do the work that they focus on.

Early adopter

Felton: For just BSM in itself, I'm very proud of our team. We had [another product] in 2009. We went to Business Availability Center (BAC) in January 2010. HP said they had this new thing called BSM 9. Would we take it? We said sure, and we implemented it in March of that year. We took three upgrades in less than five months.

I give a lot of credit to that team. They did it on their own. There were three of them. No professional services help and no support whatsoever. They did it on their own, and I think that’s pretty interesting how they did that. We also did the same thing with UCMDB. We were on the 8.x platform, about halfway deployed, and HP said they'd like us to go to 9.x, and so we turned the corner and we said sure.

We did those things because of the web experience. Very few people on my team would tell you that they were satisfied with the old web experience. I know some people were, and that’s great. But, in our environment, as big as it is and as many access points as we had, we had to make sure that was rock-solid.

And 9.x, for all those versions, seemed to be the best web experience we could have, and it was very consistent. In BSM, the drop-downs and the menus, of course, are all different, but the flow and the layout are exactly the same as SM, and SM is exactly the same as CMS.

We got a nice transition between the applications that made everything smooth for the customer, and the ability for them to consume it better. I'll go so far as to say that a lot of my executive team actually log into BSM now. That would have never happened in the past. They actually go look up events that happen to our applications and see what's going on, and that’s all because we felt like that platform had the best GUI experience.

Rainey: And, if you get your CEOs and your VPs and your directors consuming and leveraging the products, you get the doers, you get the application managers, you get the fix agents, you get the helpdesk team, because they start believing that the data is good enough for decision making at that level of executive support.

Felton: We wanted a reduction in our [problem resolution time] of 20 percent. Does that really mean you get a reduction? No, it means you get out there, you fix it faster, and the end user doesn’t see it. By focusing on that and getting individuals to go out there and more proactively understand what's going on, we can get changes and fixes in before there is a real issue. We’re driving toward that. Do we have that exact number? Maybe not, but that’s the goal and that’s what we continue to drive for.

Removing cost

Additionally, the costs of having 35 redundant systems are huge. We removed a lot of maintenance dollars from Sprint, a lot of overhead. A lot of project costs sometimes are not necessarily tangible, because everybody is working on multiple projects all at one time.

But, if I've got to update five systems, it's a lot different if I update one, and make it simpler on my team. My team comprised about 11 folks, and they were managing all those apps before. Now, they're managing five. It’s a lot simpler for them. It's a lot easier for them. We’re making better decisions, and we make better changes.

We’re hoping that by having it that way, all of the infrastructure stability goes up, because we’re focused. To Joyce’s point, the executive team pays attention, managers pay attention, and everybody sees the value: if I just watch what this thing is doing, it might tell me before there is a customer call. That is always our goal. I don’t want a customer calling my CIO. If a customer does call my CIO, I want him to be able to reply, "Yes, we know, and we’re going to fix that as fast as we can."

Six years ago that help desk had 400 people. As of today it has 44. The reason it does is that we bypass making calls. I don’t want you to call a fix agent to type a ticket to get you engaged. We came up with a process called "Click It." Click It is a way for you to do online self-service.

If I'm having an Exchange problem, an Outlook problem, or an issue with some application, I can go in and open a ticket, instead of it being transferred to the help desk, who then transfers it to the fix agent. We go directly to the fix agent.

We’re getting you closely engaged, hoping that we can make your fix time faster. We can actually get them talking to you quicker. This new GUI interface streamlined things through a lot of wizards that we could implement. Instead of having seven forms that are all about access, maybe now I have one, with a drop-down menu that tells me what application I want it for. That continuous improvement is what we’re after, and I think we’ve now got the tools in place to make that easy for us.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.