Wednesday, September 26, 2012

McKesson redirects IT to become a services provider that delivers fuller business solutions

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance podcast series highlights how pharmaceuticals distributor and healthcare information technology services provider McKesson has transformed the very notion of IT. We will see how a shift in culture and an emphasis on being a services provider has allowed McKesson to not only deliver better results, but elevate the role of IT into the strategic fabric of the company.

To learn more about how McKesson has recast the role of IT and remade its impact in a positive way, join Andy Smith, Vice President of Applications Hosting Services at McKesson. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Let me start with this notion of IT transformation. What allowed you to convince others that this was worth doing?

Smith: What we did, and this started several years ago, was to focus on what our competition was doing, not the competition to McKesson -- but the competition to IT. In other words, who was the outsourcer or who were the other data-center providers. From that, we were able to focus on our cost, quality, and availability and come up with a set of metrics that covered it all, so that we could know the areas we needed to transform and the areas where we were okay.

Gardner: So, in a sense, you had to redefine yourself as a services provider, because that's who you saw as your competition?

Smith: Exactly, and that's who our customers are talking to -- our competition. When they came to us for a service, they had already talked to third-party providers. And so we realized very quickly that our competition was the outside world, so we had to model ourselves to be more like them and less like an internal IT department.

Gardner: That, of course, cuts across not only technology, but culture and the whole idea of being accountable, and to whom. So let's start at that higher level. How did you begin to define what the new culture for IT should be?

Balanced scorecard

Smith: We started out with a balanced scorecard. It really came down to whether the employees and the customers were satisfied. Did we do what we said -- were we accountable -- and were the financials right?

So when we started setting up that balanced scorecard, that on its own started to change the culture. Suddenly, customer satisfaction mattered, and suddenly, system availability mattered, because the customer cared, and we had to keep the employees trained, so that they were satisfied.

Over time, that really changed the culture, because we're looking at all four parts of the scorecard to make sure we're moving forward.

When we were just an internal IT department, we spent more time saying, "The customer gave us an order, we hit the checkbox and finished that order, we're done." We were always asking, "Did we do it, and did we do it on time?"

That's not really what the customer was looking for. The customer was asking: "Did you deliver what I needed, which may be different from what I asked for? Did you deliver it at a good price? Did you deliver it at a good quality?" So it did switch from measuring the ins and outs of an order taker to whether we're delivering the solution at the right price.

Gardner: As we've seen in a number of companies, when they’ve gone to more measurement using metrics, key performance indicators (KPIs), and working towards service-level agreements (SLAs), sometimes that can become daunting. Sometimes, there is too much, and you lose track of your goal. Is there a way that you work towards a triage or a management approach for those metrics, those KPIs, that allowed you to stay focused on these customer issues?

Smith: What we really focused in on were the real drivers. A lot of the measures are more trailing indicators. Even money tended to be a trailing indicator.

So we went into what's really driving our quality, what's really driving our cost. We got down to the four or five that mattered: "Is the system up and running? Are changes causing outages? Are data protection services reliable? Are events being handled quickly, almost like a first-call resolution -- are they being resolved by the first person that gets the event?"

The focus was to prevent the outage and shorten the mean time to restore, because in the end, all of that will drop the cost. It worked, but it was a matter of focusing on a handful, rather than dozens.
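To make those drivers concrete, here is a minimal sketch of the kind of computation behind them: mean time to restore, the share of outages caused by changes, and the first-touch resolution rate, calculated from a handful of incident records. The record fields and sample values are hypothetical, not McKesson's actual data model.

```java
import java.util.List;

public class ServiceKpis {

    // Hypothetical incident record: restore time in minutes, whether a change
    // caused the outage, and whether the first responder resolved it.
    record Incident(int minutesToRestore, boolean causedByChange, boolean firstTouchResolved) {}

    public static void main(String[] args) {
        List<Incident> incidents = List.of(
                new Incident(30, true,  true),
                new Incident(90, false, false),
                new Incident(45, true,  true),
                new Incident(15, false, true));

        double meanTimeToRestore = incidents.stream()
                .mapToInt(Incident::minutesToRestore).average().orElse(0);
        double changeCausedRate = incidents.stream()
                .filter(Incident::causedByChange).count() / (double) incidents.size();
        double firstTouchRate = incidents.stream()
                .filter(Incident::firstTouchResolved).count() / (double) incidents.size();

        System.out.printf("MTTR: %.0f min, change-caused outages: %.0f%%, first-touch resolution: %.0f%%%n",
                meanTimeToRestore, changeCausedRate * 100, firstTouchRate * 100);
    }
}
```

Tracking only a handful of numbers like these, rather than dozens, is the triage Smith describes.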

Pulling down cost

It truly did bring down our cost within McKesson. Each year we pull our cost down by several million dollars. So every year my budget gets smaller, but every year my quality gets higher, my employee satisfaction gets higher, and my customer satisfaction gets higher.

You really can get both. You don't have to sacrifice quality to reduce cost. The trick was saying that I no longer needed a person to do this commodity factory work. I could use a machine to do that, which freed the worker from being a reactive commodity person to being a proactive, value-add person. It allowed the employee to be more valuable, because they weren't doing the busy work anymore. So it really did work.

Gardner: For those in our audience who might not be familiar with McKesson, tell us a little bit more about the company. Specifically, tell us about the scale of your IT organization, to put those millions of dollars into perspective.

Smith: McKesson IT is roughly 1,000 employees. The company is roughly 45,000 employees. So percentage-wise, we're not that big. My personal budget to run the IT infrastructure is about $100 million a year.

So pulling out a few million dollars a year may be only a few percent, but it's still a pretty significant endeavor. We've managed to pull that cost out through the typical things, like maintenance contracts and improved equipment, but also by not having to grow the full-time employee (FTE) base. I haven't had to let any FTEs go, but what we've discovered is that, as we did these things, I needed fewer employees.

As employees resigned, I didn't have to replace them. My staff base has been shrinking, but I haven't had anybody lose a job. So that's been also very reassuring for the employees, because they kept waiting for that big shoe to drop, waiting for us to say, "We're going to outsource you," but we've never had to do it.

Gardner: When you compete better against the outsourcers, you're going to retain those jobs and keep that skill set going. There is a cliché that you're able to take people from firefighting and put them into innovation. Is there truth to that in what you've done?

Smith: That really is true. It took time, and we're not done, but to get people to stop thinking about the technology and start thinking about the business solution is a slow transition, because it's a real mind-shift. In a lot of ways, these employees see the reactive work as the bread-and-butter work that puts the paycheck on the table. It lets them be a firefighter and a hero, and if you take that away, the motivators are different.

It takes time to get people comfortable with the fact that your brain is worth a lot more doing value-add work than it was just doing the firefighting. We're still going through that cultural shift. In some ways, it's easier for the older employees, because if you go back a few decades, IT was that. It was programmer analyst, system analyst, and business analyst. For me, "analyst" disappeared from all my job titles.

In the last couple of decades, for some reason, we erased analyst, and now you're just a programmer or an operator. In my mind, we're bringing the analyst back, which, for the older employees, is easy, because they used to do it. For the younger employees, we've got to teach them how to be consultants. We've got to teach them how to be analysts. In some cases, it's a totally different, scary place to go, because you actually have to come out of the back office and talk to somebody, and they're not used to that.

Cultural shift

Gardner: Maybe there are methodologies at work here that you could discuss; service-oriented architecture (SOA) comes to mind, and also ITIL. Have you been using ITIL approaches and SOA to help make those transitions? Is there a technology track, as well as a cultural shift?

Smith: Yes, we went down the ITIL road, because we were manual before. Everybody was doing it with tribal knowledge. The way I did it today might be different than the way I'd do it tomorrow, because it's all manual, and it's all in people's heads.

We did go into ITIL version 3 and push it very hard to give that consistency, because the consistency really mattered. Then, we could really measure the quality. We could be assured that no matter who did it or when it was done, it was done the same way, and that reliability mattered a lot.

We also got away from custom technology, and we got to where everything is going to be a certain type of machine. It's going to look the same. All the tools are going to be fully integrated and no longer be best-of-breed point solutions. Driving that standardization made a big difference. You don't have to remember that the machine on the left reboots this way and the machine on the right reboots a different way. You don't have to remember anymore, because they're all the same.

We made the equipment and tools standard and more of a commodity so that the people didn’t have to be that anymore. The people could be thought leaders. All those things really did work to drive out the cost and increase the quality, but it's a lot of different pieces. You can't do it with just one golden arrow. You have to hit it from every angle.

We had to change the technology, the people, and the processes. We had to increase the transparency to say we’re doing a good job or we’re doing a bad job. It was just, "Expose everything you’re doing."

That's scary at first, but in the end, we found out we really can compete with those competitors, and we can continue to do it, and do it better. We understand healthcare, we understand McKesson, and we're an internal group, so we don't have a profit margin. All those things combined can make us a better IT solution than a third party could be.

What really matters is the business solution you’re trying to solve. We’re stepping even farther back, saying that the service is order to cash, or the service is payroll, or the service is whatever. We’re stepping back farther, so we can look at the service from the standpoint of the customer. What does the customer want? The customer doesn’t want Unix. The customer wants order to cash. The customer doesn’t want Windows. The customer wants payroll.

Thinking about cloud

Stepping back has now allowed us to start thinking about that cloud. All the equipment underneath is commoditized, and so I can now sit back and say that the customer wants this business solution and ask who is the best person to give me the components underneath?

Some of them, for security reasons, we're going to do on our internal cloud. Some of them, where there are no security issues, we're going to broker with an external provider, because they may be better, cheaper, or faster, and they may have the ability to burst up and burst down, if we're doing R&D kind of work.
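The brokering rule Smith describes boils down to a simple decision: keep sensitive workloads internal, and push bursty, non-sensitive work to an external provider. Here is a minimal sketch of that rule; the workload attributes and defaults are illustrative assumptions, not McKesson's actual placement criteria.

```java
public class CloudBroker {

    // Hypothetical workload attributes; a real broker would weigh many more criteria.
    record Workload(String name, boolean handlesSensitiveData, boolean bursty) {}

    enum Placement { INTERNAL_CLOUD, EXTERNAL_CLOUD }

    static Placement place(Workload w) {
        if (w.handlesSensitiveData()) {
            return Placement.INTERNAL_CLOUD;   // security concerns keep it in-house
        }
        if (w.bursty()) {
            return Placement.EXTERNAL_CLOUD;   // burst up and down with an external provider
        }
        return Placement.INTERNAL_CLOUD;       // default; price and quality decide in practice
    }

    public static void main(String[] args) {
        System.out.println(place(new Workload("payroll", true, false)));        // INTERNAL_CLOUD
        System.out.println(place(new Workload("rd-simulation", false, true)));  // EXTERNAL_CLOUD
    }
}
```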

So it's brought us back to thinking like a business person. What does the business need and who is the best provider? It might not be me, but we’ll make that decision and broker it out. This year we're probably going to pull off our internal cloud and our external cloud and really have a hybrid solution, which we’ve been talking about for a couple of years. I think it will really happen this year.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Wednesday, September 19, 2012

Heroku provides single-click provisioning for Java applications in the cloud

Heroku, a cloud platform-as-a-service (PaaS) and a Salesforce.com company, today announced Heroku Enterprise for Java, a new service for companies and IT organizations to build and run Java applications in the cloud.

Enterprise for Java is designed to enable quick creation and deployment of enterprise Java applications. It also greases the skids to move development processes to a continuous delivery model, all without traditional, on-premises software or IT infrastructure. Enterprise for Java is part of the Salesforce Platform, which is being updated and expanded this week at Dreamforce in San Francisco.



Traditionally, creating Java applications has required piecing together a range of development and runtime infrastructure tools -- such as source-code control systems, continuous integration servers, testing and staging environments, load balancers, application server clusters, databases, and in-memory caching systems.

This often drags out application building and deployment by months. With Heroku's new offering, enterprise developers can get a complete Java solution in a single package, provisioned with a single click, says Heroku.

Heroku began as a PaaS for dynamic languages like Ruby, but has since gone "polyglot," with support for Java, Node.js, Scala, Clojure, Python, and PHP. The Java push is designed to expand Heroku's appeal beyond start-ups and SMBs building Web apps into the fuller enterprise development lifecycle.

And moving to a polyglot PaaS and continuous delivery model for applications is an essential ingredient of IT transformation to a fuller hybrid services delivery capability, said Oren Teich, COO, Heroku. The Heroku PaaS approach not only streamlines development, it modernizes the very processes behind delivering IT better as a service, he said.

“Enterprise developers have been looking for a better way to easily create innovative applications without the hassle of building out a back-end infrastructure,” said Teich. “With Heroku Enterprise for Java, developers get all the benefits of developing in Java along with the ease of using an open, cloud platform in a single click.”


Heroku aims to simplify the Java process by automating data connections, session management, and other plumbing requirements, while keeping up to date on reference platform and JDK advancements. These have mostly been the labor of skilled Java developers, and hence costly and time-consuming (when you can find and keep the skills).

Heroku is therefore providing a "curated" and full Java stack that allows developers to use standard tools like Eclipse and the Spring Framework to build and deploy on a common and integrated PaaS, built around Tomcat 7. This is designed to improve compliance of applications with the runtime environment, to a large degree automating the process of deployment to spec.

"We can bring 80 steps down to four," said Teich, of Java deployment with full compliance.

And let's face it, Salesforce is not just targeting the productivity of developers. It is targeting the cost and complexity of the Java runtime targets in the enterprise: Oracle's WebLogic legacy and IBM's WebSphere. "Total cost of ownership for Java apps needs to come down." You hear this a lot in enterprise IT environs.

To me, the cost-benefit analysis of creating new apps -- especially quickly and in volume to support the voracious need for mobile apps -- and being able to deploy without hassles is a pure accelerant to PaaS adoption in general, with even greater economic and agility benefits when applied to Java.

And so Heroku Enterprise for Java also comes with a new and potentially disruptive payment plan of $1,000 per application deployed per month, with no costs incurred until production deployment. Think of that in comparison with the total cost of a mission-critical traditional Java app across its lifecycle. The math is compelling.

Heroku also Wednesday announced integration with products from Atlassian, which provides enterprise collaboration software for product development teams. A new Heroku plug-in for Atlassian’s Bamboo continuous integration service lets developers automate application delivery across all lifecycle stages.

Product features

Heroku Enterprise for Java includes:
  • Full-Stack Java: Enterprise for Java provides a full stack of pre-configured systems needed to build scalable, high-performance, highly available applications. This also includes memcache for session management and horizontal scaling, and Postgres for relational data management (a brief usage sketch follows this list).
  • Heroku Runtime: In addition to providing runtime and management of the full stack of components, the service includes separate environments for development and staging. These environments can be provisioned instantaneously, providing a way for IT organizations to adopt rapid development methodologies. These applications can be scaled to serve massive volume with a simple control change.
  • Continuous Delivery Framework: When combined with Atlassian’s integration service, Bamboo, Enterprise for Java automates the application delivery process. From code check-in to test builds, staging deploys and production promotion, developers get an out-of-the-box experience with no server set-up needed. All components are automatically provisioned and configured.
  • Native Java Tools: The offering also includes native support for Eclipse Java IDE. Developers can create and deploy Java applications directly within their IDE. In addition, Heroku now supports direct deployment of Java WAR files, providing a simple way to migrate existing Java applications to the cloud.
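As promised in the Full-Stack Java bullet above, here is a minimal, hedged sketch of how an application on such a stack might pair the two stores: short-lived session data in memcached via the spymemcached client, and durable records in Postgres over JDBC. The environment variable names, table, and keys are assumptions for illustration, not Heroku's documented configuration, and the sketch presumes the spymemcached and PostgreSQL JDBC libraries are on the classpath.

```java
import java.net.InetSocketAddress;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import net.spy.memcached.MemcachedClient;

public class FullStackSketch {
    public static void main(String[] args) throws Exception {
        // Assumed environment variables -- real add-ons publish their own names.
        String memcacheHost = System.getenv().getOrDefault("MEMCACHE_HOST", "localhost");
        String jdbcUrl = System.getenv().getOrDefault("JDBC_DATABASE_URL",
                "jdbc:postgresql://localhost:5432/app");

        // Session-style data: cheap to lose, fast to read, shared across web processes.
        MemcachedClient cache = new MemcachedClient(new InetSocketAddress(memcacheHost, 11211));
        cache.set("session:42", 1800, "userId=42;cartItems=3");  // 30-minute expiry
        System.out.println("cached session: " + cache.get("session:42"));

        // Durable data: the system of record lives in Postgres.
        try (Connection db = DriverManager.getConnection(jdbcUrl);
             PreparedStatement ps = db.prepareStatement("SELECT count(*) FROM orders")) {
            ResultSet rs = ps.executeQuery();
            if (rs.next()) {
                System.out.println("orders on file: " + rs.getLong(1));
            }
        }

        cache.shutdown();
    }
}
```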
Pricing starts at $1,000 per month per application, and it is available starting today.


Monday, September 10, 2012

Server and desktop virtualization produce combined cloud and mobility benefits for Israeli insurance giant Clal Group

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

Rapid adoption of server virtualization at a large Israeli insurance and financial services group enabled desktop virtualization and led to a rapidly modernized IT infrastructure, which has in turn spawned cloud and mobile computing benefits.

Clal Insurance Enterprises Holdings, based in Tel Aviv, both satisfied current requirements and built a better long-term enterprise architecture from this path, clearly illustrating the multiplier effect of value and capabilities from such IT transformation efforts.

In the latest BriefingsDirect podcast discussion, learn how Clal’s internal IT organization, Clalbit Systems, translated that IT innovation and productivity into significant and measurable business benefits for its thousands of users and customers.

Haim Inger, the Chief Technology Officer and Head of Infrastructure Operations and Technologies at Clalbit Systems, discusses the journey with BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: One of the things that’s interesting to me is the speed and depth of how your organization has embraced virtualization. You went to nearly 100 percent server virtualization across mission-critical applications in just a few short years. Why did you need to break the old way of doing things and why did you move so quickly to virtualization?

Inger: The answer is quite simple. When I got the job at Clal Insurance four years ago, everything was physical. We had about 700 servers, and to deploy a new server took us about two months. The old way of doing things couldn't hold on for much longer.

Regulations in the new businesses that we needed to implement required us to do such things as deploy servers as quickly as possible and simplify the entire process -- from requesting a new server to deploying it and giving it the full disaster recovery (DR) solution that the regulations require.

The physical way of doing things just couldn’t supply the answer for those requirements, so we started to look for other solutions. We tested the well-known virtualization solutions that were available, Microsoft and VMware, and after a very short proof of concept (POC), we decided to go with VMware in a very specific way.

We didn't want to go only on the development side, the laboratory side, and so on. We saw VMware as a solution for our core applications and as a long-term solution, not just for islands of simple virtual servers. So we decided from day one to start using VMware on the SQL servers, the Oracle servers, and the SAP servers.

Full speed ahead


If those held up very well, then we could, of course, also virtualize the simpler servers. It took us about four months to virtualize those initial servers, and those were very simple. We just pushed the project ahead full speed and virtualized our entire data center.

Gardner: As you’ve gone about this journey, why does it seem to be paying off both on the short-term and setting you up for longer-term benefits?

Inger: That’s very simple to answer. Today, to provision a new server for my customer takes about 20 minutes. As I said, in the past, in the physical world, it took about two months.

DR was the main reason for going into this project. During a DR test in the old days, we had to shut down our production site, start up all servers on the DR site, and hope that everything worked fine. Whatever didn't work fine had to wait to be tested until a year later, when the next DR test was done.

Using VMware with Site Recovery Manager (SRM), I can do an entire DR test without any disruption to the organization, and I do it every three months. Watching our current DR status, if anything needs to be fixed, it’s fixed immediately. I don’t have to wait an entire year to do another test.

So those simple things are enabling us to give our organization the servers that they need, when they need them, and to do the regression in a much simpler way than we did in the past.

Gardner: Tell us a bit more about Clal.

Inger: Clal is a group that contains a very big insurance company and another company that does trading on the Israeli and international stock markets. We have a pension company and insurance for cars, boats, apartments, and so on. We even have two facilities running in the United States and one in the UK.

We're about 5,000 employees, and 7,000 insurance brokers, so that’s about 12,000 people using our datacenter. We have about 200 different applications serving those people, those customers of ours, running on about 1,300 servers.

Large undertaking


Gardner: That's obviously a very large undertaking. How do you manage that? Is there a certain way that you've moved from physical to virtual, but have been able to manage it without what some people refer to as server sprawl?

Inger: I know exactly what you mean about overpopulating the environment with more servers than needed, because it's very easy to provide a server today, as I said, within one hour.

The way we manage that is by using VMware Chargeback. We've implemented this module and we have full visibility of the usage of a server. If someone who requested a server is not using it over a period of three months, we’ll know about it. We’ll contact them, and if they don’t require that server, we’ll just take it back, and the resources of that server will be available once again for us.

That way, we're not just handing out servers as easily as we could. We're taking back servers that are not used or can even be consolidated into one single server. For example, if someone requested five web servers based on Microsoft IIS and we're sure that they can be consolidated into just one server because CPU utilization is very low, we'll take them back.

If an application guy requires that the server have eight virtual CPUs, and we judge that its use at peak time is only two, we'll take six virtual CPUs back. So the process is managed very closely in order not to give away servers, or even power, to existing servers that are not really needed.
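The policy Inger describes reduces to a couple of simple rules, sketched below over hypothetical usage records. This is illustrative decision logic only; it is not the VMware Chargeback API, and the thresholds are assumptions rather than Clalbit's actual values.

```java
import java.util.List;

public class ReclamationPolicy {

    // Hypothetical usage summary for one virtual machine.
    record VmUsage(String name, int idleDays, int allocatedVcpus, int peakVcpusUsed) {}

    static String review(VmUsage vm) {
        if (vm.idleDays() >= 90) {
            // Unused for roughly three months: confirm with the requester, then reclaim it.
            return vm.name() + ": candidate to reclaim entirely";
        }
        if (vm.peakVcpusUsed() < vm.allocatedVcpus()) {
            // Allocated more vCPUs than peak usage justifies: give the difference back.
            int giveBack = vm.allocatedVcpus() - vm.peakVcpusUsed();
            return vm.name() + ": shrink by " + giveBack + " vCPU(s)";
        }
        return vm.name() + ": sized correctly";
    }

    public static void main(String[] args) {
        List<VmUsage> fleet = List.of(
                new VmUsage("web-05", 120, 4, 0),   // idle for four months
                new VmUsage("erp-01", 0, 8, 2),     // eight vCPUs allocated, two used at peak
                new VmUsage("db-02", 0, 4, 4));
        fleet.forEach(vm -> System.out.println(review(vm)));
    }
}
```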

Gardner: Tell me how you’ve been able to develop what sounds like a private cloud. Do you consider what you’ve done a private cloud, or is that something you’re looking to put in?

Inger: We do consider what we've done a private cloud. We're actually looking into ways of going to a hybrid cloud and pushing some of our systems to the public cloud, in order to control the hybrid one. But, as I said, we do consider the work we've been doing in the past three years as building a full private cloud.

Gardner: Have there been any hardware benefits when moving to a private cloud, perhaps using x86 hardware and blades? How has that impacted your costs, and have you moved entirely to standardized hardware?

Inger: Of course. When we saw that those 20 servers we initially virtualized in late 2008 all worked okay, we decided to set standards. One of the standards that was decided upon was that if it doesn't run on VMware, it won't get into our data center. So a lot of applications that ran on Itanium microprocessors were migrated to Linux on top of VMware running on x86.

Saving money

We managed to save a lot of money, both in supporting those legacy systems and in developing on those legacy systems. They're all gone. Everything that we have is virtual, 100 percent of the data center. Everything runs on x86 blades, running Windows 2008 or Linux.

All these systems used to run on a mainframe. Now it's Micro Focus COBOL running on the latest version of Red Hat Linux, on top of VMware and x86 blades.

Gardner: Let’s take the discussion more towards the desktop, the virtualization experience you’ve had with servers and supporting such workloads as SQL Server, Oracle, and SAP. This has given you a set of skills and confidence in virtualization that you’ve now taken out, using VMware View, to the desktop. Perhaps you could tell us how far you’ve gone in the virtual desktop infrastructure (VDI) direction?

Inger: After finishing the private cloud in our two data centers, the next step within that cloud was the desktop. We looked at how to minimize the amount of trouble we get from using our desktops -- back then it was with Windows XP desktops -- and how to enable mobility for users, giving them the full desktop experience, whether they're connecting from their own desktop in the workplace, using an iPad device, connecting from home, or visiting an insurance broker outside of our offices.

We looked at the couple of technologies that would fit, among them VMware View. Again, after a short POC, we decided to go ahead with VMware View. We started the project in January 2012 and right now we're running 600 users. All of them are using VMware View 4.6, which is being upgraded, as we speak, to version 5.1.

It enables us to give those users an immediate upgrade to a Windows 7 experience, just by installing VMware View, instead of having to upgrade each of those users' stations, and without physically visiting the 600 users who are on Windows 7 right now.

And we're delivering it on every device that they're working on. Whether they're at work, at home, or outside the office, their devices, such as the iPad we mentioned earlier, get the same experience. The plan is that by the end of next year, all of our employees could be working on VMware View.

Same experience

The ability to give the user the same experience on each device that he works on is sometimes priceless. When I fly from Israel to the United States and have a wi-fi connection in the plane, I can use an iPad and then work on my office application as if I were in the office. Otherwise, if it’s a 12 hour flight, I'd be 12 hours out of work.

If you take into account the entire ecosystem that you've built surrounding VMware View, it's actually priceless, but it's very hard to quote exactly how many dollars it saves us on a daily basis.

By the end of 2012, our plan and budget call for 1,000 users. So we're on the way to meeting our goal in December this year. For next year, 2013, our goal is to add 2,000. So it will cover almost the entire organization. That leaves something like 500 power users. I'm not sure that VMware View is the best solution for them yet. That will be tested in 2014.

Of course, VDI as a stepping stone is an essential element in implementing a bring-your-own-device (BYOD) policy. That's something we're doing. We're in the initial steps of this policy, mainly with iPad devices, which a lot of employees are bringing to work and would like to use when they're on site, offsite, or at home. Without VDI, it would be impossible to give them a solution. We have tons of iPads today that are connecting to the office via VDI with a full Windows desktop experience.

Gardner: I'd like to get your thinking around virtuous adoption. As we started talking about DR, your full virtualization of your server workloads, your being able to go to standardized operating systems and hardware, moving to VDI, and then moving to hybrid cloud and now mobile, it truly sounds as if there is a clear relationship between what you've done over the years with virtualization and this larger architectural payoff. Maybe you could help me better understand why the whole is perhaps greater than the sum of the parts.

Inger: The whole is greater than sum of the parts, because when I chose VMware as a partner combined with EMC on the storage side and their professional services, I had actually done a lot of the work together with my people.

Life gets easier

Life gets easier managing IT as an infrastructure when you choose all those parts together. An application guy could come to you and say, "I didn't calculate the workload correctly on the application that's going to be launched tomorrow, and instead of 2 front-end servers I need 15."

Some other person could come to me and say, "I now have five people working offshore, outside of Israel, and I need them to help me with a development task that is urgent. I need to give them access to our development site. What can you do to help me?"

I tell him, "Let's put them in our VDI environment, and they can start working five minutes from now." When you put all of those things together, you actually build an ecosystem that is easier to manage and easier to deploy, and everything is managed from a central view.

I know how many servers I have. I know the power consumption of those servers. I know about CPU, memory, disk I/O, and so on. And it even affects the decision-making process of how much more power I'll need on the server side and how many disks I'll need to buy for the upcoming projects that I have. It's a much easier decision-making process. Back in the physical days, when each server had its own memory, its own CPU, and its own disk, there was much more guessing than deciding based on facts.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Thursday, September 6, 2012

Cloud approach to IT service desk brings analysis, lower costs and self-help to BMC Remedyforce users

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: BMC Software.
Join Danielle Bailey and Alec Davis at Dreamforce 2012
Sept. 18-21 in San Francisco.
The next BriefingsDirect discussion examines how two companies are extending their use of cloud computing by taking on IT service desk and incident management functions "as a service." We'll see how a common data architecture and fast delivery benefits combine to improve the efficiency, cost, and results of IT support for end users.

Our examples are intelligent energy-management solutions provider Comverge and how it’s extended its use of Salesforce.com into a self-service enabled service desk capability using BMC’s Remedyforce.

We'll also hear the story of how modern furniture and accessories purveyor, Design Within Reach, has made its IT support more responsive -- even at a global scale -- via cloud-based incident-management capabilities.

Learn from them more about improving the business of delivering IT services, and about moving IT support and change management from a cost center to a proactive IT knowledge asset.

Here to share their story on creating the services that empower end users to increasingly solve their own IT issues is Danielle Bailey, IT Manager at Comverge in Norcross, Georgia, and Alec Davis, the Senior System Analyst at Design Within Reach, based in Stamford, Connecticut. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: BMC Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: When you began looking at improving your helpdesk solutions and IT support, what were the problems that you really wanted to solve?

Bailey: We had three pretty big pain points that we wanted to address. The first was cost. As our company was growing quickly, we were having some growing pains with our financials as far as being able to justify some of the IT expense that we had.

The solution that we had at the time charged by person, because there was a micro-agent involved, and so as we grew as a company, that expense continued to grow, even though it wasn't providing the same return on investment (ROI) per person to justify it.

So we had a little over $55,000 a year expense with our prior software-as-a-service (SaaS) solution, and so we wanted to be able to reduce that, bring it back more in line with the actual size of our IT group, so that it fit a little bit better into our budget.

One of the reasons we went with BMC Remedyforce is that rather than charging us by the end user, the license fees were by the helpdesk agent, which would allow us to stay within the scope of our IT team.

The second big issue that we had was that a lot of our end users were remote. We have field technicians who go out each day and install meters on homes, and they don’t carry laptops, and the micro-agent required laptops for them to be able to log tickets.

We wanted to be able to use something that would allow us to give our field techs the ability to log tickets on a mobile application, like their iPhones, and Remedyforce had that.

The third issue was that we were Sarbanes-Oxley (SOX) compliant and we needed to make sure that whatever solution we chose would allow us to track change management, to go through approval workflows, and to allow our management to have insight into what changes were being made as they went forward, and to be able to interact and collaborate on those changes.

So that was the third reason we chose Remedyforce. It has the change management in there, but it also has the Salesforce.com Chatter interface, which we use to make sure that managers can follow some of the incidents. As we go through, if we have any changes, we can quickly work with them to explain what we may need, and they can contribute to that conversation.

Different stories

Davis: We have a different story. A couple of years ago we made a huge corporate move from San Francisco to Stamford, Connecticut. With that move, we saw an opportunity to look at our network infrastructure and examine what hardware we needed and whether we could move to the cloud.

So BMC Remedyforce was part of a bigger project. We were moving toward Salesforce and we also moved toward Google Apps for corporate email. We wanted to reduce a lot of the hardware we had, so that we didn’t have to move it across the country.

We were also looking for something that could be up and running before that move, so we wouldn't have any downtime.

We quickly signed up with Google, and that went well. And then we moved into Salesforce.com. At Dreamforce 2010, Remedyforce was announced, and I was there and was really excited about the product. I was familiar with BMC's previous tools, as were some of the other IT staff, so we quickly jumped on it.

But as part of that move, something else kind of changed about our IT group. We did grow a bit smaller, but we were also more spread out. We used to all be in one location. Now, we're in San Francisco, Stamford, and also Texas. So we needed something that was easily accessible to us all. We didn’t necessarily want to have to use a virtual private network (VPN) to get onto a system, to interact with our incidents.

And we also liked the idea of a portal for our customers. Our customers are really just internal customers, our employees. We liked the idea of them being able to log in and see the status of an incident that they have reported.

We're also really big on change management. We manage our own homegrown enterprise resource planning (ERP) system. So we do lots of changes to that system and fix bugs as well. And when we add something new, we need approval of different heads of different departments, depending on what that feature is changing.

So we are big on change management, and prior to that we were just using really fancy Microsoft Word documents to get approvals that were either signed via email or printed out and physically signed. We liked the idea of change management in Remedyforce and having the improved approval process.

Gardner: Tell us about Comverge.

Bailey: Comverge is a green energy company. We try to help reduce peak load for utility companies. For example, when folks are coming home and starting to wash clothes, turn on the air-conditioning and things like that, the energy use for those utilities spikes.
Hardware and software

We provide software and hardware that allows us to cycle air-conditioning compressors on and off, so that we reduce that peak. And by reducing that peak we are able to help utility companies to meet their own energy needs, rather than buying power from other utilities or building new power plants.

We have been in business for about 25 years. We originally started out as part of Scientific Atlanta, but we have since taken on new companies across the country to integrate new technology into what we offer.

We are now nationwide. We provide services to utilities in the Northeast, from Pennsylvania, and then all the way down to Florida, and then all the way west to California, and then to Texas, New Mexico, and different areas in-between. And we’ve recently opened new offices in South Africa, providing the same energy services to them.

Comverge tries to make sure that the energy that we're able to help provide by reducing that load is green. It’s renewable. It’s something we can continue to do. It just helps to reduce cost as well as to save the environment from some of the pollution that may happen from new energy production.

In a nutshell, Comverge is a leading provider of intelligent energy management solutions for residential, commercial, and industrial customers. We deliver the insight and controls that enable energy providers and consumers to optimize their power usage through the industry's only proven, comprehensive set of technology, services, and information management solutions.

In January, Comverge delivered two new products, the Intel P910 PCU that includes capabilities to support dynamic pricing programs, and Intel Open Source Applications for the iPhone. The iPhone is very important to us. Our field technicians are using it at residential and commercial installations, and we just want to make sure that we continue with that innovation.

Gardner: And how many IT end users are you supporting at this point?

Bailey: About 600, and those are in South Africa, as well as all around the U.S. ... We transitioned in April to Remedyforce from our old SaaS system, but the users say that Remedyforce is a lot easier for them to use, as far as putting in a ticket and seeing updates whenever our technicians write notes or anything on the tickets. It's a lot easier for them to share with others whenever they have to change what we are working on.

Core business


We are still building our knowledge base. We didn't have that capability previously. So as tickets come in and we process, update, and close them, we are able to build articles that our technicians can use going forward.

I have recently switched my ERP analyst, but because I was able to pull out of Remedyforce some of the information my prior ERP analyst had captured, it actually helped me to train the new person on some of the things they can do to troubleshoot and resolve problems.

We are also able to use the automated reporting out of Remedyforce so that I can schedule reports on our tickets, see how many we have open, and for what categories and things like that, and take that to our executive management. They're able to see our resource needs, see where we may have bottlenecks, and help us make decisions that help our IT group move faster and more efficiently.

Gardner: Tell us about Design Within Reach.

Davis: Design Within Reach is a modern furniture retailer. We've been around for 12 years, starting in San Francisco. We have a website that has the majority of our sales. We also have “studios” that are better described as showrooms. We have usually about five reps in those studios, and we have about 50 studios around the U.S. and Canada.

So those [reps] are our users that we support. We've become a very mobile company in the last couple of years. A lot of our sales reps are using iPads. One of the requirements we've had is to be able to interact with corporate in a mobile fashion. Our sales reps walk around the showroom and work with our customers and they don’t necessarily want to be tied to a desk or tied to a desktop. So that is definitely a requirement for us.

Our IT staff is small. We have an IT group, information technologies, and we also have our information systems, which is our development side. In IT we have about six people, and in our IS department we also have about six people. We have kind of a tiered system. Tickets come in from our employees, our helpdesk triages those incidents, and then escalates them through that tiered system to our development side, if needed, or to our network team.

We also have some contractors and developers. As I mentioned before, we have our own ERP system. We do a lot of the development in house, so we don't have to outsource it. It's important for those contractors to be able to get into Remedyforce and work through the change management we have in place for requirements, and also, in some cases, look at incidents to see how bugs are happening in our ERP environment.

Self-help improvement

Gardner: How have you been able to empower those end users to find the resources they need, to keep you fairly lean when it comes to IT?

Davis: We have put most of the onus on our IT department to know how to resolve an issue, and we did have a lot of transition with new employees during our move. So building a knowledge base for onboarding new IT people is also very important. Again, we're a small team and we support a larger internal customer base, so we need them to start and have the answers pretty quickly.

Time is money, and we have our sales reps out there that are selling to our large customer base. If there's an issue with the reporting, we need to be able to respond to it quickly.

Gardner: And the conventional wisdom is that helpdesks are still costly, and the view has been that it’s a cost center. Is there anything about how you have done things that you think is changing that perception?

Davis: The reporting has helped us to isolate larger issues and to identify employees that put in a lot of incidents. The reporting is very flexible, and reporting requirements for management can change. With the Remedyforce reporting, I can change those existing reports, create new ones, or add new value to them.

Mainly you see how many tickets are coming in. We can show management how many incidents we are handling on a daily, weekly, and monthly basis, and so forth. But I use it mainly to identify where the larger issues are. Managing an ERP system is a large task, and I like to see what issues are happening and where we can work to fix those bugs. I work directly with the developers, so I like to be as proactive as I can in fixing those bugs.
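The triage Davis describes, counting incidents by category and by reporter to see where the real problems are, amounts to a simple aggregation. Remedyforce does this through its built-in reports, but a minimal sketch over hypothetical ticket records shows the idea:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class IncidentHotspots {

    // Hypothetical ticket; the real fields live in Remedyforce/Salesforce objects.
    record Ticket(String category, String reportedBy) {}

    public static void main(String[] args) {
        List<Ticket> tickets = List.of(
                new Ticket("ERP", "alice"), new Ticket("ERP", "bob"),
                new Ticket("Network", "alice"), new Ticket("ERP", "alice"),
                new Ticket("Email", "carol"));

        Map<String, Long> byCategory = tickets.stream()
                .collect(Collectors.groupingBy(Ticket::category, Collectors.counting()));
        Map<String, Long> byReporter = tickets.stream()
                .collect(Collectors.groupingBy(Ticket::reportedBy, Collectors.counting()));

        System.out.println("Incidents by category: " + byCategory);  // ERP stands out as the hotspot
        System.out.println("Incidents by reporter: " + byReporter);  // one reporter files the most
    }
}
```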

And we are very spread out and very mobile, so we like the flexibility to be able to get into Remedyforce without VPN or traditional methods.

Collaboration is becoming very important to us. We did roll out Salesforce.com Chatter to most of our company, and we are seeing the benefits in our sales team especially. We are trying to use Chatter and Remedyforce together to collaborate on issues. As I said, we are spread out, and our IT group has different skill sets.

Depending on what the issue is, we talk back and forth about how to resolve it, and that's so important, because you do build up knowledge, but the core of our knowledge is in every one of our employees. It's very important that we can connect quickly and collaborate in a more efficient way than we used to have.

Support scrum

Bailey: We have been able to show where IT is actually starting to save money for the rest of the company by increasing efficiency and productivity for some of our groups. There is some development work we are able to do by tracking and changing processes for folks, making them more efficient.

For example, one of the issues that we had was that we were tasked with trying to reduce our telecom expense. We were able to go through and log all of the different telecom lines and accounts. We had to trace them down and see where they were being used and where they may not be used anymore. We worked with some folks within the team to reduce a lot of the lines that we didn’t need anymore. We have been moving over to digital, but we still had a lot of analog lines.

Before, we didn't have a way to really track those particular assets to figure out who they belonged to and what their use was. Just by having that asset tracking and working through each of those as a group, we were able to reduce a lot.

In the first quarter of the year, we reduced our telecom expense by over $50,000 a year, and we are continuing with that effort.

With the knowledge base that we're building, we're able to let a lot of users begin to self-help. We have a pretty small IT team. We have only two people on what we call helpdesk support. Then we have two network team members, and we have about 10 people on our information services team, where we do development for the software and data services.

Support staff

The knowledge base has been a lot of help for us to just start building that knowledge repository. Whereas before, if someone left the company, you would lose years and years of knowledge, because there was no place where it was documented.

Because Remedyforce also ties into Salesforce.com, we'd [like to soon] be able to track some of our residential and utility customers in the Salesforce side as well, so that if the salesperson is aware that there is an issue going on with their utility, they can follow the information as it applies to that contact. Then, they're able to also reach out directly to the utility and make sure that things get handled the way they need to be handled according to contracts or relationships. So it's certainly something we are hoping to expand on.

We are also planning to use, and have already started using, Remedyforce for our HR group. When we have new hires or terminations, they're able to put in IT support tickets for that. We're able to build templates for each individual, so that as we receive notification that someone has been terminated, we can immediately remove them from the system too. HR has that access to put in those tickets and build those requests, and that helps maintain our SOX compliance.

Synergy and benefits

Gardner: What else have you have been doing with Remedyforce?

Davis: Information is very important to us, and very important to me. I like to see what is happening in the organization from a support standpoint. We haven't really pushed out Remedyforce to a lot of other departments outside of HR, which, of course, is helping us with onboarding new employees and offboarding as well.

But all of our internal support teams, our operations team that support our sales teams, some people in finance, and of course HR, are all using Salesforce cases.

So we have all of our customer information. We have all of our vendor information. That would be the IT vendors, but we're also a retail company, so our product retailers are in there too.

We've also moved it out to our distribution center. They have the support team there. We've also started bringing in all of our shipping carriers and all the vendors that they work with. So we have all of our data in one place.

We can see where a lot of issues are arising, and we can be more proactive with those vendors about the issues that we are seeing.

It's great to have all of our data, all of our customer information, all of our vendor information, in one location. I don’t like to have all these disparate systems where you have your data spread out. I love having them in one location. It's very helpful. We can run lots of reports to help us identify what’s happening in our company.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: BMC Software.
Join Danielle Bailey and Alec Davis at Dreamforce 2012
Sept. 18-21 in San Francisco.

Wednesday, August 29, 2012

Performance management tools help IT services provider Savvis scale to meet cloud of clouds needs

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance podcast series highlights how cloud infrastructure and hosted IT services provider Savvis has been able to automate out complexity and add deep efficiency to its operations.

Using a range of performance, operations orchestration and Business Service Automation (BSA) solutions from HP, Savvis has improved its incident resolution and sped the delivery of new cloud services to its enterprise clients.

To learn more about how they did it, we're joined by Art Sanderson, Senior Manager Enterprise Management Tools at Savvis. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What are the main drivers in your infrastructure as a service (IaaS) market?

Sanderson: Savvis is recognized as a global IT leader in providing IT as a service (ITaaS) to many of today’s most recognizable enterprise customers around the world. We offer cloud services and hosting infrastructure services to those customers.

Being an IT department of IT departments, or a dynamic service provider, has a lot of unique challenges that you don’t face in every IT shop that you run into. In fact, we have thousands of customers that we have to support with their own IT departments. So our solutions have to be able to scale beyond what you would find in a typical IT organization.

Gardner: And I should think that efficiency is super-important. It's all margin to you, when you can save and do things efficiently?

Better SLAs

Sanderson: Absolutely. There are just the efficiencies alone for operational cost, as well as the value that we provide to our customers, being able to provide better service-level agreements (SLAs), so their businesses are up and running and available to them to service their own customers. There are definitely some economies of scale there.

Our premier services are our Symphony cloud offerings -- Symphony VPDC, Symphony Open and Dedicated cloud, as well as Symphony Database. All of them, in some form or fashion and to various degrees, use the BSA tools on the back end for their own offerings and the automations that we offer our customers.

Gardner: Tell me what you've done in terms of management for better automation, orchestration, and then, how those benefits get passed on.

Sanderson: Sure. We've adopted the HP BSA set of tools as our automation platform and we’ve used that in a number of different ways and areas within Savvis. It's been quite a journey. We’ve been using the tools for approximately three to four years now.

We started out with some of our operational uses, and they've matured to the point now where a lot of our monitoring is handled by automation rather than by our operational staff.

There are definitely labor savings there, as well as time savings in mean time to resolution -- value that we're adding for our customers. That's just one of the benefits that we're seeing from the automation tools, not to mention the fact that we build a lot of our own key product offerings for the marketplace that we serve, using the BSA offerings on the back end as well.

Gardner: How do you measure performance benefits? Is there a set of key performance indicators (KPIs) or some benchmarks?

Sanderson: From an operational perspective, we do monitor the number of automations that we run that we can capture from the operational side of the house. For example, on a typical day we run anywhere from 10,000-20,000 types of automations through our systems, and that would actually add value back to the business from a labor-savings perspective.

In just this first quarter of 2012 alone, we recognized somewhere in the neighborhood of $250,000 in labor savings just from the automations, from an operational perspective. Again, it's hard to quantify the value added on the business side, because those are solutions that we're offering to the market space that are generating new value back to the organization as a whole.

Mature process

From the people and process side, we didn’t start out necessarily doing it the right way from the operations side of the house. But we have matured the process to where we're now delivering solutions in a much more rapid fashion. The business is driving the priorities from an operational perspective as far as what we’re spending our time on.

Then, we can typically turn around automations in a very short time. In some cases, we’ve built frameworks using these tools where we can turn around an automation that used to take two to three weeks. Now, it can take less than an hour to turn around that same automation.

So we’ve gotten really smart at what we’re doing with the tools, not just building something net new every time, but also making the tools more reusable themselves.

From the value to the organization, we’ve also had many groups within the product engineering side of the house take on and learn tools like HP Operations Orchestration (HPOO) and HP Service Activator (HPSA), and leverage their own domain knowledge as network engineers or storage engineers to build net new solutions that we then turn around and offer to our customers.

That eliminates a lot of the business analyst type of work and things like that that would typically go into the normal systems development lifecycle (SDLC)-type process that you would see. We’re able to cut the time to market for the offerings that we’re producing for our customers.

It does make us much more agile and responsive to the needs of our customers and the industry.

Gardner: How large is Savvis?

Sanderson: Today, we have about 25,000 servers under management, spread across 50 data centers worldwide, and, to give you an idea, approximately 9,000 to 10,000 automations run through HPOO on a typical day.

As far as the scale and breakdown of the servers, two-thirds of our servers today are virtualized, whether through the cloud or through traditional orders that customers place. So we're seeing a lot of growth in virtual machines (VMs) and the cloud space. That's where things are going for our organization, as well as for the industry.

Self healing

Our self-healing infrastructure is where we've really matured our process and recognized the reusability of a meta-model to drive the HPOO flows we write. We've taken the patterns we identified, built a meta-model around them, and put a user interface in front of it.

If somebody has a new request, they can submit it to us, and within a matter of minutes we can enter the data through the user interface and publish a new flow, without ever having to write new Operations Orchestration flows.
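The pattern described here, flows defined as data and interpreted by one generic runner, can be sketched roughly as follows; the catalog, step names, and flows are hypothetical stand-ins, not the actual HPOO meta-model.

    # Minimal sketch of a data-driven ("meta-model") flow catalog: each flow
    # is data interpreted by a generic runner, so publishing a new flow means
    # registering a record rather than writing new orchestration code.
    # Flow and step names are hypothetical.
    FLOW_CATALOG = {
        "reboot-and-verify": [
            {"step": "reboot", "timeout_s": 300},
            {"step": "check_service", "service": "httpd"},
        ],
    }

    STEP_LIBRARY = {
        "reboot": lambda host, spec: f"rebooted {host}",
        "check_service": lambda host, spec: f"{spec['service']} healthy on {host}",
    }

    def publish_flow(name, steps):
        """'Publish' a new flow by adding data -- no new step code required."""
        FLOW_CATALOG[name] = steps

    def run_flow(name, host):
        return [STEP_LIBRARY[s["step"]](host, s) for s in FLOW_CATALOG[name]]

    publish_flow("verify-cache", [{"step": "check_service", "service": "memcached"}])
    print(run_flow("reboot-and-verify", "app-017"))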

Gardner: Tell me a little bit about your future plans to improve both innovation and productivity.

Sanderson: Obviously, the reason we come to conferences like HP Discover is to learn about where HP is going, so we can make sure that we're in alignment, both from our business needs, as well as where the products are going that we use to drive our own solution.

It's critical that we're able to maintain an upgrade path and support our business. We've already started to plan, based on what we see coming down the path from HP, for future infrastructure, and even dedicated infrastructure, as our business continues to grow. For example, for the Symphony products we were referring to earlier, we have to break off more and more dedicated infrastructure to keep pace with the scale and capacity at which they're growing.

We would never have anticipated, when we started a few years ago, that a customer would come to us and say they want to order 400 VMs or 1,000 VMs, but customers are coming to us today doing exactly that. That's the kind of scale we're seeing, even just a year into the offerings we're providing to the marketplace.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, August 28, 2012

Learn why success greets NYSE Euronext's Community Platform for Capital Markets cloud

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.

Our next VMworld case study interview revisits a unique vertical industry cloud -- NYSE Euronext's Capital Markets Community Platform -- to take stock of how mission-critical cloud services are being delivered.

We'll learn about how this innovative cloud and groundbreaking business model targets the needs of Wall Street IT leaders, how the business of the financial services industry has received them, and explore how providing cloud services as a business has evolved.

This story comes as part of a special BriefingsDirect podcast series from the 2012 VMworld Conference in San Francisco the week of August 27. The series explores the latest in cloud computing and software-defined datacenter infrastructure developments.

Our guest is Feargal O'Sullivan, the Global Head of Alliances at NYSE Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: How have things progressed over the past year?

O'Sullivan: We've been very happy with the progress we've made. When we announced at VMworld last year, we had just gone into early access for our first clients in our data center in the New York, New Jersey, Connecticut tri-state area, where we run all of our US-based markets: the New York Stock Exchange markets, the Arca electronic markets, and AMEX.

That has since gone into production, has a number of clients on it, is being received very well by the community, and is really serving as a linchpin of our strategy of building a global capital markets community.

Since the success of that, we've actually progressed further, to the point of having deployed the same environment in a second data center that we own and run just outside of London, in a town called Basildon, which is where we run all of our European markets, the Euronext side of NYSE Euronext.

We now have an equivalent VMware-based cloud environment and a range of ancillary services for the capital markets industry available in that location. Clients can now access, as a service, both infrastructure and platform capabilities in both of those facilities.

Furthermore, we've extended to two other financial centers in the world, one in Toronto and one in Tokyo. That's a slightly more stripped-down version of the community platform, but it's very useful for clients who are expanding their business and going global.

Four locations

Now, we have those four locations up and running in production with production clients, so we are very happy with that progress.

Gardner: What is it about the way that we're doing things now -- the whole software-defined datacenter model -- that's allowed you to build out so quickly?

O'Sullivan: Clearly, the technology has advanced significantly from the old days. The virtualization capability at the hardware-server level with the VMware hypervisors, and in particular the vCloud suite, gives clients control over their own environment.

Also, on the networking side, it has become much more viable for clients to deploy into a shared environment while remaining confident that they'll get both the security profile they're looking for and the performance they need.

We use the EMC VNX array with the FAST Cache capability to give a very stable performance profile based on demand. It allows different workloads, and yet each gets very good performance and response time. So there are many components along the way. Also, management and monitoring of these types of infrastructures have improved.

Our clients have certainly seen that enhancement in the technology. The financial services industry is unique in the way it leverages technology in two respects.

One, the security profile is absolutely critical. Security isn't just about customer data, but about application development and the tools of the trade: intellectual property that firms might have, trading strategies, analyses, analytics, and other components they develop and build. Firms consider these highly proprietary and don't want anyone else getting access to them, so they place security extremely high on the list.

The other unique aspect is performance. It's a slightly different performance model from your typical three-tier web-store environment. Financial services firms push very high volumes of content through their applications. They need to do so with microseconds, or at most milliseconds, of response time and latency, and, most importantly, they need to do so predictably.

With a big batch job of some kind, say a genetic-folding job, you drop off the job, go away for 12 hours, and come back. A bit of inefficient processing time isn't great, because it drags the whole run out, but there's no critical "need it here, need it now" requirement. So latency spikes are less of a problem.

Latency spikes

But in our industry, latency spikes are a real problem. People look for predictable latency, so we had to make sure that we applied a very tight security profile to our cloud, and a very high-performance profile as well.
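To show why predictability is judged by the tail of the latency distribution rather than the average, here is a small Python sketch on synthetic numbers; the samples are invented purely to illustrate the effect of a single spike.

    # Two feeds can share a similar average latency yet differ wildly in the
    # tail. The sample values below are synthetic, in microseconds.
    import statistics

    samples_us = [85, 90, 88, 92, 87, 91, 89, 86, 90, 2400]  # one spike

    def percentile(data, p):
        data = sorted(data)
        idx = min(len(data) - 1, round(p / 100 * (len(data) - 1)))
        return data[idx]

    print("mean (us):  ", round(statistics.mean(samples_us), 1))  # ~319.8
    print("median (us):", percentile(samples_us, 50))             # 89
    print("p99 (us):   ", percentile(samples_us, 99))             # 2400, the spike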

Gardner: How have you been able to build on this cloud in terms of those value-added services that you deliver specifically to a financial clientele?

O'Sullivan: That's why we built our cloud. There are many service providers who offer very valuable cloud capabilities built on core infrastructure and core computing, and they do so very well. However, we consider ourselves a vertical industry community, specifically focused on capital markets participants, and we try to make access to the markets cheaper, more cost-effective, and more readily available to a wider range of participants.

So in our cloud and our community, we provide a range of platforms and services that we have added. The core is, "Come into our vCloud Director environment and access your compute infrastructure." We offer a Compute On Demand Virtual Edition, and we also have a Compute On Demand Physical Edition for those cases where latency is of the utmost importance.

Then, we provide clients with the value-added features that we know they need, because they're in the capital markets business. The key one is market data. This is absolutely critical in financial services, because every trade, no matter what you are buying or selling, always starts with a quote. Even if you walk into a shop and ask how much a can of soda would be, they say it's $1 or $1.20, whatever it is, and then you decide whether you want to buy.

So in the financial services industry, market data is the starting point, the driver of all the business. And the volumes, the sheer size of the content that comes down, are really astounding. It's at the point now that even if you were to subscribe only to all North American equities and options, you'd need a 10-gigabit Ethernet pipe, and at points during the day you're probably using upwards of 8 gigabits of that pipe just to receive all that content.
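As a rough illustration of what a sustained 8 gigabits per second implies, the calculation below assumes an average message size; that size is an assumption chosen only to show the scale, not a quoted figure.

    # Rough message rate implied by ~8 Gb/s of market data at peak.
    peak_bits_per_second = 8e9
    avg_message_bytes = 100   # assumed size of a single quote/trade message
    msgs_per_second = peak_bits_per_second / 8 / avg_message_bytes
    print(f"~{msgs_per_second / 1e6:.0f} million messages per second at peak")  # ~10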

Obviously, we can provide raw content, but we've added a range of services into our cloud and into the community. We can say, "We can offer you a nice filtered market data feed, where you just present us with the list of instruments you want, and we can add value-added calculations, do analytics, and provide that to you."
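A filtered, value-added feed of the kind described might look roughly like this sketch; the field names and derived values are illustrative only, not an NYSE Technologies API.

    # Keep only the client's instruments and attach simple derived analytics.
    # Field names are hypothetical.
    raw_feed = [
        {"symbol": "ABC", "bid": 10.00, "ask": 10.02},
        {"symbol": "XYZ", "bid": 55.10, "ask": 55.16},
        {"symbol": "QRS", "bid": 7.45,  "ask": 7.50},
    ]

    def filtered_feed(feed, watchlist):
        for quote in feed:
            if quote["symbol"] in watchlist:
                mid = (quote["bid"] + quote["ask"]) / 2
                yield {**quote,
                       "mid": round(mid, 4),
                       "spread_bps": round((quote["ask"] - quote["bid"]) / mid * 1e4, 2)}

    print(list(filtered_feed(raw_feed, {"ABC", "QRS"})))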

We've also developed an historical market-data access service. So if you want to go back and test your strategies against previous days of trading, back for many, many years, we have a database that's deployed in the cloud. So you can query the database, load it into your virtual environment, and analyze and back-test your strategies.
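The back-testing idea can be pictured along these lines; SQLite, the schema, and the toy rule below are stand-ins assumed for illustration, not the hosted historical database itself.

    # Query stored history and replay a trivial check against it. SQLite and
    # this schema are illustrative stand-ins for the hosted service.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE quotes (day TEXT, symbol TEXT, close REAL)")
    conn.executemany("INSERT INTO quotes VALUES (?, ?, ?)", [
        ("2012-08-01", "ABC", 10.0), ("2012-08-02", "ABC", 10.4),
        ("2012-08-03", "ABC", 10.1), ("2012-08-06", "ABC", 10.7),
    ])

    closes = [row[0] for row in conn.execute(
        "SELECT close FROM quotes WHERE symbol = 'ABC' ORDER BY day")]

    # Toy check: count up days as a stand-in for evaluating a rule on history.
    up_days = sum(1 for prev, cur in zip(closes, closes[1:]) if cur > prev)
    print(f"{up_days} of {len(closes) - 1} sessions closed higher than the prior day")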

We've added order-routing capabilities, so when you are ready to send your orders to the market, if you are a market maker yourself, you might go direct to our gateway. If you're a sponsored participant, you might go through our risk-managed gateway, which would be sponsored by a broker.

Or, if you're a regular buy-side firm, a money manager, you might use our routing network and ask us to route your orders to the different brokers or the different markets, and we can handle that. Those are the two ends of the trade.
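The three access paths outlined here suggest a routing decision along the following lines; the client types and gateway names in this sketch are illustrative labels, not the production routing logic.

    # Route an order to one of three access paths based on client type.
    # Labels are hypothetical.
    def route_order(client_type, order):
        if client_type == "market_maker":
            return ("direct_gateway", order)
        if client_type == "sponsored_participant":
            order = {**order, "risk_checked_by": "sponsoring_broker"}
            return ("risk_managed_gateway", order)
        return ("routing_network", order)  # buy-side / money manager default

    print(route_order("sponsored_participant", {"symbol": "ABC", "qty": 500, "side": "buy"}))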

Integration pieces

On Thursday, Aug. 30, I'm going to be presenting with VMware and EMC in one of the breakout sessions about us moving up the stack to start offering more of the integration pieces of this. We're using the Spring environment and a range of other VMware tools, GemFire, and so on, to demonstrate a full trading system deployed in the virtual environment with the integration tools -- all running hosted in our environment.

It's more of a framework that we're showing, but it provides platform as a service (PaaS), not just the market data in, which is our specialty, and the order routing out. Once you're within your environment, the range of additional tools makes it easy for you to develop and customize your own trading tools and your own trading strategies. That's something I will be talking about on Thursday.

Gardner: How has the reception been in the market?

O'Sullivan: The good news is that we've made great progress. We have a number of clients in all of the locations I mentioned, and we're continuing to grow. It's a tough environment, as you can imagine, both in the general economy and in the financial services industry in particular, but we expect to keep growing this significantly.

We have certainly been very happy with the uptake so far. We knew we were going out well ahead of everybody else, and we were keen to do so, because we see and understand the vision that VMware and EMC in particular have been promoting over the past few years. We agree with it fully, and we feel we're uniquely positioned within the capital markets industry as the neutral party.

Remember, we're just a place where people go to trade. We don't decide what you buy or what you sell or how much it should be. We just provide the facility, the rules, and the oversight to ensure an orderly market. We wanted to make it easier and more cost-effective for firms to get access to that environment.

So by providing all of this capability, we think we're in a fantastic position as more and more firms continue to explore virtualization and the outsourcing of non-business-critical functions, functions that for a while ran on their own servers but are now nothing but overhead.

We see those functions moving more and more into the cloud. We expect that, over the next two or three years, this is really going to explode. We intend to be there, established, fully in production, tried and tested, and leading the industry from the front, as we think we should be with a name like the New York Stock Exchange.

Well-known brand

That’s a brand that's so well-known globally. It's the best place to trade. It's the most reliable and most secure place to trade stocks, with the best oversight, and we want to apply that model to all of the services that we offer our clients.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Get the latest announcements about VMware's cloud strategy and solutions by tuning into VMware NOW, the new online destination for breaking news, product announcements, videos, and demos at: http://vmware.com/go/now.
