Wednesday, December 7, 2011

Embarcadero brings self-service app store model to enterprises to target PCs and their universe of software

Embarcadero Technologies, a provider of database and application development software, recently announced AppWave, a free platform that provides self-service, one-click access to PC software within organizations for business PCs and even personal employee laptops.

Available via a free download, the AppWave platform gives users access to more than 250 free PC productivity apps for general business, marketing, design, data management, and development including OpenOffice, Adobe Acrobat Reader, 7Zip, FileZilla, and more.

AppWave users also can add internally developed and commercial software titles, such as Adobe Creative Suite products and Microsoft Visio, for on-demand access, control, and visibility into software titles they already own. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]

The so-called app store model, pioneered by Apple, is rapidly gaining admiring adopters thanks to its promise of reducing cost of distribution and of updates -- and also of creating whole new revenue streams and even deeper user relationships.

As mobile use rapidly changes the way the world accesses applications, data, and services, the app store model is changing expectations and behaviors. And this is a good lesson for enterprises.

App stores work well for both users and providers, internal or external. The users are really quite happy with ordering what they need on the spot, as long as that process is quick, seamless, and convenient.

As with SOA registries, it now makes sense to explore how such "stores" can be created quickly and efficiently to distribute, manage, and govern how PC software is distributed inside of corporations.

The AppWave platform gives business users a way to quickly gain productivity and speed-to-value benefits from PC-based apps. Such approaches form an important advance as organizations pursue more efficient ways to track, manage, and deliver worker applications, and bill for them based on actual usage.

Easily consumed

The AppWave platform converts valued, but often cumbersome, business software into easily consumed and acquired "apps," so business users don't have to wait in line for IT to order, install, and approve the work tools they need.

With AppWave, companies have a consumer-like app experience with the software they commonly use. With rapid, self-service access to apps, and real-time tracking and reporting of software utilization, the end result is a boost in productivity and lowering of software costs. Pricing to enable commercial and custom software applications to run as AppWave apps starts at $10 to $400 per app.

Increasing demand for consumer-like technology experiences at work has forced enterprises to face some inconvenient truths about traditional application delivery models. Rather than wait many months for dated applications that take too long to install manually on request, business managers and end users alike are seeking self-provisioning alternatives akin to the consumer models they know from their mobile activities.


Monday, December 5, 2011

HP hybrid cloud news shows emphasis on enabling the telcos and service providers first

HP at the Discover 2011 Conference in Vienna last week announced a wide range of new Cloud Solutions designed to advance deployment of private, public and hybrid clouds for enterprises, service providers, and governments. Based on HP Converged Infrastructure, the new and updated HP Cloud Solutions provide the hardware, software, services and programs to rapidly and securely deliver IT as a service.

I found these announcements a clearer indicator of HP's latest cloud strategy, with an emphasis on enabling a global, verticalized and marketplace-driven tier of cloud providers. I've been asked plenty about HP's public cloud roadmap, which has been murky. This now tells me that HP is going first to its key service provider customers for data center and infrastructure enablement for their clouds.

This makes a lot of sense. The next generation of clouds -- and I'd venture the larger opportunity once the market settles -- will be specialized clouds. Not that Amazon Web Services, Google, and Rackspace are going away. But one-size-fits-all approaches will inevitably give way to specialization and localization. Telcos are in a great position to step up and offer these value-add clouds and services to their business customers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

And HP is better off providing the picks and shovels to them in spades than coming to market in catch-up mode with plain vanilla public cloud services under its own brand. It's the classic clone strategy that worked for PCs, right? Partnerships and ecosystem alliances are the better way. A good example is the partnership announced last week with Savvis.

HP’s new offerings address the key areas of client needs – building differentiated cloud offerings, consuming cloud services from the public domain, and managing, governing and securing the entire environment. This again makes sense. No need for channel conflict on cloud services between this class of nascent cloud providers and the infrastructure providers themselves.

Expanding the ecosystem

Among the announcements was an expansion of the cloud ecosystem with new partners, offerings and programs:
  • New HP CloudSystem integrations with Alcatel-Lucent will enable communications services providers to deliver high-value cloud services using carrier-class network and IT by automating the provisioning and management of cloud resources.

  • HP CloudAgile Service Provider Program offers service providers expanded sales reach, an enhanced services portfolio and an accelerated sales cycle through direct access to HP’s global sales force. HP has expanded the program with its first European partners and with new certified hosting options that enable service providers to deliver reliable, secure private hosted clouds based on HP CloudSystem.

    Clients want to understand, plan, build and source for cloud computing in a way that allows them to gain agility, reduce risk, maintain control and ensure security.



  • HP CloudSystem Matrix 7.0, the core operating environment that powers HP CloudSystem, enables clients to build hybrid clouds with push-button access to externally sourced cloud-based IT resources with out-of-the-box “bursting capability.” This solution also includes automatic, on-demand provisioning of HP 3PAR storage to reduce errors and speed deployment of new services to just minutes.


  • The HP Cloud Protection Program spans people, process, policies and technologies to deliver a level of security for a hybrid cloud comparable to what a private, internet-enabled IT environment would receive. The program is supported by a Cloud Protection Center of Excellence that enables clients to test HP solutions as well as partner and third-party products that support cloud and virtualization protection.
Enterprise-class services

New and enhanced HP services that provide a cloud infrastructure as a service to address rapid and secure sourcing of compute services include:
Guidance and training

HP has also announced guidance and training to transform legacy data centers for cloud computing:
  • Three HP ExpertONE certifications – HP ASE Cloud Architect, HP ASE Cloud Integrator and HP ASE Master Cloud Integrator, which encompass business and technical content.

  • Expanded HP ExpertONE program that includes five of the industry’s largest independent commercial training organizations that deliver HP learning solutions anywhere in the world. The HP Institute delivers an academic program for developing HP certified experts through traditional two- and four-year institutions, while HP Press has expanded self-directed learning options for clients.

  • HP Cloud Curriculum from HP Education Services offers course materials in multiple languages covering cloud strategies. Learning is flexible, with online virtual labs, self study, classroom, virtual classroom and onsite training options offered through more than 90 HP education centers worldwide.

    The new offerings are the culmination of HP’s experience in delivering innovative technology solutions, as well as providing the services and skills needed to drive this evolution.



  • Driven by HP Financial Services, HP Chief Financial Officer (CFO) Cloud Roundtables help CFOs understand the benefits and risks associated with the cloud, while aligning their organizations’ technology and financial roadmaps.

  • HP Storage Consulting Services for Cloud, encompassing modernization and design, enable clients to understand their storage requirements for private cloud computing as well as develop an architecture that meets their needs.

  • HP Cloud Applications Services for Windows Azure accelerate the development or migration of applications to the Microsoft Windows Azure platform-as-a-service offering.
A recording of the HP Discover Vienna press conference and additional information about HP’s announcements at its premier client event are available at www.hp.com/go/optimization2011.


Wednesday, November 30, 2011

Big Data meets Complex Event Processing: AccelOps delivers a better architecture to attack the data center monitoring and analytics problem

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: AccelOps. Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.

The latest BriefingsDirect podcast discussion centers on how new data and analysis approaches are significantly improving IT operations monitoring, as well as providing stronger security.

The conversation examines how AccelOps has developed technology that correlates events with relevant data across IT systems, so that operators can gain much better insights faster, and then learn as they go to better predict future problems before they emerge. That's because advances in big data analytics and complex events processing (CEP) can come together to provide deep and real-time, pattern-based insights into large-scale IT operations.

Here to explain how these new solutions can drive better IT monitoring and remediation response -- and keep those critical systems performing at their best -- is Mahesh Kumar, Vice President of Marketing at AccelOps. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: AccelOps is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Is there a fundamental change in how we approach the data that’s coming from IT systems in order to get a better monitoring and analysis capability?

Kumar: The data has to be analyzed in real-time. By real-time I mean in streaming mode before the data hits the disk. You need to be able to analyze it and make decisions. That's actually a very efficient way of analyzing information. Because you avoid a lot of data sync issues and duplicate data, you can react immediately in real time to remediate systems or provide very early warnings in terms of what is going wrong.

The challenges in doing this streaming-mode analysis are scale and speed. The traditional approaches with pure relational databases alone are not equipped to analyze data in this manner. You need new thinking and new approaches to tackle this analysis problem.
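To make streaming-mode analysis concrete, here is a minimal sketch (not AccelOps code; the metric names, fields, and thresholds are hypothetical) of evaluating each event in memory, before it ever hits a disk:

```python
from collections import deque
from statistics import mean, stdev

WINDOW = 100          # recent samples kept in memory per metric
SIGMA_THRESHOLD = 3   # flag values more than 3 standard deviations off baseline

windows = {}          # metric name -> sliding window of recent values

def process_event(event):
    """Analyze one event in streaming mode; return an alert or None.

    `event` is a dict such as {"metric": "cpu_util", "host": "db01", "value": 97.2}.
    Nothing is persisted; the decision is made while the data is still in memory.
    """
    window = windows.setdefault(event["metric"], deque(maxlen=WINDOW))
    if len(window) >= 10:                       # need a minimal baseline first
        mu, sigma = mean(window), stdev(window)
        if sigma > 0 and abs(event["value"] - mu) > SIGMA_THRESHOLD * sigma:
            return {"host": event["host"], "metric": event["metric"],
                    "value": event["value"],
                    "reason": f"deviates >{SIGMA_THRESHOLD} sigma from recent baseline"}
    window.append(event["value"])
    return None

# Usage: feed events as they stream in and act on alerts immediately.
for evt in [{"metric": "cpu_util", "host": "db01", "value": v}
            for v in [40, 42, 41, 39, 43, 40, 41, 42, 40, 41, 98]]:
    alert = process_event(evt)
    if alert:
        print("early warning:", alert)
```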

Gardner: Also, for issues of security, offenders are trying different types of attacks. So this needs to be in real-time as well?

Kumar: You might be familiar with advanced persistent threats (APTs). These are attacks where the attacker tries their best to be invisible. These are not the brute-force attacks that we have witnessed in the past. Attackers may hijack an account or gain access to a server, and then over time, stealthily, be able to collect or capture the information that they are after.

These kinds of threats cannot be effectively handled only by looking at data historically, because these are activities that are happening in real-time, and there are very, very weak signals that need to be interpreted, and there is a time element of what else is happening at that time. This too calls for streaming-mode analysis.

If you notice, for example, a database administrator accessing a server for which they have an admin account, that gives you a certain amount of context around the activity. But if, on the other hand, you learn that a user is accessing a database server for which they don’t have the right level of privileges, it may be a red flag.

You need to be able to connect this red flag that you identify in one instance with the same user trying to do other activity in different kinds of systems. And you need to do that over long periods of time in order to defend yourself against APTs.
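As a rough illustration of that kind of correlation, here is a sketch (hypothetical field names, not AccelOps code) that treats a single unauthorized access as a weak signal and only escalates when the same user trips flags on multiple systems over a long window:

```python
from collections import defaultdict

CORRELATION_WINDOW = 30 * 24 * 3600   # seconds; correlate weak signals over ~30 days

flags = defaultdict(list)             # user -> [(timestamp, system), ...]

def on_event(event):
    """Correlate weak, per-user signals across systems over a long window.

    `event` is a hypothetical normalized record such as
    {"time": 1322700000, "user": "jsmith", "system": "db07", "authorized": False}.
    """
    user, now = event["user"], event["time"]
    if not event.get("authorized", True):            # a red flag, weak in isolation
        flags[user].append((now, event["system"]))

    # Discard flags that have aged out of the correlation window.
    flags[user] = [(t, s) for t, s in flags[user] if now - t <= CORRELATION_WINDOW]

    # One weak signal is noise; the same user tripping flags on several
    # different systems over weeks is the pattern worth escalating.
    if len(flags[user]) >= 3 and len({s for _, s in flags[user]}) >= 2:
        return {"user": user, "signals": flags[user],
                "assessment": "possible advanced persistent threat"}
    return None
```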

Gardner: It's always been difficult to gain accurate analysis of large-scale IT operations, but it seems that this is getting more difficult. Why?

Kumar: If you look at trends, there are on average about 10 virtual machines (VMs) to a physical server. Predictions are that this is going to increase to about 50 to 1, maybe higher, with advances in hardware and virtualization technologies. The increase in density of VMs is a complicating factor for capacity planning, capacity management, performance management, and security.

In a very short period of time, you have in effect seen a doubling of the size of the IT management problem. So there are a huge number of VMs to manage and that introduces complexity and a lot of data that is created.

Cloud computing

Cloud computing is another big trend. All analyst research and customer feedback suggests that we're moving to a hybrid model, where you have some workloads on a public cloud, some in a private cloud, and some running in a traditional data center. For this, monitoring has to work in a distributed environment, across multiple controlling parties.

Last but certainly not the least, in a hybrid environment, there is absolutely no clear perimeter that you need to defend from a security perspective. Security has to be pervasive.

Given these new realities, it's no longer possible to separate performance monitoring aspects from security monitoring aspects, because of the distributed nature of the problem. ... So change is happening much more quickly and rapidly than ever before. At the very least, you need monitoring and management that can keep pace with today’s rate of change.

The basic problem you need to address is one of analysis. Why is that? As we discussed earlier, the scale of systems is really high. The pace of change is very high. The sheer number of configurations that need to be managed is very large. So there's data explosion here.

Since you have a plethora of information coming at you, the challenge is no longer collection of that information. It's how you analyze that information in a holistic manner and provide consumable and actionable data to your business, so that you're able to actually then prevent problems in the future or respond to any issues in real-time or in near real-time.

You need to nail the real-time analytics problem and this has to be the centerpiece of any monitoring or management platform going forward.

Advances in IT

Gardner: So we have the modern data center, we have issues of complexity and virtualization, we have scale, we have data as a deluge, and we need to do something fast in real-time and consistently to learn and relearn and derive correlations.

It turns out that there are some advances in IT over the past several years that have been applied to solve other problems that can be brought to bear here. You've looked at what's being done with big data and in-memory architectures, and you've also looked at some of the great work that’s been done in services-oriented architecture (SOA) and CEP, and you've put these together in an interesting way.

Kumar: Clearly there is a big-data angle to this.

Doug Laney, a META and a Gartner analyst, probably put it best when he highlighted that big data is about volume, the velocity or the speed with which the data comes in and out, and the variety or the number of different data types and sources that are being indexed and managed.

For example, in an IT management paradigm, a single configuration setting can have a security implication, a performance implication, an availability implication, and even a capacity implication in some cases. Just a small change in data has multiple decision points that are affected by it. From our angle, all these different types of criteria affect the big data problem.

Couple of approaches

There are a couple of approaches. Some companies are doing some really interesting work around big-data analysis for IT operations.

They primarily focus on gathering the data, heavily indexing it, and making it available for search, thereby deriving analytical results. That allows you to do forensic analysis that you were not easily able to do with traditional monitoring systems.

The challenge with that approach is that it swings the pendulum all the way to the other end. Previously we had very rigid, well-defined relational data models or data structures, and the index-and-search approach is much more free-form. So the pure index-and-search type of approach sits at the other end of the spectrum.

What you really need is something that incorporates the best of both worlds and puts that together, and I can explain to you how that can be accomplished with a more modern architecture. To start with, we can't do away with this whole concept of a model or a relationship diagram or entity relationship map. It's really critical for us to maintain that.

I’ll give you an example. When you say that a server is part of a network segment, and a server is connected to a switch in a particular way, it conveys certain meaning. And because of that meaning, you can now automatically apply policies, rules, patterns, and automatically exploit the meaning that you capture purely from that relationship. You can automate a lot of things just by knowing that.

If you stick to a pure index-and-search approach, you basically zero out a lot of this meaning and lose information in the process. Then it's the operators who have to handcraft queries to reestablish the meaning that's already out there. That can get very expensive pretty quickly.

Our approach to this big-data analytics problem is to take a hybrid approach. You need a flexible and extensible model that you start with as a foundation, that allows you to then apply meaning on top of that model to all the extended data that you capture and that can be kept in flat files and searched and indexed. You need that hybrid approach in order to get a handle on this problem.

Gardner: Why do you need to think about the architecture that supports this big data capability in order for it to actually work in practical terms?

Kumar: You start with a fully virtualized architecture, because it allows you not only to scale easily, ... but you're able to reach into these multiple disparate environments and capture and analyze and bring that information in. So virtualized architecture is absolutely essential.

Auto correlate

Maybe more important is the ability for you to auto-correlate and analyze data, and that analysis has to be distributed analysis. Because whenever you have a big data problem, especially in something like IT management, you're not really sure of the scale of data that you need to analyze and you can never plan for it.

Think of it as applying a MapReduce type of algorithm to IT management problems, so that you can do distributed analysis, and the analysis is highly granular or specific. In IT management problems, it's always about the specificity with which you analyze and detect a problem that makes all the difference between whether that product or the solution is useful for a customer or not.
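As a simple illustration of the MapReduce analogy, here is a sketch that maps an event-counting function over chunks of events on worker processes and reduces the partial results. The event fields are hypothetical, and Python's multiprocessing stands in for a genuinely distributed cluster:

```python
from multiprocessing import Pool
from collections import Counter
from functools import reduce

def map_chunk(events):
    """Map step: each worker scans its own chunk of events and counts
    error conditions per (host, error_type) pair."""
    counts = Counter()
    for e in events:
        if e.get("severity") == "error":
            counts[(e["host"], e["type"])] += 1
    return counts

def reduce_counts(a, b):
    """Reduce step: merge partial results from the workers."""
    a.update(b)
    return a

def analyze(events, workers=4, chunk=10_000):
    chunks = [events[i:i + chunk] for i in range(0, len(events), chunk)]
    with Pool(workers) as pool:
        partials = pool.map(map_chunk, chunks)
    return reduce(reduce_counts, partials, Counter())

if __name__ == "__main__":
    sample = [{"host": f"web{i % 3}", "type": "timeout", "severity": "error"}
              for i in range(25_000)]
    totals = analyze(sample)
    print(totals.most_common(3))   # the most error-prone (host, type) pairs
```

Adding capacity here means adding workers (or VMs) and re-chunking the stream, which is the horizontal-scaling point made below.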

A major advantage of distributed analytics is that you're freed from the scale-versus-richness trade-off, from the limits on the type of events you can process. If I wanted to do more complex events and process more complex events, it's a lot easier to add compute capacity by just simply adding VMs and scaling horizontally. That’s a big aspect of automating deep forensic analysis into the data that you're receiving.

I want to add a little bit more about the richness of CEP. It's not just around capturing data and massaging it or looking at it from different angles and events. When we say CEP, we mean it is advanced to the point where it starts to capture how people would actually rationalize and analyze a problem.

The only way you can automate your monitoring systems end-to-end and get more of the human element out of it is when your CEP system is able to capture those nuances that people in the NOC and SOC would normally use to rationalize when they look at events. You not only look at a stream of events, you ask further questions and then determine the remedy.

No hard limits

To do this, you should have a rich data set to analyze, i.e. there shouldn’t be any hard limits placed on what data can participate in the analysis, and you should have the flexibility to easily add new data sources or types of data. So it's very important for the architecture to be able to not only event on data that is stored in traditional, well-defined relational models, but also event against data that’s typically serialized and indexed in flat-file databases.
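Here is a small sketch of that hybrid idea: a rule scopes itself using a structured, relational model and then evaluates serialized flat-file records against that scope. The schema, file format, and field names are hypothetical, not AccelOps internals:

```python
import json
import sqlite3

# Relational side: the well-defined model (hypothetical schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE servers (name TEXT, segment TEXT, role TEXT)")
db.executemany("INSERT INTO servers VALUES (?, ?, ?)",
               [("db01", "prod", "database"), ("web01", "dmz", "web")])

def prod_databases():
    """Query the structured model for the servers this rule cares about."""
    rows = db.execute("SELECT name FROM servers WHERE segment='prod' AND role='database'")
    return {name for (name,) in rows}

def scan_flat_file(path, hosts):
    """Flat-file side: scan serialized events (one JSON record per line)
    and emit those that hit the hosts identified by the relational query."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            if rec.get("host") in hosts and rec.get("status") == "denied":
                yield rec

# Usage: the rule combines both worlds -- model-derived scope, flat-file evidence.
if __name__ == "__main__":
    with open("events.log", "w") as f:
        f.write('{"host": "db01", "user": "jsmith", "status": "denied"}\n')
        f.write('{"host": "web01", "user": "jsmith", "status": "denied"}\n')
    for hit in scan_flat_file("events.log", prod_databases()):
        print("actionable event:", hit)
```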

Gardner: What's the payoff if you do this properly?

Kumar: It is no surprise that our customers don’t come to us saying we have a big data problem, help us solve a big data problem, or we have a complex event problem.

Their needs are really around managing security, performance and configurations. These are three interconnected metrics in a virtualized cloud environment. You can't separate one from the other. And customers say they are so interconnected that they want these managed on a common platform. So they're really coming at it from a business-level or outcome-focused perspective.

What AccelOps does under the covers is apply techniques such as big-data analysis, complex event processing, and so on, to then solve those problems for the customer. That is the key payoff -- the customer’s key concerns that I just mentioned are addressed in a unified and scalable manner.

An important factor for customer productivity and adoption is the product user interface. It is not of much use if a product leverages these advanced techniques but makes the user interface complicated -- you end up with the same result as before. So we’ve designed a UI that’s very easy to use and requires one or two clicks to get the information you need, with a UI-driven ability to compose rich events and event patterns. Our customers find this very valuable, as they do not need super-specialized skills to work with our product.

Key metrics

What we've built is a platform that monitors data center performance, security, and configurations -- the three key interconnected metrics in virtualized cloud environments. Most of our customers really want that combined and integrated platform. Some of them might choose to start with addressing security, but they soon bring in the performance management aspects as well. And vice versa.

And we take a holistic cross-domain perspective -- we span server, storage, network, virtualization and applications. What we've really built is a common consistent platform that addresses these problems of performance, security, and configurations, in a holistic manner and that’s the main thing that our customers buy from us today.

Free trial download

Most of our customers start off with the free trial download. It’s a very simple process. Visit www.accelops.com/download and download a virtual appliance trial that you can install in your data center within your firewall very quickly and easily.

Getting started with the AccelOps product is pretty simple. You fire up the product and enter the credentials needed to access the devices to be monitored. We do most of it agentlessly, and so you just enter the credentials, the range that you want to discover and monitor, and that’s it. You get started that way and you hit Go.

The product then uses this information to determine what’s in the environment. It automatically establishes relationships between the devices, applies the rules and policies that come out of the box with the product, and applies some basic thresholds that are already in the product, so you can actually start measuring results. Within a few hours of getting started, you'll have measurable results, trends, graphs, and charts to look at and gain benefits from.
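The sketch below shows the shape of that agentless, credentials-plus-range approach in the abstract. The address range, ports, and credentials are placeholders, and a real product would go on to log in with the credentials and pull inventory rather than stop at reachability:

```python
import ipaddress
import socket

CREDENTIALS = {"username": "monitor", "password": "example"}   # hypothetical
RANGE = "10.0.1.0/28"                                          # hypothetical range
PORTS = {22: "ssh", 135: "wmi", 5985: "winrm"}                 # common agentless protocols

def discover(cidr, timeout=0.5):
    """Sweep a range and record which management protocols each host answers on.

    A real product would then authenticate, pull inventory, and map
    relationships; this sketch stops at basic reachability.
    """
    inventory = {}
    for ip in ipaddress.ip_network(cidr).hosts():
        open_ports = []
        for port, proto in PORTS.items():
            try:
                with socket.create_connection((str(ip), port), timeout=timeout):
                    open_ports.append(proto)
            except OSError:
                pass
        if open_ports:
            inventory[str(ip)] = open_ports
    return inventory

if __name__ == "__main__":
    print(discover(RANGE))
```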

Gardner: It seems that as we move toward cloud and mobile that at some point or another organizations will hit the wall and look for this automation alternative.

Kumar: It’s about automation and distributed analytics, and about getting very specific with the information that you have, so that you can make more predictable, 99.9 percent correct decisions and do that in an automated manner. The only way you can do that is if you have a platform that’s rich enough and scalable, and that allows you to then reach the ultimate goal of automating most of the management of these diverse and disparate environments.

That’s something that's sorely lacking in products today. As you said, it's all brute-force today. What we have built is a very elegant, easy-to-use way of managing your IT problems, whether it’s from a security standpoint, performance management standpoint, or configuration standpoint, in a single integrated platform. That's extremely appealing for our customers, both enterprise and cloud-service providers.

I also want to take this opportunity to encourage those of you listening to or reading this podcast to come meet our team at the 2011 Gartner Data Center Conference, Dec. 5-9, at Booth 49 and learn more. AccelOps is a silver sponsor of the conference.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: AccelOps. Connect with AccelOps: Linkedin, Twitter, Facebook, RSS.


Tuesday, November 29, 2011

HP Discover case study: Vodafone Ireland IT group sees huge ROI by emphasizing business service delivery

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 Conference in Vienna. We’re exploring some major case studies from some of Europe’s leading enterprises.

Our next customer case study interview highlights how a shift from a technology emphasis to a business services delivery emphasis has created significant improvements for a large telecommunications provider, Vodafone. We'll see how a series of innovative solutions and an IT transformation approach to better support business benefits Vodafone, their internal users, and their global customers.

To learn more, we’re here with Shane Gaffney, Head of IT operations for Vodafone Ireland, based in Dublin. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gaffney: Back in summer of 2010, when we looked at the business perception of the quality of service received from IT, the confidence was lower than we’d like in terms of predictable and optimal service quality being provided.

There was a lack of transparency. Business owners didn’t fully understand what quality was being received and they didn’t have simple meaningful language that they were receiving from IT operations in terms of understanding service quality: good, bad, or indifferent.

Within IT operations, as a function, we also had our own challenges. We were struggling to control our services. We were under the usual pressure that many of our counterparts face in terms of having to do more with less, and downward pressure on cost and headcount. We were growing a dynamic complex IT estate, plus customers are naturally becoming ever more discerning in terms of their expectations of IT.

So with that backdrop, we knew we needed to take some radical steps to really drive our business forward.

Vodafone is Ireland’s leading telecommunications operator. We have in excess of 2.4 million subscribers, about 1,300 employees in a mixture of on-premise and cloud operations. I mentioned the complex and dynamic IT estate that we manage. To put a bit of color around that, we’ve got 230 applications, about 2,500 infrastructure nodes that we manage either directly or indirectly -- with substantial growth in traffic, particularly the exponential growth in the telecom data market.

Gardner: What does this get for you -- if you do it right? What is it that you've been able to attain by shifting your emphasis to the business services level? What’s the payoff?

Reduction in lost hours

Gaffney: We've seen a 66 percent reduction in customer lost hours year on year from last summer to this. We’ve also seen a 75 percent reduction in mean time to repair, or average service restoration time.

Another statistic I'd call out briefly is that at the start of this process, we were identifying root cause for incidents that were occurring in about 40-50 percent of cases on average. We’re now tracking consistently between 90-100 percent in those cases and have thereby been able to better understand, through our capabilities and tools, what’s going on in the department and what’s causing issues. We consequently have a much better chance of avoiding repetition in those issues impacting customers.

At a customer satisfaction level, we’ve seen similar improvements that correlate with the improved operational key performance indicators (KPIs). From all angles, we’ve thankfully enjoyed very substantial improvements. If we look at this from a financial point of view, we’ve realized a return on investment (ROI) of 300 percent in year one and, looking solely at the cost to fix and the cost of failure in terms of not offering optimal service quality, we’ve been able to realize cost savings in the region of €1.2 million OPEX through this journey.

Gardner: Let me just dig into that ROI. That’s pretty amazing, 300 percent ROI in one year. And what was that investment in? Was that in products, services, consulting, how did you measure it?

Gaffney: Yes, the ROI is in terms of the expenditure that would have related primarily to our investment in the HP product portfolio over the last year as well as a smaller number of ancillary solutions.

The payback, from a financial perspective, relates to the cost savings associated with having fewer issues and, when we do have issues, the ability to detect them faster and spend less labor investigating and resolving them, because the tools, in effect, are doing a lot of that legwork and much of the intelligence is built into the product portfolio.
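For readers who want the arithmetic behind a figure like that, the conventional definition is ROI = (benefit − cost) / cost. The short sketch below uses the €1.2 million savings cited in the interview together with a purely hypothetical investment figure, simply to show how a 300 percent result can arise; it is not Vodafone's actual spend:

```python
# Illustrative only: the savings figure comes from the interview; the
# investment figure below is a hypothetical placeholder, not Vodafone's number.
annual_savings = 1_200_000          # EUR, OPEX savings cited in the interview
investment = 300_000                # EUR, hypothetical year-one spend

roi = (annual_savings - investment) / investment
print(f"ROI = {roi:.0%}")           # -> ROI = 300%
```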

[Another way] we measure success, is we try to take a 360 view of our service quality. So we have a comprehensive suite of KPIs at the technology layer. We also do likewise in terms of our service management and establishing KPIs and service level agreements (SLAs) at the service layer. We've then taken a look at what quality looks like in terms of customer experience and perception, seeking to correlate metrics between these perspectives.

As an example, we routinely and rigorously measure our customer net promoter score, which essentially assesses whether the customers, based on their experience, would recommend our products and services to others.

[Lastly, we also] build confidence within the team in terms of having a better handle on the quality of service that we’re offering. Having that commercial awareness really does drive the team forward. It means that we’re able to engage with our customers in a much more meaningful way to create genuine value-add, and move away from routine transactional activity, to helping our customers to innovate and drive business forward.

We’ve certainly enjoyed those type of benefits through our transformation journey by automating a lot of the more core routine and repeatable activity, facilitating focus on our relationship with our customers in terms of understanding their needs and helping them to evolve the business.

Gardner: How do you, at a philosophical level, bridge the continuum among and between technology and the other softer issues like culture to obtain these benefits?

Gaffney: The first thing we did was engage quite heavily with all of our business colleagues to define a service model. In essence what we were looking at there was having our business unit owners define what services were important to them at multiple levels down to the service transactions, and defining the attributes of each of those services that make them successful or not.

Once we had a very clear picture of what that looked like across all business functions, we used that as our starting point to be able to measure success through the customer eyes.

That's the focus and continues to be the core driver behind everything else we do in IT operations. We essentially looked to align our people, revamp our processes, and look at our end-to-end tool strategy, all based around that service model.

The service model has enforced a genuine service orientation and customer centricity that’s driven through all activities and behaviors, including the culture within the IT ops group in how we service customers. It’s really incorporating those commercial and business drivers at the heart of how we work.

Without having a consolidated or rationalized suite of tools, we found previously that it's very difficult to get control of our services through the various tiers. By introducing the HP Application Performance Management tools portfolio, there are a number of modules therein that have allowed us to achieve the various goals that we’ve set to achieve the desired control.

Helicopter view

Essentially, the service model is defined at a helicopter view, which is really what’s important to our respective customers. And we’ve drilled down into a number of customer or service-oriented views of their services, as well as mapping in, distilling, and simplifying the underlying complexities and event volumes within our IT estate.

Gardner: I suppose this would be a good time to step back and take a look at what you actually do have in place. What specifically does that portfolio consist of for you there at Vodafone Ireland?

Gaffney: We have a number of modules in HP's APM portfolio that I'll talk about briefly. In terms of looking to get a much broader and richer understanding of our end-user experience which we lacked previously, we’ve deployed HP’s Business Process Monitors (BPMs) to effectively emulate the end-user experience from various locations nationwide. That provides us with a consistent measure and baseline of how users experience our services.

We’ve deployed HP Real User Monitoring (RUM), which gives us a comprehensive micro and macro view of the actual customer experience to complement those synthetic transactions that mimic user behavior. Those two views combined provide a rich cocktail for understanding at a service level what our customers are experiencing.
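To illustrate the synthetic-transaction side of that combination, here is a minimal sketch of a scripted probe timed from several locations. The URL and location names are hypothetical, and this is not HP BPM code:

```python
import time
import urllib.request

PROBE_URL = "https://booking.example.internal/health"   # hypothetical endpoint
LOCATIONS = ["dublin-hq", "cork-office", "galway-office"]

def run_probe(url, timeout=5):
    """One synthetic transaction: time a scripted request and record the outcome."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = resp.status == 200
    except OSError:
        ok = False
    return {"ok": ok, "elapsed_ms": (time.monotonic() - start) * 1000}

# In a real deployment each branch or region runs its probe on a schedule;
# here a single run per location shows the shape of the baseline data.
if __name__ == "__main__":
    baseline = {loc: run_probe(PROBE_URL) for loc in LOCATIONS}
    for loc, result in baseline.items():
        print(loc, result)
```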

Events correlation

We then looked at events correlation. We were one of the first commercial customers to adopt HP’s BSM version 9.1 deployment, which gives us a single pane of glass into our full service portfolio and the related IT infrastructure.

Looking a little bit more closely at BSM, we've used HP’s Discovery and Dependency Mapping Advanced (DDMa) to build out our service model, i.e. effectively mapping our configuration items throughout the estate, back up to that top-down service view. DDMa effectively acts as an inventory tool that granularly links the estate to service. We’ve aligned the DDMa deployment with our service model which, as I mentioned earlier, is integral to our transformation journey.

Beyond that, we’ve looked at HP’s Operations Manager i (OMI) capability, which we use to correlate our application performance and our system events with our business services. This allows our operators to reduce a lot of the noisy events by distilling those high-volume events into unique actionable events. This allows operators to focus instead on services that may be impacted or need attention and, of course, our customers and our business.
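Here is a small sketch of that noise-reduction idea: node-level events are grouped by the business service they map to and collapsed into one actionable item per service. The node-to-service mapping and severity fields are hypothetical, standing in for the role the service model plays; this is not OMI's actual data model:

```python
from collections import defaultdict

# Hypothetical mapping from infrastructure nodes to the business services
# they support (the role the service model / discovery data plays).
NODE_TO_SERVICE = {"db07": "online-booking", "web03": "online-booking",
                   "mail01": "email"}

def correlate(raw_events):
    """Collapse noisy node-level events into one actionable event per service."""
    by_service = defaultdict(list)
    for e in raw_events:
        by_service[NODE_TO_SERVICE.get(e["node"], "unmapped")].append(e)

    actionable = []
    for service, events in by_service.items():
        actionable.append({"service": service,
                           "severity": max(e["severity"] for e in events),
                           "event_count": len(events),
                           "nodes": sorted({e["node"] for e in events})})
    return actionable

# Usage: hundreds of raw events can reduce to a handful of service-level items.
raw = [{"node": "db07", "severity": 3}, {"node": "web03", "severity": 2},
       {"node": "db07", "severity": 3}, {"node": "mail01", "severity": 1}]
print(correlate(raw))
```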

We’ve gone farther and looked at ArcSight Logger, software which we’ve deployed to a single location that collects log files from throughout our estate. This allows us to quickly and easily search across all log files for abnormalities that might be related to a particular issue.

By integrating ArcSight Logger with OMI -- and I believe we’re one of the first HP customers to do this -- we’ve enriched operator views with security information as well as the hardware, OS, and application layer events. That gives us a composite view of what’s happening with our services through multiple lenses, holistically across our technology landscape and products and services portfolio.

Additionally, we’ve used HP’s Operations Orchestration to automate many of our routine procedures and, picking up on the ROI, this has allowed us to free up operators’ time to focus on value-add and effectively to do more with less. That's been quite a powerful module for us, and we’ve further work to exploit that capability.

The last point to call out in terms of the HP portfolio is we’re one of the early trialists of HP’s Service Health Analyzer. A year ago, we were to a degree reactive in terms of how we provided service. At this point, we’re proactive in how we manage services.

Service Health Analyzer will allow us to move to the next level of our evolution, moving toward predictive service quality. I prefer to call Service Health Analyzer our “crystal ball,” because that’s essentially what we’re looking at. It takes trends that are occurring with services and transactions, and predicts what's likely to happen next and what may be in jeopardy of breaking down the line, so you can take early intervention and remedial action before there’s any material impact on customers.

We’re quite excited about seeing where we can go there. One of the sub-modules of Service Health Analyzer is Service Health Reporter, and that’s a tool that we expect to act as our primary capacity planning capability across a full IT estate going forward.

Throughout our implementation, partnership was a key ingredient to success. Vodafone had the business vision and appetite to evolve. HP provided the thought leadership and guidance. And, Perform IT, HP's partner, brought hands-on implementation and tuning expertise into the mix.

Full transparency

One of our core principles throughout this journey has been to offer full transparency to our customers in terms of the services they receive and enjoy from us. On one hand, we provide the BSM console to all of our customers to allow them to have a view of exactly what the IT teams see, but with a service orientation.

We’re actually going a step further and building out a cloud-based service portal that takes a rich feed from the full BSM portfolio, including the modules that I've called out earlier. It also takes feeds from a Remedy system, in order to get a view of core processes such as incident management, problem management, and change management.

Bringing all of that information together gives customers a comprehensive view of the services they receive from IT operations. That's our aim -- to provide customers with everything they need at their fingertips.

It's essentially providing simple and meaningful information with customized views and dynamic drill-down capabilities, so customers can look at a very high level of how the services are performing, or really drill into the detail, should they so desire. The portal, we believe, is likely to act as a powerful business enabler. Ultimately, we believe there's opportunity to commercialize or productize this capability down the line.

Gardner: Any recommendations now that you've been through this yourself?

Gaffney: For customers embarking on this type of transformation initiative, first off, I would suggest: engage with your customers. Speak with your customers to deeply understand their services, and let them define what success looks like.

Look to promote quick wins and win-wins. Look at what works for the IT community and what works for the customer. Both are equally important. Buy-in is required, and people across those functions all need to understand what success looks like, and believe in it.

I would recommend taking a holistic approach from a couple of angles. Don’t just look at your people, technology, or processes, but look at those collectively, because they need to work in harmony to hit the service quality sweet spot. Holistically, it's important to prepare your strategy, but look top down from the customer view down into your IT estate and vice versa, mapping all configuration items back into those top level services.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.


Monday, November 28, 2011

Register now to hear HP experts on latest support strategies for unique challenges of virtualized and cloud environments

Advanced and pervasive virtualization and cloud computing trends are driving the need for a better, holistic approach to IT support and remediation. Keeping virtualized servers that support mission-critical applications and databases at top levels of performance 24 x 7 is a much different problem than for maintaining physical servers in traditional configurations.

That's why HP has made the service and support of global virtualization market leader VMware implementations a top priority. And while the technology to support and fix these virtualized environments is essential, the people, skills and knowledge to manage these systems are perhaps the most decisive elements of ongoing performance success.

Live discussion


To find out more, I'll be moderating a live deep-dive discussion on Dec. 7 with a group of HP experts to explore how to make the most of the available people, technology and processes to provide an insurance policy against systems failure. [Disclosure: HP and VMware are both sponsors of BriefingsDirect podcasts.]

The stakes have never been higher for keeping applications and business up and running.


Register now as seats are limited for this free HP Expert Chat.

In this discussion, you'll hear latest recommendations for how IT support should be done -- even amid a rapidly changing IT landscape of virtualized, hybrid and cloud computing. First in the hour-long multi-media presentation, comes the inside story of how modern service and support works from one of HP's top services experts, Cindy Manderson, Technical Solutions Consultant for Complex Problem Resolution & Quality for VMware Products, who has 27-plus years experience with HP, and eight-plus years supporting VMware.

After Cindy's chat, viewers will be invited to participate in the interactive question-and-answer session with actual HP VMware experts. Moreover, both questions and answers will be automatically translated into 13 major languages to demonstrate how service and support services know no boundaries, time zones or language barriers.

Register now as seats are limited for this free HP Expert Chat.


Thursday, November 17, 2011

HP experts to explore advances in service and support for highly virtualized VMware data center environments

Most enterprises, service providers and governments have ramped-up their use of virtualization over the past several years, with many impressive results. Those paybacks can only continue, however, if the overall service and support of these complex and dynamic environments keeps pace.

The problem of effectively troubleshooting issues across virtualized data centers consisting of many products from many suppliers is daunting. But there's an added element. The stakes have never been higher for keeping applications and business up and running. Indeed, a business's IT systems are increasingly the actual business itself. It's hard to separate them.

HP has made the service and support of global virtualization market leader VMware implementations a top priority. Keeping virtualized servers that support mission-critical applications and databases at top levels of performance 24 x 7 is a much different problem than for maintaining physical servers in traditional configurations. [Disclosure: HP and VMware are both sponsors of BriefingsDirect podcasts.]

Indeed, advanced and pervasive virtualization and cloud computing trends are driving the need for a better, holistic approach to IT support and remediation. And while the technology to support and fix these virtualized environments is essential, the people, skills and knowledge to manage these systems are perhaps the most decisive elements of ongoing performance success.

Live discussion


To find out more, I'll be moderating a live deep-dive discussion on Dec. 7 with a group of HP experts to explore how to make the most of the available people, technology and processes to provide an insurance policy against failure.

Register to reserve a place for this free HP Expert Chat on Dec. 7.

Overall, you'll hear recommendations for how IT support can and should be done -- even amid a rapidly changing IT landscape of virtualized, hybrid and cloud computing. First in the hour-long multi-media presentation, is the inside story of how modern service and support works from one of HP's top services experts, Cindy Manderson, Technical Solutions Consultant for Complex Problem Resolution & Quality for VMware Products, who has 27-plus years experience with HP, and eight-plus years supporting VMware.

She will provide a short overview on the HP/VMware relationship and how the HP/VMware software support model uniquely enables always-on support for enterprises, service providers and governments. She’ll also present several case studies of how the HP Call Center global support process has solved problems in VMware environments.

After Cindy's chat, viewers will be invited to participate in the interactive question-and-answer session with actual HP VMware experts. Moreover, both questions and answers will be automatically translated into 13 languages to demonstrate how service and support know no boundaries, time zones or language barriers.

Leading these interactive sessions to answer the audience's questions live will be several additional HP-VMware support experts, including Patrick Lampert, a Critical Service Senior Technical Account Manager and Team Leader responsible for delivery and management of VMware Technical Services for Fortune 500 HP Custom Mission Critical Service Customers.

He'll be joined by Sumithra Reddy, Virtualization Engineer with HP Technology Services in the Global Competency Center, a 27-year veteran of software support, with a current focus on VMware. Other experts will join from Europe and Asia.

Register to reserve a place for this free HP Expert Chat on Dec. 7.

In sum, attendees will see how the breadth of virtualization is extending from servers to networks, desktop clients, storage, and mobile clients. All must operate in conjunction with the rest, especially as virtualized workloads come and go based on dynamic demand. That means understanding how VMware and its ecosystem of supporting vendors relate in these advanced environments. Problems in these environments must be solved from an overview and neutral perspective, with all the interdependencies considered and managed.

So join the online presentation, discussion and question-and-answer sessions in nearly any major language worldwide. This is the first in a series of Expert Chats that I'll be moderating and that will tackle serious IT issues, with full global language support.



Tuesday, November 15, 2011

Germany's largest travel agency starts a virtual desktop journey to get branch office IT under control

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Our next VMworld case study interview focuses on how Germany’s largest travel agency has remade their PC landscape across 580 branch offices using virtual desktops. We’ll learn how Germany’s DER Deutsches Reisebüro redefined the desktop delivery vision and successfully implemented 2,300 Windows XP desktops as a service.

This story comes as part of a special BriefingsDirect podcast series from the recent VMworld 2011 Conference in Copenhagen. The series explores the latest in cloud computing and virtualization infrastructure developments. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here to tell us what this major VDI deployment did in terms of business, technical, and financial payoffs is Sascha Karbginski, Systems Engineer at DER Deutsches Reisebüro, based in Frankfurt. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Why were virtual desktops such an important direction for you? Why did it make sense for your organization?

Karbginski: In our organization, we’re talking about 580 travel agencies all over the country, all over Germany, with 2,300 physical desktops, which were not in our control. We had life cycles out there of about 4 or 5 years. We had old PCs with no client backups.

The biggest reason is that recovery times at our workplace were 24 hours between hardware change and bringing back all the software configuration, etc. Desktop virtualization was a chance to get the desktops into our data center, to get the security, and to get the controls.

DER in Germany is the number one in travel agencies. As I said, we're talking about 580 branches. We’re operating as a leisure travel agency with our branches, Atlasreisen and DER, and also, in the business travel sector with FCm Travel Solutions.

IT-intensive business

Gardner: This is a very IT-intensive business now. Everything in travel is done though networked applications and cloud and software-as-a-service (SaaS) services. So a very intensive IT activity in each of these branches?

Karbginski: That’s right. Without the reservation systems, we can’t do any flight bookings or reservations or check hotel availability. So without IT, we can do nothing.

Gardner: And tell me about the problem you needed to solve. You had four generations of PCs. You couldn’t control them. It took a lot of time to recover if there was a failure, and there was a lot of different software that you had to support.

Karbginski: Yes. We had no domain integration and no control, and when we had crashes, all the data would be gone. We had no backups out there. And we changed the desktops about every four or five years. For example, when the reservation system needed more memory, we had to buy the memory, service providers went out there, and everything was done during business hours.

We now have nearly 100 percent virtualization. ... So it's about 99 percent virtualization. ... So the data is under our control in the data center, and important company information is not left in an office out there. Security is a big thing.

Gardner: What were some of the things that you had to do in order to enable this to work properly?

Karbginski: There were some challenges during the rollout. The bandwidth was a big thing. Our service provider had to work very hard for us, because we needed more bandwidth out there. Our offices had 1- or 2-Mbit links to the headquarters data center. With desktop virtualization, we need a little bit more, depending on the number of workplaces, and we needed better quality of the lines.
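As a rough illustration of why those links had to grow, here is a back-of-the-envelope sizing sketch; the per-session bandwidth figure is an assumption, not a measurement from DER's deployment:

```python
# Illustrative sizing only -- the per-session figure is an assumption,
# not a measurement from DER's deployment.
kbps_per_session = 250          # assumed average PCoIP bandwidth per active desktop
headroom = 1.3                  # 30% headroom for peaks and protocol overhead

def branch_bandwidth_mbit(workplaces):
    """Rough uplink requirement for a branch office, in Mbit/s."""
    return workplaces * kbps_per_session * headroom / 1000

for seats in (3, 5, 10):
    print(f"{seats} workplaces -> ~{branch_bandwidth_mbit(seats):.1f} Mbit/s")
```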

So bandwidth was one thing. We also had the network infrastructure. We found some 10-Mbit half-duplex switches, so we had to change them. And we also had some hardware problems: special multi-card boards used for payments, to read passports or credit card information. They were very old and connected via PS/2.

Fixed a lot of problems

So there were a lot of problems, and we fixed them all. We changed the switches. Our service provider for Internet VPN connection brought us more quality. And we changed the keyboards. We don’t need this old stuff anymore.

Gardner: How has this worked out in terms of productivity, energy savings, lowering costs, and even business benefits?

Karbginski: Savings were a big thing in planning this project. The desktops have been running out there now for about one year, and we know that we have up to 80 percent energy savings just from changing the hardware out there. We’re running the Wyse P20 Zero Client instead of physical PC hardware.

We needed more energy for the server side in the data center, but if you look at it overall, we have 60 to 70 percent energy savings. I think it’s really great.
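The arithmetic behind figures like that can be sketched with assumed wattages; the numbers below are hypothetical, not measurements from DER's estate:

```python
# Hypothetical wattages for illustration; not measurements from DER's estate.
desktops = 2300
pc_watts = 80                    # assumed draw of an aging branch-office PC
zero_client_watts = 15           # assumed draw of a Wyse-class zero client
server_watts_per_desktop = 10    # assumed data-center overhead per hosted desktop

before = desktops * pc_watts
after_endpoints = desktops * zero_client_watts
after_total = after_endpoints + desktops * server_watts_per_desktop

endpoint_saving = 1 - after_endpoints / before
overall_saving = 1 - after_total / before
print(f"endpoint saving ~{endpoint_saving:.0%}, overall saving ~{overall_saving:.0%}")
# -> endpoint saving ~81%, overall saving ~69% (the same ballpark as the interview)
```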

Gardner: That’s very good. So what else comes in terms of productivity?

Karbginski: In the past, the updates came during business hours. Now, we can do all software updates at night, at the weekend, or when the office is closed. So helpdesk cost is reduced by about 50 percent.

... We're using Dell servers with two sockets, quad-core, and 144-gigabyte RAM. We're also using an EMC Clariion SAN with 25 terabytes. The network infrastructure is Cisco, based on 10 GB Nexus data center switches. At the beginning of the project, we had View 4.0, and we upgraded it last month to 4.6.

The people side

Gardner: What were some of the challenges in terms of working this through the people side of the process? We've talked about process, we've talked technology, but was there a learning curve or an education process for getting other people in your IT department as well as the users to adjust to this?

Karbginski: There were some unknown, or new, challenges during the rollout -- for example, with the network team. The most important thing was understanding of virtualization. It's an enterprise environment now, and if someone, for example, restarts the firewall in the data center, the desktops in our offices are disconnected.

It's really important to inform the other departments and also your own help desk.

... The first thing that the end users told us was that the selling platform from Amadeus, the reservation system, runs much faster now. This was the first thing most of the end users told us, and that’s a good thing.

The next is that the desktop follows the user. If the user works in one office now and next week in another office, he gets the same desktop. If the user is at the headquarters, he can use the same desktop, same outlook, and same configuration. So desktop follows the user now. This works really great.

Gardner: Looking to the future, are you going to be doing this following-the-user capability to more devices, perhaps mobile devices or at home PCs?

Karbginski: We plan to implement the security gateway with PCoIP support for home office users or mobile users who can access their same company desktop with all their data on it from nearly every computer in the world to bring the user more flexibility.

Gardner: If you were advising someone on what to learn from your experience as they now move toward desktop virtualization, any thoughts about what you would recommend for them?

Inform other departments

Karbginski: The most important thing is to get in touch with the other departments and inform them about the thing you're doing. Also, inform the user help desk directly at the beginning of the project. So take time to inform them what desktop virtualization means and which processes will change, because we know most of our colleagues had a wrong understanding of virtualization.

They think that with virtualization everything will change, that other support servers will be needed, and that it's just a new thing nobody needs. If you inform them about what you're doing and explain that nothing will change for them, because all support processes are the same as before, they will accept it and understand the benefits for the company and for the user.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.
