Monday, October 31, 2011

Virtualized desktops spur use of 'bring your own device' in schools, allowing always-on access to educational resources

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Educators are using desktop virtualization in innovative new ways to enable "bring your own device" (BYOD) benefits for faculty and students. This latest BriefingsDirect interview explores how one IT organization has made the leap to allowing young users to choose their own client devices to gain access to all the work or learning applications and data they need -- safely, securely, and with high performance.

The nice thing about BYOD is that you can essentially extend what you do on-premises or on a local area network (LAN) -- like a school campus -- to anywhere: to your home, to your travels, 24x7.

The Avon Community School Corp. in Avon, Indiana has been experimenting with BYOD and desktop virtualization, and has recently embarked on a wider deployment of both for this school year.

To get their story, Dana Gardner, Principal Analyst at Interarbor Solutions, interviewed Jason Brames, Assistant Director of Technology, and Jason Lantz, Network Services Team Leader, both at Avon Community School. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You've been successful with server virtualization, but what made it important for you now to extend virtualization to the desktop?

Brames: One of the things important to our district is something we noticed when doing an assessment of our infrastructure: We have aging endpoints. We had a need to extend the refresh cycle of our desktop computers from what was typical -- for a lot of school districts, typical is about a 5-year refresh rate -- to getting anywhere from 7 to 10, maybe even 12, years out of a desktop computer.

By going to a thin-client model and connecting those machines to a virtual desktop, we're able to achieve high-quality results for our end users, while still giving them the computing power they need and realizing cost savings by negating the need to purchase new equipment every five years.

By going with a virtual environment, the problem we were looking to solve was really just that -- how do we provide an extended refresh cycle for all of our devices?

Supporting 5,500 computers

We're located about 12 miles west of Indianapolis, Indiana, and we have 13 instructional buildings. We're a pre-K-to-12 institution and we have approximately 8,700 students, nearing 10,000 end-users in total. We’re currently supporting about 5,500 computers in our district.

... We currently have 400 View desktop licenses. We're seeing utilization of that license pool of 20-25 percent right now, and the primary reason is that we're really just beginning that phase. We're really in the second year of our virtual desktop rollout, but the first year of more widespread use.

We’re training teachers on how to adequately and effectively use this technology in their classrooms with kids. It's been very well received and is being adopted very well in our classrooms, because people are seeing that we were able to improve the computing experience for them.

Lantz: With that many devices, getting out there and installing software locally, even if it’s a push, carries a big management overhead. By using VMware View and having that in our data center, where we can control it, the ability to have your golden image that you can then push out to a number of devices has made it a lot easier to transition to this type of model.

We’re finding that we can get applications out quicker with more quality control, as far as knowing exactly what’s going to happen inside of the virtual machine (VM) when you run that application. So that’s been a big help.
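To make the golden-image idea concrete, here is a small illustrative sketch in plain Python (not View's actual API): one master image definition drives every pool built from it, so a single update reaches all of those desktops. The pool names and counts are invented for illustration.

```python
# A minimal sketch (not Avon's actual tooling) of the "golden image" idea:
# one master image definition drives many desktop pool members, so a single
# image update propagates everywhere instead of touching each endpoint.

GOLDEN_IMAGE = {"name": "win7-classroom", "version": 12,
                "apps": ["Office", "Education City plug-ins"]}

# Hypothetical pools built from that one image; counts are illustrative only.
POOLS = {
    "classroom-stations": {"image": "win7-classroom", "desktops": 250},
    "admin-mobile":       {"image": "win7-classroom", "desktops": 40},
}

def desktops_needing_recompose(pools, image):
    """Count desktops that pick up the change when the golden image is updated."""
    return sum(p["desktops"] for p in pools.values() if p["image"] == image["name"])

GOLDEN_IMAGE["version"] += 1  # e.g., a new application pushed into the image
print(f"One image update reaches {desktops_needing_recompose(POOLS, GOLDEN_IMAGE)} desktops")
```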

A lot of our applications are Web-based, such as Education City, which is heavy on graphics and video. And we found that we're still able to run those in our View environment and not have issues.

Gardner: What are you running in terms of servers? What is your desktop virtualization platform, and what is it that allows you to move on this so far?

Lantz: On the server side, we're running VMware vSphere 4.1. On the desktop side, we're running View 4.6. Currently in our server production, as we call it, we have three servers. And we're adding a fourth shortly. On the View side of things, we currently have two servers and we’re getting two more in the next month or so. So we’ll have a total of four.
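As a companion to that rundown, here is a minimal inventory sketch, assuming Python with the pyVmomi library and network access to a vCenter Server; the hostname and credentials are placeholders, and this is not Avon's actual tooling.

```python
# Minimal sketch: connect to vCenter and list the VMs backing a View deployment.
# Assumes pyVmomi is installed; host, user, and password below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE  # lab shortcut; use verified certs in production

si = SmartConnect(host="vcenter.example.local",
                  user="readonly@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
try:
    content = si.RetrieveContent()
    vm_view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in vm_view.view:
        s = vm.summary
        print(s.config.name, s.runtime.powerState,
              f"{s.config.numCpu} vCPU", f"{s.config.memorySizeMB} MB")
finally:
    Disconnect(si)
```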

Access from anywhere

Gardner: Now one of the nice things about the desktop virtualization and this BYOD is it allows people to access these activities more freely anywhere. How do you manage to take what was once confined to the school network and allow the students and other folks in your community to do what they need to do, regardless of where they are, regardless of the device?

Brames: We’re a fairly affluent community. We have kids who were requesting to bring in their own devices. We felt as though encouraging that model in our district was something that would help students continue to use computers that were familiar to them and help us realize some cost savings long term.

So by connecting to virtual desktops in our environment, they get a familiar resource while they're within our walls in the school district, have access to all of their shared drives, network drives, network applications, all of the typical resources that are an expectation of sitting down in front of a school-owned piece of equipment. And they're seeing the availability of all of those things on their own device.

... A typical classroom for us contains four student computing stations, as well as, depending upon the building size, three to five labs available. We’re not focusing our desktop virtualization on those labs. We’re focusing on the classroom computing stations right now. Potentially, we'll also be in labs, as we go into the future.

Then, in addition to those student computing stations, we’re seeing our administrative team, our principals and district-level administrators, begin using virtual desktops while they’re outside of the district and growing familiar with that, so that when we enter the phase where we allow our students to access from outside of our network, we have that support structure in place.

... We’re also seeing an influx of more mobile-type devices such as tablets and even smartphones and things like that. The percentage of our users that are using tablets and smartphones right now for powerful computing or their primary devices is fairly low. However, we anticipate over time that the variety of devices we’ll have connecting to our network because of virtual desktops is going to increase.

Gardner: How is that hand-off happening? Are you able to provide a unified experience yet?

Lantz: That’s part of phase two of our approach that we’re implementing right now. We’ve gotten it out into the classrooms to get the students familiar with it, so that they understand how to use it. The next step in that process is to allow them to use this at home.

We currently have administrators that are using it in this fashion. They have tablets and are using the View client; they connect in and get the same experience whether they're in school or out of school.

So we’re to that point. Now that our administrators understand the benefits, now that our teachers have seen it in the classrooms, it’s a matter of getting it out there to the community.

One of the other ways that we’re making it available is that at our public library, we have a set of machines that students can access as well, because as you know, not every student has access to high-speed Internet, but they are able to go to the library, check out these machines, and get into the network that way. Those are some of the ways that we’re trying to bridge that gap.

Huge win-win

Technology Integration Group has resources that allow us to see what other school districts are doing and some of the things that they’ve run into. Then, they bring that back here and we can discuss how we want to roll it out in our environment. They’ve been very good at giving us ideas of what has worked with other organizations and what hasn’t. That’s where they've come in. They’ve really helped us understand how we can best use this in our environment.

Gardner: I often hear from organizations, when they move to desktop virtualization, that there are some impacts on things like network or storage that they didn’t fully anticipate. How has that worked for you? How has this rollout and movement toward increased desktop virtualization impacted you in terms of what you needed to do with your overall infrastructure?

Lantz: Luckily for us, we’ve had a lot of growth in the last two to three years, which has allowed us to get some newer equipment. So our network infrastructure is very sound. We didn’t run into a lot of the issues that you commonly would with network bandwidth and things like that.

On the storage side, we did increase our storage. We went with an EqualLogic box for that, but with View, it doesn’t take up a ton of storage space, thanks to linked clones and things like that. So we haven't seen a huge impact there. As we get further into this, storage requirements will get greater, but currently that hasn’t been a big issue for us.
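As a rough illustration of why linked clones keep the storage footprint down, here is a back-of-the-envelope sketch; the per-desktop delta and image sizes are assumptions, not figures from Avon or its EqualLogic array.

```python
# Back-of-the-envelope sketch of why linked clones keep storage needs modest.
# The figures are assumptions for illustration only.

desktops = 400                 # View desktop licenses mentioned above
full_clone_gb = 30.0           # size of a full copy of the parent image (assumed)
linked_clone_delta_gb = 3.0    # typical per-desktop delta disk (assumed)
replica_gb = 30.0              # one shared replica of the parent image (assumed)

full_clone_total = desktops * full_clone_gb
linked_clone_total = replica_gb + desktops * linked_clone_delta_gb

print(f"Full clones:   {full_clone_total:,.0f} GB")
print(f"Linked clones: {linked_clone_total:,.0f} GB")
print(f"Savings:       {100 * (1 - linked_clone_total / full_clone_total):.0f}%")
```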

Gardner: On the flip-side of that, a lot of organizations I talk to, who moved to desktop virtualization, gained some benefits on things like backup, disaster recovery, security, and control over data and assets, and even into compliance and regulatory issues. Has there been an upside that you could point to in terms of having more centralized control of the desktop content and assets?

Difficult to monitor

Lantz: When you start talking about students bringing in their own devices, it's difficult to monitor what's on that personally owned device.

We found that by giving them a View desktop, we know what's in our environment and we know what that virtual machine has. That allows us to provide more secure access for those students without compromising on what's on that student’s machine, or what you may not know about what's on that student’s machine. That’s been a big benefit for us in allowing students to bring in their own devices.

Gardner: Do we have any metrics of success either in business or, in this case, learning terms and/or IT cost savings? What has this done for you? I know it's a little early, but what are the early results?

Brames: You did mention that it is a little bit early, but we believe that as we begin using virtual desktops more widely in our environment, one of the major cost savings we’re going to see as a result is in licensing costs for unique learning applications.

Typically in our district, we would have purchased x number of licenses for each one of our instructional buildings, because they needed to use the application with students in the classroom. They may have a certain number of students that need access to this application, for example, but they're not all accessing it during the same time of the day, or it's installed on a fat client, a physical machine somewhere in the building, and it's difficult for students to get access to it.

By creating these pools of machines that have specialty software on them we’re able to significantly reduce the number of titles we need to license for certain learning applications or certain applications that improve efficiencies for teachers and for students.
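To illustrate the licensing math behind that pooling approach, here is a hedged sketch; the seat counts and price are invented, and the point is only that a shared pool is sized for peak concurrent use rather than per-building installs.

```python
# Illustrative sketch of the licensing math behind a shared application pool.
# Counts and prices are assumptions, not district figures.

buildings = 13                      # instructional buildings in the district
seats_per_building = 25             # per-building licenses under the old model (assumed)
peak_concurrent_users = 60          # observed peak across the shared pool (assumed)
cost_per_license = 40               # assumed annual cost per seat, USD

per_building_licenses = buildings * seats_per_building
pooled_licenses = peak_concurrent_users

print(f"Per-building model: {per_building_licenses} licenses, "
      f"${per_building_licenses * cost_per_license:,}/yr")
print(f"Pooled model:       {pooled_licenses} licenses, "
      f"${pooled_licenses * cost_per_license:,}/yr")
```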

So that’s one area in which we know we’re going to see significant return on our investment. We already talked about extending the endpoints, and with energy savings, I think we can prove some results there as well. Anything to add, Jason?

Lantz: One of the ones that’s hard to calculate is, as you mentioned, the maintenance or management piece. In technology, as we all know, you're doing more with less, and this really gives you the ability to do that. How you measure that is sometimes difficult, but there are definitely cost savings there as well.

Gardner: I know budgets are really important in just about any school environment. Do you have any sense of the delta between what it would cost if you stuck with traditional cost structures, traditional licensing, and fat clients to get to that one-to-one ratio, compared to what you’re going to be able to do over time with this virtualized approach?

Brames: Our Advanced Learning Center is the school building that has primarily senior students and advanced placement students. There are about 600 students that attend there.

Last year, 75 percent of those students were using school-owned equipment and 25 percent of them were bringing their own laptops to school. This year, what we have seen is that 43 percent of our students are beginning to bring their own devices to connect to our network and have access to network resources.

If that trend continues, which we think it will, we’ll be looking at certainly over 50 percent next year, hopefully approaching 60-65 percent of our students bringing their own devices. When you consider that that is approximately 400 devices that the school district did not need to invest in, that’s a significant saving for us.
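A quick sketch of the arithmetic behind those percentages, using the 600-student figure quoted above; the per-device cost is an assumed placeholder rather than a district figure.

```python
# Quick check of the avoided-hardware arithmetic described above, using the
# figures quoted in the interview; the per-device cost is an assumed placeholder.

students = 600
byod_shares = {"last year": 0.25, "this year": 0.43, "projected": 0.65}
cost_per_device = 500                # assumed school-purchased device cost, USD

for label, share in byod_shares.items():
    devices_avoided = round(students * share)
    print(f"{label:>10}: ~{devices_avoided} student devices, "
          f"~${devices_avoided * cost_per_device:,} not purchased")
```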

Gardner: If you could do this over again, a little bit of 20/20 hindsight, what might you want to tell others in terms of being prepared?

Lantz: One thing that’s important is that when you explain this to users, the words "virtual desktop" can be a little confusing to teachers and your end users. What I've done is take the approach that it’s no different from having a regular machine, and you can set it up so that it looks exactly the same.

No real difference

When you start talking with end users about virtual, it gets into, okay, "So it’s running back here, but what problems am I going to encounter?" and those sorts of things. Trying to get that end user to realize that there really isn’t a difference between a virtual desktop and a real desktop has been important for us for getting them on board and making them understand that it’s not going to be a huge change for them.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Friday, October 28, 2011

Continuous improvement and flexibility are keys to successful data center transformation, say HP experts

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast discussion targets two major pillars of proper and successful data center transformation (DCT) projects. We’ll hear from a panel of HP experts on proven methods that have aided productive and cost-efficient projects to reshape and modernize enterprise data centers.

Learn about the latest trends buttressing the need for DCT and then how to do it well and safely. Specifically, we’ll delve into why it's important to fully understand the current state of an organization’s IT landscape and data center composition in order to then properly chart a strategy for transformation.

Secondly, we'll explore how to avoid pitfalls by balancing long-term goals with short-term flexibility. The key is to know how to constantly evaluate based on metrics and to reassess execution plans as DCT projects unfold. This avoids being too rigidly aligned with long-term plans and roadmaps and potentially losing sight of how actual progress is being made -- or not.

This is the first in a series of podcasts on DCT best practices and is presented in conjunction with a complementary video series.

With us now to explain why DCT makes sense and how to go about it with lower risk, we are joined by Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business; Mark Grindle, Master Business Consultant at HP, and Bruce Randall, Director of Product Marketing for Project and Portfolio Management at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Tang: We all know that in this day and age, the business demands innovation, and IT is really important; it's the racing engine for any business. However, there are a lot of external constraints. The economy is not getting any better. Budgets are very, very tight. IT organizations are dealing with sprawl and aging infrastructure, and are very much weighed down by decades of old assets that they’ve inherited.

So a lot of companies today have been looking to transform, but getting started is not always very easy. So HP decided to launch this HUB project, which is designed to be a resource engine for IT, featuring a virtual library of videos showcasing the best of HP, but more importantly, ideas for how to address these challenges. We as a team decided to tackle it with a series that’s aligned around some of the ways customers can approach addressing their data centers, transforming them, and jump-starting their IT agility.

The five steps that we decided on as keys for the series are: the planning process, which is actually what we’re discussing in this podcast; data center consolidation, as well as standardization; virtualization; data center automation; and last but not least, of course, security.

IT superheroes


To make this video series more engaging, we hit on this idea of IT as superheroes, because we’ve all seen customers, especially in this day and age, with lean budgets, whose IT teams are really performing superhuman feats.

We thought we’d produce a series that's a bit more light-hearted than is usual for HP. So we added a superhero angle to the series. That’s how we hit upon the name of "IT Superhero Secrets: Five Steps to Jump Start Your IT Agility." Hopefully, this is going to be one of the little things that can contribute to this great process of data center modernization, which is a key trend right now.

With us today are two of these experts that we’re going to feature in Episode 1. And to find these videos, you go to hp.com/go/thehub.

Gardner: Mark, you've been doing this for quite some time and have learned a lot along the way. Tell us why having a solid understanding of where you are in the present puts you in a position to better execute on your plans for the future.

Grindle: There certainly are a lot of great reasons to start transformation now.

But as you said, the key to starting any kind of major initiative, whether it’s transformation, data center consolidation, or any of these great things like virtualization, technology refresh that will help you improve your environment, improve the service to your customers, and reduce costs, which is what this is all about, is to understand where you are today.

Most companies out there with the economic pressures and technology changes that have gone on have done a lot to go after the proverbial low-hanging fruit. But now it’s important to understand where you are today, so that you can build the right plan for maximizing value the fastest and in the best way.

When we talk about understanding where you are today, there are a few things that jump to mind. How many servers do I have? How much storage do I have? What are the operating system levels and the versions that I'm at? How many desktops do I have? People really think about that kind of physical inventory and they try to manage it. They try to understand it, sometimes more successfully and other times less successfully.

But there's a lot more to understanding where you are today. Understanding that physical inventory is critical to what you need to understand to go forward, and most people have a lot of tools out there already to do that. I should mention that for those of you who don’t have tools that can capture that physical inventory, it’s important that you get them.

I've found so many times when I go into environments that they think they have a good understanding of what they have physically, and a lot of times they do, but rarely is that accurate. Manual processes just can't keep things as accurate or as current as you really need, when you start trying to baseline your environment so that you can track and measure your progress and value.
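As one way to picture the kind of automated baseline being argued for here, the following is a minimal, standard-library-only Python sketch that records a timestamped inventory snapshot per machine; a real deployment would feed a discovery tool or CMDB rather than a local file.

```python
# A minimal, stdlib-only sketch of an automated inventory baseline: capture a
# timestamped record per machine so the baseline can be compared over time
# instead of relying on manual spreadsheets.

import json
import os
import platform
import socket
from datetime import datetime, timezone

def inventory_snapshot():
    return {
        "hostname": socket.gethostname(),
        "os": platform.system(),
        "os_release": platform.release(),
        "cpu_count": os.cpu_count(),
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    record = inventory_snapshot()
    # Append to a simple JSON-lines file; a real deployment would feed a CMDB.
    with open("inventory.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")
    print(record)
```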

Thinking about applications


Of course, beyond the physical portions of your inventory, you'd better start thinking about your applications. What are your applications? What language are they written in? Are those traditional or supportable commercial-off-the-shelf (COTS) type applications? Are they homegrown? That’s going to make a big difference in how you move forward.

And of course, what does your financial landscape look like? What’s going into operating expense? What’s your capital expense? How is it allocated out, and, by the way, is it consistently allocated out?

I've run into a lot of issues where a business unit in the United States has put certain items into an operating expense bucket. In another country or a sub-business unit or another business unit, they're tracking things differently in where they put network cost or where they put people cost or where they put services. So it's not only important to understand where your money is allocated, but what’s in those buckets, so that you can track the progress.

Then, you get into things like people. As you start looking at transformation, a big part of transformation is not just the cost savings that may come about through being able to redeploy your people, but it's also from making sure that you have the right skill set.

If you don’t really understand how many people you have today, what roles and what functions they’re performing, it's going to become really challenging to understand what kind of retraining, reeducation, or redeployment you’re going to do in the future as the needs and the requirements and the skills change.

You transform as you level out your application landscape, as you consolidate your databases, as you virtualize your servers, and as you take advantage of storage and all of those great technologies. That's going to make a big difference in how your team, your IT organization, runs the operations. You really need to understand where they are, so you can properly prepare them for that future space that they want to get into.

So understanding where you are, understanding all those aspects of it, is going to be the only way to figure out what you have to do to get to your end state. As was mentioned earlier, you need the metrics and measurements to track your progress. Are you realizing the value, the savings, the benefit to your company that you initially used to justify transformation?

Gardner: Mark, I had a thought when you were talking. We’re not just going from physical to physical. A lot of DCT projects now are making that leap from largely physical to increasingly virtual. And that is across many different aspects of virtualization, not just server virtualization.

Is there a specific requirement to know your physical landscape better to make that leap successfully? Is there anything about moving toward a more virtualized future that adds an added emphasis to this need to have a really strong sense of your present state?

Grindle: You're absolutely right on with that. A lot of people have server counts -- I've got a thousand of these, a hundred of those, 50 of those types of things. But the more detailed measurements around those, such as how much memory is being utilized by each server, how much CPU or processor is being utilized by each server, what the I/O looks like, and the network connectivity, are the kinds of inventory items that are going to allow you to virtualize.

Higher virtualization ratios


I talk to people and they say, "I've got a 5:1 or a 10:1 or a 15:1 virtualization ratio," meaning that they've consolidated 15 physical servers onto one. But if you really understand what your environment is today, how it runs, and the performance characteristics of your environment today, there are environments out there that are achieving much higher virtualization ratios -- 30:1, 40:1, 50:1. We’ve seen a couple that are in the 60:1 and 70:1 range.

Of course, that just says that initially they weren’t really using their assets as well as they could have been. But again, it comes back to understanding your baseline, which allows you to plan out what your end state is going to look like.

If you don’t have that data, if you don’t have that information, naturally you've got to be a little more conservative in your solutions, as you don’t want to negatively impact the business of the customers. If you understand a little bit better, you can achieve greater savings, greater benefits.
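Here is an illustrative sketch of how measured utilization turns into a consolidation estimate; every number is an assumption, and real sizing tools do far more, but it shows why measured peaks rather than raw server counts drive the ratio.

```python
# Illustrative sketch: turn measured utilization into a consolidation ratio.
# All numbers are assumptions; measured peaks, not server counts, determine
# how many workloads fit on one host.

servers = [
    # (peak CPU GHz used, peak memory GB used) per physical server
    (0.8, 3.0), (1.2, 4.0), (0.5, 2.0), (0.9, 6.0), (1.5, 8.0),
] * 10  # pretend we measured 50 servers

host_cpu_ghz = 2 * 8 * 2.6      # 2 sockets x 8 cores x 2.6 GHz (assumed host)
host_mem_gb = 192
headroom = 0.7                  # plan to fill hosts only to 70 percent

cpu_needed = sum(c for c, _ in servers)
mem_needed = sum(m for _, m in servers)

hosts_by_cpu = cpu_needed / (host_cpu_ghz * headroom)
hosts_by_mem = mem_needed / (host_mem_gb * headroom)
hosts = max(hosts_by_cpu, hosts_by_mem)

print(f"Estimated hosts needed: {hosts:.1f}")
print(f"Implied consolidation ratio: {len(servers) / hosts:.0f}:1")
```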

Remember, this is all about freeing up money that your business can use elsewhere to help your business grow, to provide better service to those customers, and to make IT more of a partner, rather than just a service purely for the business organization.

Gardner: So it sounds as if measuring your current state isn’t just measuring what you have, but measuring some of the components and services you have physically in order to be able to move meaningfully and efficiently to virtualization. It’s really a different way to measure things, isn’t it?

Grindle: Absolutely. And it’s not a one-time event. To start out and figure out whether transformation is right for you and what your transformation will look like, you can do that one-time inventory, that one-time collection of performance information. But it’s really going to be an ongoing process.

The more data you have, the better you’re going to be able to figure out your end-state solution, and the more benefit you’re going to achieve out of that end state. Plus, as I mentioned earlier, the environment changes, and you’ve got to constantly keep on top of it and track it.

You mentioned that a lot of people are going towards virtualization. That becomes an even bigger problem. At least when you’re standing up a physical server today, people complain about how long it takes in a lot of organizations, but there are a lot of checks and balances. You’ve got to order that physical hardware. You've got to install the hardware. You’ve got to justify it. It's got to be loaded up with software. It’s got to be connected to the network.

A virtualized environment can be stood up in minutes. So if you’re not tracking that on an ongoing basis, that's even worse.
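A tiny sketch of that ongoing tracking: diff two inventory snapshots to catch VMs that appeared or changed since the last baseline. The VM names and attributes are invented for illustration.

```python
# Sketch of ongoing environment tracking: diff two inventory snapshots
# (dicts keyed by VM name) to catch machines stood up, retired, or changed
# since the last baseline. Data shown is illustrative.

baseline = {"erp-db-01": {"cpus": 4}, "web-01": {"cpus": 2}}
current  = {"erp-db-01": {"cpus": 8}, "web-01": {"cpus": 2}, "test-vm-07": {"cpus": 2}}

def diff_inventory(before, after):
    added = sorted(set(after) - set(before))
    removed = sorted(set(before) - set(after))
    changed = sorted(name for name in set(before) & set(after)
                     if before[name] != after[name])
    return added, removed, changed

added, removed, changed = diff_inventory(baseline, current)
print("New since baseline:    ", added)    # VMs stood up "in minutes"
print("Gone since baseline:   ", removed)
print("Changed configuration: ", changed)
```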

Gardner: Bruce, you’ve been looking at the need for being flexible in order to be successful, even as you've got a long-term roadmap ahead of you. Perhaps you could fill us in on why it’s important to evaluate along the way, not be blinded by long-term goals, but keep balancing and reassessing as you go?

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Account for changes

Randall: That goes along with what Mark was just saying about the infrastructure components, how these things are constantly changing, and there has to be a process to account for all of the changes that occur.

If you’re looking at a transformation process, it really is a process. It's not a one-time event that occurs over a length of time. Just like any other big program or project that you may be managing, you have to plan not only at the beginning of that transformation, but also in the middle and even sometimes at the end of these big transformation projects.

If you think about these things that may change throughout that transformation, one is people. You have people that come. You have people that are leaving for whatever reason. You have people that are reassigned to other roles or take roles that they wanted to do outside of the transformation project. The company strategy may even change, and in fact, in this economy, it most likely will within the course of the transformation project.

The money situation will most likely change. Maybe you’ve had a certain amount of budget when you started the transformation. You counted on that budget to be able to use it all, and then things change. Maybe it goes up. Maybe it goes down, but most likely, things do change. The infrastructure as Mark pointed to is constantly in flux.

So even though you might have gotten a good steady state of what the infrastructure looked like when you started your transformation project, that does change as well. And then there's the application portfolio. As we continue to run the business, we continue to add or enhance existing applications. The application portfolio changes, and therefore so do the needs within the transformation.

Because of all of these changes occurring around you, there's a need to plan not only for contingencies at the beginning of the process, but also to continue the planning process and update it as things change, fairly consistently. What I’ve found over time, Dana, with various customers doing these transformation projects, is that the planning stage is not just at the beginning, not just at the middle, and not just at one point; it's ongoing. In other words, that makes the planning process go a lot better and it becomes a lot easier.

In fact, I was speaking with a customer the other day. We went to a baseball game together. It was a customer event, and I was surprised to see this particular customer there, because I knew it was their yearly planning cycle that was going on. I asked them about that, and they talked about the way that they had used our tools. The HP tool sets that they used had allowed them to literally do planning all the time. So they could attend a baseball game instead of attending the planning fire drill.

So it wasn’t a one-time event, and even if the business wanted a yearly planning view, they were able to produce that very, very easily, because they kept their current state and current plans up to date throughout the process.

Gardner: This reminds me that we've spoken in the past, Bruce, about software development. Successful software development for a lot of folks now involves agile principles. There are these things they call scrum meetings, where people get together and they're constantly reevaluating or adjusting, getting inputs from the team.

Having just a roadmap and then sticking to it turns out not to be business as usual, but can actually be a path to disaster. Any thoughts about learning from how software is developed in terms of planning for a large project like a DCT?

A lot of similarities

Randall: Absolutely. There are a lot of similarities between the new agile methodologies and what I was just describing in terms of planning at the beginning, in the middle, and at the end, basically constantly. And when I say the word "plan," I know that evokes in some people the thought of a lot of work, a big thing. In reality, what I am talking about is much smaller than that.

If you’re doing it frequently, the planning needs to be a lot smaller. It's not a huge, involved process. It's very much like the agile methodology, where you’re consistently doing little pieces of work, finishing up sub-segments of the entire thing you need to do, as opposed to describing it all, having all your requirements written out at the beginning, and then waiting for it to get done sometime later.

You’re actually adapting and changing, as things occur. What's important in the agile methodology, as well as in this transformation, like the planning process I talked about for transformation, is that you still have to give management visibility into what's going on.

Having a planning process and even a tool set to help you manage that planning process will also give management the visibility that they need into the status of that transformation project. The planning process, like the agile development methodology, also allows collaboration. As you’re going back to the plan, readdressing it, and thinking about the changes that have occurred, you’re collaborating between various groups and silos to make sure that you’re still in tune and still doing the things you need to be doing to make things happen.

One other thing that often is forgotten within the agile development methodology, but it’s still very important, particularly for transformation, is the ability to track the cost of that transformation at any given point in time. Maybe that's because the budget needs to be increased or maybe it's because you're getting some executive mandate that the budget will be decreased, but at least knowing what your costs are, how much you’ve spent, is very, very important.
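As a simple illustration of that cost tracking, here is a sketch that rolls up spend entries by transformation phase and compares them to budget; the phases, amounts, and budgets are made-up placeholders, not HP project and portfolio management functionality.

```python
# Simple sketch of transformation cost tracking: roll up spend entries by
# phase and compare against budget at any point in time. Figures and phase
# names are invented for illustration.

from collections import defaultdict

budget = {"consolidation": 500_000, "virtualization": 300_000, "automation": 200_000}

spend_entries = [
    ("consolidation", "new storage array", 180_000),
    ("consolidation", "migration services", 120_000),
    ("virtualization", "hypervisor licenses", 90_000),
]

spent = defaultdict(int)
for phase, _desc, amount in spend_entries:
    spent[phase] += amount

for phase, planned in budget.items():
    used = spent[phase]
    print(f"{phase:15} spent ${used:>9,} of ${planned:>9,} ({used / planned:.0%})")
```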

Gardner: When you say that, it reminds me of something from 20 years or more ago in manufacturing, the whole quality revolution, thought leaders like Deming and the Japanese Kaizen concept of constantly measuring, constantly evaluating, not letting things slip. Is there some relationship between what you’re doing in project management and what we saw during this “quality revolution” several decades ago?

Randall: Absolutely. You see some of the tenets of project management at work. Number one, you're tracking what’s going on. You’re measuring what’s going on at every point in time, not only with the costs and the time frames, but also with the people who are involved. Who's doing what? Are they fulfilling the tasks we’ve asked them to do, and so on and so forth. This produces, in the end, just as Deming and others have described, a much higher quality transformation than if you were to just haphazardly try to carry out the transformation without having a project management tool in place, for example.

One thing that I would start with is to use multiple resources from HP and others to help customers in their transformation process to both plan out initially what that transformation is going to look like and then give you a set of tools to automate and manage that program and the changes that occur to it throughout time.

That planning is important, as we’ve talked about, because it occurs at multiple stages throughout the cycle. If you have an automated system in place, it certainly makes it easier to track the plan and changes to that plan over time.

A lot of tools


We have a lot of tools. One of the ones I want to highlight is the Data Center Transformation Experience workshop, and the reason I want to highlight it is that it really ties into what we’ve been talking about today. It’s an interactive session involving large panels, with very minimal presentation and very minimal speaking by the HP facilitators.

We walk people through all the aspects of transformation and this is targeted at a strategic level. We’re looking at the CIOs, CTOs, and the executive decision makers to understand why HP did what they did as far as transformation goes.

We discuss what we’ve seen out in the industry, what the current trends are, and pull out of the conversation with these people where their companies are today. At the end of a workshop, and it's a full-day workshop, there are a lot of materials delivered out of it that not only document the discussions throughout the day, but really provide a step or steps for how to proceed.

So it’s a prioritization. Your facilities, for example, might be in great shape, but your data warehouses are not. That’s an area you should go after fast, because there's a lot of value in changing it, and it’s going to take you a long time. Or there may be a quick hit in your organization and the way you manage your operation, because we cover all the aspects of program management, governance, and management of change. That’s organizational change for a lot of people. As for the technology, we can help them understand not only where they are, but what the initial strategy and plan should be.

You brought up a little bit earlier, Dana, some of the quality people like Deming, etc. We’ve got to remember that transformation is really a journey. There's a lot you can accomplish very rapidly. We always say that the faster you can achieve transformation, the faster you can realize value and the business can get back to leveraging that value, but transformation never ends. There's always more to do. So it's very analogous to the continuous improvement that comes out of some of the quality people that you mentioned earlier.

The workshops are scheduled with companies individually. So a good touch point would be with your HP account manager. He or she can work with you to schedule a workshop and understand how it can be done. They're scheduled as needed.

We do hold hundreds of them around the world every year. It’s been a great workshop. People find it very successful, because it really helps them understand how to approach this and how to get the right momentum within their company to achieve transformation, and there's also a lot of materials on our website.

For more information on The HUB -- HP's video series on data center transformation, go to www.hp.com/go/thehub.

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, October 24, 2011

Four key reasons to include cloud computing in your business strategy

This guest post comes from Mark Skilton, Global Director of Strategy and Global Infrastructure Services, Capgemini, and Co-Chair of the Cloud Computing Work Group, The Open Group.

By Mark Skilton

The issue of success or failure in moving your company data, IT storage, servers or software to the cloud is often driven by technical issues, including performance, bandwidth, security and total-cost-of-ownership (TCO) considerations. While many of these factors are key criteria for selecting cloud solutions, they usually don’t align with the bigger picture that C-level executives must consider when adding new IT solutions.

How IT can help sustain or create a competitive advantage has never been more apparent than today through the use of cloud computing. This technology boasts benefits such as reduced costs and scalability, just to name a few, but many companies fail to find the right fit for cloud within their business. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

This is because cloud computing is not one size fits all. Performance, network bandwidth, security, and total cost concerns can be allayed through a better portfolio and investment approach that considers the multitude of options available.

How cloud computing fits

In industries where working capital bears a high price and is in short supply, businesses often have to make ends meet and have limited investment available. Therefore, being able to source the lowest cost and drive efficiencies even further is critical to growing business and market share.

For companies with limited working capital resource or cash flow funds, the use of on-demand services becomes an attractive option for consumers to avoid upfront costs or maintenance of services. Likewise, companies seeking to provide better profitability from their operation and vendors managing their cost center can leverage on-demand models to target areas of their portfolio to reduce cost and maximize return.

When adopting cloud computing, companies are often driven by cost effectiveness, rather than looking at the bigger picture and asking what cloud solution is the best fit for the business. Cost savings, longevity of product, and performance aren’t mutually exclusive, and all should be factored into the decision-making process when researching and purchasing a cloud solution.

Here are four questions, which include key metrics and drivers, to ask when researching cloud solutions that will maximize the value of cloud computing for your organization:
  1. Why is investment being spent on areas of IT that are not differentiating your business and can be commoditized?
    • Key Metric: The balance of percent of investment on non-core commodity IT
    • Key drivers: TCO needs to consider where to focus IT investment

  2. How can IT grow and adapt with the ever-increasing expansion of data storage and the growth of computing demands eclipsing on-premise facilities?
    • Key Metric: The cost of storage and archiving, recovery, and continuity
    • Key drivers: Latency of network and storage costs can be targeted through considering the whole IT portfolio, not just niche use cases of cost-performance. Look at the bigger picture.

  3. How can access to new markets and new channels be better served through extending networks and partnerships?
    • Key Metric: Size of markets and effectiveness of sales channels, both internal sales and external direct sales and reselling
    • Key drivers: Total cost of acquisition can include the creation or use of third-party distributed marketplaces and self-service portals and platforms

  4. Is your own IT fast enough to beat your competition or drive the cost savings or revenue and margin growth plans you need?
    • Key metric: Speed of IT delivery and its cost and quality of service.
    • Key drivers: Performance can be offered through selected service provisioning. Question whether all knowledge needs to be in-house. Skills can be as-a-service too.
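To make the TCO comparison behind these questions concrete, here is a hedged sketch comparing an up-front, on-premises purchase with an on-demand service over a planning horizon; every figure is an assumption chosen only to show the mechanics.

```python
# Hedged illustration of a TCO comparison: up-front on-premises purchase versus
# an on-demand service over a planning horizon. All figures are assumptions.

years = 5
onprem_capex = 120_000          # servers, storage, setup (assumed)
onprem_opex_per_year = 30_000   # power, support, admin time (assumed)
cloud_cost_per_month = 4_500    # pay-as-you-go service fee (assumed)

onprem_tco = onprem_capex + onprem_opex_per_year * years
cloud_tco = cloud_cost_per_month * 12 * years

print(f"On-premises TCO over {years} years: ${onprem_tco:,}")
print(f"On-demand TCO over {years} years:   ${cloud_tco:,}")
print("Cheaper option:", "on-demand" if cloud_tco < onprem_tco else "on-premises")
```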
Open Group Cloud Computing Book

The Open Group recently published Cloud Computing for Business – The Open Group Guide to address many of these key questions. The book is intended for senior business executives and practicing architects responsible for defining corporate strategy, and it identifies how to select and buy cloud computing services to achieve the best business and technical outcomes.

You can purchase a copy of the book from The Open Group here or download preview sections here:
Cloud Computing for Business – Preview – first 30 pages
Cloud Computing for Business – Section 1.8
Cloud Computing for Business – Section 4.1
This guest post comes from Mark Skilton, Global Director of Strategy and Global Infrastructure Services, Capgemini, and Co-Chair of the Cloud Computing Work Group, The Open Group.

You may also be interested in:

Wednesday, October 19, 2011

Top 10 pitfalls of P2P integration to avoid in the cloud

This guest post comes courtesy of Ross Mason, CTO and founder of MuleSoft. Disclosure: MuleSoft is a sponsor of BriefingsDirect podcasts.

By Ross Mason

While integration isn’t necessarily a new problem, the unique challenges of integrating in the cloud require a new approach. Many enterprises, however, are still using point-to-point (P2P) solutions to address their cloud integration needs.

In order to tackle cloud integration successfully, we need to move beyond P2P integration and avoid repeating the same mistakes. To aid in that effort, here is a list (in no particular order) of the top 10 pitfalls of P2P integration to avoid repeating in the cloud:

1. Building vs. buying: If you have developers with integration experience in your IT department, you can have them build a custom P2P integration in house, rather than buy a packaged solution. Building your own integration, however, typically means that you will also have to manage and maintain a codebase that isn’t central to your business and is difficult to change.

2. Quickfire integrations: Let’s say you need to integrate two systems quickly and hire a developer to work on the project over a couple of days. You notice an improvement in business efficiency and see an opportunity to integrate additional systems. You hire the same developer and expect the same quickfire integrations, but the complexity of the project has increased exponentially. The takeaway? It’s always a good idea to approach integration systematically and establish a plan up front, rather than integrate your systems in an ad hoc P2P fashion.

3. Embedding integrations in your application: Although it might be tempting to embed P2P integrations in your web application, you should be cautious about this approach. It may be fine for really simple integrations, but over time, your integration logic becomes scattered in different web apps. Instead, you should think of integration as a separate tier of your application architecture and centralize this logic.

4. Creating dependencies between applications: When you integrate applications in a P2P fashion, you create a dependency between them. For example, let’s say you’ve integrated App A and App B. When App A is modified or updated, you will need to change the integration that connects it to App B. You also need to re-test the integration to make sure it works properly. If you add App C to the mix, your workload can increase exponentially.

5. Assuming everything always works: One of the consistently recurring mistakes of doing quick P2P integrations is assuming that things will not break. The reality is that integrations don’t always work as planned. As you integrate systems, you need to design for errors and establish a course of action for troubleshooting different kinds of errors. Error handling is particularly troublesome when integrating software-as-a-service (SaaS) applications, because you have limited visibility and control over the changes that SaaS vendors make to them.
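One concrete way to design for errors is to wrap each outbound call with a timeout, retries, and backoff rather than assuming success. Here is a minimal Python sketch; the endpoint URL is a placeholder and the requests library is assumed to be available.

```python
# Minimal sketch of "design for errors": wrap a SaaS API call with a timeout,
# retries, and simple backoff instead of assuming it always works.
# The endpoint URL is a placeholder.

import time
import requests

def fetch_with_retries(url, attempts=3, backoff_seconds=2.0):
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            response = requests.get(url, timeout=10)
            response.raise_for_status()        # treat HTTP 4xx/5xx as failures
            return response.json()
        except requests.RequestException as err:
            last_error = err
            print(f"Attempt {attempt} failed: {err}")
            time.sleep(backoff_seconds * attempt)  # simple linear backoff
    raise RuntimeError(f"Integration call failed after {attempts} attempts") from last_error

if __name__ == "__main__":
    data = fetch_with_retries("https://api.example-saas.com/v1/accounts")
    print(f"Fetched {len(data)} records")
```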

Test each integration

6. It worked yesterday: Just because P2P integration worked for one project does not mean it will work for another. The key is to test each integration you build. Unfortunately, P2P integrations are often built and deployed quickly without sufficient planning or proper testing, increasing the chances for errors. Although it can be difficult and does require a decent amount of effort, testing integrations is absolutely critical.

7. Using independent consultants: Many companies are not staffed with developers who have enough integration expertise and hire consultants to resolve their integration issues. The problem with this approach is that you often have limited visibility into whatever the consultant delivers. If you need to make changes, you typically need to work with the same consultant, which is not always possible.

8. Creating single points of failure: As your P2P integration architecture grows in size and complexity, its chances of becoming a single point of failure in your entire network increase as well. Minimizing the potential for single points of failure should be a priority when it comes to integration, but the lack of decoupling in a P2P approach makes it hard to eliminate bottlenecks in your system.

9. Black-box solutions: Custom-built P2P solutions are usually black box in nature. In other words, they lack reporting capabilities that tell you what is happening between systems. This makes it very hard to debug problems, measure performance, or find out if things are working properly.

10. Creating a monster: Quick P2P integrations are relatively manageable when you have 2 or 3 systems to connect, but when you start adding other systems, your architecture quickly becomes a complicated mess. And because no two P2P integrations are exactly the same, managing your integrations becomes a major pain. If you invest in doing some design work up front, however, this will save you from having to throw away a tangled P2P architecture and starting from scratch to find a new solution under pressure. If you have a well thought out design and a simple architecture, you can reduce the management burdens and costs associated with integration.

Ross Mason is the CTO and Founder of MuleSoft. He founded the open source Mule project in 2003. Frustrated by integration "donkey work," he started the Mule project to bring a modern approach, one of assembly, rather than repetitive coding, to developers worldwide. Now, with the MuleSoft team, Ross is taking these founding principles of dead-simple integration to the cloud with Mule iON, an integration platform as a service (iPaaS).

You may also be interested in:

Monday, October 17, 2011

VMworld case study: City of Fairfield uses virtualization to more efficiently deliver crucial city services

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Our next VMware case study interview focuses on the City of Fairfield, California, and how the IT organization there has leveraged virtualization and cloud-delivered applications to provide new levels of service in an increasingly efficient manner.

We’ll see how Fairfield, a mid-sized city of 110,000 in Northern California, has taken the do-more-with-less adage to its fullest, beginning interestingly with core and mission-critical city services applications.

This story comes as part of a special BriefingsDirect podcast series from the VMworld 2011 Conference. The series explores the latest in cloud computing and virtualization infrastructure developments.

Here to share more detail on how virtualization is making the public sector more responsive at lower costs is Eudora Sindicic, Senior IT Analyst Over Operations in Fairfield. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why virtualize mission-critical applications, things like police and fire support, first?

Sindicic: First of all, it’s always been challenging in disaster recovery and business continuity. Keeping those things in mind, our CAD/RMS systems for the police center and also our fire staffing system were high on the list for protecting. Those are Tier 1 applications that we want to be able to recover very quickly.

We thought the best way to do that was to virtualize them and set us up for future business continuity and true failover and disaster recovery.

So I put it to my CIO, and he okayed it. We went forward with VMware, because we saw they had the best, most robust, and mature applications to support us. Seeing that our back-end was SQL for those two systems, and seeing that we were just going to embark on a brand-new upgrading of our CAD/RMS system, this was a prime time to jump on the bandwagon and do it.

Also, with our back-end storage being NetApp, and NetApp having such an intimate relationship with VMware, we decided to go with VMware.

Gardner: So you were able to accomplish your virtualization and also gain that disaster recovery and business continuity benefit, but you pointed out that time was of the essence. How long did it take you?

Sindicic: Back in early fiscal year 2010, I started doing all the research. I probably did a good nine months of research before even bringing this option to my CIO. Once I brought the option up, I worked with my vendors, VMware and NetApp, to obtain best pricing for the solution that I wanted.

I started implementation in October and completed the process in March. So it took some time. Then we went live with our CAD/RMS system on May 10, and it has been very robust and running beautifully ever since.

Gardner: Tell me about your IT operations.

Sindicic: I have our finance system, an Oracle-based system, which consists of an Oracle database server and Apache applications server, and another reporting server that runs on a different platform. Those will all be virtual OSs sitting in one of my two clusters.

For the police systems, I have a separate cluster just for police and fire. Then, for the regular day-to-day business, like finance and other applications that the city uses, I have a campus cluster, to keep those things separated and also to reduce downtime from maintenance. That way, everything doesn't have to be affected if I'm moving virtual servers among systems, patching, and doing updates.

Other applications

We’re also going to be virtualizing several other applications, such as a citizen complaint application called Coplogic. We're going to be putting that in as well into the PD cluster.

The version of VMware that we’re using is 4.1, and we’re using ESXi servers. On the PD cluster, I have two ESXi servers, and on my campus cluster, I have three. I'm using vSphere 4, and it’s been really wonderful having a good handle on that control.

Also, within my vSphere vCenter Server, I've installed a bunch of NetApp storage control solutions that allow me centralized control over snapshotting and replication at one level. So I can control it all from there. Then vSphere gives me that beautiful centralized view of all my VMs and the resources being consumed.

It’s been really wonderful to have that level of view into my infrastructure, whereas when things were distributed, I didn't have the view that I needed. I’d have to connect one by one to each of my systems to get that level of detail.

Also, there are some things that we’ve learned during this whole thing. I went from two VLANs to four VLANs. When looking at your traffic and the type of traffic that’s going to traverse the VLANs, you want to segregate that out, and you’ll see a huge increase in your performance.

The other thing is making sure that you have the correct type of drives in your storage. I knew right off the bat that IOPS was going to be an issue, and then, of course, connectivity. We’re using Brocade switches to connect to the back-end Fibre Channel drives for the server VMs, and for lower-end storage, we’re using iSCSI.

Gardner: And how has the virtualization efforts within all of that worked out?

Sindicic: It’s been wonderful. We’ve had wonderful disaster recovery capabilities. We have snapshotting abilities. I'm snapshotting the primary database server and application server, which allows for snapshots up to three weeks in primary storage and six months on secondary storage, which is really nice, and it has served us well.
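To show what that retention schedule means in practice, here is a small sketch that computes how far back a restore could reach from primary versus secondary storage; the six-month figure is approximated as 182 days.

```python
# Simple sketch of the retention schedule described above: given today's date,
# how far back could a record be recovered from primary versus secondary
# storage? Six months is approximated as 182 days.

from datetime import date, timedelta

PRIMARY_RETENTION = timedelta(weeks=3)
SECONDARY_RETENTION = timedelta(days=182)

def recovery_options(today: date):
    return {
        "primary storage restore back to": today - PRIMARY_RETENTION,
        "secondary storage restore back to": today - SECONDARY_RETENTION,
    }

for label, earliest in recovery_options(date.today()).items():
    print(f"{label}: {earliest.isoformat()}")
```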

We already had a fire drill, where one report was accidentally deleted out of a database due to someone doing something -- and I'll leave it at that. Within 10 minutes, I was able to bring up the snapshot of the records management system of that database.

The user was able to go into the test database, retrieve his document, and then he was able to print it. I was able to export that document and then re-import it into the production system. So there was no downtime. It literally took 10 minutes, and everybody was happy.

... We are seeing cost benefits now. I don’t have all the metrics, but we’ve spun up six additional VMs. If you figure out the cost of the Dells, because we are a Dell shop, it would cost anywhere between $5,000 and $11,000 per server. On top of that, you're talking about the cost of the Microsoft Software Assurance for that operating system. That has saved a lot of money right there in some of the projects that we’re currently embarking on, and for the future.
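A quick sketch of that avoided-cost arithmetic, using the server price range quoted above; the Software Assurance figure is an assumed placeholder since no number is given.

```python
# Quick arithmetic on the avoided-hardware figures quoted above; the OS
# licensing line is an assumed placeholder since no figure is given.

vms_spun_up = 6
server_cost_low, server_cost_high = 5_000, 11_000
assumed_os_license_per_server = 1_000   # placeholder for Software Assurance cost

low = vms_spun_up * (server_cost_low + assumed_os_license_per_server)
high = vms_spun_up * (server_cost_high + assumed_os_license_per_server)

print(f"Hardware and OS spend avoided so far: ${low:,} - ${high:,}")
```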

We have several more systems that I know are going to be coming online and we're going to save in cost. We’re going to save in power. Power consumption, I'm projecting, will slowly go down over time as we add to our VM environment.

As it grows and it becomes more robust, and it will, I'm looking forward to a large cost savings over a 5- to 10-year period.

Better insight

Gardner: Was there anything that surprised you that you didn’t expect, when you moved from the physical to the virtualized environment?

Sindicic: I was pleasantly surprised with the depth of reporting that I could physically see, the graphs, the actual metrics, as we went along. As our CAD system came online into production, I could actually see utilization go up, and to what level.

I was also pleasantly surprised to be able to see when the backups would occur and how they would affect the system and the users on it. Because of that, we were able to figure out what the least-used hours were and time the backups for them. I could actually tell in the system when it was least used.

It was real time, and it was just really wonderful to be able to do that easily, without having to manually set up all the different tracking that you have to do within Microsoft Monitor or anything like that. I could do that completely independently of the OS.
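As an illustration of that least-used-hours analysis, here is a small sketch that averages hourly utilization samples and picks the quietest window for backups; the sample data is invented.

```python
# Sketch of the least-used-hours analysis: average hourly utilization samples
# and pick the quietest window for backups. The sample data is invented.

from collections import defaultdict

# (hour of day, CPU utilization percent) samples from a monitoring export
samples = [(9, 78), (13, 85), (18, 40), (2, 12), (3, 9), (2, 15), (3, 11), (14, 80)]

totals, counts = defaultdict(float), defaultdict(int)
for hour, util in samples:
    totals[hour] += util
    counts[hour] += 1

averages = {hour: totals[hour] / counts[hour] for hour in totals}
quietest = sorted(averages, key=averages.get)[:3]
print("Quietest hours (best backup window):", quietest)
```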

We're going to have some compliance issues, and it’s mostly around encryption and data control, which I really don’t foresee being a problem with VMware.



Gardner: We're hearing a lot here at VMworld about desktop virtualization as well. I don’t know whether you’ve looked at that, but it seems like you've set yourself up for moving in that direction. Any thoughts about mobile or virtualized desktops as a future direction for you?

On the horizon

Sindicic: I see that most definitely on the horizon. Right now, the only thing that's hindering us is cost and storage. But as storage costs go down, and as more robust technologies come out around storage, such as solid state, and as the price comes down on that, I foresee that definitely coming into our environment.

Even here at the conference I'm taking a bunch of VDI and VMware View sessions, and I'm looking forward to hopefully starting a new project with virtualizing at the desktop level.

This will give us much more granular control over not only what’s on the user’s desktop, but also patch management, malware, and virus protection, doing it at the host level instead of at the PC level, which would be wonderful. It would give us really great control and hopefully decreased cost. We’d probably be using a different product than what we’re using right now.

If you're actually using virus protection at the host level, you’re going to get a lot of bang for your buck and you won't have any impact on the PC-over-IP. That’s probably the way we'll go, with PC-over-IP.

Right now, storage and VLANing all have to happen before we can even embark on something like that. So there's still a lot of research going on on my part, as well as finding a way to mitigate costs, maybe trading in something to gain something else. There are things that you can do to help make something like this happen.

... In city government, our IT infrastructure continues to grow as people are laid off and departments want to automate more and more processes, which is the right way to go. The IT staff remains the same, but the infrastructure, the data, and the support continues to grow. So I'm trying to implement infrastructure that grows smarter, so we don’t have to work harder, but work smarter, so that we can do a lot more with less.

VMware sure does allow that with centralized control in management, with being able to dynamically update virtual desktops and virtual servers, and with the patch management and automation of that. You can take it to whatever level of automation you want, or a little in between, so that you can do a little bit of checks and balances with your own eyes before the system goes off and does something itself.

Also, with the high availability and fault tolerance that VMware allows, it's been invaluable. If one of my systems goes down, my VMs automatically will be migrated over, which is a wonderful thing. We’re looking to implement as much virtualization as we can as budget will allow.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in: