Monday, October 17, 2011

VMworld case study: City of Fairfield uses virtualization to more efficiently deliver crucial city services

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Our next VMware case study interview focuses on the City of Fairfield, California, and how the IT organization there has leveraged virtualization and cloud-delivered applications to provide new levels of service in an increasingly efficient manner.

We’ll see how Fairfield, a mid-sized city of 110,000 in Northern California, has taken the do-more-with-less adage to its fullest, beginning interestingly with core and mission-critical city services applications.

This story comes as part of a special BriefingsDirect podcast series from the VMworld 2011 Conference. The series explores the latest in cloud computing and virtualization infrastructure developments.

Here to share more detail on how virtualization is making the public sector more responsive at lower costs is Eudora Sindicic, Senior IT Analyst Over Operations in Fairfield. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why virtualize mission-critical applications, things like police and fire support, first?

Sindicic: First of all, disaster recovery and business continuity have always been challenging. Keeping those things in mind, our computer-aided dispatch and records management (CAD/RMS) systems for the police center, and also our fire staffing system, were high on the list to protect. Those are Tier 1 applications that we want to be able to recover very quickly.

We thought the best way to do that was to virtualize them and set us up for future business continuity and true failover and disaster recovery.

So I put it to my CIO, and he okayed it. We went forward with VMware, because we saw they had the best, most robust, and mature applications to support us. Seeing that our back-end was SQL for those two systems, and seeing that we were just about to embark on a brand-new upgrade of our CAD/RMS system, this was a prime time to jump on the bandwagon and do it.

Also, with our back-end storage being NetApp, and NetApp having such an intimate relationship with VMware, we decided to go with VMware.

Gardner: So you were able to accomplish your virtualization and also gain that disaster recovery and business continuity benefit, but you pointed out that time was of the essence. How long did it take you?

Sindicic: Back in early fiscal year 2010, I started doing all the research. I probably did a good nine months of research before even bringing this option to my CIO. Once I brought the option up, I worked with my vendors, VMware and NetApp, to obtain best pricing for the solution that I wanted.

I started implementation in October and completed the process in March. So it took some time. Then we went live with our CAD/RMS system on May 10, and it has been very robust and running beautifully ever since.

Gardner: Tell me about your IT operations.

Sindicic: I have our finance system, an Oracle-based system, which consists of an Oracle database server, an Apache application server, and another reporting server that runs on a different platform. Those will all be virtual OSes sitting in one of my two clusters.

For the police systems, I have a separate cluster just for police and fire. Then, for the regular day-to-day business, like finance and other applications that the city uses, I have a campus cluster. That keeps those things separated and also avoids maintenance downtime, so everything doesn't have to be affected if I'm moving virtual servers among systems, patching, and doing updates.
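That separation also makes rolling maintenance scriptable. As a minimal illustrative sketch, the Python below uses the pyVmomi vSphere bindings (a library that post-dates the vSphere 4.1 setup discussed here) to put one host into maintenance mode so DRS can move its VMs to the other hosts in the cluster; the vCenter address, credentials, and host name are placeholders, not Fairfield's.

```python
# Sketch: evacuate one ESXi host for patching while the cluster keeps
# serving. Uses pyVmomi; all names and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab-only: skips certificate checks
si = SmartConnect(host="vcenter.example.local",
                  user="administrator", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Walk the inventory for the host we want to patch.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = next(h for h in view.view if h.name == "esxi-campus-01.example.local")

# Entering maintenance mode asks DRS to vMotion running VMs away first,
# so guests never notice the host going down for updates.
task = host.EnterMaintenanceMode_Task(timeout=0)
print("Evacuating", host.name, "- task state:", task.info.state)

Disconnect(si)
```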

Other applications

We're also going to be virtualizing several other applications, such as a citizen complaint application called Coplogic. We're going to be putting that into the PD cluster as well.

The version of VMware that we're using is 4.1, on ESXi servers. On the PD cluster, I have two ESXi servers, and on my campus cluster, I have three. I'm using vSphere 4, and it's been really wonderful having a good handle on all of that.

Also, within my vSphere vCenter server, I've installed a bunch of NetApp storage control solutions that give me centralized control over snapshotting and replication. So I can control it all from there. Then vSphere gives me that beautiful centralized view of all my VMs and the resources being consumed.

It's been really wonderful to have that level of view into my infrastructure, whereas when things were distributed, I didn't have the view I needed. I'd have to connect to each of my systems one by one to get that level of detail.
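That consolidated view is also available programmatically. As a rough sketch under the same assumptions as above (pyVmomi, placeholder connection details), this lists every VM with its live CPU and memory consumption, essentially the data behind the vSphere resource graphs:

```python
# Sketch: a one-screen version of the centralized VM/resource view.
# Connection details are placeholders; assumes pyVmomi is installed.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(host="vcenter.example.local", user="administrator",
                  pwd="secret", sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()

vms = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True).view

for vm in vms:
    stats = vm.summary.quickStats  # live counters kept by vCenter
    print(f"{vm.summary.config.name:30s} "
          f"CPU {stats.overallCpuUsage or 0:5d} MHz  "
          f"RAM {stats.guestMemoryUsage or 0:6d} MB")

Disconnect(si)
```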

Also, there are some things that we've learned during this whole thing. I went from two VLANs to four VLANs. When looking at your traffic and the type of traffic that's going to traverse the VLANs, you want to segregate that out, big time, and you'll see a huge increase in your performance.
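The conversation doesn't name the four VLANs, but a typical split for an environment like this separates management, vMotion, IP storage, and guest traffic. Purely as a hypothetical sketch of what tagging those port groups looks like through the same API (the names and VLAN IDs here are invented for illustration, not Fairfield's actual values):

```python
# Hypothetical four-VLAN layout; names and IDs are illustrative only.
# Creates tagged port groups on a standard vSwitch via pyVmomi
# (connection and host lookup as in the earlier sketches).
from pyVmomi import vim

VLANS = {                 # traffic type -> VLAN ID (assumed values)
    "Management": 10,
    "vMotion": 20,
    "IP-Storage": 30,     # keeps iSCSI traffic off the guest VLAN
    "VM-Network": 40,
}

def add_port_groups(host, vswitch="vSwitch0"):
    net = host.configManager.networkSystem
    for name, vlan_id in VLANS.items():
        spec = vim.host.PortGroup.Specification(
            name=name, vlanId=vlan_id, vswitchName=vswitch,
            policy=vim.host.NetworkPolicy())
        net.AddPortGroup(portgrp=spec)
```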

The other thing is making sure that you have the correct type of drives in your storage. I knew right off the bat that IOPS were going to be an issue, and then, of course, connectivity. We're using Brocade switches to connect to the back-end Fibre Channel drives for the server VMs, and for lower-end storage, we're using iSCSI.
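Her point about drive types reduces to simple arithmetic. A back-of-the-envelope sizing sketch, using common rule-of-thumb per-spindle figures rather than anything from Fairfield's actual arrays, shows why a write-heavy workload lands on Fibre Channel rather than SATA:

```python
# Rule-of-thumb IOPS sizing; per-spindle figures are planning estimates,
# not measurements from any specific array.
PER_DISK_IOPS = {"15K FC": 180, "10K SAS": 130, "7.2K SATA": 80}

def spindles_needed(workload_iops, write_fraction, raid_write_penalty, disk):
    """Back-end IOPS = reads + writes * RAID penalty (e.g. 4 for RAID 5)."""
    reads = workload_iops * (1 - write_fraction)
    writes = workload_iops * write_fraction
    backend = reads + writes * raid_write_penalty
    return -(-backend // PER_DISK_IOPS[disk])  # ceiling division

# Example: a 2,000-IOPS database workload, 40% writes, on RAID 5.
for disk in PER_DISK_IOPS:
    print(f"{disk}: {spindles_needed(2000, 0.4, 4, disk):.0f} spindles")
```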

Gardner: And how have the virtualization efforts within all of that worked out?

Sindicic: It's been wonderful. We've had wonderful disaster-recovery capabilities. We have snapshotting abilities. I'm snapshotting the primary database server and application server, which allows for snapshots up to three weeks back on primary storage and six months on secondary storage. That's really nice, and it has served us well.
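That two-tier retention policy, roughly three weeks on primary storage and six months on secondary, is easy to express as a pruning rule. A minimal sketch, with the windows taken from her description and everything else assumed (the actual snapshot lifecycle here was handled by the NetApp tools mentioned above):

```python
# Sketch of the two-tier retention rule: ~3 weeks on primary storage,
# ~6 months on secondary. This just models the windows; the real
# snapshots were managed by the array's own tooling.
from datetime import datetime, timedelta

RETENTION = {"primary": timedelta(weeks=3), "secondary": timedelta(days=182)}

def expired(snapshot_time, tier, now=None):
    """True once a snapshot has aged past its tier's retention window."""
    now = now or datetime.now()
    return now - snapshot_time > RETENTION[tier]

# A 30-day-old snapshot is pruned from primary but kept on secondary.
month_old = datetime.now() - timedelta(days=30)
print(expired(month_old, "primary"), expired(month_old, "secondary"))
# -> True False
```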

We already had a fire drill, where one report was accidentally deleted out of a database due to someone doing something -- and I'll leave it at that. Within 10 minutes, I was able to bring up the snapshot of the records management system of that database.

The user was able to go into the test database, retrieve his document, and then he was able to print it. I was able to export that document and then re-import it into the production system. So there was no downtime. It literally took 10 minutes, and everybody was happy.

... We are seeing cost benefits now. I don't have all the metrics, but we've spun up six additional VMs. If you figure the cost of the Dells, because we are a Dell shop, a physical server would cost anywhere between $5,000 and $11,000. On top of that, you're talking about the cost of Microsoft Software Assurance for that operating system. That has saved a lot of money right there in some of the projects that we're currently embarking on, and for the future.
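Even without full metrics, the hardware side of that math is straightforward. A quick sanity check using only the per-server range she quotes (licensing savings left out, since no figure is given):

```python
# Back-of-the-envelope: six physical servers avoided at the quoted
# $5,000-$11,000 each. Software Assurance savings are additional and
# not estimated here, since no figure is given in the conversation.
vms_spun_up = 6
low, high = 5_000, 11_000
print(f"Hardware avoided: ${vms_spun_up * low:,} to ${vms_spun_up * high:,}")
# -> Hardware avoided: $30,000 to $66,000
```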

We have several more systems that I know are going to be coming online and we're going to save in cost. We’re going to save in power. Power consumption, I'm projecting, will slowly go down over time as we add to our VM environment.

As it grows and it becomes more robust, and it will, I'm looking forward to a large cost savings over a 5- to 10-year period.

Better insight

Gardner: Was there anything that surprised you that you didn’t expect, when you moved from the physical to the virtualized environment?

Sindicic: I was pleasantly surprised by the depth of reporting I could actually see, the graphs, the real metrics, as we went along. As our CAD system came online into production, I could actually see utilization go up, and to what level.

I was also pleasantly surprised to be able to see when the backups would occur and how they would affect the system and the users on it. Because of that, we were able to schedule them for the least-used hours, and to see what those hours were. I could actually tell from the system when it was least used.

It was real time, and it was just really wonderful to be able to do that easily, without having to manually create all the different tracking counters that you have to set up in something like Microsoft's monitoring tools. I could do it completely independently of the OS.

... We're going to have some compliance issues, and it's mostly around encryption and data control, which I really don't foresee being a problem with VMware.

Gardner: We're hearing a lot here at VMworld about desktop virtualization as well. I don’t know whether you’ve looked at that, but it seems like you've set yourself up for moving in that direction. Any thoughts about mobile or virtualized desktops as a future direction for you?

On the horizon

Sindicic: I see that most definitely on the horizon. Right now, the only things hindering us are cost and storage. But as storage costs go down, and as more robust storage technologies come out, such as solid state, and the price comes down on that, I foresee something like that definitely coming into our environment.

Even here at the conference I'm taking a bunch of VDI and VMware View sessions, and I'm looking forward to hopefully starting a new project with virtualizing at the desktop level.

This will give us much more granular control over not only what's on the user's desktop, but also patch management and malware and virus protection, doing it at the host level instead of at the PC level, which would be wonderful. It would give us really great control and, hopefully, decreased cost. We'd probably be using a different product than the one we're using right now.

If you're doing virus protection at the host level, you're going to get a lot of bang for your buck, and you won't have any impact on the PC-over-IP. That's probably the way we'll go, with PC-over-IP.

Right now, storage and VLANing all have to happen before we can even embark on something like that. So there's still a lot of research going on on my part, as well as finding ways to mitigate costs, maybe trading in one thing to gain something else. There are things you can do to help make something like this happen.

... In city government, our IT infrastructure continues to grow as people are laid off and departments want to automate more and more processes, which is the right way to go. The IT staff remains the same, but the infrastructure, the data, and the support continues to grow. So I'm trying to implement infrastructure that grows smarter, so we don’t have to work harder, but work smarter, so that we can do a lot more with less.

VMware certainly allows that, with centralized control and management, with being able to dynamically update virtual desktops and virtual servers, and with the patch management and automation behind that. You can take it to whatever level of automation you want, or somewhere in between, so that you can do a bit of checks and balances with your own eyes before the system goes off and does something itself.

Also, the high availability and fault tolerance that VMware allows have been invaluable. If one of my systems goes down, my VMs will automatically be migrated over, which is a wonderful thing. We're looking to implement as much virtualization as the budget will allow.
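Strictly speaking, what she describes is vSphere HA, which restarts the VMs from a failed host on the surviving hosts (planned live moves are vMotion/DRS). For completeness, a hedged sketch of turning HA on for a cluster, with the cluster object looked up as in the earlier pyVmomi sketches:

```python
# Sketch: enable vSphere HA on a cluster so VMs from a failed host are
# restarted elsewhere automatically. The cluster object would be found
# via a container view, as in the earlier sketches.
from pyVmomi import vim

def enable_ha(cluster):
    spec = vim.cluster.ConfigSpecEx(
        dasConfig=vim.cluster.DasConfigInfo(enabled=True))
    # modify=True merges this change into the existing cluster config.
    return cluster.ReconfigureComputeResource_Task(spec, modify=True)
```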
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.
