Friday, June 22, 2012

Learn how enterprise architects can better relate TOGAF and DoDAF to bring best IT practices to defense contracts

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains.

We're joined by one of the main speakers at the July 16 conference, Chris Armstrong, President of Armstrong Process Group, to examine how governments in particular are using various frameworks to improve their architectural planning and IT implementations.

Armstrong is an internationally recognized thought leader in EA, formal modeling, process improvement, systems and software engineering, requirements management, and iterative and agile development.
He represents the Armstrong Process Group at The Open Group, the Object Management Group (OMG), and the Eclipse Foundation. Armstrong also co-chairs The Open Group Architecture Framework (TOGAF) and Model Driven Architecture (MDA) process modeling efforts, as well as the TOGAF 9 Tool Certification program, all at The Open Group.

At the conference, Armstrong will examine the use of TOGAF 9 to deliver Department of Defense (DoD) Architecture Framework or DoDAF 2 capabilities. And in doing so, we'll discuss how to use TOGAF architecture development methods to drive the development and use of DoDAF 2 architectures for delivering new mission and program capabilities. His presentation will also be live-streamed free from The Open Group Conference. The discussion now is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: TOGAF and DoDAF, where have they been? Where are they going? And why do they need to relate to one another more these days?

Armstrong: TOGAF [forms] a set of essential components for establishing and operating an EA capability within an organization. And it contains three of the four key components of any EA.

First, the method by which EA work is done, including how it touches other life cycles within the organization and how it’s governed and managed. Then, there's a skills framework that talks about the skills and experiences that the individual practitioners must have in order to participate in the EA work. Then, there's a taxonomy framework that describes the semantics and form of the deliverables and the knowledge that the EA function is trying to manage.

One-stop shop

One of the great things that TOGAF has going for it is that, on the one hand, it's designed to be a one-stop shop -- namely, providing everything that an end-user organization might need to establish an EA practice. But it does acknowledge that there are other components, predominantly in the various taxonomies and reference models, that various end-user organizations may want to substitute or augment.

It turns out that TOGAF has a nice synergy with other taxonomies, such as DoDAF, as it provides the backdrop for how to establish the overall EA capability, how to exploit it, and put it into practice to deliver new business capabilities.

Frameworks, such as DoDAF, focus predominantly on the taxonomy -- namely, the kinds of things we're keeping track of, the semantic relationships, and perhaps some formalism on how they're structured. There's a little bit of method guidance within DoDAF, but not a lot. So we see the marriage of the two as a natural synergy.

Gardner: So their complementary natures allow for more particulars on the defense side, while the overall TOGAF looks at the implementation method and skills for how this works best. Is this something new, or are we just learning to do it better?

Armstrong: I think we're seeing the state of the industry advance, with governments, both in the United States and abroad, looking to embrace global industry standards for EA work. Historically, particularly in the US government, a lot of defense agencies and their contractors have been focusing on a minimalistic compliance perspective with respect to DoDAF: in order to get paid or be authorized to do the work, one of the requirements is that they must produce DoDAF deliverables.


People are doing that because they've been commanded to do it. We’re seeing a new level of awareness. There's some synergy with what’s going on in the DoDAF space, particularly as it relates to migrating from DoDAF 1.5 to DoDAF 2.

Agencies need some method and technique guidance on exactly how to come up with the particular viewpoints that are going to be most relevant, and on how to exploit what DoDAF has to offer in a way that advances the business, as opposed to solely being conformant or compliant.

Gardner: Have there been hurdles, perhaps culturally, because of the landscape of these different companies and their inability to have that boundary-less interaction? What's been the hurdle? What's prevented this from being more beneficial at that higher level?

Armstrong: Probably overall organizational and practitioner maturity. There certainly are a lot of very skilled organizations and individuals out there. However, we're trying to get them all aligned with the best practice for establishing an EA capability and then operating it and using it to strategic business advantage -- something that TOGAF defines very nicely and that the DoDAF taxonomy and work products tie into very effectively.

Gardner: Help me understand, Chris. Is the discussion that you'll be delivering on July 16 primarily for TOGAF people to better understand how to implement vis-à-vis DoDAF? Is it the other direction? Or is it a two-way street?

Two-way street

Armstrong: It’s a two-way street. One of the big things that particularly the DoD space has going for it is that there's quite a bit of maturity in the notion of formally specified models, as DoDAF describes them, and the various views that DoDAF includes.

We'd like to think that, because of that maturity, the general TOGAF community can glean a lot of benefit from the experience they've had -- what it takes to capture these architecture descriptions, and some of the finer points about managing those assets. People within the general TOGAF community are always looking for case studies and best practices that demonstrate that what other people are doing is something they can do as well.

We think that the federal agency community also has a lot to glean from this. Again, we're trying to get some convergence on standard methods and techniques, so that they can more easily have resources join their teams and immediately be productive and add value to their projects, because they're all based on a standard EA method and framework.


One of the major changes between DoDAF 1 and DoDAF 2 is the focus on fitness for purpose. In the past, a lot of organizations felt that it was their obligation to describe all the architecture viewpoints that DoDAF suggests, without necessarily taking a step back and asking, "Why would I want to do that?"

So it's trying to make the agencies think more critically about how they can be most agile -- namely, what's the least amount of architecture description we can invest in that has the greatest possible value? Organizations now have the discretion to determine what fitness for purpose is.

Then, there's the whole idea in DoDAF 2 that the architecture is supposed to be capability-driven. That is, you're not just describing architecture because you have some tools that happen to be DoDAF-conformant; there is a new business capability that you're trying to inject into the organization through capability-based transformation, which is going to involve people, process, and tools.

One of the nice things that TOGAF's Architecture Development Method has to offer is a well-defined set of activities and best practices for determining what those capabilities are and how you engage your stakeholders to help collect the requirements for what fit for purpose means.

Gardner: As with the private sector, it seems that everyone needs to move faster. I see you've been working on agile development. With organizations like the OMG and Eclipse, is there something about doing this well -- bringing the best of TOGAF and DoDAF together -- that enables greater agility and speed when it comes to completing a project?
Register for The Open Group Conference
July 16-18 in Washington, D.C.
Different perspectives

Armstrong: Absolutely. When you talk about what agile means to the general community, you may get a lot of different perspectives and a lot of different answers. Ultimately, we at APG feel that agility is fundamentally about how well your organization responds to change.

If you take a step back, that's really what we think is the fundamental litmus test of the goodness of an architecture. Whether it's an EA, a segment architecture, or a system architecture, the architects need to think carefully about what things are almost certainly going to happen in the near future, anticipate them, and work them into the architecture in such a way that when those changes occur, the architecture can respond in a timely, relevant fashion.

A lot of people think that agile is just a pseudonym for not planning and not making commitments, going around in circles forever. We call that chaos, another five-letter word. But agile, in our experience, really demands rigor and discipline.

Of course, the culture of the DoD brings that rigor and discipline to it, as does the experience that community has had, in particular, with formally modeling architecture descriptions. That sets up those government agencies to act with agility much more readily than others.

Gardner: Do you know of anyone who has done this successfully, or is in the process? Even if you can't name them, perhaps you can describe how something like this works?

Armstrong: First, there has been some great work done by the MITRE organization through its collaboration with The Open Group. They've written a white paper that talks about which DoDAF deliverables are likely to be useful in specific Architecture Development Method activities. We're going to be using that as a foundation for the talk we'll be giving at the conference in July.

The biggest thing that TOGAF has to offer is guidance for a nascent organization that's jumping into the DoDAF space and may just look at it from an initial compliance perspective, saying, "We have to create an AV-1, an OV-1, and a SvcV-5," and so on.
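To make that contrast concrete, here is a toy sketch, in Python and purely illustrative, of driving viewpoint selection from the ADM phase at hand rather than producing every DoDAF artifact by default. The phase-to-viewpoint groupings below are our own loose reading, not the MITRE white paper's actual mapping:

```python
# Illustrative only: a hypothetical mapping of TOGAF ADM phases to the
# DoDAF 2 viewpoints most likely to inform them. The groupings are
# assumptions for the sketch; consult the MITRE/Open Group white paper
# for real guidance.
ADM_TO_DODAF = {
    "Phase A: Architecture Vision": ["AV-1", "OV-1", "CV-1"],
    "Phase B: Business Architecture": ["OV-2", "OV-5a", "OV-5b", "CV-2"],
    "Phase C: Information Systems Architectures": ["DIV-2", "SvcV-1", "SvcV-5"],
    "Phase D: Technology Architecture": ["SV-1", "SV-2", "StdV-1"],
}

def views_for(phase: str) -> list[str]:
    """Return only the viewpoints worth producing for a given ADM phase,
    instead of generating every DoDAF artifact for compliance's sake."""
    return ADM_TO_DODAF.get(phase, [])

if __name__ == "__main__":
    for phase, views in ADM_TO_DODAF.items():
        print(f"{phase}: {', '.join(views)}")
```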

Providing guidance

TOGAF will provide the guidance for what EA is. Why should I care? What kind of people do I need within my organization? What kind of skills do they need? What kind of professional certification might be appropriate to get all of the participants on the same page, so that when we're talking about EA, we're all using the same language?

TOGAF also, of course, has a great emphasis on architecture governance and suggests that immediately, when you’re first propping up your EA capability, you need to put into your plan how you're going to operate and maintain these architectural assets, once they’ve been produced, so that you can exploit them in some reuse strategy moving forward.

So, the Preliminary Phase of the TOGAF Architecture Development Method provides those agencies with best practices on how to get going with EA, including exactly how an organization is going to exploit what the DoDAF taxonomy framework has to offer.

Then, once an organization or a contractor is charged with doing some DoDAF work, because of a new program or a new capability, they would immediately begin executing Phase A: Architecture Vision, and follow the best practices that TOGAF has to offer.

Just what is that capability that we’re trying to describe? Who are the key stakeholders, and what are their concerns? What are their business objectives and requirements? What constraints are we going to be placed under?


Part of that is to create a high-level description of the current, or baseline, architecture and then the future target state, so that all parties have at least a coarse-grained idea of where we are right now and what our vision is of where we want to be.

Because this is really a high-level requirements and scoping set of activities, we expect that it's going to be somewhat ambiguous. As the project unfolds, they're going to discover details that may cause some adjustment to that final target.

Internalize best practices

So, we're seeing defense contractors internalize some of these best practices and be prepared for the future, so that they can win the greatest amount of business, respond as rapidly and appropriately as possible, and exploit these best practices to effect greater business transformation across their enterprises.

Gardner: We mentioned that your discussion of these issues on July 16 will be live-streamed for free, but you're also doing some pre-conference and post-conference activities -- webinars and other things. Tell us how this is all coming together and, for those who are interested, how they can take advantage of all of this.

Armstrong: We're certainly very privileged that The Open Group has offered us this opportunity to share this content with the community. On Monday, June 25, we'll be delivering a webinar that focuses on architecture change management in the DoDAF space, particularly how an organization migrates from DoDAF 1 to DoDAF 2.


I'll be joined by a couple of other people from APG: David Rice, one of our Principal Enterprise Architects and a member of the DoDAF 2 Working Group, and J.D. Baker, Co-chair of the OMG's Analysis and Design Task Force and a member of the working group for the Unified Profile for DoDAF and MODAF (UPDM), a specification from the OMG.

We'll be talking about things that organizations need to think about as they migrate from DoDAF 1 to DoDAF 2. We'll be focusing on some of the key points of the DoDAF 2 meta-model, namely the rearrangement of the architecture viewpoints and the architecture partitions and how that maps from the classical DoDAF 1.5 viewpoints, as well as on this notion of capability-driven architectures and fitness for purpose.

We also have the great privilege of delivering a follow-up webinar after the conference on implementation methods and techniques around advanced DoDAF architectures. In particular, we're going to take a closer look at something some people may be interested in, namely tool interoperability and how the DoDAF meta-model supports it through what's called the Physical Exchange Specification (PES).
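As a taste of what PES-style interoperability looks like in practice, here is a minimal sketch of reading an architecture export with standard Python tooling. The element and attribute names are invented for illustration -- the real PES serialization is defined by the DM2 XSD published with DoDAF 2:

```python
# A minimal sketch of reading a DoDAF 2 Physical Exchange Specification
# (PES) style XML export for tool interoperability. The tag/attribute
# layout below is an assumption for illustration, not the actual PES
# serialization defined by the DM2 XSD.
import xml.etree.ElementTree as ET

def load_performers(path: str) -> list[str]:
    """Collect the names of Performer elements from a hypothetical PES file.
    ("Performer" is a real DM2 concept; this file layout is not.)"""
    tree = ET.parse(path)
    root = tree.getroot()
    return [el.get("name", "") for el in root.iter("Performer")]

if __name__ == "__main__":
    for name in load_performers("architecture_export.xml"):
        print(name)
```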

We’ll be taking a look a little bit more closely at this UPDM thing I just mentioned, focusing on how we can use formal modeling languages based on OMG standards, such as UML, SysML, BPMN, and SoaML, to do very formal architectural modeling.

One of the big challenges with EA is that, at the end of the day, EA comes up with a set of policies, principles, assets, and best practices that talk about how the organization needs to operate and realize new solutions within that new framework. If EA doesn't have a hand-off to the delivery method, namely systems engineering and solution delivery, then none of this architecture work makes a bit of difference.

Driving the realization


We're going to be talking a little bit about how DoDAF-based architecture descriptions and TOGAF would drive the realization of those capabilities through traditional systems engineering and software development methods.
Register for The Open Group Conference
July 16-18 in Washington, D.C.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.


Wednesday, June 20, 2012

HP provides tools and services to help SMBs survive and thrive in a mobile environment

HP today announced technology solutions and services, as well as financing and training programs that enable resource-challenged small and medium businesses (SMBs) to simplify IT, while enhancing collaboration in an increasingly mobile world.

As mobile devices, as well as bring-your-own-device (BYOD) initiatives, continue to grow, organizations of all sizes are challenged by the need to access, manage and secure mobile devices and the data generated by them. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

According to Gartner research, by 2016, at least 50 percent of business email users will rely primarily on a tablet or mobile client instead of a traditional desktop. The trend to use devices to access email and other business data requires SMBs to prepare their infrastructures to support increased mobility.

"We've spent a lot of time working with our channel partners," said Lisa Wolfe, Worldwide Small and Medium Business lead for HP. "They're facing a fairly new set of IT challenges. Companies have a growing mobile workforce. These new solutions align to a bring-your-own-device world and are designed to help SMBs provide infrastructure to support a growing workforce."

New offerings

New HP solutions and services include:
Collaboration tools

Also among the new offerings is HP Unified Communications & Collaboration (UC&C) Solutions with Microsoft Lync, an integrated hardware and software solution that enables SMBs to securely video conference, share information on desktops, and collaborate to improve productivity. The comprehensive UC&C solution includes Microsoft Lync software, Voice over Internet Protocol (VoIP) phones, networking and ProLiant Gen8 servers, storage, and services.

HP has also announced new hardware offerings to facilitate building new wired and wireless networks. These include:
Availability

The HP Client Virtualization SMB Reference Architecture for Microsoft VDI is scheduled to be available in September. All other solutions and services are available now through HP and worldwide channel partners.

To that end, HP is providing SMBs and channel partners with a broad range of programs to drive growth, create new revenue streams, and ensure collaboration. These include:
Additional information about HP’s new SMB offerings is available at http://www.hp.com/go/whatsnewforsmb.


Virtustream delivers cloud software to run private, public and hybrid enterprise-class clouds

Virtustream this week announced xStream 2.0, a cloud solution designed to provide secure, high-performance, enterprise-class cloud infrastructure services across private, virtual private, public, and hybrid implementations.

Available as software, as a stand-alone appliance, and as a managed service, xStream helps foster better management of mission-critical applications on clouds, the venture-backed, three-year-old company said. Those deploying may select a tailored mix of on-site private cloud combined with off-site public and virtual private clouds.

Now in beta and becoming generally available in August, xStream 2.0:
  • Allows enterprises to control their IT infrastructure as a single cloud, combining on-site private clouds, offsite virtual private/public clouds, and managed cloud services -- providing a tailored hybrid cloud to suit each enterprise’s requirements;

  • Provides multi-layered logical and physical cloud security and in-depth threat monitoring, and includes silicon-level authentication with Intel TXT, to meet security and compliance standards;
  • Uses patented Virtustream µVM (Micro VM) technology to optimally and dynamically combine compute, memory, network, and storage to deliver application performance assured by commercial service-level agreements (SLAs), with cloud efficiency; and,
  • Allows consumption-based pricing/chargeback, so enterprises pay only for the resources they actually use, metered in five-minute increments (a toy sketch of that arithmetic follows this list).
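Here is that toy sketch of five-minute metered chargeback. The rates and usage figures are hypothetical, not Virtustream's pricing, and the sketch ignores the µVM resource bundling xStream actually uses; the point is only that the bill tracks actual consumption interval by interval:

```python
# A toy illustration of consumption-based, five-minute metered chargeback.
# All rates and usage samples are assumptions, not Virtustream's pricing.
FIVE_MIN_RATE_PER_VCPU = 0.0008  # dollars per vCPU per 5-minute interval (assumed)
FIVE_MIN_RATE_PER_GB = 0.0002    # dollars per GB of RAM per 5-minute interval (assumed)

def interval_charge(vcpus_in_use: int, ram_gb_in_use: float) -> float:
    """Charge for one five-minute interval, based only on resources in use."""
    return vcpus_in_use * FIVE_MIN_RATE_PER_VCPU + ram_gb_in_use * FIVE_MIN_RATE_PER_GB

# One hour = 12 five-minute intervals; usage can differ in each one.
hourly_usage = [(4, 16.0)] * 6 + [(2, 8.0)] * 6  # workload scales down mid-hour
bill = sum(interval_charge(vcpus, ram) for vcpus, ram in hourly_usage)
print(f"Hour's charge: ${bill:.4f}")  # pay only for what was actually used
```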
Additionally, xStream provides the versatility to allow enterprises to run both mission-critical legacy and web-scale applications in cloud configurations, gaining cloud benefits without rewriting existing applications. µVM technology delivers multi-tenant virtualization benefits, said Simon Aspinall, CMO at Virtustream. And xStream runs as a macro hypervisor/cloud director that supports leading virtualization hypervisors and most major hardware.

Mixing applications

“Enterprises run extremely complex IT environments that mix many legacy and web-scale applications. Until now, there have been no cloud solutions that gave customers the confidence to move both legacy and web-scale applications to the cloud,” said Rodney Rogers, Chairman and CEO of Virtustream. “Virtustream’s xStream software fills that gap by providing them with an enterprise-class cloud solution -- for private clouds, virtual private/public clouds, and a combination of both in a hybrid model.”

xStream is available in three editions:
  • Private Cloud -- allows enterprises to run private clouds in existing data centers
  • Public Cloud -- for service providers to offer enterprise cloud services to their customers
  • Virtual Private Cloud -- a full set of managed cloud services for enterprises from Virtustream’s cloud
Virtustream recently closed a funding round with Columbia Capital, Intel Capital, Noro‐Moseley Partners, QuestMark Partners, TDF, and Top Tier Capital Partners (TTCP), bringing total equity raised to $75 million.

While xStream is initially targeting enterprises and governments, I think the solution makes a lot of sense, too, for small- to medium-sized businesses that want to get out of the IT infrastructure business and need a flexible way to do so across a variety of cloud models. That means it makes sense for migration activities. Being able to mix and match hypervisors also helps with moving to more than one cloud provider or platform.

I can also see where enterprises seeking a cloud support model for their big data architectures would do well to evaluate xStream as a way to move to the cloud in increments, leaving open the later choices on where to deploy. What's more, xStream appears well-suited to the auto-scale, massive-network, and massive-storage demands of big data uses. Big data as a service, anyone?


Tuesday, June 19, 2012

Microsoft Surface makes us all winners, even as it loses

Where goes Apple, there follows Microsoft. Where goes Microsoft, there follows the enterprise.

And so, the inevitability of mobile work and resulting higher productivity across almost all that IT enables is now assured, thanks to Microsoft's unveiling yesterday of its Surface family of mobility PCs.

It's not that workers haven't wanted mobility -- as defined in the past few years by smartphones and tablets, best represented by iOS and Android. It's just that IT departments and planners didn't really know how to give it to them.

Now, with Surface and the Windows PC-tablet hybrid it defines, Microsoft is showing a way to enterprise mobility, albeit via a perilous path for its historic partners and channels. But Microsoft has bolted from its own ecosystem before and still thrived.

Give up control

What's different this time is that Microsoft will need to give up much more control over its users and its ecosystem in order to make its late-to-the-game Windows mobility plan work. And that means the Surface plan will by no means replicate the old Windows Everywhere business model.

In effect, to be successful against Apple and the Android ecosystems, Microsoft must walk away from how its own very definition of success was once measured. At best, Microsoft will go head to head in a three-way tied race over a long slog. And that does not allow for the margins or lock-in it has enjoyed in the past -- at any level.

The more that Windows locks in across Windows-only devices, the more value the other platforms demonstrate for doing dynamic and services-based, extra-enterprise business -- even if all other things are equal. To win, Microsoft must give up its long-cherished assets of control -- which means it loses.

To clamp down and force a Windows-only enterprise means that those shops suffer compared to ones that enjoy more open mobility, broader ecosystems, and agile cloud-services vibrancy.

Low chance of lock in

This is great news for enterprises. Surface gives them a path from their legacy Windows PCs, applications, and data to progress to mobility, but with low chance of being locked in again, or of losing their past Windows investments. They can have their old Windows cake, and their new cloud-driven mobility marketplace productivity -- and the choice of new services galore, a blooming universe of available native apps, and interoperability across nearly all their HTML 5-empowered web and software-as-a-service (SaaS) services.

Because Microsoft has not won a cloud advantage either, it lacks a critical mass of applications to force a mobile platform lock-in with Surface. There's just no way to cut off the oxygen of the cloud. That means Surface is just another mobile choice, not THE mobile choice for enterprises, and that makes all the difference.

More likely -- and a reverse of its role in the lead-up to GUI PCs -- Microsoft will soften up the enterprise for mobility in general, and make it easier for its competitors to do far better there than they would without Surface in the game. Surface also forces total client strategy choices that may well lead to more mobile and less PC, which at this point does not favor Redmond.

This is all a huge boon, and it shouldn't be underestimated. There is so much opportunity to improve how business is done and how people work when mobility is part of the full mix.

Absolutely huge


Enterprise architects, business analysts, and IT innovators around the world should now feel confident that they can design their processes and innovate and transform businesses based on the knowledge that nearly all apps and all data can reach all people at all times. This is absolutely huge.

With Surface, Microsoft has pushed the enterprise from the era of limited client vehicles to the era of processes borne on any transport, of untying work from a client form factor. Finally.

Microsoft will try to keep this a Windows Everywhere world, but that won't hold up. What makes mobility powerful is the escape from the platform, device, and app shackles. Once information flow, process flow, and agility are the paramount goals, those shackles can no longer bind.

Mobility requires the information flow to move across all boundaries. Windows lock-in can't meet the requirements of mobility, and the mobility competitors will always stay one step more interoperable -- and therefore advantageous -- than a Windows-only solution.


Monday, June 18, 2012

Le Moyne College accelerates IT innovation with help from VMware View VDI solution provider SMP

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect end-user case study homes in on how higher education technology innovator Le Moyne College in upstate New York successfully embraced several levels of virtualization as a springboard to broad client-tier virtualization benefits.

Le Moyne worked with technology solutions provider Systems Management Planning Inc. to make the journey in a structured, predictive fashion to deep server virtualization -- and then on to virtual desktop infrastructure (VDI).

The combined path to smooth VDI implementations at the server and desktop levels for Le Moyne came from teaming with a seasoned technology partner so the college community could quickly gain IT productivity payoffs via VDI, even amid the demanding environment and high expectations of an active higher education campus.

To learn more, BriefingsDirect assembled Shaun Black, IT Director at Le Moyne College in Syracuse, New York, and Dean Miller, Account Manager at Systems Management Planning, or SMP, based in Rochester, New York. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Doing IT at a college comes with its own unique challenges. Why did you choose to go to VDI quickly?

Black: It all started for us back in the early 2000s, and was motivated by our management information systems program, our computer science-related programs, and their need for very specialized software.

A lot of that started with using movable hard drives in very specific computing labs. As we progressed with those programs and their needs continued to evolve, we found that the solutions we had weren't flexible enough. They needed more and different servers for very specific needs.

There is tremendous diversity in the college and university environment. Our ability to be responsive as an IT organization is incredibly crucial, given the range of different clients, constituents, and stakeholders that we have. These include our students, faculty, administrators, fundraisers, and the like. There's a wide variety of needs that they have, not to mention the higher education expectations in a very much open environment.

Le Moyne is a private, Catholic, Jesuit institution located in Syracuse, New York. We have about 500 employees and we educate roughly 4,000 students on an annual basis. We're the second youngest of 28 Jesuit colleges and universities nationally. Some of our better-known peers are Boston College, Gonzaga, and Georgetown, but we like to think that we're on an equal footing with our older and more esteemed colleagues.

We've been leveraging virtualization technology for a number of years now, going back to VMware Desktop, VMware Player, and the like. Then, in 2007, we embraced ESX virtual server technology and, more recently, VMware VDI, to help us meet those flexibility needs and to make sure that the staff we have are well-aligned with the expectations of the college.

From an IT workforce perspective, we were having the same problem most organizations have. We were spending a tremendous amount of time keeping existing systems working. We were finding that we weren't able to be as responsive to the academic environments, and to some degree, were potentially becoming an impediment in moving forward the success of the organization.

We started experimenting with it initially within a few classrooms and then realized that this is a great technology.



Virtualization was a technology that was out there. How could we apply this to our server infrastructure, where we were spending close to six months a year having one of our people swapping out servers?

We saw tremendous benefits from that: increased flexibility and an increased ability for our staff to support the academic mission. Then, as we started looking over the last couple of years, we saw similar demands on the desktop side, with requirements for new software and discussions of new academic programs. We recognized that VDI technology was out there and was another opportunity for us to embrace the technology to help propel us forward.

Gardner: Tell me about how Systems Management Planning, or SMP, came into play and the relationship between you.

Black: Our relationship with SMP and the staff there has been critical from back in 2006-2007, when we began adopting server virtualization. With a new technology, you try to bring in a new environment. There are learning and assimilation curves. To get the value out of that, to get the bang for the buck as quickly as possible, we wanted to identify a partner to help us accelerate into leveraging that technology.

They helped us in 2007 in getting our environment up, which was originally intended to be an 18-month transition of server virtualization. After they helped us get the first few servers converted within a couple weeks, we converted the rest of our environment within about a two-month period, and we saw tremendous benefits in server virtualization.

Front of the list

When we started looking at VDI, we had discussions with a number of different partners. SMP was always at the front of our list. When we got to them, they just reinforced why they were the right organization to move forward with.

They had a complete understanding of the impact of desktop virtualization and how it affects the entire infrastructure of an environment -- not just the desktop itself, but the server, storage, and network infrastructure.

They were the only organization we talked to that, from the start, began with that kind of discussion -- what the implications are from a technology perspective, but also why you want to do this from a business perspective, and particularly an education perspective.

They are already working with a number of different higher education institutions in the New York region. So they understood education. It's just a perfect partnership, and again, they brought very experienced people to help us through the process of assimilating and getting this technology implemented as quickly as possible and putting it to good use.

Gardner: How typical is Le Moyne's experience? Is this the usual path that you see in the market?

Miller: It is, and we like to see that path, because you don't want to disappoint your users with the virtual desktops. They just want to do their job and they don't want to be hung up with something that's slow. You want to make sure that you roll out your virtual desktops well, and you need the infrastructure behind that to support that.

So, yes, they started with a proof of concept, which was a limited installation, really just within the IT department, to get their own IT people up to speed, experimenting with ThinApp and ThinApping applications. That went well. The next step was the pilot, a limited rollout with some of the more savvy users. That went pretty well too, and then we went for a complete implementation.

It's fairly typical, and it was a pleasure working with this team. They recognized the value of VDI and they made it happen.

Focus on data center

At Systems Management Planning, we're a women-owned company headquartered in Rochester, New York, founded in 1997. Our focus is in the data center: server and desktop virtualization, storage virtualization, and networking.

It's a technical organization. In fact, we have more engineers than salespeople on staff, which in my experience is pretty unusual. And we have more technical certifications than any partner in upstate or western New York that I know of.

Our expertise in VMware and its complementary technologies has allowed us to grow at a rate of about 30 percent year over year. We have offices in Rochester, Albany, and Orlando, Florida, and we use virtual desktops throughout our organization. This gives us the ability to spin up desktops for remote offices quickly. You could say we practice what we preach.

VMware has recognized SMP as a premier partner. We're also on the VMware technical advisory board and we're really proud of that fact. We work closely with VMware, and they bounce a lot of ideas and things off our engineering team. So, in a nutshell, that’s SMP.

We are still in the process of rolling this out and we will be for another 12 months.



Gardner: If you're going to do VDI, you’ve got to do it right. What did you do to make sure that that initial rollout was successful?

Black: We've been very methodical about going through an initial proof of concept, evaluating the technology, and working with SMP. They've been great at informing us of what some of the challenges might be and at architecting the underlying infrastructure -- the servers and the network.

Again, this is an area where SMP has informed us of the kinds of challenges that people have in virtual desktop environments, and how to build an environment that’s going to minimize the risk of the challenges, not the least of which are bandwidth and storage.

Methodical fashion

Then, we're being very deliberate about how we roll this out, and to whom, specifically so that we can try to catch some of these issues in a very methodical fashion and adjust what we're doing.

We specifically built the environment with excess capacity of roughly a third to support business growth, as well as variations in utilization and unexpected needs. You do everything you can in IT to anticipate what your customers are going to be doing, but we all know that on a day-to-day basis things change, and those changes can have pretty dramatic consequences.
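As a rough illustration of that sizing rule, the arithmetic looks something like the sketch below. The numbers are hypothetical, not Le Moyne's actual figures:

```python
# A minimal sketch of the capacity-planning arithmetic described above:
# provision roughly a third of headroom over expected peak demand.
# All numbers are assumptions for illustration.
import math

HEADROOM = 1.0 / 3.0  # extra capacity for growth and day-to-day variation

def provisioned_capacity(expected_peak_desktops: int) -> int:
    """Size the desktop pool for expected peak plus ~33% headroom."""
    return math.ceil(expected_peak_desktops * (1 + HEADROOM))

print(provisioned_capacity(150))  # e.g., 150 production users -> 200 desktops
```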

With regard to the members of the pilot team, I’ll give a lot of kudos and hats-off to them, because they suffered through a lot of the learning curve with us in figuring out what some of these challenges are. But that really helped us, as we got to what we consider the second phase of the pilot this past fall. We were actually using a production environment with a couple of our academic programs in a couple of classrooms. Then we began to go into full production in the spring with our first 150 production users.

Gardner: What VMware products are you using? Are you up to vSphere 5 and View 5 -- the latest products?

Black: View 5.1 has recently been released, but vSphere, ThinApp, and View 5 were the latest and greatest, with the latest service patches and all, when we initially implemented our infrastructure in December.

Gardner: What other metrics do you use to decide that this is a successful ongoing effort?

Black: There are a couple of different ways that we like to measure. I like to think of it as both dollars and delight. From a server virtualization perspective, there's a dollar amount. We extended the lifecycle of our servers from a three-year cycle to five years, so we get some operational as well as some capital cost savings out of that extension.
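As a back-of-the-envelope illustration of that saving, stretching the refresh cycle lowers the annualized capital outlay. The dollar figures below are hypothetical; only the three-to-five-year change comes from the interview:

```python
# A back-of-the-envelope sketch of the capital savings from stretching a
# server refresh cycle from three years to five. The fleet cost is an
# assumption, not Le Moyne's actual spend.
FLEET_COST = 100_000.0  # assumed cost to refresh the server fleet once

def annualized_refresh_cost(cycle_years: int, fleet_cost: float = FLEET_COST) -> float:
    """Average capital outlay per year for a given refresh cycle."""
    return fleet_cost / cycle_years

before = annualized_refresh_cost(3)  # ~$33,333/yr on a three-year cycle
after = annualized_refresh_cost(5)   # $20,000/yr on a five-year cycle
print(f"Annual capital savings: ${before - after:,.0f}")  # ~$13,333
```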

Most significantly, having gone to virtual technology on the servers, one motivator for us on the desktop was what our people were spending their time doing. It's an opportunity-cost question, and that, first and foremost, is the fundamental measure I'm using.

Internally, we're constantly looking at how much of our time are we spending on what we call "keep the lights on" activity, just the operations of keeping things running, versus how much time we're investing on strategic projects.

Free up resources

Second to that, we ask whether strategic projects are being slowed down because IT hasn't been able to resource them properly. Virtualization has certainly allowed us to free up resources and reallocate them to the things the college deems more important, rather than the standard operational activities.

Then there's the overall stability and functionality of the environment, which is what I think of as the delight factor: the number of issues and types of outages we've had since adopting virtualization technology, particularly on the server front. It has dramatically reduced the pain points, even from hardware failures, which are bound to happen. So, generally, it has increased our community's overall satisfaction with the technology.

So we're expecting that's going to contribute to overall satisfaction on the part of our students, as well as our faculty and administrators, in having the tools and databases they need to do their jobs and being able to take advantage of them.

We're also expecting, as a result of that, that we're going to be able to be much more responsive to the new requests that we have.



Miller: Le Moyne College, specifically Shaun Black and his team, saw the value in virtualizing their desktops. They understood the savings in hardware cost, the energy cost, the administrative time, and benefits from their remote users. I think they got some very positive feedback from some of the remote users about View. They had a vision for the future of desktop computers, and they made it happen.

Gardner: In looking to the future, Shaun, is this setting you up for perhaps more ease in moving toward a variety of client endpoints? I'm thinking mobile devices.

Laying the foundation

Black: It lays the foundation for our ability to do that. That was certainly in our thinking in moving to virtual desktops, but it wasn't what we regarded as a primary motivator. The primary motivator was how to do better what we'd previously done, and that's what we built the financial model on. We see that as kind of an incremental benefit, and there may be some additional costs that come with it that have to be factored in.

But we recognize that our students, faculty, and everyone else want to be able to use their own technology and, rather than having us issue it to them, to be able to access the various software and tools more effectively and more efficiently.

It even opens up opportunities for new ways of offering our academic courses and the like. Whether it's distance learning or students working from home, those are things on our shortlist and our radar as opportunities we can take advantage of because of the technology.

Another area that was in our thinking was disaster recovery (DR) strategy. The idea, particularly for our mobile workers who have laptops, is to keep the data here on campus instead of having them take it with them. We'll still provide them with the ability to readily access it and be just as effective and efficient as they currently are, but we keep the data within the confines of the campus community and can make sure it's backed up on a routine basis.

The security controls, better integration of View with our Windows server environment, and our authentication systems are all benefits that we certainly perceive as part of this initiative. It's not just a control perspective, but it's also being able to offer more flexibility to people, striking that balance better.

Miller: We’re seeing that in higher education as well as in Fortune 500s, even small and medium businesses (SMBs), the security factor of keeping all the data behind the firewall and in the data center, rather than on the notebooks out in the field, is a huge selling point for VDI and View specifically.

Gardner: Shaun, if you were to do this over again, or you wanted to provide some insights to somebody just beginning their virtualization journey, are there any thoughts, any 20/20 hindsight conclusions, that you would share with them?

Black: For an organization of our size, a medium business, I'd say to anybody: look very hard at this, and look at doing it sooner rather than later. Obviously, every institution has its own specific situation, and there are upfront capital costs that have to be considered in moving forward. But if you want to do it right, you have to make some of the capital investment to make that happen.

Sooner rather than later


But, for anybody, sooner rather than later. Based on the data we've seen from VMware, we were in the front five percent of adopters. With VDI, I think we're somewhere in maybe the front 15 percent or so.

So, we're a little behind where I’d like to be, but I think we’re really at the point where mainstream adoption is really picking up. Anyone who isn’t looking at this technology at this point is likely to find themselves at a competitive disadvantage by not realizing the efficiency that this technology can bring.

For us, it really gets down to, as I said earlier, opportunity cost and strategic alignment. If your IT staff are not focused on helping your organization move forward, but just on keeping the existing equipment running, you're not contributing maximally to moving your organization forward.

So to the extent that you can re-allocate those resources toward strategic type initiatives by getting them off of things that can be done differently and therefore done more effectively, any organization welcomes that.

I've told many individuals on the campus, including my vice president, that I expect this to very likely be the last time that Le Moyne is required to make this kind of investment in capital infrastructure. The next time, in five years or whatever, the market will be matured enough that we could go to a desktop-as-a-service type environment and have the same level of flexibility and control.

So we can really focus on the end services that we're trying to provide -- the applications and their implications for the academics -- as opposed to the underlying technology. We'd let an organization that has the time and the focus on the technology maintain that underlying infrastructure and take advantage of its competencies, allowing us to focus on our core business.

We're hoping that there's an evolution. Right now, we're talking with various organizations about burst capacity and DR-type capabilities, and also about our longer-term desire to outsource. Even if some of the equipment is hosted here, ultimately we'd get most of the technology and underlying infrastructure into somebody else's hands.

Insight question

Gardner: Dean, what are some good concepts to keep in mind as you're beginning?

Miller: We began talking about virtual desktops maybe two-and-a-half or three years ago. We started training on it, but it really didn't take off until the last year-and-a-half. Now, we're seeing tremendous interest in it.

Initially, people were looking at savings for a hardware cost and administrative cost. A big driver today is bring your own device (BYOD). People are expecting to use their iPad, their tablet, or even their phone, and it's up to the IT department to deliver these applications to all these various devices. That’s been a huge driver for View and it's going to drive the View and virtual desktop market for quite a while.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.
