Monday, March 25, 2013

Indiana health care provider goes fully virtualized, gains head start on BYOD and DR benefits

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

This BriefingsDirect IT leadership interview focuses on how Associated Surgeons and Physicians, LLC in Indiana went from zero to 100 percent virtualized infrastructure, and how many compliance and efficiency goals have been met and exceeded as a result.

In part one of a two-part sponsored interview series, we discuss how a mid-market health services provider rapidly adopted server and client virtualization, and how that quickly led to the ability to move to mobile, bring your own device (BYOD), and ultimately advanced disaster recovery (DR) benefits.

Associated Surgeons and Physicians found the right prescription for allowing users to designate and benefit from their own device choices, while also gaining an ability to better manage sensitive data and to create a data-protection lifecycle approach.

Here to share his story on how they did it, we welcome, Ray Todich, Systems Administrator at Associated Surgeons and Physicians. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: When I go to the physician's office, I see how they've gotten so efficient at moving patients in and out; the scheduling is amazing. Every minute is accounted for. Downtime is just very detrimental and backs up everything. This critical notion of time management is so paramount.

Todich: Oh, it’s absolutely massive. If we have a snag somewhere, or even if our systems are running slow, then everything else runs slow. The ability that virtualization gives us is the core or heart of the entire infrastructure of the business. Without an efficient heart, blood doesn’t move, and we have a bigger problem on our hands.

Gardner: So over the past 10 or 15 years, as you pointed out, technology has just become so much more important to how a health provider operates, how they communicate to the rest of the world in terms of supplies, as well as insurance companies and payers, and so forth. Tell me a little bit about Associated Surgeons and Physicians. How big is the organization, what do you do, how have they been growing?

Todich: Pretty rapidly. Associated Surgeons and Physicians is a group of multi-specialty physicians and practices in Northeast Indiana and Northwest Ohio.

It began at the practice level, and then it really expanded. We're up to, I think, 14 additional locations and/or practices that have joined. We're also using an electronic medical record (EMR) application from Greenway, and that's a big one.

We're growing exponentially. It went from one or two satellite practices that needed to piggyback Greenway, to probably 13 or 14 of them, and this is only the beginning. With that type of growth rate, you have to concern yourself with the amount of money it costs to serve everybody. If you have one physical server that goes out, you affect hundreds of users and thousands of patients, doctors, and whatnot. It’s a big problem, and that’s where virtualization came in strong.

Gardner: How about this in terms of the size of the organization? How many seats are you accommodating in terms of client, and then what is it about an IT approach to an organization such as yours that also makes virtualization a good fit?

Todich: Right now, we have somewhere around 300 employees. As far as how many clients this overall organization has, it’s thousands. We have lots of people who utilize the organization. The reality is that the IT staff here is used in a minimalist approach, which is one thing that I saw as well when I was coming into this.

One or even two persons to manage that many servers can be a nightmare, and on top of that, you try to do your best to help all the users. If you have 300-plus people and their desktops, printers, and so forth, the overall infrastructure can be pretty intimidating when you don't have a lot of people managing it.

Going virtual was a lifesaver. Everything is virtualized. You have a handful of physical ESX hosts managing everything, and everything is stored on centralized storage. Virtualization makes an IT administrator considerably more efficient.

The right answer

That's actually how we went into the adoption of VMware View, because of our 300-plus users and 300-plus desktops. At that point, it can be very hairy. At times, you have to try and divine what the right answer is. You have this important scenario going on, and this one, and another one, and you have to manage them all. It becomes easier when you virtualize everything, because you can get to everything very easily and cover everyone's desktops.

Gardner: What attracted you, at the beginning, to much higher total levels of server -- and then client -- virtualization?

Todich: When I first started here, the company was entirely physical. And as background, I came from a couple of companies that utilized virtualization at very high levels. So I'm very aware of the benefits for administration and for overall redundancy -- the software and hardware used to allow high performance, high availability, and access to people's data, while still allowing security to be put in place.

When I came in, it looked like something you might have seen maybe 15 years ago. There were a lot of older technologies in place. The company had a lot of external drives hanging off the servers for backups, and so on.

My first thing to implement was server virtualization, which at the time was the vSphere 4.1 package. I explained to them what it meant to have centralized storage, what it meant to have ESX hosts, and how creating virtual machines (VMs) would benefit them considerably over having physical servers in the infrastructure.

I gave them an idea of how nice it is to have redundancy configured correctly, which is very important. When hardware drops out, a RAID configuration goes south, or an entire server goes out, you've just lost an entire application -- or applications -- which in turn means downtime.

I helped them to see the benefits of going virtualized, and at that time, it was solely for the servers.

Gardner: How long did it take you to go from being 100 percent physical to where you are now, basically 100 percent virtual?

Todich: We've been going at it for about a year-and-a-half. We had to build the infrastructure itself, and we had to migrate all our applications from physical to virtual (P2V). VMware does a wonderful job with its options for P2V. It's a time saver as well. For anybody who has to build the house itself, it can really be a help.

VMware, in itself, has the ability to reach out as far and wide as you want it to. It’s really up to the people who are building it. It was very rapid, and it’s so much quicker to build servers or desktops, once you get your infrastructure in place.

In the previous process of buying a server, you have to get it quoted out and make sure everything is good, do all the front-end sales stuff, and then wait for the hardware to arrive. Once it's here, you have to make sure it's all here, and then you have to put it all together and configure everything, and so forth. Any administrator out there who's done this understands exactly what that's all about.

Then you have to configure and get it going, versus, "Oh, you need another server, here, right click, deploy from template," and within 10 minutes you have a new server. That, all by itself, is priceless.
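The contrast above can be sketched as a toy model. This is not a real VMware API -- the names and fields below are purely illustrative -- but it shows why cloning from a fully configured "golden" template collapses weeks of procurement and setup into a single step.

```python
import copy

# A "golden" template: a server image that is already patched and
# configured (illustrative fields, not a real VMware object).
TEMPLATE = {
    "os": "Windows Server 2008 R2",
    "patches": ["security-rollup-1", "security-rollup-2"],
    "agents": ["antivirus", "monitoring"],
}

def deploy_from_template(hostname, template=TEMPLATE):
    """Clone a new server from the template in one step ("right click,
    deploy from template"), instead of quoting, shipping, racking, and
    configuring physical hardware."""
    vm = copy.deepcopy(template)   # every clone starts fully configured
    vm["hostname"] = hostname
    return vm

new_server = deploy_from_template("emr-app-02")
```

Every clone inherits the template's patches and agents, which is also why maintaining a single golden image can keep a whole fleet of servers or desktops consistent.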

Technology more important

Gardner: And you have a double whammy here, because you're a mid-market size company and don’t have a large, diversified IT staff to draw on. At the same time, you have branch offices and satellites, so you're distributed. To have people physically go to these places is just not practical. What is it about the distributed nature of your company that also makes virtualization and View 5.1 a good approach for a lean IT organization?

Todich: It helped us quite a bit, first and foremost, with the ability to give somebody a desktop, even if they were not physically connected to our network. That takes place a lot here. We have a lot of physicians who may be working inside of another hospital at the time.

Instead of them creating a VPN connection back into our organization, VMware View gave them the ability to have a client on their desktop, whether it be a PC, a MacBook, an iPod, an iPad, or whatever they have, even a phone, if they really want to go that route. They can connect anywhere, at anytime, as long as they have an Internet connection and they have the View client. So that was huge, absolutely huge.

They also have the ability to use PC-over-IP versus RDP. That's very big for us as well. It keeps the efficiency and the speed of the machines up. If you're in somebody else's hospital, you're bound to whatever network you're attached to there, so it really helps, and it doesn't burden their systems as much. All you're doing is borrowing their Internet and nothing else.

Gardner: Tell me a bit more about your footprint. We've spoken about vSphere 4.1 and adopting along the path of 5.1. You even mentioned View. What else are you running there to support this impressive set of capabilities?

Todich: We moved from vSphere 4.1 to 5.1, and we're going to VMware View; we use 5.1 there as well. We decided to utilize the vCloud Networking and Security package, which at the time was called vShield. When we bought it, everything changed nomenclature-wise, and some of the products were dispersed, which actually was more to our benefit. We're very excited about that.

As far as our VDI deployment, that gave us the ability to use vShield Endpoint, which takes your antivirus processing and offloads it somewhere else on the network, so that your hosts are not burdened with virus scans and updates. That's huge.

The word huge doesn’t even represent how everybody feels about that going away. It's not going away physically, just going away to another workhorse on the network so that the physicians, medical assistants (MAs), and everybody else isn’t burdened with, "Oh, look, it's updating," or "Look, it's scanning something." It's very efficient.

Network and security

Gardner: You mentioned the networking part of this, which is crucial when you're going across boundaries and looking for those efficiencies. Tell me a bit more about how vCloud Networking and Security has made an impact.

Todich: That was another big one for us. The networking and security package includes a component called vShield Edge, which will ultimately give us the ability to create our own DMZ the way that we want to create it, something that we don't have at this time. This is very important to us.

Utilizing the vShield Edge package was fantastic, and yet another layer of security as well. Not only do we have our physical hardware, our guardians at the gate, but we also have another layer, and the way that it works, wrapping itself around each individual ESX host, is absolutely beautiful. You manage it just like you manage firewalls. So it’s very, very important.

Plus, some of the tools that we were going to utilize -- such as the security servers for the VDI package -- are ones that you want sitting in a DMZ. So, all around, it really gave us quite a bit to work with, which we're very thankful for.
I'm a firm believer that centralized storage, and even more the virtualized centralized storage, is the answer to many, many, many issues.

Gardner: One of the things, of course, that's key in your field is compliance, and there's a lot going on with things like HIPAA, documents, and making sure the electronic capabilities are there for payers and providers. Tell me a bit about compliance and what you've been able to achieve with these advancements in IT.

Todich: With compliance, we've really been able to up our security, which channels straight into HIPAA. Obviously, HIPAA is very concerned with people’s data and keeping it private. So it’s a lot easier to manage all our security in one location.

With VDI, it's been able to do the same. If we need to make any adjustments security wise, it’s simply changing a golden image for our virtual desktop and then resetting everybody's desktops. It’s absolutely beautiful, and the physicians are very excited about it. They seem to really get ahold of what we have done with the ability that we have now, versus the ability we had two years ago. It does wonders.

Upgrading to a virtual infrastructure has helped us considerably in maintaining and increasing meaningful use expectations, with the redundancy that being virtual gives, along with the fact that servers seem to run a lot more efficiently as VMs. We have better ways to collect data, a lot more uptime, and a lot more efficiency, so we can collect more data from our customers.

Exceeding expectations

The more people come through, the more data is collected, the more uptime there is, and the fewer problems there are -- which in turn has considerably helped us meet and exceed meaningful use expectations, which was a big deal.

Gardner: I've heard that term "meaningful use" elsewhere. What does that really mean? Is that just the designation that some regulatory organization has, or is that more of a stock-in-trade description?

Todich: My understanding of it, as an IT administrator, is basically the proper collection of people's data and keeping it safe. I know that it has a lot to do with our EMR application and what is collected when our customers interact with us.

Gardner: Are there any milestones or achievements you've been able to make in terms of this adoption, such as behaviors and then the protection of the documents and privacy data that has perhaps moved you into a different category and allows you to move forward on some of these regulatory designations?

Todich: It's given us the ability to centralize all our data. You have one location, when it comes to backing up and restoring, versus a bunch of individual physical servers. So data retention and protection has really increased quite a bit as far as that goes.

Gardner: How about DR?

Disaster recovery

Todich: With DR, I think there are a lot of businesses out there that hear that and don’t necessarily take it that seriously, until disaster hits. It’s probably the same thing with people and tornadoes. When they're not really around, you don’t really care. When all of a sudden, a tornado is on top of your house, I bet you care then.

VMware gives you the ability to do DR on a variety of different levels, whether it’s snapshotting, or using Site Recovery Manager, if you have a second data center location. It’s just endless.

One of the most important topics that can be covered in an IT solution is about our data. What happens if it stops or what happens if we lose it? What can we do to get it back, and how fast, because once data stops flowing, money stops flowing as well, and nobody wants that.

It’s important, especially if you're recording people’s private health information. If you lose certain data that’s very important, it’s very damaging across the board. So to be able to retain our data safely is of the highest concern, and VMware allows us to do that.

Also, it's nice to have the ability to do snapshotting as well. Speaking of servers, I'll have to dwell on that one, because in IT, everybody knows that software upgrades come. Sometimes, software upgrades don't go the way that they're supposed to, whether it's an EMR application, a time-saving application, or ultrasounds.

If you take a snapshot before the upgrade and run your upgrade on that snapshot, and everything goes great and everybody is satisfied, you can just merge the snapshot with the primary image and you're good to go.

If it doesn't work out in your favor, you have the ability to revert to that snapshot, and you're back to where you started from before the migration, which was hopefully a functioning state.
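The snapshot workflow described above can be modeled in a few lines. The class below is a hypothetical sketch of the semantics, not VMware's actual implementation: committing merges the post-upgrade changes into the base image, while reverting throws them away and restores the pre-upgrade state.

```python
# Illustrative model of VM snapshot semantics during an upgrade
# (hypothetical names; not a real VMware API).

class VirtualMachine:
    def __init__(self, disk_state):
        self.disk_state = disk_state      # committed base image
        self.snapshot = None              # saved pre-upgrade state, if any
        self.pending = None               # changes made since the snapshot

    def take_snapshot(self):
        self.snapshot = self.disk_state   # preserve the pre-upgrade state
        self.pending = self.disk_state

    def apply_change(self, new_state):
        self.pending = new_state          # e.g. run the software upgrade

    def commit_snapshot(self):
        # Upgrade succeeded: merge the changes into the primary image
        # and discard the saved state.
        self.disk_state = self.pending
        self.snapshot = self.pending = None

    def revert_to_snapshot(self):
        # Upgrade failed: roll back to the pre-upgrade state.
        self.disk_state = self.snapshot
        self.snapshot = self.pending = None
```

For example, taking a snapshot, applying a broken upgrade, and reverting leaves the disk exactly as it was before the upgrade began.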

Gardner: Let's look to the future a bit. It sounds as if, with these capabilities and the way that you've been describing DR benefits, you can start to pick and choose data-center locations, maybe even thinking about software-defined networking and data centers. That then allows you to pick and choose a cloud provider or a hosting model. Are you thinking about being able to pick up and virtually move your entire infrastructure, based on what makes sense to your company over the next, say, 5 or 10 years?

Todich: That's exactly right. The way this is growing, something that's been surfacing a lot in our neck of the woods is the ability to do hosting and provide cloud-based solutions, and VMware is our primary choice for that as well.

But, if need be, if we had to migrate our data center from one state to another, we'll have the option to do that, which is very important, and it helps with uptime as well. Stuff happens. I mean, you can be at a data center physically and something happens to a generator that has all the power. All of a sudden, everybody is feeling the pain.

So the ability to have Site Recovery Manager is priceless, because it just goes to location B and everybody is still up. You may see a blip or you may not, and nothing is lost. That leaves everybody free to deal with the data-center issue while everything is still up and going, which is very nice.

Creating redundancy

Gardner: I imagine too, Ray, that it works both ways. On one hand, you have a burgeoning ecosystem of cloud and hosting, of providers and options, that you can pursue, do your cost benefit analysis, think about the right path, and create redundancy.

At the same time, you probably have physicians or individual, smaller physician practices, that might look to you and say, "Those guys are doing their IT really well. Why don’t we just subscribe to their services or piggyback on their infrastructure?" Do you have any thoughts about becoming, in a sense, an IT services provider within the healthcare field? It expands your role and even increases your efficiency and revenues.

Todich: Yes, our sights are there. As a matter of fact, our heads are being turned in that direction without even trying, because a lot of people are doing that. It's a lot easier for smaller practices: instead of buying all the infrastructure, putting it all in place to get everything up, and then maintaining it, we will house it for them. We'll do that.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

Friday, March 22, 2013

ownCloud debuts cloud tool to give organizations more control over file sync and software

OwnCloud, Inc. recently released the latest version of the ownCloud Community Edition with a number of usability, performance, and integration enhancements.

Based on an open-source project of the same name, the ownCloud file sync and share software, deployed on-premise, not only offers users greater control, but allows organizations to integrate existing security, storage, monitoring and reporting tools, while still taking advantage of the software’s simplicity and flexibility.

File sync and share services like Dropbox, Google Docs, and Box Inc. have revolutionized the way users share information. These cloud-based services make it easy to share files with clean interfaces and seemingly endless amounts of storage. However, not everyone wants to turn over their information to a service provider – for those who prefer to control how and where their data is stored there’s ownCloud. 

OwnCloud comes in a free community edition, and the company will launch a commercially supported enterprise edition of the software in the second quarter. That version will target enterprise IT departments in need of on-premise file sync and share for sensitive corporate data. The company estimates it has more than 750,000 users worldwide today.

In the latest offering, the user interface has been streamlined, so that the main web navigation panel is now clearly differentiated from in-app navigation, says Markus Rex, CEO of ownCloud. And the way in which the software's settings are laid out has been revamped, making it easier to distinguish personal settings from app-specific settings, he says.

“We’ve completely revamped the design with a much simplified interface so you can differentiate the navigation elements and focus on what you want to work with, instead of distracting from that,” says Rex.

New features

This version of ownCloud also features a Deleted Files app that lets users restore accidentally deleted files and folders, and improved app management, so that third-party apps can be easily installed from the central apps repository and automatically removed from the server if disabled. Also included is a new search engine that lets users find stored files by both name and content, thanks to the Lucene-based full-text search engine app, and a new antivirus feature, courtesy of ClamAV, that scans uploaded files for malware. This release also includes improved contacts, calendar, and bookmarks, says Rex.

Performance benefits in this release come from improved file cache and faster syncing of the desktop client, according to company officials. Externally mounted file systems such as Google Drive, Dropbox, FTP and others can be scanned on-demand and in the background to increase performance. And hybrid clouds can be created by mixing and matching storage, thanks to file system abstraction that offers more flexibility and greater performance.

“You can get to the data in all of your data silos from one spot on a mobile client or desktop client, so you can get to files you might not be able to access otherwise from those devices,” says Rex.

This release features improved integration with LDAP and Active Directory and an enhanced external storage app to boost performance of integrated secondary storage including Dropbox, Swift, FTP, Google Docs, Amazon S3, WebDAV and external ownCloud servers.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)

You may also be interested in:

Wednesday, March 20, 2013

Gaining greater cohesion: Bringing business analysis and business architecture into focus

This guest post comes courtesy of Craig Martin, Chief Operating Officer and Chief Architect at Enterprise Architects, which is a specialist Enterprise Architecture firm operating in the U.S., UK, Asia and Australia.

By Craig Martin, Enterprise Architects

Having delivered many talks on business architecture over the years, I’m often struck by the common vision driving many members in the audience – a vision of building cohesion in a business, achieving the right balance between competing forces and bringing the business strategy and operations into harmony.  However, as with many ambitious visions, the challenge in this case is immense.  As I will explain, many of the people who envision this future state of nirvana are, in practice, inadvertently preventing it from happening.

Standards Silos

There are a host of standards and disciplines that are brought into play by enterprises to improve business performance and capabilities. For example, standards such as PRINCE2, BABOK, BIZBOK, TOGAF, COBIT, ITIL and PMBOK are designed to ensure reliability of team output and approach across various business activities. However, in many instances these standards, operating together, present important gaps and overlaps. One wonders whose job it is to integrate and unify these standards. Whose job is it to understand the business requirements, business processes, drivers, capabilities and so on?

Apples to Apples?

As these standards evolve they often introduce new jargon to support their view of the world. Have you ever had to ask your business to explain what they do on a single page? The diversity of the views and models can be quite astonishing:
The list goes on and on…

Each has a purpose and brings value in isolation. However, in the common scenario where they are developed using differing tools, methods, frameworks and techniques, the result is usually greater fragmentation, not more cohesion -- and consequently we can end up with some very confused and exasperated business stakeholders who care less about what standard we use and more about finding clarity to just get the job done.

The Convergence of Business Architecture and Business Analysis

Ask a room filled with business analysts and business architects how their jobs differ and relate, and I guarantee that you would receive a multitude of alternative and sometimes conflicting perspectives.

Both of these disciplines try to develop standardized methods and frameworks for the description of the building blocks of an organization. They also seek to standardize the means by which to string them together to create better outcomes.

In other words, they are the disciplines that seek to create balance between two important business goals:
  • To produce consistent, predictable outcomes
  • To produce outcomes that meet desired objectives
In his book, “The Design of Business: Why Design Thinking is the Next Competitive Advantage,” Roger Martin describes the relationships and trade-offs between analytical thinking and intuitive thinking in business. He refers to the “knowledge funnel,” which charts the movement of business focus from solving business mysteries using heuristics to creating algorithms that increase reliability, reducing business complexity and costs and improving business performance.

The disciplines of business architecture and business analysis are both currently seeking to address this challenge. Martin refers to this as "design thinking."


(Click here to see an illustration that further explains these concepts.)

Vision Vs. Reality For Business Analysts and Business Architects

When examining the competency models for business analysis and business architecture, the desire is to position these two disciplines right across the spectrum of reliability and validity.

The reality is that both the business architect and the business analyst spend a large portion of their time in the reliability space, and I believe I’ve found the reason why.

Both the BABOK and the BIZBOK provide a body of knowledge focused predominantly around the reliability space. In other words, they look at how we define the building blocks of an organization, and less so at how we invent better building blocks within the organization.

Integrating the Disciplines

While we still have some way to go to integrate, the business architecture and business analysis disciplines are currently bringing great value to business through greater reliability and repeatability.

However, there is a significant opportunity to enable the intuitive thinkers to look at the bigger picture and identify opportunities to innovate their business models, their go-to-market, their product and service offerings and their operations.
Perhaps we might consider introducing a new function to bridge and unify the disciplines? This newly created function might integrate a number of incumbent roles and functions and cover:
  • A holistic structural view covering the business model and the high-level relationships and interactions between all business systems
  • A market model view in which the focus is on understanding the market dynamics, segments and customer need
  • A products and services model view focusing on customer experience, value proposition, product and service mix and customer value
  • An operating model view – this is the current focus area of the business architect and business analyst. You need these building blocks defined in a reliable, repeatable and manageable structure. This enables agility within the organization and will support the assembly and mixing of building blocks to improve customer experience and value
At the end of the day, what matters most is not business analysis or business architecture themselves, but how the business will bridge the reliability and validity spectrum to reliably produce desired business outcomes.

I will discuss this topic in more detail at The Open Group Conference in Sydney, April 15-18, which will be the first Open Group event to be held in Australia.

This guest post comes courtesy of Craig Martin, Chief Operating Officer and Chief Architect at Enterprise Architects, which is a specialist Enterprise Architecture firm operating in the U.S., UK, Asia and Australia. He is presenting the Business Architecture plenary at the upcoming Open Group conference in Sydney. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2013. All rights reserved.

You may also be interested in:



Tuesday, March 19, 2013

Dutch insurance giant Achmea deploys 'ERP for IT' to reinvent IT processes and boost business performance

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how Achmea Holding, one of the largest providers of financial services and insurance in the Netherlands, has made large strides in running their IT operations like an efficient business itself.

We'll hear how Achmea rearchitected its IT operations to both be more responsive to users and more manageable by the business, all based on clear metrics.

Here to explore these and other enterprise IT performance issues, we're joined by our co-host for this sponsored podcast, Georg Bock, Director of the Customer Success Group at HP Software, and he's based in Germany.

And we also welcome our special guest, Richard Aarnink, leader in the IT Management Domain at Achmea in the Netherlands, to explain how they've succeeded in making IT better governed and agile -- even to attain "enterprise resource planning (ERP) for IT" benefits.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is running IT more like a business important? Why does this make sense now?

Aarnink: Over the last year, whenever a customer asked us questions, we delivered what he asked. We came to the conclusion that delivery of every request that we got was an intensive process for which we created projects.

It was very difficult to make sure that it was not a one-time hero effect, but that we could deliver to the customer what he asked every time, on scope, on specs, on budget, and on time. We looked at it and said, "Well, it is actually like running a normal business, and therefore why should we be different? We should be predictive as well."

Gardner: Georg Bock, is this something you are seeing more and more of in the field?

Trend in the market

Bock: Yes, we definitely see this as a trend in the market, specifically with the customers that are a little more mature in their top-down strategic thinking. Let’s face it, running IT like a business is an end-to-end process that requires quite a bit of change across the organization -- not only technology, but also process and organization. Everyone has to work hand in hand to be, at the end of the day, predictable and repeatable in what they're doing, as Richard just explained.

That’s a huge change for most organizations. However, once it's done and has taken root in the organization, there's a huge payback. It is not an easy thing to undertake, but it's inevitable, specifically when we look at the new trends around cloud, multi-sourcing, mobility, etc., which bring new complexity to IT.

You'd better have your bread-and-butter business under control before moving into those areas. That's why the timing right now is so important and top of mind for people.

Gardner: Tell us a bit about Achmea, the size of your organization, and why IT is so fundamentally important to you.

Aarnink: Achmea is a large insurance provider in the Netherlands. We have around eight million customers in the Netherlands with 17,000 employees. We're a very old and cooperative organization, and we have had lots and lots of mergers and acquisitions in the last 20 years. So we had various sets of IT departments from all the other companies that we centralized over the past years.

If you look at insurance, it's actually about trust: whenever something happens to a customer, he can rely on the insurer to help him out, and usually this means providing money. IT is necessary to ensure that we can deliver on the promises we made to our customers. So we don't deliver a tangible product -- what we deliver is more like money, and it's all about IT.

Of the 17,000 employees that we have in the Netherlands, about 1,800-2,000 employees work in the centralized IT department. Over the last year, we changed our target operating model to centralize the technologies in competence centers, as we call them, in the department that we call Solution Development.

We created a new department, IT Operations, and we created business-relationship departments that were merged with the business units that were asking or demanding functionality from our IT department. We changed our entire operating model to cope with that, but we still have a lot of homegrown applications that we have to deliver on a daily basis.

Changing the department and the organizational structure is one thing, and now we need to change the content and the applications we deliver.

Gardner: How has all this allowed you to better manage all the aspects of IT, and make it align with the business?

Strategy and governance

Aarnink: To answer that question I need to elaborate a little bit on the strategy and governance department, which is actually within the IT department. What we centralized there were project portfolio and project steering, and also the architectural capabilities.

We make sure that whatever solution we deliver is architected from a single model that we manage centrally. That's a real benefit that we gained in centralizing this and making sure that we can -- from both the architecture and project perspectives -- govern the projects that we're going to deliver to our business units.

Bock: Achmea is a leader in that, and the structure that Richard described is essential to success. ERP for IT -- running IT as a business, with fundamental IT processes -- is all about standardization, repeatability, and predictability, especially in situations where you have mergers and acquisitions. It's always a disruption if you have to bring different IT departments together. If you have a standard that's easy to replicate, that's a no-brainer and a winner from a business bottom-line perspective.

In order to achieve that, you have to have a horizontal team that can drive standardization across the company. Richard and Achmea are not alone in that. Richard and I have had quite a number of discussions with companies from other industries, and we very much see that everyone has the same problem. Having those horizontal teams -- primarily enterprise architecture or a chief technology officer (CTO) office, whatever you like to call those departments -- is definitely a trend in the industry, at least among mature customers that want to take that perspective and drive it forward.
It’s not rocket science from an intellectual perspective, but we have to cut through the political difficulties.

But as I said, it’s all about standardization. It’s not rocket science from an intellectual perspective, but we have to cut through the political difficulties of driving the adoptions across the different organizations in the company.

Gardner: What sort of problems or issues did you need to resolve as you worked to change things for the better?

Aarnink: We looked at the entire scope of implementing ERP for IT, and first we looked at the IT projects and the portfolio. We found that we still had several departments running their own solutions for managing IT projects and budgets. In the past, we had a mechanism for controlling only the budgets of the different business units, but no centralized view of the IT portfolio as a whole for Achmea.

We started in that area, looking at one system of record for IT projects and portfolio management, so we could steer what we wanted to develop and what we wanted to sunset.

Next, we looked at application portfolio management: the set of applications that we currently use and want to keep using in the future, the set of applications that we want to sunset in the next year, and how that relates to the IT projects. That was one big step that we made in the last two years. There's still a lot of work to be done in that area, but it was a big topic.

Service management

The second big topic was service management. Due to all the mergers, we still had lots of variations on IT processes. Incident management was handled in a completely different way in each of the departments we had inherited.

We adopted service desks to cater to all those kinds of deviations from the standard ITIL process. We looked at that and said that we had to centralize again, and we had to become more prescriptive about how these processes would look and how we would make sure they were standardized.

That was the second area that we looked at. The third area was application quality. How could we make sure that we got a better first-time-right score in delivering IT projects? How could we make sure that there is one system of record for requirements and one system of record for test results and defects? Those are the three areas we invested in in the first phase.

Lots of change going on

Gardner: What have you seen in the market that leads you to believe that ERP for IT is not a vision, but is, in fact, happening, and that we're starting to see tangible benefits?

Bock: Richard nicely described real, practical results, rather than coming up with a dogmatic, philosophical process in the first place. It's all about practical results, and practical results need to be predictable and repeatable; otherwise it's always the one-time hero effort that Richard brought up in the beginning, and that's not scalable at all.

At some point you need process, but you shouldn't pursue it dogmatically. I also hear the debates about Agile versus waterfall; whatever is applicable to the problem is the right thing to do. Does that rule out process? No, not at all. You just have to live the process in a slightly different way.
Technology always came first and now we look for the nail that you can use that hammer for. That’s not the right thing to do.

Everyone has to get away from their dogmatic positions and look at it in a little more relaxed way. We shouldn't take our own thinking too seriously, but when we drive ERP for IT to apply some standard ways of doing things, we just make our lives easier. It has nothing to do with an esoteric vision; it's something that is very achievable. It's about getting a couple of people to agree on practical ways of getting it done.

Then, we can draw the technological consequences from it, rather than the other way around. That's been the problem in IT from my perspective for years. Technology always came first and now we look for the nail that you can use that hammer for. That’s not the right thing to do.

From my perspective, standardization is simply a necessary conclusion from some of the trial-and-error mistakes that have been made over the last 10-15 years, where people tried to customize the hell out of everything just to be in line with the specificity of how things are being done in their particular company. But nobody asked why it was that way.

Aarnink: I completely agree. We had several discussions about how the incident process is carried out, and it's the same in every other company as well. Of course there are slight differences, but the fact is that an incident needs to be resolved, and that's the same within every company.

Best practice

You can easily create a best practice for that, adopt it within your own company, and unburden yourself from thinking about how to run the process, reinventing it, and creating your own tool sets and interfaces with external companies. That can all be centralized; it can all be standardized.
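As a rough illustration of what such a standardized incident process looks like once it is made prescriptive, here is a minimal state-machine sketch. The states and transitions below are generic ITIL-style assumptions for illustration, not Achmea's actual process:

```python
# Illustrative ITIL-style incident lifecycle as an explicit state machine.
# Making the allowed transitions explicit is what "prescriptive" means here:
# every department resolves incidents through the same states.

ALLOWED = {
    "new": {"assigned"},
    "assigned": {"in_progress"},
    "in_progress": {"resolved"},
    "resolved": {"closed", "in_progress"},  # reopen if the fix fails
    "closed": set(),
}

class Incident:
    def __init__(self, summary):
        self.summary = summary
        self.state = "new"
        self.history = ["new"]

    def transition(self, new_state):
        # Reject any step the standard process does not allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state
        self.history.append(new_state)

inc = Incident("User cannot log in to policy system")
for step in ("assigned", "in_progress", "resolved", "closed"):
    inc.transition(step)
print(inc.state)  # prints "closed"
```

Because every service desk shares the same transition table, deviations surface immediately as errors instead of quietly becoming local process variants.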

It's not our business to create our own IT tools. Our business is delivering policy management systems for our core industry, which is insurance. We don't want to build all the IT we need just to keep the IT running. We want that standardized, so we can concentrate on delivering business value.

Gardner: Now that we've been calling this ERP for IT, I think it’s important to look back on where ERP as a concept came from and the fact that getting more data, more insight, repeatability, analyzing processes, determining best processes and methods and then instantiating them, is at the core of ERP. But when we try to do that with IT, how do we measure, what is the data, and what do we analyze?

Richard, at Achmea, are you looking at key performance indicators (KPIs) and are you using project portfolio management maturity models? How is it that you're measuring this so that you can, in fact, do what ERP does best -- make it repeatable, make it standardized?
The IT project is a vehicle helping you deliver the value that you need, and the processes underneath that actually do the work for you.

Aarnink: If you look from the budget perspective, we look at the budgets, the timeframes, and the scope of what we need to deliver and whether we deliver on time, on budget, and on specs, as I already said. So those are basically the KPIs that we're looking for when we deliver projects.

But also, if you look at the processes involved when you deliver a project, then you talk about requirements management. How quickly can you create a set of requirements, and what is the reuse of requirements from the past? Those are the KPIs we're looking for in the specific processes when you deliver an IT project.

So the IT project is a vehicle helping you deliver the value that you need, and the processes underneath that actually do the work for you. At that level we try to standardize, and we define KPIs to make sure that we reuse as much as possible, that we deliver quality, and that we have the resources in place that we actually need to deliver those functionalities.
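The on-time, on-budget, on-spec KPIs Aarnink describes can be computed mechanically once projects live in one system of record. A hypothetical sketch -- the field names, tolerance, and threshold below are invented for illustration, not Achmea's actual model:

```python
# Illustrative portfolio KPI calculation: each project record carries
# planned vs. actual figures; KPIs are simple pass/fail checks.

projects = [
    {"name": "claims-portal", "planned_days": 90, "actual_days": 100,
     "budget": 200_000, "spent": 190_000, "specs_met": 0.95},
    {"name": "policy-mgmt", "planned_days": 60, "actual_days": 58,
     "budget": 120_000, "spent": 125_000, "specs_met": 1.0},
]

def kpis(p, tolerance=0.10):
    """Score one project against a 10% schedule/budget tolerance."""
    return {
        "on_time": p["actual_days"] <= p["planned_days"] * (1 + tolerance),
        "on_budget": p["spent"] <= p["budget"] * (1 + tolerance),
        "on_spec": p["specs_met"] >= 0.90,
    }

for p in projects:
    print(p["name"], kpis(p))
```

The point of the single system of record is exactly this: the same scoring function runs over every project, so portfolio steering compares like with like instead of each department's own spreadsheet.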

You need to look at small steps that can be taken in a couple of months' time. So draw up a roadmap and enable yourself to deliver value every, let's say, 100 days. Make sure that every time you deliver functionality it's actually used, and then look at your roadmap and adjust it, so you enable yourself to be agile in that way as well.

The biggest thing that you need to do is take small steps. The other thing is to look at your maturity. We did a CMMI test review. We didn't do the entire CMMI accreditation, but only looked at the areas that we needed to invest in.

Getting advice

We looked at where we had standardized already and the areas that we needed to look at first. That can help you prioritize. Then, of course, look at companies in your network that actually did some steps in this and make sure that you get advice from them as well.

Bock: I absolutely agree with what Richard said. If we're looking for a recipe for success, you have to have a good balance of strategic goals and tactical steps toward those goals. Those tactical steps need to have clear measures and clear success criteria associated with them. Then you're on a good track.

I just want to come back to the notion of ERP for IT that you alluded to earlier, because that term can actually hurt the discussion quite a bit. If you think about ERP 20 years ago, it was a big animal. And we shouldn’t look at IT nowadays in the same manner as ERP was looked at 20 years ago. We don’t want to reinvent a big animal right now, but we have to have a strategic goal where we look at IT from an end-to-end perspective, and that’s the analogy that we want to draw.
If we're looking for a recipe for success, you have to have a good balance of strategic goals and tactical steps toward that strategic goal.

ERP is something that has always been looked at as an end-to-end process, with a clear, common context from an end-to-end perspective -- which is not the case in IT today. We should learn from that analogy, but we shouldn't try to implement ERP literally for IT, because that would mean taking the whole thing in one step. As Richard just said very nicely, you have to take it in digestible pieces, because we have to deal with a lot of technology there. You can't take that in one shot.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, March 18, 2013

Avaya announces flexible Collaborative Cloud UC offerings for cloud service providers, channel, enterprises

Avaya today announced a set of Collaborative Cloud offerings designed to make it easier for more types of organizations to deploy unified communications (UC), contact center (CC) and video conferencing -- all as on-demand services.

The adoption of UC and CC as a service (UCaaS and CCaaS) brings utility-based pricing to cloud-service providers (CSPs) so they can offer varied and flexible packages to many types of clients. This creates new revenue streams for CSPs by allowing them to deliver app integrations, mobile collaboration and multichannel customer service for their customers. And it allows buyers to only pay for the IP-based communications services they want and need.
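Utility-based pricing amounts to a simple metered bill: the client is charged only for the units of each service consumed. A hypothetical sketch -- the service names and per-unit rates below are invented for illustration and are not Avaya's actual pricing:

```python
# Illustrative pay-per-use billing for UCaaS/CCaaS services.
# Monthly rates per unit (seat, agent, room) -- invented numbers.
RATES = {"uc_seat": 8.00, "cc_agent": 95.00, "video_room": 49.00}

def monthly_bill(usage):
    """usage: dict mapping service name -> units consumed this month."""
    return sum(RATES[svc] * units for svc, units in usage.items())

# A client pays only for what it actually used this month.
bill = monthly_bill({"uc_seat": 120, "cc_agent": 10, "video_room": 2})
print(f"${bill:,.2f}")
```

If next month the client drops 20 UC seats, the bill shrinks accordingly -- that consumption-driven elasticity is the CapEx-to-OpEx shift the offerings are built around.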

This makes the burgeoning bring your own device (BYOD) trend easier for enterprises to manage because they can off-load more of the complexities of mobile and BYOD environments to their cloud and service providers, said Bruce MacVarish, Director of Cloud Solutions at Avaya. The offerings enable CSPs to evolve and augment enterprise communications with cloud-based solutions, as well as provide greater interoperability across vendors, domains and protocols, he said.
Think of it as video conferencing as a service on demand, integratable into more mobile devices and therefore business processes.

Santa Clara-based Avaya is carving out four delivery and distribution models for UCaaS and CCaaS: private cloud/on-premises stacks, managed services for service providers, hosted multi-tenancy services for channel players, and a full software-as-a-service (SaaS) cloud capability powered by Avaya focused on the mid-market and smaller organization users.

The video services are more geared toward synchronous video interactions, and not hosted, asynchronous video serving, although Avaya offers both. Think of it as video conferencing as a service on demand, integratable into more mobile devices and therefore business processes.

Avaya's move, like many evolving cloud models, marks a transition from CapEx to OpEx and utility-based pricing and consumption. It also offers ease and speed of adoption, and a single point of integration for value-added SPs and developers.

I expect to see more SaaS business apps providers and cloud-savvy enterprises integrate Avaya's and other UC services into their web, mobile and cloud offerings. These would include such benefits as click-to-call, customer support interception points, and embedded video conferencing brought directly into more business apps, services and processes.

Hybrid deployments

It will be interesting to see how hybrid deployments of UCaaS and CCaaS are assimilated into other business cloud services as the market matures. Will enterprises and SPs alike seek to embed more UC functions while controlling the UC stack themselves? Or will communications, like many other business services, be something they expect in any cloud stack? Or what combination of hosting will they prefer for which apps?

A lot of the noise around hybrid cloud fails to take communications features and their integration into account. Same for big data: Shouldn't all the unstructured data in communications be part of any analytics mix? And how should that be managed?

Avaya is now in a controlled release of the solutions, and expects general availability in three to six months, said MacVarish.

Earlier this month, Avaya announced new security enhancements for enterprise collaboration.
A lot of the noise around hybrid cloud fails to take communications features and their integration into account.

In more detail, the new and expanded Avaya offerings for CSPs are:
  • Avaya Cloud Enablement for Unified Communications and Customer Experience Management. Based on Avaya Aura, it allows flexible, utility-based, OpEx pricing for CSPs, so they pay based on actual customer usage. Avaya Control Manager enables centrally managed multi-tenancy.
  • Avaya Cloud Enablement for Video provides CSPs with a scalable platform and multi-tenancy that delivers interoperable, multi-vendor mobile video collaboration. Enhancements to the Elite Series MCUs, Scopia Mobile and Scopia Desktop extend BYOD videoconferencing across most endpoints.
  • Avaya Communications Outsourcing Solutions (COS) Express, a private cloud offering for up to 500-seat contact centers, can be hosted by Avaya, a CSP or channel partners -- either as Avaya or co-branded services.
Avaya Collaborative Cloud solutions also include Avaya Collaboration Pods, a portfolio of cloud-ready, turnkey solutions designed to simplify installation and operations of real-time applications; and the AvayaLive suite of public-cloud based communications and collaboration services.

You may also be interested in:

Monday, March 11, 2013

Fighting in the cloud service orchestration wars

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Combine the supercharged cloud computing marketplace with the ubergeek cred of the open source movement, and you’re bound to have some Mentos-in-Diet-Coke moments. Such is the case with today’s cloud service orchestration (CSO) platforms. At this moment in time, the leading CSO platform is OpenStack. Dozens of vendors and cloud service providers (CSPs) have piled on this effort, from Rackspace to HP to Dell, and most recently, IBM has announced that they’re going all in as well. Fizzy to be sure, but all Coke, no Mentos.

Then there are CloudStack, Eucalyptus, and a few other OpenStack competitors. With all the momentum of OpenStack, it might seem that these open source alternatives are little more than also-rans, doomed to drop further and further behind the burgeoning leader. But there’s more to this story. This is no techie my-open-source-is-better-than-your-open-source battle of principle, of interest only to the cognoscenti. On the contrary: big players are now involved, and they’re placing increasingly large bets. Add a good healthy dose of Mentos – only this time, the Mentos are money.

Understanding the CSO Marketplace

Look around the infrastructure-as-a-service (IaaS) market. Notice that elephant in the corner? That's Amazon Web Services (AWS). The IaaS market simply doesn't make sense unless you realize that AWS essentially invented IaaS. And by invented, we mean actually got it to work. Which, if you think about it, is rather atypical for most technology vendors. Your average software vendor will identify a new market opportunity, take some old stuff they've been struggling to sell, give it a nice new coat of PowerPoint, and shoehorn it into the new market. If customers bite, then the vendor will devote resources to making the product actually do what it's supposed to do. Eventually. We hope.

But AWS is different. Amazon.com is an online reseller, not a software vendor. They think more like Wal-Mart than IBM. They figured out elasticity at scale, added customer self-service, and christened it IaaS. Then they grew it exponentially, defining what cloud computing really means. Today, they leverage their market dominance and economies of scale to continually lower prices, squeezing their competitors’ margins to nothing. It worked for Rockefeller’s Standard Oil, and it works for Wal-Mart. Now it’s working for Amazon.

But as with any market, there are always competitors looking to carve off a bit of opportunity for themselves. Given AWS’s dominance, however, there are two basic approaches to competing with Amazon: do what AWS is doing but try to do it a bit better (say, with Rackspace’s promise of better customer service), or do something similar to AWS but different enough to interest some segment of the market (leading in particular to the enterprise public cloud space populated by the likes of Verizon Terremark and Savvis, to name a few).

And then there are the big vendors like HP and IBM, who not only offer a range of enterprise software products, but who also offer enterprise data center managed services and associated consulting. Such vendors want to play two sides of this market: they want to be public cloud providers in their own right, and also offer “turnkey” cloud gear to customers who want to build their own private clouds.

Enter OpenStack. Both of the aforementioned vendors as well as the smaller players realize that piecing together their own cloud offerings will never enable them to catch up to AWS. Instead, they’re joining forces to build out a common cloud infrastructure platform that supports the primary capabilities of IaaS (compute, storage, database, and network), as well as providing the infrastructure platform for platform-as-a-service (PaaS) and Software-as-a-Service (SaaS) capabilities down the road. The open source model is perfect for such collaboration, as the Apache license allows contributors to take the shared codebase and build out whatever proprietary add-ons they like.

Most challenging benefits

Perhaps the most touted, and yet most challenging, benefit of the promised all-OpenStack world is the holy grail of workload portability. In theory, if you're running your workloads on one OpenStack-based cloud, you should be able to move them lock, stock, and barrel to any other OpenStack-based cloud, even if it belongs to a different CSP. Workload portability is the key to cloud-based failover and disaster recovery, cloud bursting, and multi-cloud deployments. Today, workload portability requires a single proprietary platform, and only VMware offers such portability. AWS offers a measure of portability within its cloud, but will face challenges supporting portability between itself and other providers. As a result, if OpenStack can get portability to work properly, participating CSPs will have a competitive lever against Amazon.

Achieving a strong competitive position against AWS with OpenStack is easier said than done, however. OpenStack is a work in progress, and many bits and pieces are still missing. Open source efforts take time to mature, and meanwhile, AWS keeps growing. In response, the players in this space are taking different tacks to build mature offerings that have a hope of carving off a viable chunk of the IaaS marketplace:
  • Rackspace is trying to capitalize on its OpenStack leadership position and the aforementioned customer service to provide a viable alternative to AWS. They are also touting the workload portability benefits of OpenStack. But downward pricing pressure, combined with the holes in OpenStack capabilities, is pounding Rackspace's stock price.
  • Faced with the demise of its traditional PC business, Dell is focusing on its Boomi B2B integration product, recently rechristened as cloud integration. Cloud integration is a critical enabler of hybrid clouds, but doesn’t address the workload portability challenge. As a result, Dell’s cloud marketing efforts are focused on the benefits of integration over portability. Dell’s recent acquisition of Quest Software also hints at a Microsoft application migration strategy for Dell Cloud.
  • HP wants to rush its enterprise public cloud offering to market, and it doesn’t want to wait for OpenStack to mature. Instead, it’s hammering out its own version of OpenStack, essentially forking the OpenStack codebase to its own ends, according to Nnamdi Orakwue, vice president for Dell Cloud. Such a move may pay off for HP, but increases the risk that the HP add-ons to OpenStack will have quality issues.
  • IBM recently announced that they are “all in” with OpenStack with the rollout of IBM SmartCloud Orchestrator built on the platform. But IBM has a problem: the rest of their SmartCloud suite isn't built on OpenStack, leaving them to scramble to rewrite a number of existing products leveraging OpenStack's incomplete codebase, while in the meantime integrating the mishmash of SmartCloud components at the PowerPoint layer.
  • Red Hat is making good progress hammering out what they consider an “enterprise” deployment of OpenStack. As perhaps the leading enterprise open source vendor, they are well-positioned to lead this segment of the market, but it remains to be seen whether enterprise customers will want to build all-open-source private clouds in the near term, as the products gradually mature. On the other hand, IBM has a history of leveraging Red Hat's open source products, so an IBM/Red Hat partnership may move SmartCloud forward more quickly than IBM might be able to accomplish on its own.
CSO Wild Card: CloudStack

There are several more players in this story, but one more warrants a discussion: Citrix. The desktop virtualization leader had been one face in the OpenStack crowd, but they suddenly decided to switch horses and take a contrarian strategy. They ditched OpenStack and took the code from their 2011 Cloud.com acquisition, donating it as CloudStack. Then they switched CloudStack's licensing model from the GPL (derivative products must be licensed under the GPL) to Apache (it's OK to build proprietary offerings on top of the open source codebase), and subsequently passed the entire CloudStack effort along to the Apache Foundation, where it's now in incubation.

There are far fewer players on the CloudStack team than on OpenStack's, and its core value proposition is quite similar to OpenStack's, so at first glance, Citrix's move raises eyebrows. After all, why bail on the market leader to join the underdog? But look more closely, and it seems that Citrix may be onto something.
Citrix’s open source cloud strategy is not all about CloudStack. They’re also heavily invested in Xen.

First, Citrix’s open source cloud strategy is not all about CloudStack. They’re also heavily invested in Xen. Xen is one of the two leading open source virtualization platforms, and provides the underpinnings to many commercial virtualization products on the market today. Citrix’s 2007 acquisition of XenSource positioned them as a Xen leader, and they’ve been driving development of the Xen codebase ever since.

Citrix's heavy investment in Xen bucks the conventional virtualization wisdom: since Xen's primary competitor, KVM (Kernel-based Virtual Machine), is distributed as part of standard Linux distros, KVM is the no-brainer choice for the virtualization component of open source CSOs. After all, it's essentially part of Linux, so any CSP (save those focusing on Windows-centric IaaS) doesn't have to lift a finger to build its offering on KVM. Citrix, however, picked up on a critical fact: KVM is simply not as good as Xen. And now that Citrix has been pushing Xen to mature for half a dozen years, Xen is a far better choice for building turnkey cloud solutions than KVM. So Citrix combined Xen and CloudStack into a single cloud architecture they dubbed Windsor, which forms the basis of their CloudPlatform offering.

And therein lies the key to Citrix’s contrarian strategy: CloudPlatform is a turnkey cloud solution for customers who want to deploy private clouds – or as close to turnkey as today’s still nascent cloud market can offer. Citrix is passing on the opportunity to be their own CSP (at least for now), instead focusing on driving CloudStack and Xen maturity to the point that they can put together a complete cloud infrastructure software offering. In other words, they are focusing on a niche and giving it all they got.

The ZapThink Take

If this ZapFlash makes comprehending the IaaS marketplace look like herding cats, you’re right. AWS has gotten so big, so fast, and their products are so good, that everyone else is scrambling to put something together that will carve off a piece of what promises to be an immense market. But customers are holding the cards, because everyone knows how AWS works, which means that everyone knows how IaaS is supposed to work. If a vendor or CSP brings an offering to market that doesn’t compare with AWS on quality, functionality, or cost, then customers will steer clear, no matter how good the contenders’ PowerPoints are.

But as with feline wrangling, it’s anybody’s guess where this tabby or that calico is heading next. If anyone truly challenges Amazon’s dominance, who will it be? Rackspace? IBM? Dell? Or any of the dozens of other four-legged critters just looking for a warm spot in the sun? And then there’s the turnkey cloud solution angle. Today, building out your own private cloud is difficult, expensive, and fraught with peril. But if tomorrow brings simple, low cost, low risk private clouds to the enterprise, how will that impact the public CSP marketplace? You pays your money, you takes your chances. But today, the safe IaaS choice is AWS, unless you have a really good reason for selecting an alternative.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

You may also be interested in:

Thursday, March 7, 2013

Cloud, mobile bringing new value to Agile development methods, even in bite-sized chunks

As IT aligns itself with business goals, Agile software development is increasingly enabling developers to create applications that meet user needs quickly. And now, the growth of mobile app development is further amplifying the power of Agile methods.

Though it's been around for decades, Agile's tenets of collaboration, incremental development, speed, and flexibility resonate with IT leaders who want developers to focus on working with users to develop the applications they need. This method stands in contrast to the more rigid, traditional process of collecting user requirements, taking months to create a complete application, and delivering it to users with the hope that it fits the bill and that requirements haven't changed during the process.
In many cases today, the business has alternatives, thanks to cloud -- all the services they could need are available with a credit card.

In fact, in today’s world, where business leaders can shop for the technology they need with any cloud or software-as-a-service (SaaS) provider they choose, IT must ensure enterprise applications are built collaboratively to meet needs, or lose out to the competition.

“In many cases today, the business has alternatives, thanks to cloud -- all the services they could need are available with a credit card,” says Raziel Tabib, Senior Product Manager of Application Lifecycle Management with HP Software. “IT has to work to be the preferred solution. If the IT department wants to maintain its position, it has to make the best tools to meet business needs. Developers have to get engaged with end users to ensure they are meeting those needs.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP Software recently released HP Agile Manager, a SaaS-based solution for planning and executing Agile projects. The division itself has embraced some Agile principles as well, which have helped it move, for example, from an 18-month release cycle to product releases and refreshes every month, says Tabib.

Pick and choose

However, Agile is far from an all-or-nothing proposition, particularly for large organizations with developers distributed across the globe that may have a harder time adopting certain Agile work styles, he warns.

“We’re not saying any organization can just look at the agile manifesto and start tomorrow with scrum meetings and everything will work well,” Tabib says. “We have engineers in Israel, Prague, and Vietnam. While some agile practices are easy to pick up, others are really difficult to adopt, when you’re talking about organizations at that scale.”

That’s okay, he adds -- organizations should be encouraged to cherry-pick the elements of Agile that make sense to embrace, blend them with more traditional approaches to application development, and still reap benefits.

A report published in September of 2012 by Forrester Consulting on behalf of HP supports the idea that Agile is one of many disciplines that can be used to develop applications that meet users' needs.
“Agile software development is one tool in a vast toolbox,” reads the report. “But a fool with a tool is still a fool.”

The report, entitled Agile Software Development and the Factors that Drive Success, surveyed 112 professionals regarding application development habits and success. It found that companies already successful in application development used Agile techniques to make them even better.

For example, respondents cited the Agile practice of limiting the amount of work in progress, which reduces the impact of sudden business change by ensuring that requirements don’t grow stale while waiting for coding to begin -- but their overall success was based on more than just implementing Agile.

It also found that respondents at companies that weren’t as successful with application development reported using aspects of Agile. The upshot of the survey was that simply adopting Agile did not ensure success. “Agile software development is one tool in a vast toolbox,” reads the report. “But a fool with a tool is still a fool.”

I think Agile will get even more of a boost in value as developers move toward a "mobile first" approach, which seems tightly coupled with fast, iterative apps improvement schedules.

One of the neat things about a mobile-first orientation is that it forces long-overdue simplification and ease of use in apps. When new apps are designed for their mobile device deployment first, the dictates of the mobile constraints prevail.

Combine that with Agile, and the guiding principles of speed and keeping user requirements dominant help keep projects from derailing. Revisions and updates remain properly constrained. Mobile First discourages snowballing of big applications, instead encouraging releases of smaller, more manageable apps.

Mobile First design benefits combined with Agile methods can be well extended across SaaS, cloud, VDI, web, and even client-server applications.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)
