Tuesday, June 15, 2010

Delta Air Lines improves customer self-service apps quickly using automated quality assurance tools strategically

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

Our next customer case study focuses on Delta Air Lines and its use of quality assurance tools for requirements management, mapping test cases, and moving into full production quickly.

We're joined by David Moses, Manager of Quality Assurance for Delta.com and its self-service apps efforts, and John Bell, a Senior Test Engineer at Delta. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Moses: Generally, the airline industry, along with a lot of other industries I'm sure, is highly competitive. We have a very fast-to-market environment, where we've got to get products out to our customers. There's a lot of innovation being worked on in the industry, and a lot of competing channels outside the airline industry would also like to get at the same customer set. So, it's very important to be able to deliver the best products you can as quickly as possible. "Speed Wins" is our motto.

It goes back to speed to market with new functionality and making the customer's experience better. In all of our self-service products, it's very important that we test from the customers’ point of view.

We deliver products that make it easier for them to use our services. That's one of the things that always sticks in my mind when I'm at an airport, watching people use the kiosk. We bring our people out to the airports and we watch our customers use our products, so we get an inside view of what's going on with them.

A lot on the line

I'll see people hesitantly reaching out to hit a button. Their hand may be shaking. It could be an elderly person. It could be a person with a lot on the line. Say it’s somebody taking their family on vacation. It's the only vacation they can afford to go on, and they’ve got a lot of investment into that flight to get there and also to get back home. Really there's a lot on the line for them.

A lot of people don’t know a lot about the airline industry and they don’t realize that it's okay if they hit the wrong button. It's really easy to start over. But, sometimes they would be literally shaking, when they reach out to hit the button. We want to make sure that they have a good comfort level. We want to make sure they have the best experience they could possibly have. And, the faster we can deliver products to them, that make that experience real for them, the better.

By offering these types of products to the customers, you give them the best of both worlds. You give them a fast path to check in. You give them a fast path to book. But, you can also give the less-experienced customer an easy-to-understand path to do what they need as well.

Bell: One thing that we've found to be very beneficial with HP Quality Center is that it shows the development organization that this isn't just a QA tool that a QA team uses. What we've been able to do, by bringing the requirements piece into it and by bringing the defects and other parts of it together, is bring the whole team on board to using a common tool.

In the past, a lot of people have always thought of Quality Center as a tool that the QA people use in the corner and nobody else needs to be aware of. Now, we have our business analysts, project managers, and developers, as well as the QA team and even managers on it, because each person can get a different view of different information.

From the Dashboard, your managers can look at trends and at the overall development lifecycle coming through. Your project managers can be very involved in pulling the number of defects and seeing which ones are still outstanding and how critical they are. The developers can be involved by entering information on defects when those issues have been resolved.

We've found that Quality Center is actually a tool that has drawn together all of the teams. They're all using a common interface, and they all start to recognize the importance of tying all of this together, so that everyone can get a view as to what's going on throughout the whole lifecycle.

Moses: We've realized the importance of automating, and we've realized the importance of having multiple groups using the same tool.

It's not just a tool. There are people there too. There are processes. There are concepts you're going to have to get in your head to get this to work, but you have to be willing to buy in by dedicating the people resources to building the test scripts. Then, you're not done. You've got to maintain them. That's where most people fall short, and that's where we fell short for quite some time.

Once we were finally able to dedicate the people to the maintenance of these scripts, to keep them active and running, that's where we got a win. If you look at a website these days, it's following one of two models. You either have a release schedule, which makes for a more static site, or you have a highly dynamic site that's always changing and always rolling out improvements.

We fit into that "Speed Wins" model: we get the product out to the customers and improve the experience as often as possible. So, we're a highly dynamic site. As many as 20 percent of all of our automated test scripts will break every week. That's a lot of maintenance, even though we're using a lot of reusable code. You have to have those resources dedicated to keep that going.
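One common way to keep that maintenance burden manageable is to centralize the things that break, so a weekly site change is a one-line fix rather than dozens of script edits. The sketch below is purely illustrative (Delta's actual scripts ran in Quality Center with Quick Test Pro, not Python; the page, locators, and `check_in` step here are hypothetical):

```python
# Illustrative sketch only: centralize UI locators so that when the site
# changes, the fix lands in one shared table instead of every script.

# Shared locator table: every script looks elements up through it.
LOCATORS = {
    "check_in_button": "btnCheckIn",
    "confirmation_field": "txtConfirmation",
}

def find(page, element):
    """Resolve a logical element name to the element on a simulated page."""
    return page.get(LOCATORS[element])

def check_in(page, confirmation_code):
    """A reusable step that many test scripts can share."""
    if find(page, "confirmation_field") is None:
        raise AssertionError("confirmation field not found")
    return f"checked in with {confirmation_code}"

# Simulated page: element id -> element object.
page = {"btnCheckIn": object(), "txtConfirmation": object()}
print(check_in(page, "ABC123"))
```

When the kiosk UI renames a field, only the `LOCATORS` entry changes, and every script that shares the `check_in` step keeps working.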

Bell: One thing that we've been able to do with HP Quality Center is connect it with Quick Test Pro, and we do have Quality Center 10, as well as Quick Test Pro 10. We've been able to build our automation and store those in the Test Plan tab of Quality Center.

It's very nice that Quality Center has it all tied into one unit. So, as we go through our processes, we're able to go from tab to tab and we know that all of that information is interconnected. We can ultimately trace a defect back to a specific cycle or a specific test case, all the way back to our requirement. So, the tool is very helpful in keeping all of the information in one area, while still maintaining the consistent process.

This has really been beneficial for us when we go into our test labs and build our test sets. We're able to take all of these automated pieces and combine them into a test set. What this has allowed us to do is run all of our automation as one test set. We've been able to run those on a remote box. It's taken our regression test time from one person for five days down to zero people and approximately an hour and 45 minutes.

So, that's a unique way that we've used Quality Center to help manage that and to reduce our testing times by over 50 percent.
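The idea behind that unattended run can be sketched in a few lines. This is a hedged illustration of the pattern, not the actual Quality Center/Quick Test Pro setup: every automated case is queued into one batch and executed without anyone watching, which is what collapses days of manual regression into under two hours.

```python
# Sketch of running a whole regression suite as one unattended test set.
from concurrent.futures import ThreadPoolExecutor

def run_case(name):
    # Stand-in for launching one automated script on the remote box.
    return name, "pass"

# The "test set": every automated case combined into one batch.
test_set = [f"regression_case_{i:03d}" for i in range(1, 21)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_case, test_set))

failures = [name for name, status in results.items() if status != "pass"]
print(f"{len(results)} cases run, {len(failures)} failures")
```

The batch runs to completion on its own and reports a single summary, so "zero people" are needed while it executes.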

I look back to metrics we pulled for 2008. We were doing fewer than 70 projects. By 2009, after we had fully integrated Quality Center, we did over 129 projects. That also included a lot of extra work, which you may have heard about us doing related to a merger.

Moses: The one thing I really like about the HP Quality Center suite especially is that your entire software development cycle can live within that tool. Whenever you're using different tools to do different things, it becomes a little bit more difficult to get the data from one point to another. It becomes a little bit more difficult to pull reports and figure out where you can improve.

Data in one place

What you really want to do is get all your data in one place, and Quality Center allows you to do that. We put our requirements in at the beginning. By having those in the system, we can then map to them with our test cases, after we build those in the testing phase.

Not only do we have the QA engineers working in Quality Center, we also have the business analysts working in it when they're doing the requirements. That also helps the two groups work together a bit more closely.


McKesson shows bringing testing tools on the road improves speed to market and customer satisfaction

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This customer case-study focuses on McKesson Corp., a provider of certified healthcare information technology, including electronic health records, medical billing, and claims management software. McKesson is a user of HP’s project-based performance testing products used to make sure that applications perform in the field as intended throughout their lifecycle.

To learn more about McKesson’s innovative use of quality assurance software, we interview Todd Eaton, Director of Application Lifecycle Management Tools in the CTO’s office at McKesson. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Eaton: It's one thing to test within McKesson. It's another thing to test out at the customer site, and that's a main driver of this new innovation we're partnering with HP on.

When we build an application and sell that to our customers, they can take that application, bring it into their own ecosystem, into their own data center and install it onto their own hardware.

Controlled testing

The testing that we do in our labs is a little more controlled. We have access to HP and other vendors with their state-of-the-art equipment. We come up with our own set of standards, but when our applications go out and get put into those hospitals, we want to ensure that they run at the same speed and performance level at the customer's site that we experience in our controlled environment.

We want to make sure that our solutions get out there as fast as possible, so that we can help those providers and those healthcare entities in giving the best patient care that they can. So, being able to test on their equipment is very important for us.

Just knowing how many different healthcare providers there are out there, you can imagine all the different hardware platforms, different infrastructures, and different infrastructure items they may have in their data centers.

After further investigation, it became apparent to us that we weren’t able to replicate all those different environments in our data center. It’s just too big of a task.

The next logical thing to do was to take the testing capabilities that we had and bring them out on the road. We have these different services teams that go out to install software. We could go along with them, bring the powerful HP tools that we use into those data centers, do the exact same testing that we do internally, and make sure that our applications were running as expected in their environments.

Another very important thing is using their data. The hospitals themselves will have copies of their production data sets that they keep control of. There are strict regulations. That kind of data cannot leave their premises. Being able to test using the large amount of data or the large volume of data that they will have onsite is very crucial to testing our applications.

The tool that we use primarily within McKesson is Performance Center, and Performance Center is an enterprise-based application. It’s usually kept where we have multiple controllers, and we have multiple groups using those, but it resides within our network.

On the road

So, the biggest hurdle was how to take that powerful tool and bring it out to these sites. We went back to HP and said, "Here’s our challenge. This is what we’ve got. We don’t really see anything where you have an offering in that space. What can you do for us?"

Currently, we have two engagements going on simultaneously with two different hospitals, testing two different groups of applications. I have one site that’s using it for 26 different applications and another that’s using it for five. We’ve got two teams going out there, one from my group and one from one of the internal R&D groups, assisting the customer and testing the applications on their equipment.

We have been able to reduce performance defects dramatically. We’re talking something like 40-50 percent right off the bat. Some of the timings we had experienced internally seemed to be fine, well within SLAs. But as soon as we got out to a site and onto different hardware configurations, it took some application tuning to get them back down. We were finding 90 percent improvements with the help of continual testing and performance tweaks.

Items like that are just so powerful, when you bring that out to the various customers and can say, "If you engage us, and we can do this testing for you, we can make sure that those applications will run the way that you want them to."


Monday, June 14, 2010

Top reasons and paybacks for adopting cloud computing sooner rather than later

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a podcast panel discussion on identifying the top reasons and paybacks for adopting cloud computing.

Like any other big change affecting business and IT, if cloud, in its many forms, gains traction, then adopters will require a lot of rationales, incentives, and measurable returns to keep progressing successfully. But, just as the definition of cloud computing itself can elicit myriad responses, the same is true for why an organization should encourage cloud computing.

The major paybacks are not clearly agreed upon, for sure. Are the paybacks purely in economic terms? Is cloud a route to IT efficiency primarily? Are the business agility benefits paramount? Or, does cloud transform business and markets in ways not yet fully understood?

We'll seek a list of the top reasons why exploiting cloud computing models makes sense, and why at least experimenting with cloud should be done sooner rather than later. We have assembled a panel of cloud experts to put some serious wood behind the arrow leading to the cloud.

Please join me now in welcoming Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications including The Definitive Guide to Identity Management and a new book, The Concise Guide to Cloud Computing; Jim Reavis, executive director of the Cloud Security Alliance (CSA) and president of Reavis Consulting Group, and Dave Linthicum, Chief Technology Officer of Bick Group and also a prolific cloud blogger and author. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Reed: When we go into all of this discussion around what is the benefit [to cloud], we need to do our standard risk analysis. There’s nothing too much that's new here, but what we do see is that when you get to the cloud and you're doing that assessment, the [payoffs] come down to agility.

Agility, in this sense, has the dimensions of speed at scale. For businesses, that can be quite compelling in terms of economic return and business agility, which is another variation on the theme. But, we gain this through the attributes we ascribe to cloud -- things like instant on/off, huge scale, per-use billing, all the things we tried to achieve previously but finally seem to be able to get with a cloud-computing architectural model.

If we're going to do the cost-benefit analysis, it does come down to the fact that, through that per-use billing, we're able to do this in a much more fine-grain manner and then compare to the risks that we are going to encounter as a result of using this type of environment. Again, that's regardless of whether it’s public or private. The risks may go down, if it’s a private environment.

Factoring all those things in together, there's not too much of a new model in how we try to achieve this justification and gain those benefits.

Linthicum: This notion of business agility is really where the money is. It's the ability to scale up and scale down, the ability to allocate compute resources around business opportunities, and the ability to align the business to new markets quickly and efficiently, without doing waves and waves of software acquisitions, setups, installs, and all the risks around doing that. That's really where the core benefit is.

If you look at that and you look at the strategic value of agility within your enterprise, it’s always different. In other words, your value of agility is going to vary greatly between a high tech company, a finance company, and a manufacturing company. You can come up with the business benefit and the reason for moving into cloud computing, and people have a tendency not to think that way.

Innate risks

But you have to weigh that benefit in line with the innate risks in moving to these platforms. Whether you are moving from on-premises to off-premises, on-premises to cloud, or traditional on-premises to private cloud computing, there’s always risk involved in terms of how you do security, governance, latency, and those things.

Once you factor those things in and you understand what the value drivers are in both OPEX and CAPEX cost and the trade-offs there, as well as business agility, and weigh in the risk, then you have your cost-benefit analysis equation, and it comes down to a business decision. Nine times out of ten, the cloud computing provider is going to provide a more strategic IT value than traditional computing platforms.

Reavis: When you think about the economics, what’s the core of economics? It's supply and demand. Cloud gives you that ability to more efficiently serve your customers. It becomes a customer-service issue, where you can provide a supply of whatever your service is that really fits with their demand.

Ten years ago, in the Internet dot-com days, I started a site that was a minor success. It was called Securityportal.com. You all remember the "Slashdot effect," where a story would get posted on Slashdot and it would basically take your business out. You would have an outage, because so much traffic would come your way.

We would, on the one hand, love those sorts of things, and we would live in fear of when that would happen, when we would get recognition, because we didn’t have cloud-based models for servicing our customers. So, when good things would happen, it would sometimes be a bad thing for us.

I had a chance to spend a lot of time with an online gaming company, and the way they've been able to scale up would only be possible in the cloud. Their business would not have been able to exist in the earlier era of the Internet. It’s just not possible.

So, yeah, it provides us this whole new platform. I've maintained all along that we're not just going to migrate IT into the cloud, but we're going to reinvent new businesses, new business processes, and new ways of having an intermediary relationship with other suppliers and our customers as well. So it’s going to be very, very transformational.

Reed: At HP, when we talk to customers, and even when we evaluate internally, we talk about business outcomes being core to how IT and business align. Whether they're small companies or large companies, it's about providing services that support the business outcomes you ultimately want to deliver.

In business terms, that might be processing more loan requests and financial transactions. If that’s the measure people are using for what the business outcomes need to be, then IT can align with it and become the service provider for that capability.

We've talked to a lot of customers, particularly in the financial industry, for example, where IT wasn’t measured in how they cut costs or how much staff they had. They were measured in incremental improvements on how many advances could be made in delivering more business capability.

In that example, one particular business metric was, "We can process more loans in a day, when necessary." The way they achieved that was by re-architecting things in a more cloud or service-centric way, wherein they could essentially ramp up, on what they called a private cloud, the ability to process things much more quickly.

Now, many in IT realize -- perhaps not enough, but we're seeing the change -- that they need to make this move toward the service-oriented architecture (SOA) approach and delivery, so that they become experts in brokering the right solution to deliver the most significant business outcomes.

That becomes the latency that delays the business process changes that need to occur within the enterprise.



The source of those services is less about how much hardware and software you need to buy and integrate and all that sort of thing, and more about the most economical and secure way that they can deliver the majority of desired outcomes. You don’t just want to build one service to provide a capability. You want to build an environment and an architecture that achieves the bulk of the desired outcomes.

Linthicum: Cloud computing will provide us with some additional capabilities. It's not necessarily nirvana, but you can get at compute, and you can even get at some pretty big services. For example, the Prediction API that Google just announced at Google I/O is an amazing piece of data-mining capability that you can get for free, for now.

The ability to tie that into your existing processes, and perhaps make some predictions for inventory control, means you could save potentially a million dollars a month supporting just-in-time inventory processes within your enterprise. Those sorts of things really need to come into the mix in order to provide the additional value.
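To make the inventory example concrete, here is a hypothetical sketch. It does not call the Prediction API itself; it only shows how a demand forecast returned by such a cloud service could feed a just-in-time reorder decision (the function name, quantities, and safety-stock policy are all illustrative assumptions):

```python
# Hypothetical just-in-time reorder decision driven by a demand forecast.
def reorder_quantity(on_hand, predicted_demand, safety_stock):
    """Order only what the forecast says is needed, plus a safety buffer."""
    needed = predicted_demand + safety_stock - on_hand
    return max(needed, 0)

# Forecast says demand will exceed stock: order just the shortfall.
print(reorder_quantity(on_hand=120, predicted_demand=150, safety_stock=20))
# Stock already covers the forecast: order nothing and carry less inventory.
print(reorder_quantity(on_hand=300, predicted_demand=150, safety_stock=20))
```

The savings come from the second case: a good forecast lets you skip orders you would otherwise place defensively, shrinking the capital tied up in inventory.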

Sometimes we can drive processes out of the cloud, but I think processes are really going to be driven on-premises and they are going to include cloud resources. The ability to on-board those cloud resources is needed to support the changes in the processes and is really going to be the value of cloud computing.

That's the area that's probably the most exciting. I just came back from Gluecon in Denver. That is, in a sense, a cloud developers’ conference, and they're all talking about application programming interfaces (APIs) and building the next infrastructure.

When those things come online and become available, and we don’t have to build them in-house, we can leverage them on a "pay per drink" basis through some kind of provider, building them into our processes. We'll perhaps have thousands of APIs that exist all over the place, and perhaps not even local data within these APIs.

That’s where the value of cloud computing is going to appear, and we haven’t seen anything yet. There are huge amounts of value being built right now.



They just produce behavior, and we bring them together to form these core business processes. More importantly, we bring them together to recreate these core business processes around new needs of the business.

Reed: I think the incentives, the risks, and all those things with cloud computing change, dependent on the type of business we're looking at.

Certainly, when we talk to smaller organizations and mid-sized organizations as well, they're looking for the edge that they can gain in terms of cost and support and, in most cases, more security. In this case, they look for broader back-office solutions than perhaps some of the larger organizations, things such as email, account management, HR, and so forth, as well as front-end stuff, basic web hosting and more advanced versions of that.

We've implemented things like Microsoft Business Productivity Online Suite (BPOS) for many customers, especially in the mid range. They do find better support, better uptime, better cost controls, and, to Jim’s point, more security than they are able to provide for themselves.

When we talk to larger organizations, some are looking for this too. We know that even in the financial industry, which you might consider one of the most security-paranoid environments outside of the three-letter agencies, they find that kind of thing appealing as well. Some of those have actually gone to Salesforce.com for some of their services.

But, they're generally more concerned with the security side, and they often find specific capabilities more appealing in a service model, such as data processing, data analysis, data retrieval, functional analysis, and things like that. Mashups, or the service-oriented model more broadly, are definitely more popular with the larger organizations we talk to.

Linthicum: Moving into cloud is going to put people in a very healthy, paranoid state. In other words, they're going to think twice about what information goes out there, how that information is secured and modeled, what APIs they're leveraging, and what the service level agreements (SLAs) are. They're going to consider encryption and identity management systems in ways they haven't in the past.

In most of the instances I'm seeing deploy cloud computing systems, they are as secure, if not more secure, than the existing on-premises systems. I would trust those cloud computing systems more than I would the existing on-premises systems.

That comes with some work, some discipline, some governance, some security, and a lot of things that we just haven't thought about much, or haven't thought about enough, with traditional on-premises systems. So, that's going to be a side benefit. In two years, we're going to have better security, and a better understanding of security, because of cloud.

Reed: There will be businesses that are willing and able to manage cloud-type environments to their benefit. But, eventually, the gaps become so small, and the availability of these services online becomes so ubiquitous, that I'm not sure how long this window lasts.

I don’t want to say that, in a few years, everybody will be able to deliver the same thing just as quickly. But for the moment, I think there are a few forward-thinking organizations that will be able to achieve that to great success.

Reavis: The organizations that are developing what they think is state-of-the-art -- but isn't cloud -- are going to be struggling, because they'll miss out on all of the neat, interesting new developments. It's hard to even get your head around all of the implications of compute-as-a-utility and all the innovation we're going to see, but we know it's going to happen on that platform.

If you think of this as the new development platform, then yeah, it’s going to be a real competitive issue. There are going to be a lot of new capabilities that will only be accessible in this platform, and they're going to come a lot quicker.

Five years from now

So, in terms of the first movers and the environment now, it’s going to look very different. Anybody who carved out some space right now and some lead in the market in cloud shouldn't feel too comfortable about their position, because there are companies we don't even know about at this point, that are going to be fairly pervasive and have a lot to say about IT five years from now.


Thursday, June 10, 2010

HP BTO executive on how cloud service automation aids visibility and control over total management lifecycles

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

The latest BriefingsDirect executive interview centers on gaining visibility and control into the IT services management lifecycle while progressing toward cloud computing. We dig into the Cloud Service Automation (CSA) and lifecycle management market and offerings with Mark Shoemaker, Executive Program Manager, BTO Software for Cloud at HP.

As cloud computing in its many forms gains traction, higher levels of management complexity are inevitable for large enterprises, managed service providers (MSPs), and small-to-medium sized businesses (SMBs). Gaining and keeping control becomes even more critical for all these organizations, as applications are virtualized and as services and data sourcing options proliferate, both inside and outside of enterprise boundaries.

More than just retaining visibility, however, IT departments and business leaders need the means to fine-tune and govern services use, business processes, and the participants accessing them across the entire services ecosystem. The problem is how to move beyond traditional manual management methods, while being inclusive of legacy systems to automate, standardize, and control the way services are used.

We're here with HP's Shoemaker to examine an expanding set of CSA products, services, and methods designed to help enterprises exploit cloud and services values, while reducing risks and working toward total management of all systems and services. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Shoemaker: When we talk about management, it starts with visibility and control. You have to be able to see everything. Whether it’s physical or virtual or in a cloud, you have to be able to see it and, at some point, you have to be able to control its behavior to really benefit.

Once you marry that with standards and automation, you start reaping the benefits of what cloud and virtualization promise us. To get to the new levels of management, we’ve got to do a better job.

Up until a few years ago, everything in the data center and infrastructure had a physical home, for the most part. Then, virtualization came along. While we still have all the physical elements, we now have virtual and cloud strata that require the same level of diligence in management and monitoring, but that move around.

Where we're used to having things connected to physical switches, servers, and storage, those things are actually virtualized and moved into the cloud or virtualization layer, which makes the services more critical to manage and monitor.

All the physical things

Cloud doesn’t get rid of all the physical things that still sit in data centers and are plugged in and run. It actually runs on top of that. It actually adds a layer, and companies want to be able to manage the public and private side of that, as well as the physical and virtual. It just improves productivity and gets better utilization out of the whole infrastructure footprint.

I don’t know many IT shops that have added people and resources to keep up with the amount of technology they have deployed over the last few years. Now, we're making that more complex.

They aren't going to get more heads. There has to be a system to manage it. The businesses are going to be more productive, the people are going to be happier, and the services are going to run better.

We're looking at a more holistic and integrated approach in the way we manage. A lot of the things we're bringing to bear -- CSA, for example -- are built on years of expertise around managing infrastructures, because it’s the same task and functions.

Ensuring the service level

We’ve expanded these [products and services] to take into account the public cloud ... . We've been able to point these same tools back into a public cloud to see what’s going on and make sure you are getting what you are paying for and what the business expects.

CSA products and services are the product of several years of actually delivering cloud. Some of the largest cloud installations out there run on HP software right now. We listened to what our customers told us and took a hard look at the reference architecture we created over those years, which encompasses all the different elements you could bring to bear in a cloud, and started looking at how to bring that to market so customers could gain benefit from it more quickly.

We want to be able to come in, understand the need, plug in the solution, and get the customer up and running and managing the cloud or virtualization inside that cloud as quickly as possible, so they can focus on the business value of the application.

The great thing is that we’ve got the experience. We’ve got the expertise. We’ve got the portfolio. And, we’ve got the ability to manage all kinds of clouds, whether, as I said, it’s infrastructure as a service (IaaS) or platform as a service (PaaS) that your software's developed on, or even a hybrid solution, where you are using a private cloud along with a public cloud that actually bursts up, if you don’t want to outlay capital to buy new hardware.

We have the ability, at this point, to tap into Amazon’s cloud and actually let you extend your data center to provide additional capacity and then pull it back in on a per-use basis, connected with the rest of your infrastructure that we manage today.
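
The bursting pattern described here -- fill the private data center first, rent public capacity for the overflow on a per-use basis, then pull it back in -- can be sketched roughly as follows. This is an illustrative sketch only; `LocalPool` and `PublicCloud` are invented stand-ins, not an HP or Amazon API.

```python
# Hypothetical sketch of the "cloud bursting" pattern: local capacity
# is consumed first, and only the overflow is sent to rented
# public-cloud instances, which can later be released.

class LocalPool:
    """Fixed-capacity private data center."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.in_use = 0

    def try_acquire(self):
        if self.in_use < self.capacity:
            self.in_use += 1
            return True
        return False

class PublicCloud:
    """Stand-in for an elastic provider billed per use."""
    def __init__(self):
        self.rented = 0

    def acquire(self):
        self.rented += 1   # provision one instance on demand

    def release_all(self):
        self.rented = 0    # pull capacity back in when load drops

def place_workload(jobs, local, cloud):
    """Fill the private pool first; burst the overflow to the cloud."""
    burst = 0
    for _ in range(jobs):
        if not local.try_acquire():
            cloud.acquire()
            burst += 1
    return burst

local = LocalPool(capacity=8)
cloud = PublicCloud()
overflow = place_workload(jobs=12, local=local, cloud=cloud)
print(overflow)  # 4 jobs burst to rented capacity
```

The design point is the same one Shoemaker makes: the physical footprint stays, and the cloud layer adds temporary capacity on top of it rather than replacing it.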

A lot of the customers we talk to today are already engaged in a virtualization play, bringing virtualization into their data centers and layering it on top of the physical infrastructure.



We announced CSA on May 11, and we're really excited about what it brings to our customers ..., industry-leading products together with solutions that allow you to control, build, and manage a cloud.

We’ve taken the core elements. If you think about a cloud and all the different pieces, there is that engine in the middle, resource management, system management, and provisioning. All those things that make up the central pieces are what we're starting with in CSA.

Then, depending on what the customer needs, we bolt on everything around that. We can even use the customers’ investments in their own third-party applications, if necessary and if desired.

As the landscape changes, we're looking at how to change our applications as well. We have a very large footprint in the software-as-a-service (SaaS) arena right now where we actually provide a lot of our applications for management, monitoring, development, and test as SaaS. So, this becomes more prevalent as public cloud takes off.

Also, we're looking at what’s going to be important next: What are the technologies and the services that our customers are going to need to be successful in this new paradigm?
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

You may also be interested in:

HP service aims to lower cost and risk by tackling vulnerabilities early in 'devops' cycle

Security breaches and the cost of repairing and patching enterprise applications hang like a cloud over every company doing business today. HP is taking direct aim at that problem with the release of a security service that aims to prevent vulnerabilities and to bake security and reliability in at the earliest stages of application design and architecture.

Part of HP's Secure Advantage, the Comprehensive Applications Threat Analysis (CATA) service provides architectural and design guidance alongside recommendations for security controls and best practices. By addressing and eliminating application vulnerabilities as early in the lifecycle as possible, companies stand to gain incredible returns on investment (ROI) and drastically lower total cost of ownership (TCO) across the "devOps" process, according to HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

"Customers are under increasing pressure from threats that exploit security weaknesses that were either missed or insufficiently addressed during the early lifecycle phases," said Chris Whitener, chief security strategist of Secure Advantage. Whitener added that he believes HP is the first company to come to market with such a service.

HP has been using this service internally for more than six years and, according to Whitener, has seen a return of 5 to 20 times the cost of implementation. And this, he says, is just on things that can be measured. The service has freed up a lot of schedule time formerly spent finding and fixing application vulnerabilities.

Two problems

Many other risk-analysis programs come later in the development process, meaning that developers often miss vulnerabilities at the earliest stages of design. That brings up two problems, according to John Diamant, HP's Secure Product Development strategist: the risks associated with the vulnerabilities and the cost of patching the software.

"By addressing these vulnerabilities early in the process," Diamant said, "we're able to reduce the risk and eliminate the cost of repair."

The new service offers two main thrusts for increased security:
  • A gap analysis to examine applications and identify often-missed technical security requirements imposed by laws, regulations, or best practices.
  • An architectural threat analysis, which identifies changes in application architecture to reduce the risk of latent security defects. This also eliminates or lowers costs from security scans, penetration tests, and other vulnerability investigations.
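
As a rough illustration of what the first thrust, the gap analysis, might look like in miniature: compare the controls that a regulation or best practice imposes against those an application actually implements, and report what is missing. The control names and sources below are invented examples, not HP's CATA checklist.

```python
# Illustrative sketch of a security-requirements gap analysis: required
# controls per source (regulation or best practice) are checked against
# the set the application implements. All names here are made up.

required = {
    "PCI-DSS": {"encrypt-at-rest", "encrypt-in-transit", "audit-logging"},
    "best-practice": {"input-validation", "least-privilege"},
}

implemented = {"encrypt-in-transit", "input-validation"}

def gap_analysis(required, implemented):
    """Return the often-missed requirements, grouped by their source."""
    return {
        source: sorted(controls - implemented)
        for source, controls in required.items()
        if controls - implemented
    }

gaps = gap_analysis(required, implemented)
for source, missing in sorted(gaps.items()):
    print(f"{source}: missing {', '.join(missing)}")
```

Running a check like this at design time, rather than scanning for the same gaps after deployment, is the cost-avoidance argument the article makes.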
While lowering development costs, using a security service early in the lifecycle can also lower the threat of security breaches, which can cost millions of dollars in fines and penalties, as well as the fallout from a loss of customer confidence.

Security and proper applications development, of course, come into particular focus when cloud computing models and virtualization are employed, and where an application is expected to scale dramatically and dynamically.

Although HP plans to develop a training program sometime in the future, right now, this is offered as a service using HP personnel who have been schooled in the processes and who have been using it inside HP for years. For more information, go to http://h10134.www1.hp.com/services/applications-security-analysis/.


Wednesday, June 9, 2010

Adopting cloud-calibre security now pays dividends across all IT security concerns

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Today's headlines point to more sophisticated, large-scale, and malicious online activities. For some folks, therefore, the consensus seems to be that the cloud computing model and vision are not up to the task when it comes to security.

But at the RSA Conference earlier this year, a panel came together to talk about security and cloud computing, to examine the intersection of cloud computing, security, Internet services, and Internet-based security practices to uncover differences between perceptions and reality.

The result is a special sponsored BriefingsDirect podcast and video presentation that takes stock of cloud-focused security -- not just as a risk, but also as an amelioration of risk across all aspects of IT.

Join panelists Chris Hoff, Director of Cloud and Virtualization Solutions at Cisco Systems; Jeremiah Grossman, the founder and Chief Technology Officer at WhiteHat Security, and Andy Ellis, the Chief Security Architect at Akamai Technologies. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Grossman: An interesting paradigm shift is happening. When you look at website attacks, things haven't changed much. An application that exists in the enterprise is the same application that exists in the cloud. For us, when we are attacking websites and assessing their security, it doesn't really matter what infrastructure it's actually on. We break into it just the same as everything else.

Our job, in the website vulnerability management business, is to find those vulnerabilities ahead of time and help our customers fix those issues before they become larger problems. And if you look at any security report on the Web right now, as far as security goes, it's a web security world.

What's different [with cloud] among our customer base is that they can't run to their comfort zone. They can't run to secure their enterprise with firewalls, intrusion detection systems, and encryption. They have to focus on the application. That's what's really different about cloud, when it comes to web security. You have to focus on the apps, because you have nothing else to go on.

Understand your business

Ellis: The first thing you have to do is to understand your own business. That's often the first mistake that security practitioners may make. They try to apply a common model of security thinking to very unique businesses. Even in one industry, everybody has a slightly different business model.

You have to understand what risks are acceptable to your business. Every business is in the practice of taking risk. That's how you make money. If you don't take any risk, you're not going to make money. So, understand that first. What are the risks that are acceptable to the business, and what are the ones that are unacceptable?

Security often lives in that gray area in between. How do we take risks that are neither fully acceptable nor fully unacceptable, and how do we manage them in a fashion to make them one or the other? If they're not acceptable, we don't take them, and if they are acceptable, we do. Hopefully we find a way to increase our revenue stream by taking those risks.

... There's a huge gap in what people think is secure and what people are doing today in trusting in the security in the cloud. When we look at our customer base, over 90 of the top 100 retailers on the Internet are using our cloud-based solutions to accelerate their applications -- and what's more mission-critical than expecting money from your customers?

At Akamai, we see that where people are saying, "The cloud is not secure, we can't trust the cloud." At the same time, business decision makers are evaluating the risk and moving forward in the cloud.

A lot of that is working with their vendors to understand their security practices and comparing that to what they would do themselves. Sometimes, there are shifts. Cloud gives you different capabilities that you might be able to take advantage of, once you're out in the cloud.

Hoff: I like to say that if your security stinks before you move to the cloud, you will be pleasantly unsurprised by change, because it’s not going to get any better -- or probably not even necessarily any worse -- when you move to cloud computing.

What we're learning today is that if we secure our information and applications properly and the infrastructure is able to deal with the dynamism, you will, by default, start to see derivative impacts and benefits on security, because our models will change. At least, our thinking about security models will change.

We in the security industry in some way try to hold the cloud providers to a higher standard. I'm not sure that the consumer, who actually uses these services, sees much of a difference in terms of what they expect, other than it should be up, it should be available, and it should be just as secure as any other Internet-based service they use.

Those cloud providers -- cloud service and cloud computing providers -- are in the business of making sure that they can offer you really robust delivery. At this time, they focus there. We have a challenge to take everything we have done previously, in all these other different models, still do that, and deal with some of the implementation and operational elements that cloud computing, elasticity, dynamism, and all these fantastic capabilities bring.

So we get wrapped around the axle many times in discussions about cloud, where a lot of what we are talking about still needs to be taken care of from an infrastructure and application standpoint.

Ellis: That’s the challenge for people who are moving out to the cloud. That area may be in the purview of the provider. While they may trust the provider, and the provider has done the best they can do in that arena, when they still see risks, they can no longer say, "I'll just put in a firewall. I'll just do this." Now, they have to tackle a really sticky wicket. Do you have a safe application wherever it lives?

That’s where people run into a challenge: "It’s cloud. Let me make the provider responsible." But, at the end of the day, the overall risk structure is still the responsibility of the business: ultimately, the data owner, the business that is actually using the compute cycles.

It's not yours

Grossman: To piggyback on what Andy said, something has been lost. When you host an application internally, you can build it, you can deploy it, and you can test it. Now, all of a sudden, you've brought in a cloud provider, on somebody else’s infrastructure, and you have to get permission to test it. It’s not yours anymore.

Actually, one of the big things [to attend to] out there is a right to test. You have no right to test these infrastructure systems. If you do so without permission, it's illegal. So, you have lost visibility. You've lost technical visibility and security of the application.

When the cloud provider changes the app, it changes the risk profile of the application, too, but you don’t know when that happens and you don’t know what the end result is. There's a disconnect between the consumer, the business, and the cloud computing provider or whatever the system is.

Hoff: Cloud computing has become a fantastic forcing function, because of what it's done to the business and to IT. We talked about paradigm shifts and how important this is in the overall advancement of computing.

The reality is that cloud causes people to say, "If the thing that’s most important to me is information and protecting that information, and applications are conduits to it, and the infrastructure allows it to flow, then maybe what I ought to do is take a big picture view of this. I ought to focus on protecting my information, content, and data, which is now even more interestingly a mixture of traditional data, but also voice and video and mixed media applications, social networks, and mashups."

Fantastic interconnectivity

The complexity comes about because, with collaboration, we have enabled all sorts of fantastic interconnectivity between what were previously disparate little mini-islands, with mini-perimeters that we could secure relatively well.

The application security and the information security, tied in and tightly coupled with an awareness of the infrastructure that powers it, even though it’s supposed to be abstracted in cloud computing, is really where people have a difficult time grasping the concepts between where we are today and what cloud computing offers them or doesn’t, and what that means for the security models.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Ellis: There's a great initiative going on right now called CloudAudit, which is aimed at helping people think through the security of a process and how you share controls between two disparate entities, so we can make those decisions at a higher level.

If I am trusting my cloud provider to provide some level of security, I should get some insight into what they're doing, so that I can make my decisions as a business unit. I can see the changes there, the changes I am taking advantage of, and how that fits my entire software development life cycle.

Cloud computing, depending on who you talk to, encompasses almost everything: your kitchen blender, any element that you happen to connect to your enterprise and your home life.



It’s still nascent. People are still changing their mindset to think through that whole architecture, but we're starting to see that more and more -- certainly within our customer base -- as people think, "I'm out in the cloud. How is that different? What can I take advantage of that’s there that wasn’t there in my enterprise? What are the things that aren’t there that I am used to that now I have to shift and adapt to that change?"

Hoff: What's interesting about cloud computing as a derivative set of activities that you might have focused on from a governance perspective, with outsourcing, or any sort of thing where you have essentially given over control of the operation and administration of your assets and applications, is that you can outsource responsibility, but not necessarily accountability. That's something we need to remember.

Think about the notion of risk and risk management. I was on a panel the other day and somebody said, "You can't say risk management, because everyone says risk management." But, that's actually the answer. If I understand what's different and what is the same about cloud computing or the cloud computing implementation I am looking at, then I can make decisions on whether or not that information, that application, that data, ought to be put in the hands of somebody else.

No one-size-fits-all

In some cases, it can't be, for lots of real, valid reasons. There's no one-size-fits-all for cloud. Those issues force people to think about what is the same and what is different in cloud computing.

Previously, you introduced the discussion about the CSA [Cloud Security Alliance]. The things we really worked on initially were 15 areas of concern, and they're now consolidated to 13. What's different? What's the same? How do I need to focus on this? How can I map my compliance efforts? How can I assess, even if there are technical elements that are different in cloud computing? How can I assess the operational and cultural impacts?

Awareness of break-ins

Grossman: What I've seen in the last couple of years is that what drives security awareness is break-ins. Whether the bad guys are nation-state-sponsored actors or organized criminals after credit card numbers, breaches happen. They're happening in record numbers, and the attackers are stealing everything they can get their hands on.

Fortunately or unfortunately, from a cloud computing standpoint, all the attacks are largely the same, whether one application is here or in the cloud. You attack it directly, and all the methodologies to attack a website are the same. You have things like cross-site scripting, SQL injection, cross-site request forgery. They are all the same. That’s one way to access the data that you are after.
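
To make one of those attack classes concrete, here is a minimal, hypothetical SQL-injection example: the same flaw and the same fix apply whether the application runs in the enterprise or in the cloud. The schema and data are invented for the illustration.

```python
# A minimal SQL-injection illustration using an in-memory SQLite table.
# The vulnerable version splices attacker input into the query string;
# the safe version passes it as a bound parameter.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, secret TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def lookup_unsafe(name):
    # Vulnerable: attacker-controlled input becomes part of the SQL.
    query = f"SELECT secret FROM users WHERE name = '{name}'"
    return db.execute(query).fetchall()

def lookup_safe(name):
    # Parameterized: the driver treats the input strictly as data.
    return db.execute(
        "SELECT secret FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "x' OR '1'='1"
print(lookup_unsafe(payload))  # leaks every row: [('s3cret',)]
print(lookup_safe(payload))    # matches nothing: []
```

This is Grossman's point in code form: the injection works identically against an enterprise-hosted database or a cloud-hosted one; only the hosting changed.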

The other way is to get on the other half of web security. That’s the browser. You infect a website, the user runs into it, and they get infected. You email them a link. They click something. You infect them that way. Once you get on to the host machine, the client side of the connection, then you can leverage those credentials and then get into the cloud, the back-end way, the right way, and no one sees you.

Breaches make headlines. Headlines make people nervous, whether it's businesses or consumers. When a business outsources things to the cloud or a SaaS provider, they still have this nervous reaction about security, because their customers have this nervous reaction about security. So they start asking about security. "What are you doing to protect my data?"

All of a sudden, if that cloud provider, that vendor, takes security seriously and can prove it, demonstrate it, and get the market to accept it, security becomes a differentiating factor. It becomes an enabler of the top line, rather than a cost on the bottom line.

Ellis: I like to look at security as being a business-enabler in three areas. The obvious one, we all think, is risk reduction. How can I reduce my risk with cloud-based security services? Are there ways which I can get out there and do things safer? I'm not necessarily going to change anything else about my business. That's great and that's our normal model.

Security can also be a revenue-enabler, and it can be a protection of revenue. Web application firewalls are a great example, as are fraud-mitigation services. There are a lot of services available through the cloud that can be used to protect your brand and your revenue against loss, but also help you grow revenue. As you just said, it's all about trust. People go back to brands that they trust, and security can be a key component of that.

It doesn't always have to be visible to the end user, but as you noted with the car industry, people build the perception around incidents. If you can be incident-free compared to your competition, that's a huge differentiator, as you go down into more and deeper activities that require deep trust with your end users.

A lot of what we try to do is build a wrapper in a sandbox around each customer to give them the same, consistent level of security. A big challenge in the enterprise model is that for every application that you stand up, you have to build that security stack from the ground up.

One advantage cloud does give you is that, if you are working with somebody who has thought about this, you can take advantage of practices they have already instituted. So, you get some level of commonality. Then, if a customer sees something and says, "You should improve this," that improvement can affect an entire customer base. Cloud has a benefit there to match some of the weaknesses it may have elsewhere.

Historically, in the enterprise model, we think about data in terms of being tied to a given application. That’s not really accurate. The data still moves around inside an enterprise. As Jeremiah noted, the weak point is often the browser. Compromise the client, and you get access to the data.

As people move to cloud, they start to change their risk thinking. Now, they think about the data and everywhere it lives and that gives them an opportunity to change their own risk model and think about how they're protecting the data and not just a specific application it used to live in.

As we noted earlier, a large fraction of the Internet retailers are using cloud for their most mission-critical things, their financial data, coming through every time somebody buys something.

If you are willing to trust that level of data to the cloud, but you have a knee-jerk reaction about an internal web conference among 12 people, with a presentation that frankly most people aren’t going to care about, and you say, "That’s too sensitive to be in the cloud," even though your revenue stream could be in the cloud, it shows that we sometimes think parochially about security.

Grossman: What's interesting about security spending versus infrastructure spending, or just general IT spending, is that security seems diametrically opposed to the business. We spend the most money on applications and our data, but the least on securing them. We spend the least on infrastructure relative to applications, but that's where most of our security dollars go. So the two seem diametrically opposed.

What cloud computing does, and the reason for this talk, is that it flattens the world. It abstracts the infrastructure below and forces us to realign with the business. That's what cloud will bring, in a good way. It's just that you have to do it commensurate with the business.

To view a full video of the panel discussion on cloud-based security, please go to the registration page.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. View the video. Sponsor: Akamai Technologies.
