Friday, June 17, 2011

Discover Case Study: Holistic ALM helps Blue Cross and Blue Shield of Florida break down application inefficiencies, redundancy

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Blue Cross and Blue Shield of Florida and how they’ve been able to improve their applications' performance -- and even change the culture of how they test, provide, and operate their applications.

Join Victor Miller, Senior Manager of Systems Management at Blue Cross and Blue Shield of Florida in Jacksonville, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Miller: The way we looked at applications was by their silos. It was a bunch of technology silos, each monitoring and managing its individual ecosystem. There was no real way of pulling the information together, and it didn’t represent what the customer was actually experiencing inside the applications.

One of the things we started looking at was that we have to focus on the customers, seeing exactly what they were doing in the application to bring the information back. We were looking at the performance of the end-user transactions, or what the end-users were doing inside the app, versus what the Oracle database is doing, for example.

When you start pulling that information together, it allows you to get full traceability of the performance of the entire application from a development, test, staging, performance testing, and then also production side. You can actually compare that information to understand exactly where you're at. Also, you're breaking down those technology silos, when you're doing that. You move more toward a proactive transactional monitoring perspective.

We're looking at how the users are using it and what they're doing inside the applications, like you said, instead of the technology around it. The technology can change. You can add more resources or remove resources, but really it's all up to the end-user, what they are doing in their performance of the apps.

Overcome hurdles

Blue Cross and Blue Shield of Florida is one of the 39 independent Blue Cross companies throughout the United States. We're based out of Florida, and we've been around since about 1944. We're an independent licensee of the Blue Cross Blue Shield Association. One of our main focuses is healthcare.

We do sell insurance, but we also have our retail environment, where we're bringing in more healthcare services. It’s really about the well-being of our Florida population. We do things to help Florida as a whole, to make everyone more healthy where possible.

We thought we were doing fine, until we actually started bringing the data together to understand exactly what was really going on. Our customers weren’t happy with the performance of their applications or the availability of their applications.

We started looking at the technology silos and bringing them together in one holistic perspective. We started seeing that, from an availability perspective, we weren’t looking very good. So, we had to figure out what we could do to resolve that. In doing that, we had to break down the technology silos, and really focus on the whole picture of the application, and not just the individual components of the applications.

Our previous directors reorganized our environment and brought in a systems management team. Its responsibility is to monitor and help manage the infrastructure from that perspective, centralize the tool suites, and understand exactly what capabilities we're going to use. We created a vision of what we wanted to do, and we've been driving that vision for several years to make sure that it stays on target and focused on solving this problem.

We were such early adopters that we actually chose best-in-breed. We had an agent-based monitoring environment, and we moved to agent-less. At the time, we adopted Mercury SiteScope. Then, we also brought in Mercury’s BAC and a lot of Topaz technologies with diagnostics and things like that. We had other capabilities like Bristol Technology’s TransactionVision.

Umbrella of products

HP purchased all of those companies and brought them into one umbrella of product suites. It allowed us to combine the best-of-breed. We bought technologies that didn’t overlap, could solve a problem, and integrated well with each other. It allowed us to get more traceability inside of these spaces, so we can get really good information about the performance and availability of those applications that we're focusing on.

One of the major things was that it was people, process, and technology that we were focused on in making this happen. On the people side, we moved our command center from our downtown office to our corporate headquarters, where all the admins are, so they can be closer to the command center. If there's a problem, the command center can contact them directly and they can go down there.

We instituted what I guess I’d like to refer to as "butts in the seat." I can't come up with a better name for it, but when a person is on call, they work in the command center. They do their regular operational work, but they're in the command center, so if there's an incident, they're there to resolve it.

With the agent-based technologies, we were monitoring thousands of measurement points. But you have to be very reactive, because you come in after the fact trying to figure out which one triggered. Moving to the agent-less technology is a different perspective on getting the data, but you’re focusing on the key areas inside those systems that you want to pay attention to, versus the everything model.
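The agent-less model Miller describes can be sketched as a remote poller that checks only a short list of key measurement points from the monitoring server, with nothing installed on the targets. This is a conceptual illustration, not Blue Cross's actual SiteScope configuration; the host names, ports, and tier names are all invented.

```python
import socket
import time

# Hypothetical list of key measurement points -- the focused model,
# as opposed to instrumenting every process on every host with an agent.
KEY_CHECKS = [
    {"name": "web tier", "host": "www.example.com", "port": 443},
    {"name": "app tier", "host": "app.example.com", "port": 8080},
    {"name": "db tier",  "host": "db.example.com",  "port": 1521},
]

def check_endpoint(host, port, timeout=3.0):
    """Agent-less reachability check: open a TCP connection from the
    monitoring server; no software runs on the target system."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True, time.monotonic() - start
    except OSError:
        return False, None

def run_checks(checks):
    """Poll every key measurement point and collect up/down plus latency."""
    return [
        {"name": c["name"], **dict(zip(("up", "latency"),
                                       check_endpoint(c["host"], c["port"])))}
        for c in checks
    ]
```

A real agent-less suite would layer protocol-aware checks (HTTP, SQL, JMX) on top of this, but the shape is the same: a small, curated list of checks instead of the "everything model."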

In doing that, our admins were challenged to be a little bit more specific as to what they wanted us to pay attention to from a monitoring perspective to give them visibility into the health of their systems and applications.

[Now] there is a feedback loop and the big thing around that is actually moving monitoring further back into the process.

We’ve found that if we fix something in development, it may cost a dollar. If we fix it in testing, it might cost $10. In staging, it may cost $1,000. It could be $10,000 or $100,000 when it’s in production, because that goes back through the entire lifecycle again, and more people are involved. So moving things further back in the lifecycle has been a very big benefit.
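The shift-left economics Miller describes can be made concrete with a little arithmetic. The dollar figures below are his illustrative orders of magnitude, not measured data, and the defect counts are invented for the example.

```python
# Illustrative cost-to-fix by lifecycle stage, per the rule of thumb
# in the discussion above; these are order-of-magnitude figures only.
COST_BY_STAGE = {
    "development": 1,
    "testing": 10,
    "staging": 1_000,
    "production": 100_000,  # upper end of the $10k-$100k range quoted
}

def savings_from_shifting_left(defects, found_in, fixed_in):
    """Cost avoided by catching `defects` defects in stage `fixed_in`
    instead of letting them surface in stage `found_in`."""
    return defects * (COST_BY_STAGE[found_in] - COST_BY_STAGE[fixed_in])

# e.g. catching 20 defects in testing instead of production:
# 20 * (100_000 - 10) = 1,999,800 in avoided cost.
```

The exact multipliers vary by organization, but the compounding effect is why moving monitoring "further back into the process" pays off.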

Also, it involved working with the development and testing staffs to understand that you can’t throw an application over the wall and say, "Monitor my app, because it’s in production." We may have no idea what makes up your application, or we might say that it’s monitored because we're monitoring the infrastructure around your application, but we may not be monitoring a specific component of the application.

Educating people

The challenge there is reeducating people and making sure that they understand that they have to develop their app with monitoring in mind. Then, we can make sure that we can actually give them visibility back into the application if there is a problem, so they can get to the root cause faster, if there's an incident.

We’ve created several different processes around this, and we focused on monitoring every single technology. We still monitor those from a siloed perspective, but then we also added a few transactional monitors on top of that inside those silos, for example, transaction scripts that run the same database query over and over again to get information out of there.
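A synthetic transaction monitor of the kind described, running the same query on a schedule and recording its latency, might look like this sketch. Here sqlite3 stands in for the production database, and the table, query, and threshold are invented for illustration.

```python
import sqlite3
import time

def timed_query(conn, sql, params=()):
    """Run one synthetic transaction and return its latency in seconds."""
    start = time.monotonic()
    conn.execute(sql, params).fetchall()
    return time.monotonic() - start

def probe(conn, sql, runs=5, slow_threshold=0.5):
    """Run the same query repeatedly, as a synthetic monitor would,
    and flag the probe if the average latency breaches the threshold."""
    latencies = [timed_query(conn, sql) for _ in range(runs)]
    avg = sum(latencies) / len(latencies)
    return {"avg_latency": avg, "alert": avg > slow_threshold}

# Stand-in database for the sketch; a real monitor would target the
# application's actual database over the network.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE claims (id INTEGER, status TEXT)")
conn.execute("INSERT INTO claims VALUES (1, 'open')")
result = probe(conn, "SELECT COUNT(*) FROM claims WHERE status = 'open'")
```

Because the query never changes, any drift in its latency reflects the health of the database tier itself, which is exactly what a transactional monitor inside a silo is after.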

At the same time, we had to make some changes, where we started leveraging the Universal Configuration Management Database (UCMDB) or Run-time Service Model to bring it up and build business services out of this data to show how all these things relate to each other. The UCMDB behind the scenes is one of the cornerstones of the technology. It brings all that silo-based information together to create a much better picture of the apps.
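Conceptually, what the UCMDB contributes here is a graph of CIs and their relationships, so a business service can be resolved to every component it rests on. A toy model, with invented CI names and a plain dictionary standing in for the federated database, might be:

```python
# Toy CI relationship model: each CI lists the CIs it depends on.
# Names are invented; a real UCMDB federates this data from many
# systems of record rather than storing it in one table.
DEPENDS_ON = {
    "claims-service":   ["claims-app", "member-portal-db"],
    "claims-app":       ["app-server-01", "oracle-db-01"],
    "member-portal-db": ["oracle-db-01"],
    "app-server-01":    [],
    "oracle-db-01":     [],
}

def resolve_service(ci, graph=DEPENDS_ON):
    """Walk the dependency graph to find every CI a business service
    ultimately rests on -- the holistic picture of the application."""
    seen = set()
    stack = [ci]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, []))
    return seen
```

With a graph like this, a database alert can immediately be mapped upward to the business services it affects, which is what replaces the silo-by-silo view.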

We don’t necessarily call it the system of record. We have multiple systems of record. It’s more like the federation adapter for all these records to pull the information together. It guides us into those systems of record to pull that information out.

About eight years ago, when we first started this, we had incident meetings where we had between 15 and 20 people going over 20-30 incidents per week. We had those every day of the week. On Friday, we would review all the ones from the first four days of the week. So, we were spending a lot of time doing that.

Out of those meetings, we came up with what I call "the monitor of the day." If we found something that was an incident that occurred in the infrastructure that was not caught by some type of monitoring technology, we would then have it monitored. We’d bring that back, and close that loop to make sure that it would never happen again.
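The "monitor of the day" loop is essentially a gap analysis: compare the components implicated in incidents against the components already under monitoring, and close each gap so the same incident can't slip through again. A minimal sketch, with invented component names:

```python
def monitoring_gaps(incidents, monitored):
    """Return the components that caused incidents but had no monitor;
    each one is a candidate for the next 'monitor of the day'."""
    implicated = {i["component"] for i in incidents}
    return sorted(implicated - set(monitored))

# Hypothetical week of incident records and current monitor coverage:
incidents = [
    {"id": 101, "component": "mq-broker"},
    {"id": 102, "component": "web-tier"},
]
monitored = {"web-tier", "oracle-db"}
# monitoring_gaps(incidents, monitored) flags "mq-broker" as unmonitored.
```

Running a check like this after every incident review is the closed loop described above: find what the monitors missed, instrument it, and make sure it never happens again.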

Another thing we did was improve our availability. We were taking something like five and six hours to resolve some of these major incidents. We looked at the 80:20 rule. We solved 80 percent of the problems in a very short amount of time. Now, we have six or seven people resolving incidents. Our command center staff is in the command center 24 hours a day to do this type of work.

Additional resources

W
When they need additional resources, they just pick up the phone and call them down. So, it’s a level 1 or level 2 person working with one admin to solve a problem, versus having all hands on deck, with 50 admins in a room resolving incidents.

I'm not saying that we don’t have those now. We do, but when we do, it’s a major problem, not something very small. It could be firmware on a blade enclosure going down, which takes an entire group of applications down. It's not something you can plan for, because you're not making changes to your systems. It's just old hardware or things like that that can cause an outage.

Another thing it has done for us is that those 20 or 30 incidents we had per week are down to one or two. Knock on wood on that one, but it's really a testament to a lot of the things that our IT department has done as a whole. They're putting a lot of effort into reducing the number of incidents occurring in the infrastructure. And we're partnering with them to get the monitoring in place to give them visibility into the applications, so we can throw alerts on trends or symptoms, versus throwing the alert on the actual error that occurs in the infrastructure.
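Alerting on a trend rather than an error can be as simple as comparing a recent moving average against a baseline. This sketch is a generic illustration of that idea, not how HP's tooling implements it, and the window, baseline, and sample values are all invented.

```python
def trend_alert(samples, window=5, baseline=100.0, factor=1.5):
    """Fire when the moving average of the last `window` samples drifts
    above `factor` times the baseline -- a symptom, not yet an error."""
    if len(samples) < window:
        return False
    recent = samples[-window:]
    return (sum(recent) / window) > baseline * factor

# Hypothetical response times (ms) creeping up with no errors thrown:
latencies_ms = [95, 102, 98, 140, 160, 170, 180, 190]
# Average of the last 5 samples is 168 ms, above the 150 ms line,
# so the symptom alert fires before anything actually fails.
```

The point is that the alert precedes the outage: operations gets paged on the drift, while the application is still returning correct results.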

[Since the changes] customer satisfaction for IT is a lot higher than it used to be. IT is being called in to support and partner with the business, versus business saying, "I want this," and then IT doing it in a vacuum. It’s more of a partnership between the two entities. Operations is creating dashboards and visibility into business applications for the business, so they can see exactly how their own department is performing, versus just from an IT perspective. We can get the data down to specific people now.

Some of the big things I'm looking at next are closed-loop processes. I've started working with our change management team to change the way we make changes in our environment, so that everything is configuration item (CI) based. Doing that allows for complete traceability of an asset or a CI through its entire lifecycle.

You understand every incident, request, and problem that ever occurred on that asset, and you can also see financial information. You can also see inventory and location information, and start bringing the information together to make smart decisions based on the data you have in your environment.

The really big thing is to help reduce the cost of IT in our business, and to do whatever we can to cut our costs and keep a lean ship going.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download the transcript. Sponsor: HP.


Thursday, June 16, 2011

Discover Case Study: Sprint Gains Better Control and Efficiency in IT Operations with Business Service Management Approach

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Sprint. We'll learn how Sprint is doing applications and IT in a better way using Business Service Management. It's an ongoing journey to simplify and automate, reduce redundancy, and develop more agility as a business solutions provider for their customers, and also their own employees.

Join two executives from the IT organization at Sprint, Joyce Rainey, Program Manager of Enterprise Services, and John Felton, Director of Applications Development and Operations, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Felton: The problem that we originally had, as any large organization does, was many applications: many of them custom built, many of them purchased applications that are now so customized that the vendor doesn’t even know what to do with them anymore.

We grew those over a long period of time. As a way to stabilize, we were trying to get everything into a centralized, single point of truth and stop the duplication and redundancy that we had built into all these applications.

The goal, as we set forth about a year-and-a-half ago, was to implement the ecosystem that HP provided, the five toolsets that followed our ITIL processes that we wanted to do. The key was that they were integrated to share information, and we'd be able to take down these customized applications and then have one ecosystem to manage our environment with. That's what we've done over the last 14 months.

[At Sprint] there are thousands of outlets, retail stores. We have our third-party customers as well, like Best Buy and RadioShack. We have about 12,000 servers, about five petabytes of storage. We serve about 39,000 customers internally at Sprint.

We host all that information and process about a million change records a month. The information we're capturing is configuration items (CIs). The actual content that goes into the system was, at one point, in the 24 million range. We dialed that back a little bit, because we were collecting a little too much information.

We have about 1,300 applications that were internally built. Many of those are hosted on other external vendor products that we've customized and put into Sprint. And, we have about 64,000 desktops. So, there is a lot going on in this environment. It's moving constantly and that goes back to a lot of the reasons why, if we didn’t put this in quickly, they'd pass us by.

Making it easier

Rainey: We had too many of the same. We had to make it easier for our internal support teams. We had to make it easier for our customers. We had to lessen the impacts on maintenance and cost. Simplification was the key of the entire journey.

Felton: We had to concentrate on making sure that not only the applications base wasn't duplicated, but also the data. The data is where we ended up having issues. One person's copy may not be as accurate as another person's copy, and then we ended up spending an enormous amount of time deciding whose was right.

What we did was provide one single point of truth, one copy of the truth. Instead of hiding the data from everybody, we allowed everybody to see it all. They may not be able to manipulate it, and they may not be able to change it, but everybody could have visibility into the same information. We were hoping they would stop trying to have their own version of it.

Our biggest culture problem was that everybody wanted to put their arms around their little piece, their little view. At the end of the day, having one view that is customized, where you can see what you want to see, but still keeping the content within a single system, really helped us.

It's the data that supports the application. It's the servers that host the applications. It's the third-party applications that deliver the web experience, the database experience, the back-end experience. It's the ability for us to associate fix agents to that particular information, so that when I'm calling out the fix agent for an alarm, I'm getting the right person online first, versus having a variety of individuals coming on over time.
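Associating fix agents with CIs so that the first call goes to the right person reduces, at its core, to a lookup against CMDB data. The assignments below are invented for illustration; a real system would pull them from the CMDB rather than a hard-coded table.

```python
# Illustrative CI -> fix-agent assignments (hypothetical names).
FIX_AGENTS = {
    "web-experience": "web-team-oncall",
    "billing-db":     "dba-oncall",
    "backend-api":    "platform-oncall",
}

def route_alarm(alarm, default="service-desk"):
    """Page the agent assigned to the failing CI, falling back to the
    service desk instead of an all-hands callout for every alarm."""
    return FIX_AGENTS.get(alarm["ci"], default)
```

The payoff is the one Felton describes: one targeted person on the bridge first, with escalation only when the mapping has no answer.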

Rainey: The HP Excellence Award [at Discover 2011] was a very big milestone for everyone to remind us that it was well worth it, the time that was spent, the energy that was spent. I'm very glad that HP and our customers have been able to recognize that. I am very proud, very proud of Sprint. I'm very proud of the team. I'm very proud of the executive support that we received throughout this journey.

Felton: I'm also very proud of the team, as well, and we also won the CIO 100 Award. So, we’ve been able to take the same platform and the same kind of journey and show a much larger audience that it really was worth it. I think that’s pretty cool.

Importance of speed

What I might do differently is spread it out a little more, do smaller increments of implementation, versus all at one time. Don’t do the Big Bang Theory. Put in BSM, but always know that it's going to integrate with SM, and SM is going to integrate with CMS, and CMS is going to integrate with AM.

Then, build that plan, so that you integrate them. You get your customers involved in that particular application, and then at the very end you put SM in, which is the front door. They’re already familiar with what you’ve already done. That is something we probably didn’t do as well as we could have. It was more of a Big Bang approach. You put it in and you go.

But, at the end of the day, don’t be afraid to re-look at the processes. Don’t necessarily assume that you’re going to copy what you did today. Don’t assume that that is the best way to do it. Always ask the question: what business value does it address for your corporation? If you ask that over, and over, and over, individuals will quit asking, because these platforms are very flexible.

You can do anything. But when you get them so customized that the vendor can't even help you, then every upgrade is painful, every movement that you make is painful. What we’ve done has given us the flexibility to clean up a lot of stuff that was left over from years ago, an approach that may have not been the best solution, and given us an avenue to now extend and subtract without putting a huge investment in place.

One other thing is that we had a really good idea of, "This is our business. Run it that way. You are a part of Sprint." We try to say, "We’re going to make investments that also benefit us, but don’t do them just to do them, because in this space as you look out on that floor and see all the techno wizards that are out there, shiny objects are pretty cool, but there are a lot of shiny objects."

We wanted to make sure that the shiny object we produced is something that was long lasting and gave value back to the company for a long period of time, not just a quick introduction.

Rainey: We continued to work on it. Adoption is a big key in any transformation project. One of the things that we had to definitely look at was making sure that facts can prove to people that their business requirements were either valid or invalid. That way we stop the argument of what do I want, versus what do I need?

A lot of education

We really had a lot of communication, a lot of education along the way. We continue to educate people about why we do this and why we're doing it this way. We engage them in the process by making them part of the decision-making, versus just allowing the tools to dictate whether you can do it.

With the tools, you can do whatever you want. However you want to customize the product, you can, but should we, and for what purpose? So, we had to introduce a lot of education along the way to make sure folks understood why we were going down this path.

Felton: We implemented in 12 months. It was 14 months to get the future enhancements of the data quality and all the things we're working on right now. But as to the tipping point, I think the economy had a lot to do with it, the environment that was going on at the time.

You had a reduction in staff. You had downsizing of companies. It made it harder for individuals, to Joyce's point, to protect an application that really had no business value. It might have a lot of value to them, and in their little piece of the world it probably was very valuable, but how did it drive the overall organization?

The economy, in any kind of transformational program, is a key factor for investing in these kinds of products. You have to make sure that if you're introducing something, it's because it's going to add value.



[Sprint CEO] Dan Hesse did a great job in coming in and putting us on a path of making sure that we're fiscally responsible. How are we improving our customer expectations, and how are we moving in this direction continuously, so that our customers come to us because we're the best provider there could be? And our systems on the back end needed to go that way.

So, to Joyce's point, when you brought them in, you asked "Does this help that goal?" A lot of times, no. And, they were willing to give a little bit up. We said, "You're going to have to give a little bit up because this is not a copy/paste exercise. This is an out-of-the-box solution. We want to keep it that way as much as possible, and we'll make modifications, when we need to to support the business." And, we've done that.

Rainey: It's important to recognize that data is data, but you really derive information to drive decision making. For us, the ability for executives to know how many assets they really have out there, for them to concentrate their initiatives for the future based on that information, became the reason we needed our data quality to really be good.

So, every time that somebody asked John why he went after this product suite, it was because of the integration. We wanted to make sure that the products can share the same information across them all. That way, we can hold truth through that single source of information.

Felton: We started with [IT] asset management. Asset management was really the key for us to understand assets and software, and how much cost was involved. Then we associated that with the Universal Configuration Management Database (UCMDB). How do we discover things in our environment? How many servers are there, how many desktops are there, where are they, and how do I associate them?

Then we looked at Business Service Management (BSM), which was the monitoring side. How do I monitor these critical apps and alarm them correctly? How do I look up the information and get the right fix agents out there and target it, versus calling out the soccer team, as I always say? Then, we followed that up with Release Control, which is a way for our change team to manage and see that information, as it goes through.

The final component, which was the most important, the last one we rolled out, was Service Manager (SM), which is the front door for everybody. We focus everybody on that front door, and then they can spin off of that front door by going into the other individual or underlying processes to actually do the work that they focus on.

Early adopter

Felton: For just BSM in itself, I'm very proud of our team. We had [another product] in 2009. We went to Business Availability Center (BAC) January 2010. HP said they had this new thing called BSM 9. Would we take it? We said sure, and we implemented it in March of that year. We took three upgrades in less than five months.

I give a lot of credit to that team. They did it on their own. There were three of them. No professional services help and no support whatsoever. They did it on their own, and I think that’s pretty interesting how they did that. We also did the same thing with UCMDB. We are on the 8x platform, about halfway deployed, and HP said they'd like us to go to 9x, and so we turned the corner and we said sure.

We did those things because of the web experience. Very few people on my team would tell you that they were satisfied with the old web experience. I know some people were, and that’s great. But, in our environment, as big as it is and as many access points as we had, we had to make sure that was rock-solid.

And 9x, across all those versions, seemed to be the best web experience we could have, and it was very consistent. If I'm looking at BSM, the drop-downs and the menus, of course, are all different, but the flow and the layout are exactly the same as SM, and SM is exactly the same as CMS.

We got a nice transition between the applications that made everything smooth for the customer, and the ability for them to consume it better. I'll go so far as to say that a lot of my executive team actually log into BSM now. That would have never happened in the past. They actually go look up events that happen to our applications and see what's going on, and that’s all because we felt like that platform had the best GUI experience.

Rainey: And, if you get your CEOs and your VPs and your directors consuming and leveraging the products, you get the doers, you get the application managers, you get the fix agents, you get the helpdesk team, because they start believing that the data is good enough for decision making at that level of executive support.

Felton: We wanted reduction in our [problem resolution time] by 20 percent. Does that really mean you get a reduction? No, it means you get out there, you fix it faster, and the end-user doesn’t see it. By me focusing on that and getting individuals to go out there, and maybe more proactively understanding what's going on, we can get changes and fixes in before there was a real issue. We’re driving towards that. Do we have that exact number? Maybe not, but that’s the goal and that’s what we continue to drive for.

Removing cost

Additionally, the costs of having 35 redundant systems were huge. We removed a lot of maintenance dollars from Sprint, a lot of overhead. A lot of project costs sometimes are not necessarily tangible, because everybody is working on multiple projects all at one time.

But, if I've got to update five systems, it's a lot different if I update one, and make it simpler on my team. My team comprised about 11 folks, and they were managing all those apps before. Now, they're managing five. It’s a lot simpler for them. It's a lot easier for them. We’re making better decisions, and we make better changes.

We’re hoping that by having it that way, all of the infrastructure stability goes up, because we’re focused. To Joyce’s point, the executive team pays attention, managers pay attention, everybody sees the value that if I just watch what this thing is doing, it might tell me before there is a customer call. That is always our goal. I don’t want a customer calling my CIO to report a problem we don't know about. I want the customer to call my CIO and for him to reply, "Yes, we know, and we’re going to fix that as fast as we can."

Six years ago that help desk had 400 people. As of today it has 44. The reason it does is that we bypass making calls. I don’t want you to call a fix agent to type a ticket to get you engaged. We came up with a process called "Click It." Click It is a way for you to do online self-service.

If I'm having an Exchange problem, an Outlook problem, or an issue with some application, I can go in and open a ticket, instead of it being transferred to the help desk, who then transfers it to the fix agent. We go directly to the fix agent.

We’re getting you closely engaged, hoping that we can get your fix time faster. We can actually get them talking to you quicker. The new GUI streamlined it through a lot of wizards that we could implement. Instead of having seven forms that are all about access, maybe now I have one, with a drop-down menu that tells me which application it's for. That continuous improvement is what we’re after, and I think we now have the tools in place to make that easy for us.
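A "Click It"-style self-service flow, one form with an application dropdown whose ticket routes straight to a fix-agent queue, could be sketched as below. The application categories and queue names are invented, not Sprint's actual routing table.

```python
# Invented application -> fix-agent-queue routing table.
ROUTES = {
    "Outlook/Exchange": "messaging-agents",
    "VPN":              "network-agents",
    "HR portal":        "apps-agents",
}

TICKETS = []

def click_it(user, application, description):
    """Open a self-service ticket routed directly to the fix-agent
    queue, bypassing the help-desk transfer step entirely."""
    ticket = {
        "id": len(TICKETS) + 1,
        "user": user,
        "application": application,
        "queue": ROUTES.get(application, "triage"),
        "description": description,
    }
    TICKETS.append(ticket)
    return ticket
```

The single dropdown is what replaces the seven access forms: the routing decision is encoded once, in the table, instead of being re-made by a help-desk transfer on every call.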
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, June 14, 2011

Discover Case Study: Seagate ramps up dev-ops benefits with HP Application Lifecycle Management tools

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Seagate Technology, one of the world's largest manufacturers of rotating storage media hard-drive disks, where the application development teams are spanning the dev-ops divide and exploiting agile development methodologies.

Please now join Steve Katz, Manager of Software Performance and Quality at Seagate, an adopter of modern application development techniques like agile, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Katz: Seagate is one of the largest manufacturers of rotating media hard disks and we also are into the solid state [storage media] and hybrids. Last quarter, we shipped about 50 million drives. That continues to grow every quarter.

As you can imagine, with that many products -- and we have a large product line and a large supply chain -- the complexities of making that happen, both from a supply chain perspective and also from a business perspective, are very complicated and get more complicated every day.

The Holy Grail for us would definitely be an integrated approach to doing software development that incorporates the development activities, but also all of the test, monitoring, provisioning, and all of the quality checks and balances that we want to have to make sure that our applications meet the needs of our customers.

In the last couple of years, with the explosion of cloud, the jump to virtual machines (VMs), the virtualization of your data center, and also global operations, global development teams, new protocols, and new applications, most of what we do, rather than developing from scratch, is integrate third-party applications to meet our needs. That brings a whole new litany of challenges to the table, because one vendor’s Web 2.0 protocol standard is completely different from another vendor’s. Those are all challenges.

Also, we're adopting, and have been adopting, more of the agile development techniques, because we can deliver quanta of capability and performance at different intervals. So we can start small, get bigger, and keep adding more functionality. Basically, it lets us deliver more, more quickly, but also gives us the room to grow and be able to adapt to the changing customer needs, because in the market, things change every day.

So for us, our goal has been the ability to get all those things together early in the program and have a way to collaborate, and ultimately have the collaboration platform to be able to get all the different stakeholders’ views and needs at the very beginning of the program, when it’s the cheapest and most effective to do it. We’re not there. I don’t know if anybody will ever be there, but we’ve made a lot of efforts and feel like we’ve gained a lot of ground.

Early adoption

The dev-ops perspective has really interested us, and we have been doing some of the early adoption, the early engagement with our customers, in our business projects very early in the game for performance testing.

We get into the project early and we start understanding what the requirements are for performance, and don’t just cross our fingers and hope for the best down the road, but really put some hard metrics around what the expectations are for performance. What’s the transfer function? What’s the correlation between performance and the infrastructure that needs to deliver that performance? Finally, what are the customer needs and how do you measure them?

That’s been a huge boon for us, because it’s helped us script that early in the project and actually look at the unit-level pieces, especially in each different iteration of the agile process. We can break down the performance and do testing to make sure that we’ve optimized that piece of it to be as good as possible.
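Katz's point about hard metrics can be made concrete. Here's a minimal sketch, outside any HP tooling, of what capturing a performance requirement as an executable check per agile iteration might look like; the sample timings and the 500 ms p95 budget are invented for illustration:

```python
def p95(samples_ms):
    """95th-percentile latency (ms) from a list of measured response times."""
    ordered = sorted(samples_ms)
    return ordered[int(0.95 * (len(ordered) - 1))]

def meets_requirement(samples_ms, p95_budget_ms):
    """True if the measured p95 stays inside the agreed performance budget."""
    return p95(samples_ms) <= p95_budget_ms

# One slow outlier does not blow a 500 ms p95 budget on its own
samples = [120, 130, 140, 150, 160, 170, 180, 200, 210, 900]
```

Running a check like this at the end of each iteration turns "hope for the best" into a pass/fail gate on the unit-level pieces Katz describes.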

Now when you add in the needs for VM provisioning, storage, networking, and databases, the problem starts to mushroom and get more complex. So, for a long time, we've been big users of HP Quality Center (QC), which is what we use to gather requirements, build test plans, and link those requirements to the test plans and ultimately to successful tests and defects. We have traceability from the need of the customer to our ability to validate that we delivered that need. And it worked well.
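The traceability Katz describes is, at bottom, a linked data model. This is not Quality Center's actual schema, just a hypothetical sketch of the idea: requirements carry links to tests and defects, so an unlinked requirement is immediately visible as a coverage gap:

```python
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str
    text: str
    test_ids: list = field(default_factory=list)    # linked test plans
    defect_ids: list = field(default_factory=list)  # defects raised against it

def coverage_gaps(requirements):
    """Requirements with no linked test: traceability is broken there."""
    return [r.req_id for r in requirements if not r.test_ids]

reqs = [
    Requirement("REQ-1", "Order lookup responds in under 2 s", test_ids=["T-10"]),
    Requirement("REQ-2", "Every update is written to the audit log"),
]
```

A query like `coverage_gaps(reqs)` is what makes "validate that we delivered that need" checkable rather than aspirational.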

Then, we had performance testing, which was an add-on to that. And now the new ALM 11, by the way, marries the QC functionality and the Performance Center functionality. They're not two different things any more. It’s the same thing, and that’s the beauty for us.

Having the QC and performance testing closer together has made a lot of sense for us and allowed us to go faster and cheaper, and end up with something that, in fact, is better.



That’s what we’ve been preaching and trying to work with our project teams on, to say that it’s just a requirement. Any requirement is just a requirement and how we decide to implement, fulfill, and test that is our choice. But, having the QC and performance testing closer together has made a lot of sense for us and allowed us to go faster and cheaper, and end up with something that, in fact, is better.

The number of applications we have in production is in the 300-500 range, but as far as mission critical, probably 30. As far as things that are on everybody’s radar, probably 50 or 60. In Business Service Management (BSM), we monitor about 50 or 60 applications, and we also have the lower-level monitors in place that are looking at infrastructure. Then, our data all goes up to the single pane, so we can get visibility into what the problems are.

The number of things we monitor is less important to us than the actual impact that these particular applications have, not only on the customer's experience, but also on our ability to support it. We need to make sure that whatever it is that we do is, first of all, faster. I can’t afford to get a report every morning to see what broke in the last 24 hours. I need to know where the fires are today and what’s happening now, and then we need to have direct traceability out to the operator.

As soon as something goes wrong, the operator gets the information right away and either we’re doing auto-ticketing, or that operator is doing the triage to understand where the root cause is. A lot of that information comes from our dashboards, BSM, and Operations Manager. Then, they know what to do with that issue and who to send it to.
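The auto-ticketing step Katz mentions is straightforward to picture in code. This sketch is not BSM or Operations Manager; the routing table, field names, and priority scheme are all invented to show the shape of alert-to-ticket automation:

```python
# Hypothetical routing table: application -> owning support queue
OWNERS = {"claims-portal": "apps-team", "billing": "billing-team"}

def route_alert(alert):
    """Turn a monitoring alert into a ticket, auto-routed to its owner."""
    queue = OWNERS.get(alert["app"], "triage")  # unknown apps go to manual triage
    return {
        "queue": queue,
        "priority": "P1" if alert["severity"] == "critical" else "P3",
        "summary": "{}: {} threshold breached".format(alert["app"], alert["metric"]),
    }

ticket = route_alert({"app": "billing", "severity": "critical", "metric": "p95 latency"})
```

The point of the pattern is the one Katz makes: the operator gets an actionable, pre-routed ticket the moment something breaks, instead of a next-morning report.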

SaaS processes

We’ve subscribed to a number of internal cloud services that are software-as-a-service (SaaS) processes and services. For those kinds of things, we need to first make sure it’s not us before we go looking to find out what our software service providers are going to do about the problems. All of the BSM and dev-ops work has helped us get to that point a little better.

The final piece of the puzzle that we’re trying to implement is the newer BSM and how we get that built into the process as well, because that’s just another piece of the puzzle.

Gardner: What sort of paybacks are you expecting?

Katz: It’s two things for us. One is that the better job you do up front, the better job you’re going to do on the back end. Things are a lot cheaper and faster, and you can be a whole lot more agile to react to a problem. So the better job we do up front, understanding not just what this application is or what it’s supposed to do, but how it’s supposed to affect the rest of our infrastructure, how it’s supposed to perform under stress, and what the critical quality-of-service and quality-of-experience aspects are that we need to look at.

Defining that up front helps us to be better and helps us to develop and launch better products. In doing that, we find issues earlier in the process, when it’s a lot cheaper and more effective to fix them.

The better job you do up front, the better job you’re going to do in the back end. Things are a lot cheaper and faster, and you can be a whole lot more agile.



On the back end, we need to be more agile. We need to get information faster and we need to be able to react to that information. So, when there’s a problem, we know about it as soon as possible, and we’re able to reduce our root-cause analysis and time to resolution.

Gardner: Is integrated ALM helping you move the cloud and also adopt other IT advancements?

Katz: I look at that like a baseball team. My kids are in Little League right now. We’re in the playoffs. When a team does well, you get this momentum. Success really feeds momentum, and we’ve had a lot of success with the dev-ops, with pulling in ALM performance management and BSM into our application development lifecycle. Just because of the momentum we've got from that, we’ve got a lot more openness to explore new items, to pull more information into the system, and to get more information into the single pane.

Before we had the success, the philosophy was, "I don’t have time to fix this. I don’t have time to add new great things," or, "I've got to go fix what I've got." But when you get a little bit of that momentum and you get the successes, there is a lot more openness to it and willingness to see what happens. We’ve had HP helping us with that. They’re helping us to describe what the next phase of the world looks like.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Kapow Mobile Katalyst debuts as new means to rapidly convert web applications to mobile apps sans APIs

Kapow Software today released Kapow Mobile Katalyst as a platform for rapid mobile-enablement of business applications.

The post-PC era writing has gone from the wall to the tablet, and many enterprises, customer-facing retailers and service providers therefore want to make more of their web and business applications work on popular mobile smartphone and tablet devices such as Android and iOS.

"It’s no surprise that millions of employees around the world are bringing their smartphones and mobile devices to work, resetting workplace expectations to have always-on access to the instantly available business apps that they’ve grown accustomed to from their personal lives," said Stefan Andreasen, Founder and CTO, Kapow Software.

However, many of these applications do not come with application programming interfaces (APIs), or complete APIs, and the transition to workable and dependable mobile apps can be arduous, expensive, time-consuming, and sometimes nearly impossible. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]

Kapow has entered the mobile migration opportunity with a platform and tools that wrap underlying logic and transaction services from existing applications into a series of REST and SOAP services. Such functions as shopping baskets and transaction integrations and business logic can be re-purposed to mobile devices as native apps in a few months versus much longer, said Andreasen.
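The core trick here, exposing a web UI's data as a service without an API, can be sketched in a few lines. This is not Kapow's implementation; the page markup, element id, and wrapper function are invented to illustrate the scraping-as-a-service idea:

```python
from html.parser import HTMLParser

# Hypothetical legacy page: the only "API" is whatever the HTML contains
LEGACY_HTML = '<html><body><div id="basket-total">42.50</div></body></html>'

class BasketTotalParser(HTMLParser):
    """Scrape the basket total out of the legacy web UI."""
    def __init__(self):
        super().__init__()
        self._capturing = False
        self.total = None

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("id", "basket-total") in attrs:
            self._capturing = True

    def handle_data(self, data):
        if self._capturing:
            self.total = float(data)
            self._capturing = False

def basket_total_service(html):
    """REST-style wrapper: legacy HTML in, structured JSON-able dict out."""
    parser = BasketTotalParser()
    parser.feed(html)
    return {"basket_total": parser.total}
```

A mobile client can then consume the structured result while the legacy application, and its lack of an API, stays untouched.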

Kapow Katalyst accesses and integrates the data and business logic of nearly any existing packaged or proprietary business application without requiring APIs, he said. Adding a service-level interface to a legacy application is a complex development project requiring an extensive rewrite: years of planning, coding, testing, and spending that is disruptive and too often abandoned, he said.

Visual tools and mappings


Using visually built flow-charts and data mappings to control the application’s business logic through its existing web interface, users can then deploy the "mobilized" application with one click into a production environment without re-writing any existing code, according to Kapow.

Furthermore, Kapow Mobile Katalyst allows existing applications to be repurposed as mobile applications while leaving the underlying systems untouched.

Kapow is partnering with companies that specialize in mobile front-end development such as Antenna Software. “A mobile website is only as good as the data that supports it,” said Jim Somers, chief marketing & strategy officer at Antenna Software. “Together with Kapow Mobile Katalyst, we are able to accelerate the delivery of our mobile web solutions to help drive significant business value for our customers, quickly. We’ve proven our joint success with several leading global brands and look forward to building on this relationship.”

Kapow Mobile Katalyst is available now and can be deployed on-premises or via a hosted online service from Kapow.

You may also be interested in:

Monday, June 13, 2011

HP Discover Interview: Security Evangelist Rafal Los on balancing risk and reward amid consumerization of IT trends

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

It’s an interesting time for IT and cyber security. We have more threats. We hear about breaches in large organizations like Sony and Google, but at the same time IT organizations are being asked to make themselves more like Google or Amazon, the so-called consumerization of IT.

So how do IT organizations become more open while being more protective? Are these goals mutually exclusive, or can security enhancements and governance models make risks understood and acceptable for more kinds of social, collaboration, mobile and cloud computing activities?

BriefingsDirect directed such questions to Rafal Los, Enterprise Security Evangelist for HP Software. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Raf, what comes to mind when we say "consumerization of IT"?

Los: I think of the onslaught of consumer devices, from your tablets to your mobile handsets, that start to flood our corporate environments with their ever-popular music, photo-sharing, data-gobbling, and wireless-gobbling capabilities that just catch many enterprises completely unaware.

Gardner: Is this a good thing? The consumers seem to like it. The user thinks it’s good productivity. I want to do things at the speed that I can do at home or in the office, but this comes with some risk, doesn’t it?

Los: Absolutely, risk is everywhere. But you asked if it’s a good thing. It’s a good thing, depending on which platform you're standing on. From the consumer perspective, absolutely, it’s a great thing. I can take my mobile device with me and have one phone, for example, on which I get my corporate email and my personal email, and not have four phones in my pocket. I can have a laptop from my favorite manufacturer, whatever I want to use, bring it into my corporate environment, take it home with me at night, and modify it however I want.

That’s cool for the consumer, but that creates some very serious complexities for the enterprise security folks. Often, you get devices that aren't meant to be consumed in an enterprise. They're just not built for an enterprise. There's no enterprise control. There's no notion of security on somebody’s consumer devices.

Now, many of the manufacturers are catching up, because enterprises are crying out that these devices are showing up. People are coming after these big vendors and saying, "Hey, you guys are producing devices that everybody is using. Now they're coming into my company, and it’s chaos." But it’s definitely a risk, yes.

Gardner: What would a traditional security approach need to do to adjust to this? What do IT people need to think about differently about security, given this IT consumerization trend?

Need to evolve

Los: We need to evolve. Over the last decade and a half or so, we’ve looked at information security as securing a castle. We've got the moat, the drawbridge, the outer walls, the central keep, and our various stages of weaponry, an armory and such. Those notions have been blown to pieces over the last couple of years as, arguably, the castle walls have virtually evaporated, anybody can bring in anything, and it’s been difficult.

Companies are now finding themselves struggling with how to deal with that. We're having to evolve from the ostrich approach, where we say, "Oh, it’s not going to happen. We're simply not going to allow it," and it happens anyway and you get breached. We have to evolve to grow with it and figure out how we can accommodate certain things and still keep control.

In the end, we're realizing that it’s not about what you let in or what you don’t. It’s how you control the intellectual property in the data that’s on your network inside your organization.

Gardner: So, do IT professionals in enterprises need to start thinking about the organizations differently? Maybe they're more like a service provider or a web applications provider than a typical bricks and mortar environment.

Los: That’s an interesting concept. There are a number of possible ways of thinking about that. The one that you brought up is interesting. I like the idea of an organization that focuses less on the invasive technology, or what’s coming in, and more on what it is that we're protecting.

I like the idea of an organization that focuses less on the invasive technology, or what’s coming in, and more on what it is that we're protecting.



From an enterprise security perspective, we've been flying blind for many years as to where our data is, where our critical information is, and hoping that people just don’t have the capacity to plug into our critical infrastructure, because we don’t have the capacity to secure it.

Now, that notion has simply evaporated. We can safely assume that we now have to actually go in and look at what the threat is. Where is our property? Where is our data? Where are the things that we care about? Things like enterprise threat intelligence and data storage and identifying critical assets become absolutely paramount. That’s why you see many of the vendors, including ourselves, going in that direction and thinking about that in the intelligent enterprise.

Gardner: This is interesting. To use your analogy about the castle, if I had a high wall, I didn’t need to worry about where all my stuff was. I perhaps didn’t even have an inventory or a list. Now, when the wall is gone, I need to look at specific assets and apply specific types of security with varying levels, even at a dynamic policy basis, to those assets. Maybe the first step is to actually know what you’ve got in your organization. Is that important?

Los: Absolutely. There’s often been this notion that if we simply build an impenetrable, hard outer shell, the inner chewy center is irrelevant. And that worked for many years. Then these devices grew legs and started walking around these companies before we started acknowledging it. Now, we’ve gotten past that denial phase and we're in the acknowledgment phase. We’ve got devices and we’ve got capacity for things to walk in and out of our organization that are going to be beyond my control. Now what?

Don't be reactionary

Well, the logical thing to do is not to be reactionary about it and try to push back and say that it can’t be allowed. Instead, it should be to classify and quantify where the data is. What do we care about as an organization? What do we need to protect? Many times, we have these archaic security policies and disparate systems throughout an organization.

We've shelled out millions of dollars of our corporate hard-earned capital and we don’t really know what we're protecting. We’ve got servers. The mandate is to have every server run anti-virus and an intrusion prevention system (IPS) and all this stuff, but where is the data? What are you protecting? If you can’t answer that question, then identifying your data asset inventory is step one. That’s not a traditional security function, but it is now, or at least it has to be.
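Los's "step one" is concrete enough to sketch. Everything below, the inventory records, classification labels, and helper names, is hypothetical; the point is that an inventory makes both the gaps and the protection priorities computable:

```python
# Hypothetical inventory: where the data lives and how sensitive it is
ASSETS = [
    {"system": "hr-db", "data": "employee SSNs", "classification": "restricted"},
    {"system": "wiki", "data": "cafeteria menus", "classification": "public"},
    {"system": "crm", "data": "customer emails", "classification": None},
]

def unclassified(assets):
    """Step one: surface the data nobody has classified yet."""
    return [a["system"] for a in assets if a["classification"] is None]

def protection_order(assets):
    """Spend protection effort on the most sensitive assets first."""
    rank = {"restricted": 0, "internal": 1, "public": 2}
    known = [a for a in assets if a["classification"] in rank]
    return [a["system"] for a in sorted(known, key=lambda a: rank[a["classification"]])]
```

Once the inventory exists, "anti-virus and IPS everywhere" gives way to controls proportional to what each system actually holds.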

Gardner: I suppose that when we also think about cloud computing, many organizations might not now be doing public cloud or hybrid cloud, but I don’t think it’s a stretch to say that they probably will be some day. They're definitely going to be doing more with mobile. They're going to be doing more with cloud. So wouldn’t it make sense to get involved with these new paradigms of security sooner rather than later? I think the question is really about being proactive rather than reactive.

Los: The whole idea of cloud, and I've been saying this for a while, is that it's not really that dramatic a shift for security. What I said earlier, that our preconceived notion of defending the castle wall has to be blown apart, extrapolates beautifully into the cloud concept, because not only is the data not properly identified within our "castle wall," but now we're handing it off to some place else.

What are you handing off to some place else? What does that some place else look like? What are the policies? What are the procedures? What’s their incident response? Who else are you sharing with? Are you co-tenanting with somebody? Can you afford downtime? Can you afford an intrusion? What does an intrusion mean?

What are you handing off to some place else? What does that some place else look like? What are the policies? What are the procedures?



This all goes back to identifying where your data lives, identifying and creating intelligent strategies for protecting it, but it boils down to what my assets are. What makes our business run? What drives us? And, how are we going to protect this going forward?

Gardner: Now thinking about data for security, I suppose we're now also thinking about data for the lifecycle for a lot of reasons about storage efficiency and cutting cost. We're also thinking about being able to do business intelligence (BI) and analytics more as a regular course of action rather than as a patch or add-on to some existing application or dataset.

Is there a synergy or at least a parallel track of some sort between what you should be doing with security, and what you are going to probably want to be doing with data lifecycle and in analytics as well?

Los: It's part-and-parcel of the same thing. If you don’t know what information your business relies on, you can’t secure it and you can’t figure out how to use it to your competitive advantage.

I can’t tell you how many organizations I know that have mountains and mountains and mountains of storage all across the organization, and they protect it well. Unfortunately, they seem to ignore the fact that every desktop, every mobile device, iPhone, BlackBerry, and WebOS tablet has a piece of their company that walks around with it. It's not until one of these devices disappears that we all panic and ask what was on it. It’s like when we lost tapes. Losing tapes was the big thing, as was encrypting tapes. Now, we encrypt mobile devices. To what degree are we going to go, and how much are we going to get into how we can protect this stuff?

Enabling the cause

BI is not that much different. It’s just looking at the accumulated set of data and trying to squeeze every bit of information out of it, trying to figure out trends, trying to find out what can you do, how do you make your business smarter, get to your customers faster, and deliver better. That’s what security is as well. Security needs to be furthering and enabling that cause, and if we're not, then we're doing it wrong.

Gardner: Based on what you’ve just said, if you do security better and you have more comprehensive integrated security methodology, perhaps you could also save money, because you will be reducing redundancy. You might be transforming and converging your enterprise, network, and data structure. Do you ever go out on a limb and say that if you do security better, you'll save money?

Los: Coming from the application security world, I can cite the actual cases where security done right has saved the company money. I can cite you one from an application security perspective. A company that acquires other companies all of a sudden takes application security seriously. They're acquiring another organization.

They look at some code they are acquiring and say, "This is now going to cost us X millions of dollars to remediate to our standards." Now, you can use that as a bargaining chip. You can either decrease the acquisition price, or you can do something else with that. What they started doing is leveraging that type of value, that kind of security intelligence, to reduce their business costs and to make smarter acquisitions. We talk about application development and lifecycle.

That’s what security is as well. Security needs to be furthering and enabling that cause, and if we're not, then we're doing it wrong.



There is nothing better than a well-oiled machine on the quality front. Quality has three pillars: does it perform, does it function, and is it secure? Nobody wants to get on that hamster wheel of pain, where you get all the way through requirements, development, and QA testing, and the security guys look at it on Friday, before it goes live on Saturday, and say, "By the way, this has critical security issues. You can’t let this go live or you will be the next ..." whatever company you want to fill in there for your particular business sector. You can’t let this go live. What do you do? You're at an absolutely impossible decision point.

So, then you spend time and effort, whether it’s penalties, whether it’s service level agreements (SLAs), or whether it’s cost of rework. What does that mean to you? That’s real money. You could recoup it by doing it right on the front end, but the front end costs money. So, it costs money to save money.

Gardner: Okay, by doing security better, you can cut your risks, so you don’t look bad to your customers or, heaven forbid, lose performance altogether. You can perhaps rationalize your data lifecycle. You can perhaps track your assets better and you can save money at the same time. So, why would anybody not be doing better security immediately? Where should they start in terms of products and services to do that?

Los: Why would they not be doing it? Simply because maybe they don’t know, or they haven't quite gotten that level of education yet, or they're simply unaware. A lot of folks haven't started yet because they think there are tremendously high barriers to entry. I’d like to refute that by saying that, as an organization, we have both products and services.

We attack the application security problem and the enterprise security problem holistically because, as we talked about earlier, it’s about identifying what your problems are and coming up with a sane solution that fits your organization to solve those problems. It’s not just about plugging products in.

We have our Security Services that comes in with an assessment. My organization is the Application Security Group, and we have a security program that we helped build. It’s built upon understanding our customer and doing an assessment. We find out what fits, how we engage your developers, how we engage your QA organization, how we engage your release cycle, how we help to do governance and education better, how we help automate and enable the entire lifecycle to be more secure.

Not invasive

It’s not about bolting on security processes, because nobody wants to be invasive. Nobody wants to be that guy who stands in front of a board and says, "You have to do this, but it’s going to stink. It’s going to make your life hell."

We want to be the group that says, "We’ve made you more secure and we’ve made minimal impact on you." That’s the kind of thing we do through our Fortify Application Security Center group: static and dynamic, in the cloud or on your desktop. It all comes together nicely, and the barrier to entry is virtually eliminated, because if we're doing it for you, you don’t have to have that extensive internal knowledge, and it doesn’t cost an arm and a leg like a lot of people seem to think.

I urge people who haven't thought about it yet, who are wondering if they're going to be the next big breach, to give it a shot, list out their critical applications, and call somebody. Give us a call, and we’ll help you through it.

Gardner: HP has made this very strategic for itself with acquisitions. We now have ArcSight, Fortify, and TippingPoint. I have been hearing quite a bit about TippingPoint here at the show, particularly vis-à-vis the storage products. Is there a brand? Is there an approach that HP takes to security that we can look to on a product basis, or is it a methodology, or all of the above?

Los: I think it’s all of the above. Our story is the enterprise security story. How do we enable that Instant-On Enterprise that has to turn on a dime and change strategic direction from one day to the next? You have to adapt to market changes. How does IT adapt, continue, and enable that business without getting in the way and without draining it of capital?

There is no secure. There is only manageable risk and identified risk.



If you look around the showroom floor here and look at our portfolio of services and products, security becomes a simple steel thread that’s woven through the fabric of the rest of the organization. It's enabling IT to help the CIO, the technology organization, enable the business while keeping it secure and keeping it at a level of manageable risk, because it’s not about making it secure. Let me be clear. There is no secure. There is only manageable risk and identified risk.

If you are going for the "I want to be secure thing," you're lost, because you will never reach it. In the end that’s what our organizational goal is. As Enterprise Security we talk a lot about risk. We talk a lot about decreasing risk, identifying it, helping you visualize it and pinpoint where it is, and do something about it, intelligently.

Gardner: Is there new technology that’s now coming out or being developed that can also be pointed at the security problem, get into this risk reduction from a technical perspective?

Los: I'll cite one quick example from the software security realm. We're looking at how we enable better testing. Traditionally, customers have had the capability of doing either static analysis, which is looking at source code and binaries, or dynamic analysis, a runtime analysis of the application through our dynamic testing platform.

One plus one turns out to actually equal three when you put those two together. Through these acquisitions and the investments HP has made in these various assets, we're turning out products like a real-time hybrid-analysis product, which is essentially what security professionals have been looking for for years.

Collaborative effort

It’s looking at, when an application is being analyzed, taking the attack, or the multiple attacks, the multiple verifiable positive exploits, and marrying it to a line of source code. It’s no longer a security guy doing a scan, generating a 5,000-page PDF, and lobbing it over the wall at some poor developer who then has to figure it out and fix it before some magical deadline expires. It’s now a collaborative effort. It’s people getting together.
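The "marry an exploit to a line of source code" idea can be sketched without any of HP's actual machinery. The finding shapes, the shared "sink" field, and the sample data below are all invented for illustration; the point is the join between static locations and verified runtime exploits:

```python
# Hypothetical findings from the two analyses of the same application
STATIC_FINDINGS = [
    {"file": "login.py", "line": 42, "sink": "execute_sql"},
    {"file": "search.py", "line": 7, "sink": "render_html"},
]
DYNAMIC_EXPLOITS = [
    {"url": "/login", "payload": "' OR 1=1 --", "sink": "execute_sql"},
]

def correlate(static_findings, dynamic_exploits):
    """Marry each verified runtime exploit to the source line sharing its sink."""
    confirmed = []
    for exploit in dynamic_exploits:
        for finding in static_findings:
            if finding["sink"] == exploit["sink"]:
                confirmed.append(dict(finding, proof=exploit["payload"]))
    return confirmed
```

Instead of a 5,000-page PDF, the developer gets a short list of findings that carry both a file-and-line location and a working proof of exploit.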

One thing that we currently find broken with software development and security is that development is not engaged. We're fixing that. We're doing it in real-time, and we're doing it right now. The customers who are getting on board with us are benefiting tremendously because of the intelligence it provides.

Gardner: So, built for quality, built for security, pretty much ... synonymous?

Los: Built for function, built for performance, built for security: it’s all part of a quality approach. It's always been here, but we're able to tell the story even more effectively now, because we have a much deeper reach into the security world. If you look at it, we're helping to operationalize what you do when an application is found to have vulnerabilities.

Built for function, built for performance, built for security, it’s all part of a quality approach.



The reality is that you're not always going to fix it every time. Sometimes, things just get accepted, but you don’t want them to be forgotten. Through our quality approach, there is a registry of these defects that lives on with these applications as they continue down the lifecycle from sunrise to sunset. It’s part of the entire application lifecycle management (ALM) story.

At some point, we have a full registry of all the quality defects, all the performance defects, and all the security defects that were found and remediated, who fixed them, and what the fixes were. The result of all of this information, as I've been saying, is a much smarter organization that works better and faster, and it’s cheaper to make better software.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in: