Tuesday, June 14, 2011

Discover Case Study: Seagate ramps up dev-ops benefits with HP Application Lifecycle Management tools

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on Seagate Technology, one of the world's largest manufacturers of rotating-media hard disk drives, where the application development teams are spanning the dev-ops divide and exploiting agile development methodologies.

Please now join Steve Katz, Manager of Software Performance and Quality at Seagate, an adopter of modern application development techniques like agile, for a discussion moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Katz: Seagate is one of the largest manufacturers of rotating-media hard disks, and we're also into solid state [storage media] and hybrids. Last quarter, we shipped about 50 million drives. That continues to grow every quarter.

As you can imagine, with that many products -- and we have a large product line and a large supply chain -- the complexities of making that happen, both from a supply chain perspective and also from a business perspective, are very complicated and get more complicated every day.

The Holy Grail for us would definitely be an integrated approach to doing software development that incorporates the development activities, but also all of the test, monitoring, provisioning, and all of the quality checks and balances that we want to have to make sure that our applications meet the needs of our customers.

In the last couple of years, with the explosion with cloud, with the jump to virtual machines (VMs), virtualization of your data center, and also global operations, global development teams, new protocols, and new applications, most of what we do, rather than developing from scratch, is integrate other people’s third-party applications to meet our needs. That brings to the table a whole new litany of challenges, because one vendor’s Web 2.0 protocol standard is completely different than another vendor’s Web 2.0 protocol standard. Those are all challenges.

Also, we're adopting, and have been adopting, more of the agile development techniques, because we can deliver quanta of capability and performance at different intervals. So we can start small, get bigger, and keep adding more functionality. Basically, it lets us deliver more, more quickly, but also gives us the room to grow and be able to adapt to the changing customer needs, because in the market, things change every day.

So for us, our goal has been the ability to get all those things together early in the program and have a way to collaborate and ultimately have the collaboration platform to be able to get all the different stakeholders’ views and needs at the very beginning of the program, when it’s the cheapest and most effective to do it. We’re not there. I don’t know if anybody will ever be there, but we’ve made a lot of efforts and feel like we’ve made a lot of ground.

Early adoption

The dev-ops perspective has really interested us, and we have been doing some of the early adoption, the early engagement with our customers, in our business projects very early in the game for performance testing.

We get into the project early and start understanding what the requirements are for performance, rather than just crossing our fingers and hoping for the best down the road. We put some hard metrics around what the expectations are for performance. What’s the transfer function? What’s the correlation between performance and the infrastructure that needs to deliver that performance? Finally, what are the customer needs and how do you measure them?

That’s been a huge boon for us, because it’s helped us script that early in the project and actually look at the unit-level pieces, especially in each different iteration of the agile process. We can break down the performance and do testing to make sure that we’ve optimized that piece of it to be as good as possible.

Now when you add in the needs for VM provisioning, storage, networking, and databasing, the problem starts to mushroom and get more complex. So, for a long time, we've been big users of HP Quality Center (QC), which is what we use to gather requirements, build test plans, and link those requirements to the test plans ultimately to successful tests and defects. We have traceability from what the need of the customer is to our ability to validate that we deliver that need. And, it worked well.

Then, we have the performance testing, which was an add-on to that. And now, the new ALM 11, by the way, marries the QC functionality and the Performance Center functionality. They're not two different things anymore. It’s the same thing, and that’s the beauty for us.

Having the QC and performance testing closer together has made a lot of sense for us and allowed us to go faster and cheaper, and end up with something that, in fact, is better.



That’s what we’ve been preaching and trying to work with our project teams on, to say that it’s just a requirement. Any requirement is just a requirement and how we decide to implement, fulfill, and test that is our choice. But, having the QC and performance testing closer together has made a lot of sense for us and allowed us to go faster and cheaper, and end up with something that, in fact, is better.

The number of applications we have in production is in the 300-500 range, but as far as mission-critical, probably 30. As far as things that are on everybody’s radar, probably 50 or 60. In Business Service Management (BSM), we monitor about 50 or 60 applications, and we also have the lower-level monitors in place that are looking at infrastructure. Then, our data all goes up to the single pane, so we can get visibility into what the problems are.

The number of things we monitor is less important to us than the actual impact that these particular applications have, not only on the customer's experience, but also on our ability to support it. We need to make sure that whatever it is that we do is, first of all, faster. I can’t afford to get a report every morning to see what broke in the last 24 hours. I need to know where the fires are today and what’s happening now, and then we need to have direct traceability out to the operator.

As soon as something goes wrong, the operator gets the information right away and either we’re doing auto-ticketing, or that operator is doing the triage to understand where the root cause is. A lot of that information comes from our dashboards, BSM, and Operations Manager. Then, they know what to do with that issue and who to send it to.

SaaS processes

We’ve subscribed to a number of internal cloud services that are software-as-a-service (SaaS) processes and services. For those kinds of things, we need to first make sure it’s not us before we go looking to find out what our software service providers are going to do about the problems. Both our applications and all the BSM and dev-ops work have helped us get to that point a little better.

The final piece of the puzzle that we’re trying to implement is the newer BSM and how we get that built into the process as well, because that’s just another piece of the puzzle.

Gardner: What sort of paybacks are you expecting?

Katz: It’s two things for us. One is that the better job you do up front, the better job you’re going to do on the back end. Things are a lot cheaper and faster, and you can be a whole lot more agile to react to a problem. So the better job we do up front, understanding what the requirements are, and not just what this application is or what it’s supposed to do, but how it’s supposed to affect the rest of our infrastructure, how it’s supposed to perform under stress, and what the critical quality-of-service and quality-of-experience aspects are that we need to look at.

Defining that up front helps us to be better and helps us to develop and launch better products. In doing that, we find issues earlier in the process, when it’s a lot cheaper to fix them and a lot more effective.

The better job you do up front, the better job you’re going to do in the back end. Things are a lot cheaper and faster, and you can be a whole lot more agile.



On the back end, we need to be more agile. We need to get information faster and we need to be able to react to that information. So, when there’s a problem, we know about it as soon as possible, and we’re able to reduce our root-cause analysis and time to resolution.

Gardner: Is integrated ALM helping you move the cloud and also adopt other IT advancements?

Katz: I look at that like a baseball team. My kids are in Little League right now. We’re in the playoffs. When a team does well, you get this momentum. Success really feeds momentum, and we’ve had a lot of success with the dev-ops, with pulling in ALM performance management and BSM into our application development lifecycle. Just because of the momentum we've got from that, we’ve got a lot more openness to explore new items, to pull more information into the system, and to get more information into the single pane.

Before we had the success, the philosophy was, "I don’t have time to fix this. I don’t have time to add new great things," or, "I've got to go fix what I've got." But when you get a little bit of that momentum and you get the successes, there's a lot more openness to it and willingness to see what happens. We’ve had HP helping us with that. They’re helping us to describe what the next phase of the world looks like.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Kapow Mobile Katalyst debuts as new means to rapidly convert web applications to mobile apps sans APIs

Kapow Software today released Kapow Mobile Katalyst as a platform for rapid mobile-enablement of business applications.

The post-PC era writing has gone from the wall to the tablet, and many enterprises, customer-facing retailers and service providers therefore want to make more of their web and business applications work on popular mobile smartphone and tablet devices such as Android and iOS.

"It’s no surprise that millions of employees around the world are bringing their smartphones and mobile devices to work, resetting workplace expectations to have always-on access to the instantly available business apps that they’ve grown accustomed to from their personal lives," said Stefan Andreasen, Founder and CTO, Kapow Software.

However, many of these applications do not come with application programming interfaces (APIs), or with complete APIs, and the transition to workable and dependable mobile apps can be arduous, expensive, time-consuming, and sometimes nearly impossible. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]

Kapow has entered the mobile migration opportunity with a platform and tools that wrap underlying logic and transaction services from existing applications into a series of REST and SOAP services. Such functions as shopping baskets, transaction integrations, and business logic can be repurposed to mobile devices as native apps in a few months, versus much longer, said Andreasen.

Kapow Katalyst accesses and integrates the data and business logic of nearly any existing packaged or proprietary business application without requiring APIs, he said. Adding a service-level interface to a legacy application is otherwise a complex development project requiring an extensive rewrite: years of planning, coding, and testing, as well as spending, disruption, and, too often, abandonment, he said.
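To make the API-less approach concrete, here is a minimal, hypothetical sketch (not Kapow's actual product API) of the general technique: parsing a legacy web page's HTML directly and exposing its business data through a clean, JSON-returning function that a REST endpoint could call. The `BasketParser` class and the sample page are illustrative inventions.

```python
# Hypothetical sketch of API-less mobile enablement: wrap a legacy web
# page's data as JSON by parsing its HTML, with no changes to the backend.
import json
from html.parser import HTMLParser

class BasketParser(HTMLParser):
    """Pulls item names out of a legacy shopping-basket page."""
    def __init__(self):
        super().__init__()
        self._in_item = False
        self.items = []

    def handle_starttag(self, tag, attrs):
        # The legacy page marks basket rows as <li class="item"> (assumed).
        if tag == "li" and ("class", "item") in attrs:
            self._in_item = True

    def handle_data(self, data):
        if self._in_item and data.strip():
            self.items.append(data.strip())
            self._in_item = False

def basket_as_json(legacy_html: str) -> str:
    """REST-style facade: legacy HTML in, clean JSON out, no API required."""
    parser = BasketParser()
    parser.feed(legacy_html)
    return json.dumps({"items": parser.items})

# In practice the HTML would be fetched from the live legacy app; a canned
# page keeps the sketch self-contained.
PAGE = '<ul><li class="item">Widget</li><li class="item">Gadget</li></ul>'
print(basket_as_json(PAGE))  # {"items": ["Widget", "Gadget"]}
```

A real platform adds session handling, visual flow design, and deployment on top, but the core move is the same: treat the existing web interface as the integration surface.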

Visual tools and mappings


Using visually built flow-charts and data mappings to control the application’s business logic through its existing web interface, users can then deploy the "mobilized" application with one click into a production environment without re-writing any existing code, according to Kapow.

Furthermore, Kapow Mobile Katalyst allows for repurposing existing applications as mobile applications while leaving the underlying systems untouched.

Kapow is partnering with companies that specialize in mobile front-end development such as Antenna Software. “A mobile website is only as good as the data that supports it,” said Jim Somers, chief marketing & strategy officer at Antenna Software. “Together with Kapow Mobile Katalyst, we are able to accelerate the delivery of our mobile web solutions to help drive significant business value for our customers, quickly. We’ve proven our joint success with several leading global brands and look forward to building on this relationship.”

Kapow Mobile Katalyst is available now and can be deployed on-premises or via a hosted online service from Kapow.

You may also be interested in:

Monday, June 13, 2011

HP Discover Interview: Security Evangelist Rafal Los on balancing risk and reward amid consumerization of IT trends

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

It’s an interesting time for IT and cyber security. We have more threats. We hear about breaches in large organizations like Sony and Google, but at the same time IT organizations are being asked to make themselves more like Google or Amazon, the so-called consumerization of IT.

So how do IT organizations become more open while being more protective? Are these goals mutually exclusive, or can security enhancements and governance models make risks understood and acceptable for more kinds of social, collaboration, mobile and cloud computing activities?

BriefingsDirect directed such questions to Rafal Los, Enterprise Security Evangelist for HP Software. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Raf, what comes in your mind when we say "consumerization of IT?"

Los: I think of the onslaught of consumer devices, from your tablets to your mobile handsets, that start to flood our corporate environments with their ever-popular music, photo-sharing, data-gobbling, and wireless-gobbling capabilities that just catch many enterprises completely unaware.

Gardner: Is this a good thing? The consumers seem to like it. The user thinks it’s good productivity. I want to do things at the speed that I can do at home or in the office, but this comes with some risk, doesn’t it?

Los: Absolutely, risk is everywhere. But you asked if it’s a good thing. It’s a good thing, depending on which platform you're standing on. From the consumer perspective, absolutely, it’s a great thing. I can take my mobile device with me and have one phone, for example, on which I get my corporate email and my personal email, and not have four phones in my pocket. I can have a laptop from my favorite manufacturer, whatever I want to use, bring it into my corporate environment, take it home with me at night, and modify it however I want.

That’s cool for the consumer, but that creates some very serious complexities for the enterprise security folks. Often, you get devices that aren't meant to be consumed in an enterprise. They're just not built for an enterprise. There's no enterprise control. There's no notion of security on somebody’s consumer devices.

Now, many of the manufacturers are catching up, because enterprises are crying out that these devices are showing up. People are coming after these big vendors and saying, "Hey, you guys are producing devices that everybody is using. Now they are coming up into my company, and it’s chaos." But, it’s definitely a risk, yes.

Gardner: What would a traditional security approach need to do to adjust to this? What do IT people need to think about differently about security, given this IT consumerization trend?

Need to evolve

Los: We need to evolve. Over the last decade and a half or so, we’ve looked at information security as securing a castle. We've got the moat, the drawbridge, the outer walls, the center or keep, and we’ve got our various stages of weaponry, an armory and such. Those notions have been blown to pieces over the last couple of years as, arguably, the castle walls have virtually evaporated, and anybody can bring in anything, and it’s been difficult.

Companies are now finding themselves struggling with how to deal with that. We're having to evolve from simply the ostrich approach where we are saying, "Oh, it’s not going to happen. We're simply not going to allow it," and it happens anyway and you get breached. We have to evolve to grow with it and figure out how we can accommodate certain things and then keep control.

In the end, we're realizing that it’s not about what you let in or what you don’t. It’s how you control the intellectual property and the data that’s on your network inside your organization.

Gardner: So, do IT professionals in enterprises need to start thinking about the organizations differently? Maybe they're more like a service provider or a web applications provider than a typical bricks and mortar environment.

Los: That’s an interesting concept. There are a number of possible ways of thinking about that. The one that you brought up is interesting. I like the idea of an organization that focuses less on the invasive technology, or what’s coming in, and more on what it is that we're protecting.

I like the idea of an organization that focuses less on the invasive technology, or what’s coming in, and more on what it is that we're protecting.



From an enterprise security perspective, we've been flying blind for many years as to where our data is, where our critical information is, and hoping that people just don’t have the capacity to plug into our critical infrastructure, because we don’t have the capacity to secure it.

Now, that notion has simply evaporated. We can safely assume that we now have to actually go in and look at what the threat is. Where is our property? Where is our data? Where are the things that we care about? Things like enterprise threat intelligence and data storage and identifying critical assets become absolutely paramount. That’s why you see many of the vendors, including ourselves, going in that direction and thinking about that in the intelligent enterprise.

Gardner: This is interesting. To use your analogy about the castle, if I had a high wall, I didn’t need to worry about where all my stuff was. I perhaps didn’t even have an inventory or a list. Now, when the wall is gone, I need to look at specific assets and apply specific types of security with varying levels, even at a dynamic policy basis, to those assets. Maybe the first step is to actually know what you’ve got in your organization. Is that important?

Los: Absolutely. There’s often been this notion that if we simply build an impenetrable, hard, outer shell, the inner chewy center is irrelevant. And, that worked for many years. These devices grew legs and started walking around these companies before we started acknowledging it. Now, we’ve gotten past that denial phase and we're in the acknowledgment phase. We’ve got devices and we’ve got the capacity for things to walk in and out of our organization that are going to be beyond my control. Now what?

Don't be reactionary

Well, the logical thing to do is not to be reactionary about it and try to push back and say that can’t be allowed, but to basically attempt to classify and quantify where the data is. What do we care about as an organization? What do we need to protect? Many times, we have these archaic security policies and we have disparate systems throughout an organization.

We've shelled out millions of dollars of our corporate hard-earned capital, and we don’t really know what we're protecting. We’ve got servers. The mandate is to have every server have anti-virus and an intrusion prevention system (IPS) and all this stuff, but where is the data? What are you protecting? If you can’t answer that question, then identifying your data asset inventory is step one. That’s not a traditional security function, but it is now, or at least it has to be.

Gardner: I suppose that when we also think about cloud computing, many organizations might not now be doing public cloud or hybrid cloud, but I don’t think it’s a stretch to say that they probably will be some day. They're definitely going to be doing more with mobile. They're going to be doing more with cloud. So wouldn’t it make sense to get involved with these new paradigms of security sooner rather than later? I think the question is really about being proactive rather than reactive.

Los: The whole idea of cloud, and I've been saying this for a while, is that it's not really that dramatic of a shift for security. What I said earlier about acknowledging that our preconceived notion of defending the castle wall has to be blown apart extrapolates beautifully into the cloud concept, because not only is the data not properly identified within our "castle wall," but now we're handing it off to some place else.

What are you handing off to some place else? What does that some place else look like? What are the policies? What are the procedures? What’s their incident response? Who else are you sharing with? Are you co-tenanting with somebody? Can you afford downtime? Can you afford an intrusion? What does an intrusion mean?

What are you handing off to some place else? What does that some place else look like? What are the policies? What are the procedures?



This all goes back to identifying where your data lives, identifying and creating intelligent strategies for protecting it, but it boils down to what my assets are. What makes our business run? What drives us? And, how are we going to protect this going forward?

Gardner: Now thinking about data for security, I suppose we're now also thinking about data for the lifecycle for a lot of reasons about storage efficiency and cutting cost. We're also thinking about being able to do business intelligence (BI) and analytics more as a regular course of action rather than as a patch or add-on to some existing application or dataset.

Is there a synergy or at least a parallel track of some sort between what you should be doing with security, and what you are going to probably want to be doing with data lifecycle and in analytics as well?

Los: It's part-and-parcel of the same thing. If you don’t know what information your business relies on, you can’t secure it and you can’t figure out how to use it to your competitive advantage.

I can’t tell you how many organizations I know that have mountains and mountains and mountains of storage all across the organization, and they protect it well. Unfortunately, they seem to ignore the fact that every desktop and every mobile device, iPhone, BlackBerry, or WebOS tablet has a piece of their company that walks around with it. It's not until one of these devices disappears that we all panic and ask what was on it. It’s like when we lost tapes. Losing tapes was the big thing, as was encrypting tapes. Now, we encrypt mobile devices. To what degree are we going to go, and how far are we going to get into how we can protect this stuff?

Enabling the cause

BI is not that much different. It’s just looking at the accumulated set of data and trying to squeeze every bit of information out of it, trying to figure out trends, trying to find out what can you do, how do you make your business smarter, get to your customers faster, and deliver better. That’s what security is as well. Security needs to be furthering and enabling that cause, and if we're not, then we're doing it wrong.

Gardner: Based on what you’ve just said, if you do security better and you have more comprehensive integrated security methodology, perhaps you could also save money, because you will be reducing redundancy. You might be transforming and converging your enterprise, network, and data structure. Do you ever go out on a limb and say that if you do security better, you'll save money?

Los: Coming from the application security world, I can cite actual cases where security done right has saved a company money. I can cite you one from an application security perspective. A company that acquires other companies all of a sudden takes application security seriously. They're acquiring another organization.

They look at some code they are acquiring and say, "This is now going to cost us X millions of dollars to remediate to our standards." Now, you can use that as a bargaining chip. You can either decrease the acquisition price, or you can do something else with that. What they started doing is leveraging that type of value, that kind of security intelligence, to lower their business costs and to make smarter acquisitions. We talk about application development and lifecycle.

That’s what security is as well. Security needs to be furthering and enabling that cause, and if we're not, then we're doing it wrong.



There is nothing better than a well-oiled machine on the quality front. Quality has three pillars: does it perform, does it function, and is it secure? Nobody wants to get on that hamster wheel of pain, where you get all the way through requirements, development, QA testing, and the security guys look at it Friday, before it goes live on Saturday, and say, "By the way, this has critical security issues. You can’t let this go live or you will be the next ..." -- whatever company you want to fill in there in your particular business sector. You can’t let this go live. What do you do? You're at an absolutely impossible decision point.

So, then you spend time and effort, whether it’s penalties, whether it’s service level agreements (SLAs), or whether it’s cost of rework. What does that mean to you? That’s real money. You could recoup it by doing it right on the front end, but the front end costs money. So, it costs money to save money.

Gardner: Okay, by doing security better, you can cut your risks, so you don’t look bad to your customers or, heaven forbid, lose performance altogether. You can perhaps rationalize your data lifecycle. You can perhaps track your assets better and you can save money at the same time. So, why would anybody not be doing better security immediately? Where should they start in terms of products and services to do that?

Los: Why would they not be doing it? Simply because maybe they don’t know, or they haven't quite gotten that level of education yet, or they're simply unaware. A lot of folks haven't started yet because they think there are tremendously high barriers to entry. I’d like to refute that by saying, from the perspective of an organization, we have both products and services.

We attack the application security problem and enterprise security problem holistically because, as we talked about earlier, it’s about identifying what your problems are, coming up with a sane solution that fits your organization to solve those problems, and it’s not just about plugging products in.

We have our Security Services that comes in with an assessment. My organization is the Application Security Group, and we have a security program that we helped build. It’s built upon understanding our customer and doing an assessment. We find out what fits, how we engage your developers, how we engage your QA organization, how we engage your release cycle, how we help to do governance and education better, how we help automate and enable the entire lifecycle to be more secure.

Not invasive

It’s not about bolting on security processes, because nobody wants to be invasive. Nobody wants to be that guy who stands there in front of a board and says, "You have to do this, but it’s going to stink. It’s going to make your life hell."

We want to be the group that says, "We’ve made you more secure and we’ve made minimal impact on you." That’s the kind of thing we do through our Fortify Application Security Center group: static and dynamic, in the cloud or on your desktop. It all comes together nicely, and the barrier to entry is virtually eliminated, because if we're doing it for you, you don’t have to have that extensive internal knowledge, and it doesn’t cost an arm and a leg like a lot of people seem to think.

I urge people who haven't thought about it yet, who are wondering if they are going to be the next big breach, to give it a shot, list out your critical applications, and call somebody. Give us a call, and we’ll help you through it.

Gardner: HP has made this very strategic for itself with acquisitions. We now have ArcSight, Fortify, and TippingPoint. I have been hearing quite a bit about TippingPoint here at the show, particularly vis-à-vis the storage products. Is there a brand? Is there an approach that HP takes to security that we can look to on a product basis, or is it a methodology, or all of the above?

Los: I think it’s all of the above. Our story is the enterprise security story. How do we enable that Instant-On Enterprise that has to turn on a dime and change strategic direction from today to tomorrow? You have to adapt to market changes. How does IT adapt, continue, and enable that business without getting in the way and without draining it of capital?

There is no secure. There is only manageable risk and identified risk.



If you look around the showroom floor here and look at our portfolio of services and products, security becomes a simple steel thread that’s woven through the fabric of the rest of the organization. It's enabling IT to help the CIO, the technology organization, enable the business while keeping it secure and keeping it at a level of manageable risk, because it’s not about making it secure. Let me be clear. There is no secure. There is only manageable risk and identified risk.

If you are going for the "I want to be secure thing," you're lost, because you will never reach it. In the end that’s what our organizational goal is. As Enterprise Security we talk a lot about risk. We talk a lot about decreasing risk, identifying it, helping you visualize it and pinpoint where it is, and do something about it, intelligently.

Gardner: Is there new technology that’s now coming out or being developed that can also be pointed at the security problem, get into this risk reduction from a technical perspective?

Los: I'll cite one quick example from the software security realm. We're looking at how we enable better testing. Traditionally, customers have had the capability of doing either what we consider static analysis, looking at source code and binaries, or a runtime analysis, a dynamic analysis of the application through our dynamic testing platform.

One-plus-one turns out to actually equal three when you put those two together. Through these acquisitions and the investments HP has made in these various assets, we're turning out products like a real-time hybrid-analysis product, which is essentially what security professionals have been seeking for years.

Collaborative effort

It’s looking at an application while it's being analyzed, taking the attack, or the multiple attacks, the multiple verified positive exploits, and marrying them to a line of source code. It’s no longer a security guy doing a scan, generating a 5,000-page PDF, and lobbing it over the wall at some poor developer who then has to figure it out and fix it before some magical timeline expires. It’s now a collaborative effort. It’s people getting together.

One thing that we find broken currently with software development and security is that development is not engaged. We're doing that. We're doing it in real-time, and we're doing it right now. The customers that are getting on board with us are benefiting tremendously, because of the intelligence that it provides.
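The correlation Los describes, marrying a verified runtime exploit to the static-analysis finding for the same line of source, can be sketched roughly as follows. This is a hypothetical illustration, not HP's actual product logic; the finding formats, field names, and matching rule are all invented for the example.

```python
# Hypothetical sketch: pair verified dynamic-scan exploits with the static
# source locations flagged for the same weakness, so a developer sees the
# attack next to the offending line instead of in a 5,000-page report.

# A dynamic finding: a verified exploit observed against a running URL.
dynamic_findings = [
    {"url": "/login", "attack": "SQL injection", "payload": "' OR 1=1--"},
]

# Static-analysis output: source locations flagged per weakness class.
static_findings = [
    {"url": "/login", "file": "auth.py", "line": 42, "issue": "SQL injection"},
    {"url": "/search", "file": "query.py", "line": 17, "issue": "XSS"},
]

def correlate(dynamic, static):
    """Match each verified exploit to the source line that likely caused it."""
    results = []
    for d in dynamic:
        for s in static:
            if d["url"] == s["url"] and d["attack"] == s["issue"]:
                results.append({
                    "file": s["file"], "line": s["line"],
                    "attack": d["attack"], "payload": d["payload"],
                })
    return results

for hit in correlate(dynamic_findings, static_findings):
    print(f'{hit["file"]}:{hit["line"]}: verified {hit["attack"]} via {hit["payload"]}')
```

A real implementation would match on richer evidence, such as stack traces or instrumented request IDs, rather than a URL-plus-category key.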

Gardner: So, built for quality and built for security are pretty much synonymous?

Los: Built for function, built for performance, built for security: it's all part of a quality approach. It's always been there, but we're able to tell the story even more effectively now, because we have a much deeper reach into the security world. If you look at it, we're helping to operationalize it through what you do when an application is found to have vulnerabilities.

The reality is that you're not always going to fix everything every time. Sometimes, things just get accepted, but you don't want them to be forgotten. Through our quality approach, there is a registry of these defects that lives on with the applications as they continue down the lifecycle, from sunrise to sunset. It's part of the entire application lifecycle management (ALM) story.

At some point, we have a full registry of all the quality defects, performance defects, and security defects that were found and remediated, who fixed them, and what the fixes were. The result of all this information, as I've been saying, is a much smarter organization that works better and faster, and makes better software more cheaply.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Thursday, June 9, 2011

Discover case study: Paychex leverages HP ALM to streamline and automate application development

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas the week of June 6. We're here to explore some major enterprise IT solution trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study focuses on Paychex, a large provider of services to small and medium-sized businesses (SMBs), and growing rapidly around services for HR, payroll, benefits, tax payments, and quite a few other features.

Please join Joel Karczewski, the Director of IT at Paychex, to learn how automation and efficiency are changing the game in how they develop and deploy their applications. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Karczewski: Over the past few years, IT has been asked to deliver more quickly, to be more responsive to our business needs, and to help drive down costs in the way in which we develop, deploy, and deliver software and services to our end customers.

To accomplish that, we've been focusing on automating as many of the tasks in a traditional software development lifecycle as possible, to help make sure that steps that used to be performed manually aren't skipped.

For example, automating from a source-code check-in; automating the process of closing out the defects that the source code resolved; and automating the testing that we do when we create a new service: the performance testing, the unit testing, the code coverage, and the security testing, to make sure that we're not introducing flaws or vulnerabilities that might be exposed to our external customers.
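As a rough illustration of the check-in-driven automation described above, a post-commit step might scan commit messages for defect IDs and close them in the tracker automatically. The `DEF-123` message convention and the tracker interface here are hypothetical stand-ins, not Paychex's or HP's actual APIs.

```python
import re

def extract_defect_ids(commit_message):
    """Pull defect IDs (hypothetical 'DEF-123' convention) from a commit message."""
    return re.findall(r"\bDEF-\d+\b", commit_message)

def close_defects(commit_message, tracker):
    """Close every defect the commit message claims to fix.
    `tracker` is any object exposing a close(defect_id) method."""
    closed = []
    for defect_id in extract_defect_ids(commit_message):
        tracker.close(defect_id)
        closed.append(defect_id)
    return closed

# Minimal in-memory tracker for demonstration; a real hook would call the
# defect-tracking system's API instead.
class FakeTracker:
    def __init__(self):
        self.closed = []
    def close(self, defect_id):
        self.closed.append(defect_id)

tracker = FakeTracker()
close_defects("Fix null check in payroll export. Fixes DEF-101, DEF-205.", tracker)
print(tracker.closed)  # ['DEF-101', 'DEF-205']
```

Wiring this into the version-control system's post-commit hook is what removes the manual status-updating work the speaker describes.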

Applications are basically just a combination of integrated services, and we've been moving forward with a strategic service-based delivery model for approximately a year and a half now. We have hundreds of services that are reused and utilized by our applications.

Payroll provider

Paychex is primarily an HR benefits and payroll provider, and our key customers are approximately 570,000 business owners and the employees that work for those business owners.

We've been focusing on the small-business owner because we believe that’s where our specialty is.

What we have been finding over time is that we're developing a hybrid behavioral approach. We have clients who want Paychex to do some of the business tasks for them, but they want to still do some of the tasks themselves.

In order to satisfy the one end of the spectrum or the other and everything in between, we've been moving toward a service-based strategy where we can package, bundle, price, roll out, and deliver the set of services that fit the needs of that client in a very highly personalized and customized fashion.

The more that we can automate, the more we're able to test those services in the various combinations and environments in which they need to perform, be highly available, and be consistent.

Personal information


We have an awful lot of information that is very personal and highly confidential. For example, think about the employees that work for one of these 560,000-plus business owners. We know when they are planning to retire. We know when they move, because they are changing their addresses. We know when they get married. We know when they have a child. We know an awful lot of information about them, including where they bank, and it’s highly, highly confidential information.

We took a step back and looked at our software delivery lifecycle. We looked for areas that are potentially not value-add, areas that would cause an individual developer, tester, or project manager to be manually taking care of tasks with which they are not that familiar.

For example, a developer knows how to write software. A developer doesn't always know how to exercise our Quality Center or our defect-tracking system: changing the ownership, changing statuses, and updating multiple repositories just to get his or her work done.

So, we took a look at tasks that cause latency in our software delivery lifecycle and we focused on automating those tasks.

We're using a host of HP products today. For example, to achieve automated functional testing, we're utilizing Quality Center (QC) in combination with Quick Test Professional (QTP). For our pre-production performance testing, we utilize LoadRunner. Post-production, we're beginning to look an awful lot at Real User Monitor (RUM), and we're looking to interface RUM with ArcSight, so that when we do have an availability or performance issue for any of our users, anywhere, utilizing our services, we're able to identify it quickly and find the root cause.

Metrics of success


We're looking at the number of hours it takes a manual tester to spin through a regression suite, and we compare that with the essentially zero effort of scheduling an automated regression run. We're computing the number of hours that we're saving in the testing arena. We're also computing the number of lines of software that a developer creates today, in hopes that we'll be able to show the productivity gains we're realizing from automation.

We're very interested in the HP IT Performance Suite and the Executive Scorecard. We're also very interested in tying the build scorecards from our construction and development work, those KPIs, metrics, and indicators, together with the Executive Scorecard. There's a lot of interest there.

We've also done something that is very new to us, but we hope to mainstream this in the future. For the very first time, we employed an external organization from the cloud. We utilized LoadRunner and did a performance test directly against our production systems.

Why did we do that? It's a huge challenge for us to build, support, and maintain many testing environments. To get an accurate read on performance and load in how our production systems perform, we picked an off-peak time period and got together with an external cloud-testing firm, which utilized LoadRunner to run the performance tests. We watched the capacity of our databases, our servers, our network, and our storage systems as they throttled the volume forward.

We plan to do more of that as a final checkout, when we deliver new services into our production environment.
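LoadRunner scripting itself is beyond this discussion, but the overall shape of a ramped load test, stepping the user count up while watching success rates, can be sketched in plain Python. The stubbed request function, target URL, and ramp schedule are illustrative assumptions; a real test would issue actual HTTP calls against the system under test and record latency percentiles per step.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def send_request(url):
    """Stand-in for a real HTTP call; returns success and observed latency."""
    start = time.perf_counter()
    time.sleep(0.001)  # simulated service time
    return {"url": url, "ok": True, "latency": time.perf_counter() - start}

def ramped_load_test(url, ramp_steps=(5, 10, 20)):
    """Throttle the volume forward step by step, as the cloud testers did,
    collecting a success rate at each concurrency level."""
    results = []
    for users in ramp_steps:
        with ThreadPoolExecutor(max_workers=users) as pool:
            step = list(pool.map(send_request, [url] * users))
        ok_rate = sum(r["ok"] for r in step) / len(step)
        results.append({"users": users, "ok_rate": ok_rate})
    return results

for step in ramped_load_test("https://example.test/payroll"):
    print(step)
```

Watching database, server, network, and storage capacity while the ramp runs, as described above, is what turns this from a load generator into a capacity checkout.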
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, June 8, 2011

Deep-dive panel discussion on HP's new Converged Infrastructure, EcoPOD and AppSystem releases at Discover

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

This latest BriefingsDirect panel discussion on converged infrastructure and data center transformation explores the major news emerging from this week's HP Discover 2011 conference in Las Vegas.

HP has updated and expanded its portfolio of infrastructure products and services, debuted a mini, mobile data center called the EcoPOD, unveiled a unique dual cloud bursting capability, and rolled out a family of AppSystems, appliances focused on specific IT solutions like big data analytics.

To put this all in context, a series of rapidly maturing trends around application types, cloud computing, mobility, and changing workforces is reshaping what high-performance and low-cost computing is about. In just the past few years, the definition of what a modern IT infrastructure needs and what it needs to do has finally come into focus. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We know, for example, that we’ll see most data centers converge their servers, storage, and network platforms intelligently for high efficiency and for better management and security. We know that we’ll see higher levels of virtualization across these platforms and for more applications, and that, in turn, will support the adoption of hybrid and cloud models.

We’ll surely see more compute resources devoted to big data and business intelligence (BI) values that span ever more applications and data types. And of course, we’ll need to support far more mobile devices and distributed, IT-savvy workers.

How well companies modernize and transform these strategic and foundational IT resources will hugely impact their success in managing their own agile growth and in controlling ongoing costs and margins. Indeed, the mingling of IT success and business success is clearly inevitable.

So, now comes the actual journey. At HP Discover, the news is largely about making this inevitable future happen more safely by being able to transform the IT that supports businesses in all of their computing needs for the coming decades. IT executives must execute rapidly now to manage how the future impacts them and to make rapid change an opportunity, not an adversary.

How to execute

Please then meet the panel: Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business; Jon Mormile, Worldwide Product Marketing Manager for Performance-Optimized Data Centers in HP's Enterprise Storage Servers and Networking (ESSN) group within HP Enterprise Business; Jason Newton, Manager of Announcements and Events for HP ESSN, and Brad Parks, Converged Infrastructure Strategist for HP Storage in HP ESSN. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Tang: Last year, HP rolled out this concept of the Instant-On Enterprise, and it's really about the fact that we all live in a very much instant-on world today. Everybody demands instant gratification, and to deliver that and meet their constituents' needs, enterprises really need to become more agile and innovative, so they can scale up and down dynamically to meet these demands.

In order to get answers straight from our customers on how they feel about the state of agility in their enterprise, we contracted with an outside agency and conducted a survey earlier this year with over 3,000 enterprise executives. These were CEOs, CIOs, CFOs across North America, Europe, and Asia, and the findings were pretty interesting.

Less than 40 percent of our respondents said, "I think we are doing okay. I think we have enough agility in the organization to be able to meet these demands."

Not surprising

The number is low, but not very surprising to those of us who have worked in IT for a while. As you know, compared to other enterprise disciplines, IT is a little bit pre-Industrial Revolution. It's not streamlined. It's not standardized. There's a long way to go. That clearly spells out a big opportunity for companies to work on that area and optimize for agility.

We also asked, "What do you think is going to change that? How do you think enterprises can increase their agility?" The top two responses coming back were about more innovative, newer applications.

But, the number one response coming from CEOs was that it’s transforming their technology environment. That’s precisely what HP believes. We think transforming that environment and by extension, converged infrastructure, is the fastest path toward not only enterprise agility, but also enterprise success.

Storage innovation news

Parks: A couple of years ago, HP took a step back from the current trajectory that we were on as a storage business and the trajectory that the storage industry as a whole was on. We took a look at some of the big trends and problems that we were starting to hear from customers around virtualization or on the move to cloud computing, this concept of really big everything.

We’re talking about data, numbers of objects, size, performance requirements, just everything at massive, massive scale. When we took a look at those trends, we saw that we were really approaching a systemic failure of the storage that was out there in the data center.

The challenge is that most of the storage deployed out in the data center today was architected about 20 years ago for a whole different set of data-center needs, and when you couple that with these emerging trends, the current options at that time were just too expensive.

They were too complicated at massive scale and they were too isolated, because 20 years ago, when those solutions were designed, storage was its own element of the infrastructure. Servers were managed separately. Networking was managed separately, and while that was optimized for the problems of the day, it in turn created problems that today’s data centers are really dealing with.

Thinking about that trajectory, we decided to take a different path. Over the last two years, we’ve spent literally billions of dollars through internal innovation, as well as some external acquisitions, to put together a portfolio that was much better suited to address today’s trends.

Common standard

At the event here, we're talking about HP Converged Storage, and this addresses some of the gaps that we've seen in the legacy monolithic and even the legacy unified storage that's out there. Converged Storage is built on a few main principles. We're trying to drive toward common industry-standard hardware, building on ProLiant and BladeSystem DNA.

We want to drive a lot more agility into storage in the future by using modern Scale-Out software layers. And last, we need to make sure that storage is incorporated into the larger converged infrastructure and managed as part of a converged stack that spans servers and storage and network.

When we're able to design on industry-standard platforms like BladeSystem and ProLiant, we can take advantage of the massive supply chain that HP has and roll out solutions at a much lower upfront cost from a hardware perspective.

Second, using that software layer I mentioned, we bring to bear technologies like thin provisioning. This is a technology that helps customers cut their initial capacity requirements by around 50 percent, simply by eliminating the over-provisioning associated with some of the legacy storage architectures.
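The over-provisioning savings are easy to see with toy numbers: traditional "fat" provisioning reserves each volume's full advertised size up front, while thin provisioning consumes physical capacity only as data is actually written. The volume sizes below are invented for illustration, not taken from any HP customer.

```python
def fat_capacity(volumes):
    """Traditional provisioning: reserve every volume's full advertised size."""
    return sum(v["provisioned_gb"] for v in volumes)

def thin_capacity(volumes):
    """Thin provisioning: only the data actually written consumes capacity."""
    return sum(v["written_gb"] for v in volumes)

# Invented example: volumes sized generously for growth but only partly filled.
volumes = [
    {"provisioned_gb": 500,  "written_gb": 200},
    {"provisioned_gb": 1000, "written_gb": 350},
    {"provisioned_gb": 500,  "written_gb": 180},
]

fat, thin = fat_capacity(volumes), thin_capacity(volumes)
print(fat, thin, f"{1 - thin / fat:.0%} less upfront capacity")
```

The real savings figure depends entirely on how over-provisioned the environment is; the roughly-50-percent number quoted above is a typical claim, not a guarantee.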

One of the things we've seen and talked about with customers worldwide is that data just doesn't go away. It is around forever.



Then, operating expense is the other place where this really gets expensive. That's where it helps to consolidate management across servers, storage, and networking, to build in as much automation as possible, and even to make the solutions self-managing.

For example, our 3PAR Storage solution, which is part of this converged stack, has autonomic management capabilities that, our customers tell us, have reduced their management overhead by about 90 percent. It's self-managing and can load-balance, and because of its wide-striping architecture, it can respond to unpredictable workloads in the data center without requiring administrative overhead.

Converged Infrastructure

Newton: We're really excited about the AppSystems announcements. HP is in a great position to deliver on the promise of converging server, storage, network, management, security, and applications into individual solutions.

So, 2009 was about articulating the definition of what that should look like and what that data center in the future should be. Last year, we spent a lot of time in new innovations in blades and mission-critical computing and strategic acquisitions around storage, network, and other places.

The result last year was what we believe is one of the most complete portfolios from a single vendor in the marketplace for delivering converged infrastructure. What we're doing in 2011 is building on that, bringing it all together, simplifying it into integrated solutions, and extending the strategy all the way out to the application.

If we look at what kind of applications customers are deploying today and the ways that they’re deploying them, we see three dominant new models that are coming to bear. One is applications in a virtualized environment and on virtual machines and that have got very specific requirements and demands for performance and concerns about security, etc.

We see a lot of acceleration and interest in applications delivered as a service via cloud. Security concerns also require new demands on capacity and resource planning, on automation, and orchestration of all the bits and bytes of the application and the infrastructure.

The third way that we wanted to address was a dedicated application environment. These are data warehousing, analytics types of workloads, and collaboration workloads, where performance is really critical, and you want that not on shared resources, but in a dedicated way. But, you also want to make sure that that is supporting applications in a cloud or virtual environment.

So in 2011, it's about how to bring that portfolio together in the solution to solve those three problems. The key thing is that we didn't want to extend sprawl and continue the problem that’s still out there in the marketplace. We wanted to do all that on one common architecture, one common management model, and one common security model.

Individual Solutions

What if we could take that common architecture, management, and security model, optimize it, integrate it into individual solutions for those three different application sets, and do it on what customers are already using in their legacy application environments today? They'd have something really special.

What we’re announcing this week at Discover is this new portfolio we call Converged Systems. For virtual workloads, we have VirtualSystems. For the dedicated application environment, specifically BI, data management, and information management, we have the AppSystems portfolio. Then, for where most customers want to go in the next few years, cloud, we announced the CloudSystem.

So, those are three portfolios in which a common architecture addresses a complete continuum of customers' application demands. What's unique here is doing that in a common way, built on some of the best-of-breed technologies on the planet for virtualization, cloud, and high-performance BI and analytical applications.

Our acquisition of Vertica powers the BI appliance. The architecture is one of the most modern architectures out there today to handle the analytics in real time.

Before, analytics in a traditional BI data-warehouse environment was about reporting. Call up the IT manager, give them some criteria, they go back and do their wizardry and come back with a sort of status report, and it's looking only at the dataset in whichever data store they happen to be querying.

It sort of worked, I guess, back when you didn't need that answer tomorrow or next week; you could just wait until the next quarterly review. With the demands of big everything, as Brad was saying, and the speed and scale at which the economy, the business, and the competition are moving, you've got to have this stuff in real time.

So we said, "Let’s go make a strategic acquisition. Let’s get the best-in-class, real-time analytics, a modern architecture that does just that and does it extremely well. And then, let’s combine that with the best hardware underneath it, with HP Converged Infrastructure, so that customers can very easily and quickly bring that capability into their environment and apply it in a variety of different ways, whether in individual departments or across the enterprise."

Real-time analytics

There are endless possibilities for taking advantage of real-time analytics with this solution. Packaging it as an AppSystem makes it very easy to consume: bring it into the environment, get it up and running, start connecting the data sources in literally minutes, and start running queries and getting answers back in literally seconds.

What’s special about this approach is that most analytic tools today are part of a larger data warehouse or BI-centered architecture. Our argument is that in the future of this big everything thing that’s going on, where information is everywhere, you can’t just rely on the data sources inside your enterprise. You’ve got to be able to pull sources from everywhere.

In buying a monolithic, one-size-fits-all OLTP, data-warehousing, and light-analytics platform, you're sacrificing the real-time aspect that you need. So keep the OLTP environment, keep the data-warehouse environment, bring in a best-in-class real-time analytics engine on top of them, and give your business, very quickly, some very powerful capabilities to help make better business decisions much faster.

Data center efficiency

Mormile: When you talk about today’s data centers, most of them were built 10 years ago; in fact, a lot of analyst research says they were built almost 14-15 years ago. These antiquated data centers simply can’t support the infrastructure that today’s IT and businesses require. They are extremely inefficient. Many of them require two to three times the amount of power to run the IT, due to inefficient cooling and power-distribution systems.

In addition, these monolithic data centers are typically over-provisioned and underutilized. Because most companies cannot continually build new facilities, they have to forecast future capacity and infrastructure requirements, which are typically outdated before the data centers are even commissioned.

A lot of our customers need to reduce construction cost, as well as operational expenses. This places a huge strain on companies' resources and their bottom lines. By not changing their data center strategy, businesses are throttled and simply just can’t compete in today’s aggressive marketplace.

HP has a solution: Our modular computing portfolio, and it helps to solve these problems.

Modular computing

Our modular computing portfolio started about three years ago, when we first took a look at and modified an actual shipping container, turning it into a Performance Optimized Data Center (POD).

This was followed by continuous innovation in the space: new POD designs, the deployment of our POD-Works facility, the world’s first assembly line for data centers, the addition of a flexible data center product, and today, our newest addition, the POD 240A, which gives all the benefits of a container data center without sacrificing the traditional data center look and feel.

Also, with the acquisition of EYP, which is now HP Critical Facilities Services, and utilizing HP Technical Services, we are able to offer a true end-to-end data center solution from planning and installation of the IT and the optimized infrastructure go with it, to onsite maintenance and onsite support globally.

When you combine that with in-house rack and power engineering, delivering finely tuned solutions to meet customers’ growing power and rack needs, it all comes together. You're talking about taking that IT and those innovations to the next level by integrating them into a turnkey solution, whether a POD or a modular data center product.

You take the POD, and then there are the Factory Express services, where we're able to take the IT and integrate it into a POD: the server, storage, and networking, with the applications integrated, and everything cabled and tested.

The final step in the POD process is that, beyond Factory Express services, we're also providing POD-Works. At POD-Works, we take the integrated racks that will be installed in the PODs and provide power, networking, chilled water, and cooling, so that every aspect of the turnkey data center solution is pre-configured and pre-tested. This way, customers have a fully integrated data center shipped to them. All they need to do is plug in the power and networking and/or add chilled water.

Game changer

Being able to have a complete data center on site, up and running, in as little as six weeks is a tremendous game changer in the business, allowing customers to be more agile and more flexible, not only with their IT infrastructure needs but also with their capital and operational expense.

When you bring all that together, PODs offer customers the ability to deploy fully integrated, high-performing, efficient, scalable data centers at somewhere around a quarter of the cost, up to 95 percent more efficiently, and 88 percent faster than with traditional brick-and-mortar data center strategies.

Start services

Newton: There are a multitude of professional services and support announcements at this show. We have some new professional services; I call them start services. We have an AppStart, a CloudStart, and a VirtualStart service. These are services where we engage with the customer, sit down, and assess their level of maturity -- what they have in place and what their goals are.

These services are designed to get each of these systems into the environment, integrated into what you have, optimized for your goals and your priorities, and up and running in days or weeks, versus the months and years that building and integrating it would have taken in the past. We do that very quickly and simply for the customer.

We have a lot of expertise in these areas that we've been building over the last 20 years. Just as we're doing simplifications on the hardware, software, and application side, these start services do the same thing. That extends to HP Solution Support, which then kicks in and helps you support that solution across its lifecycle.

There is a whole lot more, but those are two really key ones that customers are excited about this week.

Parks: HP ExpertONE has also recently come out with a full set of training and certification courseware to help our channel partners, as well as internal IT folks that are customers, to learn about these new storage elements and to learn how they can take these architectures and help transform their information management processes.

Tang: This set of announcements brings significant additions in each of their own markets, with the potential to transform, for example, storage, shaking up an industry that’s been pretty static for the last 20 years by offering a completely new architecture designed for the world we live in today.

That’s the kind of innovation we’ll drive across the board with our customers. Everyone who spoke before me talked about the service offerings we bring along with these new product announcements, and I think that’s key. The combination of our portfolio and our expertise is really going to help our customers drive that success and embrace convergence.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.
