Monday, June 13, 2011

HP Discover Interview: Security Evangelist Rafal Los on balancing risk and reward amid consumerization of IT trends

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from last week's HP Discover 2011 conference in Las Vegas. We explored some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

It’s an interesting time for IT and cyber security. We have more threats. We hear about breaches in large organizations like Sony and Google, but at the same time IT organizations are being asked to make themselves more like Google or Amazon, the so-called consumerization of IT.

So how do IT organizations become more open while being more protective? Are these goals mutually exclusive, or can security enhancements and governance models make risks understood and acceptable for more kinds of social, collaboration, mobile and cloud computing activities?

BriefingsDirect directed such questions to Rafal Los, Enterprise Security Evangelist for HP Software. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Raf, what comes to mind when we say "consumerization of IT"?

Los: I think of the onslaught of consumer devices, from your tablets to your mobile handsets, that start to flood our corporate environments with their ever-popular music, photo-sharing, data-gobbling, and wireless-gobbling capabilities that just catch many enterprises completely unaware.

Gardner: Is this a good thing? The consumers seem to like it. The user thinks it’s good productivity. I want to do things at the speed that I can do at home or in the office, but this comes with some risk, doesn’t it?

Los: Absolutely, risk is everywhere. But you asked if it’s a good thing. It’s a good thing, depending on which platform you're standing on. From the consumer perspective, absolutely, it’s a great thing. I can have one phone, for example, on which I get my corporate email and my personal email, and not have four phones in my pocket. I can have a laptop from my favorite manufacturer, whatever I want to use, bring it into my corporate environment, take it home with me at night, and modify it however I want.

That’s cool for the consumer, but that creates some very serious complexities for the enterprise security folks. Often, you get devices that aren't meant to be consumed in an enterprise. They're just not built for an enterprise. There's no enterprise control. There's no notion of security on somebody’s consumer devices.

Now, many of the manufacturers are catching up, because enterprises are crying out that these devices are showing up. People are coming after these big vendors and saying, "Hey, you guys are producing devices that everybody is using. Now they're coming up into my company, and it’s chaos." But, it’s definitely a risk, yes.

Gardner: What would a traditional security approach need to do to adjust to this? What do IT people need to think about differently about security, given this IT consumerization trend?

Need to evolve

Los: We need to evolve. Over the last decade and a half or so, we’ve looked at information security as securing a castle. We've got the moat, the drawbridge, the outer walls, the central keep, and we’ve got our various stages of weaponry, an armory and such. Those notions have been blown to pieces over the last couple of years as, arguably, the castle walls have virtually evaporated, anybody can bring in anything, and it’s been difficult.

Companies are now finding themselves struggling with how to deal with that. We're having to evolve from simply the ostrich approach where we are saying, "Oh, it’s not going to happen. We're simply not going to allow it," and it happens anyway and you get breached. We have to evolve to grow with it and figure out how we can accommodate certain things and then keep control.

In the end, we're realizing that it’s not about what you let in or what you don’t. It’s how you control the intellectual property in the data that’s on your network inside your organization.

Gardner: So, do IT professionals in enterprises need to start thinking about the organizations differently? Maybe they're more like a service provider or a web applications provider than a typical bricks and mortar environment.

Los: That’s an interesting concept. There are a number of possible ways of thinking about that. The one that you brought up is interesting. I like the idea of an organization that focuses less on the invasive technology, or what’s coming in, and more on what it is that we're protecting.

From an enterprise security perspective, we've been flying blind for many years as to where our data is, where our critical information is, and hoping that people just don’t have the capacity to plug into our critical infrastructure, because we don’t have the capacity to secure it.

Now, that notion has simply evaporated. We can safely assume that we now have to actually go in and look at what the threat is. Where is our property? Where is our data? Where are the things that we care about? Things like enterprise threat intelligence and data storage and identifying critical assets become absolutely paramount. That’s why you see many of the vendors, including ourselves, going in that direction and thinking about that in the intelligent enterprise.

Gardner: This is interesting. To use your analogy about the castle, if I had a high wall, I didn’t need to worry about where all my stuff was. I perhaps didn’t even have an inventory or a list. Now, when the wall is gone, I need to look at specific assets and apply specific types of security with varying levels, even at a dynamic policy basis, to those assets. Maybe the first step is to actually know what you’ve got in your organization. Is that important?

Los: Absolutely. There’s often been this notion that if we simply build an impenetrable, hard outer shell, the inner chewy center is irrelevant. And that worked for many years. These devices grew legs and started walking around these companies before we started acknowledging it. Now, we’ve gotten past that denial phase and we're in the acknowledgment phase. We’ve got devices and we’ve got the capacity for things to walk in and out of our organization that are going to be beyond my control. Now what?

Don't be reactionary

Well, the logical thing to do is not to be reactionary about it and try to push back and say that can’t be allowed, but to attempt to classify and quantify where the data is. What do we care about as an organization? What do we need to protect? Many times, we have these archaic security policies and we have disparate systems throughout an organization.

We've shelled out millions of dollars in our corporate hard-earned capital and we don’t really know what we're protecting. We’ve got servers. The mandate is to have every server have anti-virus and an intrusion prevention system (IPS) and all this stuff, but where is the data? What are you protecting? If you can’t answer that question, then identifying your data asset inventory is step one. That’s not a traditional security function, but it is now, or at least it has to be.
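The data-asset inventory Los calls "step one" can be sketched as a simple classification pass. Everything below is an illustrative assumption: the asset names, the fields, and the rule that flags sensitive-looking fields are made up for the sketch, not anything HP's tooling specifies:

```python
# Minimal sketch of a data-asset inventory with sensitivity tiers.
# Asset names, fields, and the marker set are illustrative assumptions.

SENSITIVE_MARKERS = {"ssn", "account", "salary", "password"}

def classify(asset):
    """Tag an asset 'critical' if any field looks sensitive, else 'standard'."""
    fields = {f.lower() for f in asset["fields"]}
    tier = "critical" if fields & SENSITIVE_MARKERS else "standard"
    return {**asset, "tier": tier}

inventory = [
    {"name": "payroll-db", "location": "dc-east", "fields": ["name", "ssn", "salary"]},
    {"name": "marketing-cms", "location": "cloud", "fields": ["title", "body"]},
]

classified = [classify(a) for a in inventory]
critical = [a["name"] for a in classified if a["tier"] == "critical"]
print(critical)  # ['payroll-db']
```

Even a toy pass like this answers the two questions Los keeps returning to: where is the data, and which of it do we actually care about protecting?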

Gardner: I suppose that when we also think about cloud computing, many organizations might not now be doing public cloud or hybrid cloud, but I don’t think it’s a stretch to say that they probably will be some day. They're definitely going to be doing more with mobile. They're going to be doing more with cloud. So wouldn’t it make sense to get involved with these new paradigms of security sooner rather than later? I think the question is really about being proactive rather than reactive.

Los: The whole idea of cloud, and I've been saying this for a while, is that it's not really that dramatic of a shift for security. What I said earlier about acknowledging that our preconceived notions of defending the castle wall have to be blown apart extrapolates beautifully into the cloud concept, because not only is the data not properly identified within our "castle wall," but now we're handing it off to some place else.

What are you handing off to some place else? What does that some place else look like? What are the policies? What are the procedures? What’s their incident response? Who else are you sharing with? Are you co-tenanting with somebody? Can you afford downtime? Can you afford an intrusion? What does an intrusion mean?

This all goes back to identifying where your data lives, identifying and creating intelligent strategies for protecting it, but it boils down to what my assets are. What makes our business run? What drives us? And, how are we going to protect this going forward?

Gardner: Now thinking about data for security, I suppose we're now also thinking about data for the lifecycle for a lot of reasons about storage efficiency and cutting cost. We're also thinking about being able to do business intelligence (BI) and analytics more as a regular course of action rather than as a patch or add-on to some existing application or dataset.

Is there a synergy or at least a parallel track of some sort between what you should be doing with security, and what you are going to probably want to be doing with data lifecycle and in analytics as well?

Los: It's part-and-parcel of the same thing. If you don’t know what information your business relies on, you can’t secure it and you can’t figure out how to use it to your competitive advantage.

I can’t tell you how many organizations I know that have mountains and mountains of storage all across the organization, and they protect it well. Unfortunately, they seem to ignore the fact that every desktop and every mobile device, iPhone, BlackBerry, or webOS tablet, has a piece of their company that walks around with it. It's not until one of these devices disappears that we all panic and ask what was on it. It’s like when we lost tapes. Losing tapes was the big thing, as was encrypting tapes. Now, we encrypt mobile devices. To what degree are we going to go, and how far are we going to get into how we can protect this stuff?

Enabling the cause

BI is not that much different. It’s just looking at the accumulated set of data and trying to squeeze every bit of information out of it, trying to figure out trends, trying to find out what you can do, how you make your business smarter, get to your customers faster, and deliver better. That’s what security is as well. Security needs to be furthering and enabling that cause, and if we're not, then we're doing it wrong.

Gardner: Based on what you’ve just said, if you do security better and you have more comprehensive integrated security methodology, perhaps you could also save money, because you will be reducing redundancy. You might be transforming and converging your enterprise, network, and data structure. Do you ever go out on a limb and say that if you do security better, you'll save money?

Los: Coming from the application security world, I can cite actual cases where security done right has saved a company money. Here's one from an application security perspective: a company that acquires other companies all of a sudden takes application security seriously. They're acquiring another organization.

They look at some code they're acquiring and say, "This is now going to cost us X millions of dollars to remediate to our standards." Now, you can use that as a bargaining chip. You can either decrease the acquisition price, or you can do something else with it. What they started doing is leveraging that type of value, that kind of security intelligence, to lower their business costs and make smarter acquisitions. We talk about application development and lifecycle.

There is nothing better than a well-oiled machine on the quality front. Quality has three pillars: does it perform, does it function, and is it secure? Nobody wants to get on that hamster wheel of pain, where you get all the way through requirements, development, QA testing, and the security guys look at it Friday, before it goes live on Saturday, and say, "By the way, this has critical security issues. You can’t let this go live or you will be the next ..." -- whatever company you want to fill in there in your particular business sector. You can’t let this go live. What do you do? You're at an absolutely impossible decision point.

So, then you spend time and effort, whether it’s penalties, whether it’s service level agreements (SLAs), or whether it’s cost of rework. What does that mean to you? That’s real money. You could recoup it by doing it right on the front end, but the front end costs money. So, it costs money to save money.

Gardner: Okay, by doing security better, you can cut your risks, so you don’t look bad to your customers or, heaven forbid, lose performance altogether. You can perhaps rationalize your data lifecycle. You can perhaps track your assets better and you can save money at the same time. So, why would anybody not be doing better security immediately? Where should they start in terms of products and services to do that?

Los: Why would they not be doing it? Simply because maybe they don’t know, they haven't quite gotten that level of education yet, or they're simply unaware. A lot of folks haven't started yet because they think there are tremendously high barriers to entry. I’d like to refute that by saying that, as an organization, we have both products and services.

We attack the application security problem and enterprise security problem holistically because, as we talked about earlier, it’s about identifying what your problems are, coming up with a sane solution that fits your organization to solve those problems, and it’s not just about plugging products in.

We have our Security Services that comes in with an assessment. My organization is the Application Security Group, and we have a security program that we helped build. It’s built upon understanding our customer and doing an assessment. We find out what fits, how we engage your developers, how we engage your QA organization, how we engage your release cycle, how we help to do governance and education better, how we help automate and enable the entire lifecycle to be more secure.

Not invasive

It’s not about bolting on security processes, because nobody wants to be invasive. Nobody wants to be that guy who stands there in front of a board and says, "You have to do this, but it’s going to stink. It’s going to make your life hell."

We want to be the group that says, "We’ve made you more secure and we’ve made minimal impact on you." That’s the kind of thing we do through our Fortify Application Security Center group, static and dynamic, in the cloud or on your desktop. It all comes together nicely, and the barrier to entry is virtually eliminated, because if we're doing it for you, you don’t have to have that extensive internal knowledge, and it doesn’t cost an arm and a leg like a lot of people seem to think.

I urge people that haven't thought about it yet, that are wondering if they are going to be the next big breach, to give it a shot, list out your critical applications, and call somebody. Give us a call, and we’ll help you through it.

Gardner: HP has made this very strategic for itself with acquisitions. We now have ArcSight, Fortify, and TippingPoint. I have been hearing quite a bit about TippingPoint here at the show, particularly vis-à-vis the storage products. Is there a brand? Is there an approach that HP takes to security that we can look to on a product basis, or is it a methodology, or all of the above?

Los: I think it’s all of the above. Our story is the enterprise security story. How do we enable that Instant-On Enterprise that has to turn on a dime and go from one strategic direction today to another tomorrow? You have to adapt to market changes. How does IT adapt, continue, and enable that business without getting in the way and without draining it of capital?

If you look around the showroom floor here and look at our portfolio of services and products, security becomes a simple steel thread that’s woven through the fabric of the rest of the organization. It's enabling IT to help the CIO, the technology organization, enable the business while keeping it secure and keeping it at a level of manageable risk, because it’s not about making it secure. Let me be clear. There is no secure. There is only manageable risk and identified risk.

If you are going for the "I want to be secure thing," you're lost, because you will never reach it. In the end that’s what our organizational goal is. As Enterprise Security we talk a lot about risk. We talk a lot about decreasing risk, identifying it, helping you visualize it and pinpoint where it is, and do something about it, intelligently.

Gardner: Is there new technology that’s now coming out or being developed that can also be pointed at the security problem, to get at this risk reduction from a technical perspective?

Los: I'll cite one quick example from the software security realm. We're looking at how we enable better testing. Traditionally, customers have had the capability of doing either what we consider static analysis, which is looking at the source code and binaries, or a runtime analysis, a dynamic analysis of the application through our dynamic testing platform.

One-plus-one turns out to actually equal three when you put those two together. Through these acquisitions and these investments HP has made in these various assets, we're turning out products like a real-time hybrid-analysis product, which is essentially what security professionals have been looking for for years.

Collaborative effort

It’s looking at, when an application is being analyzed, taking the attack or the multiple attacks, the multiple verifiable positive exploits, and marrying them to a line of source code. It’s no longer a security guy doing a scan, generating a 5,000-page PDF, and lobbing it over the wall at some poor developer who then has to figure it out and fix it before some magical timeline expires. It’s now a collaborative effort. It’s people getting together.
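The "marry an exploit to a line of source code" idea can be illustrated with a toy correlation pass. This is only a sketch: the finding formats and the join key (matching on the tainted parameter and issue type) are assumptions made for illustration, not the actual HP product's data model, which instruments the running application:

```python
# Toy correlation of dynamic-scan findings with static-analysis findings.
# Joining on parameter name and issue type is an illustrative assumption;
# a real hybrid-analysis product observes the running code directly.

dynamic_findings = [  # verified at runtime: an exploit actually worked
    {"url": "/search", "param": "q", "issue": "SQL injection"},
]

static_findings = [  # from source scanning: file and line are known
    {"file": "search.py", "line": 42, "param": "q", "issue": "SQL injection"},
    {"file": "auth.py", "line": 17, "param": "user", "issue": "XSS"},
]

def correlate(dynamic, static):
    """Pair each verified runtime exploit with the source line behind it."""
    matches = []
    for d in dynamic:
        for s in static:
            if d["param"] == s["param"] and d["issue"] == s["issue"]:
                matches.append({"url": d["url"], "file": s["file"],
                                "line": s["line"], "issue": d["issue"]})
    return matches

for m in correlate(dynamic_findings, static_findings):
    print(f"{m['issue']} at {m['url']} -> {m['file']}:{m['line']}")
# SQL injection at /search -> search.py:42
```

The point of the pairing is exactly what Los describes: the developer gets a file and line to fix, not a 5,000-page PDF of unverified scanner output.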

One thing that we find broken currently with software development and security is that development is not engaged. We're doing that. We're doing it in real-time, and we're doing it right now. The customers that are getting on board with us are benefiting tremendously, because of the intelligence that it provides.

Gardner: So, built for quality, built for security, pretty much ... synonymous?

Los: Built for function, built for performance, built for security, it’s all part of a quality approach. It's always been here, but we're able to tell the story even more effectively now, because we have a much deeper reach into the security world. If you look at it, we're helping to operationalize what you do when an application is found to have vulnerabilities.

The reality is that you're not always going to fix it every time. Sometimes, things just get accepted, but you don’t want them to be forgotten. Through our quality approach, there is a registry of these defects that lives on with these applications as they continue down the lifecycle, from sunrise to sunset. It’s part of the entire application lifecycle management (ALM) story.

At some point, we have a full registry of all the quality defects, all the performance defects, and all the security defects that were found and remediated, who fixed them, and what the fixes were. The result of all of this information, as I've been saying, is a much smarter organization that works better and faster, and it’s cheaper to make better software.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Thursday, June 9, 2011

Discover case study: Paychex leverages HP ALM to streamline and automate application development

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference in Las Vegas the week of June 6. We're here to explore some major enterprise IT solution trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study focuses on Paychex, a large provider of services to small and medium-sized businesses (SMBs) that is growing rapidly around services for HR, payroll, benefits, tax payments, and quite a few other areas.

Please join Joel Karczewski, the Director of IT at Paychex, to learn how automation and efficiency are changing the game in how they develop and deploy their applications. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Karczewski: Over the past few years, IT has been asked to deliver more quickly, to be more responsive to our business needs, and to help drive down costs in the way in which we develop, deploy, and deliver software and services to our end customers.

To accomplish that, we've been focusing on automating as many of the tasks in a traditional software development lifecycle as possible, to help make sure that tasks that would otherwise be performed manually aren't skipped.

For example, automating from a source-code check-in; automating the process by which we close out the defects that source code was resolving; automating the testing that we do when we create a new service; and automating the performance testing, the unit testing, the code coverage, and the security testing, to make sure that we're not introducing key flaws or vulnerabilities that might be exposed to our external customers.
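One of those automations, closing defects from a source-code check-in, can be sketched roughly as follows. The "DE-1234" ID convention, the in-memory tracker, and the hook function are all hypothetical stand-ins, not Paychex's actual tooling or the Quality Center API:

```python
import re

# Sketch of a check-in hook that closes the defects a commit resolves.
# The "DE-nnn" ID convention and the tracker dict are illustrative assumptions.

DEFECT_ID = re.compile(r"\b(DE-\d+)\b")

tracker = {"DE-101": "open", "DE-102": "open"}  # stand-in for a defect system

def on_checkin(commit_message):
    """Parse defect IDs out of a commit message and mark them resolved."""
    resolved = []
    for defect_id in DEFECT_ID.findall(commit_message):
        if tracker.get(defect_id) == "open":
            tracker[defect_id] = "resolved"
            resolved.append(defect_id)
    return resolved

print(on_checkin("Fix null check in payroll export (DE-101)"))  # ['DE-101']
print(tracker["DE-101"])  # resolved
```

The design point matches the one Karczewski makes later: developers keep writing code and commit messages, and the status changes in the defect system happen without anyone touching it by hand.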

Applications are basically just a combination of integrated services, and we've been moving forward with a strategic service-based delivery model for approximately a year and a half now. We have hundreds of services that are reused and utilized by our applications.

Payroll provider

Paychex is primarily an HR benefits and payroll provider, and our key customers are approximately 570,000 business owners and the employees that work for those business owners.

We've been focusing on the small-business owner because we believe that’s where our specialty is.

What we have been finding over time is that we're developing a hybrid behavioral approach. We have clients who want Paychex to do some of the business tasks for them, but they want to still do some of the tasks themselves.

In order to satisfy the one end of the spectrum or the other and everything in between, we've been moving toward a service-based strategy where we can package, bundle, price, roll out, and deliver the set of services that fit the needs of that client in a very highly personalized and customized fashion.

The more that we can automate, the more we're able to test those services in the various combinations and environments in which they need to perform, be highly available, and be consistent.

Personal information


We have an awful lot of information that is very personal and highly confidential. For example, think about the employees that work for one of these 560,000-plus business owners. We know when they are planning to retire. We know when they move, because they are changing their addresses. We know when they get married. We know when they have a child. We know an awful lot of information about them, including where they bank, and it’s highly, highly confidential information.

We took a step back and took a look at our software delivery lifecycle. We looked at areas that are potentially not as value-add, areas of our software delivery lifecycle that would cause an individual developer, a tester, or a project manager, to be manually taking care of tasks with which they are not that familiar.

For example, a developer knows how to write software. A developer doesn’t always know how to exercise our quality center or our defect tracking system, changing the ownership, changing statuses, and updating multiple repositories just to get his or her work done.

So, we took a look at tasks that cause latency in our software delivery lifecycle and we focused on automating those tasks.

We're using a host of HP products today. For example, in order to achieve automated functional testing, we're utilizing Quality Center (QC) in combination with QuickTest Professional (QTP). In order to do our performance testing pre-production, we utilize LoadRunner. Post-production, we're beginning to look an awful lot at Real User Monitor (RUM), and we're looking to interface RUM with ArcSight, so that when we do have an availability issue, and it is a performance issue for one of our users anywhere utilizing our services, we're able to identify it quickly and identify the root cause.

Metrics of success


We're looking at the number of testing hours that it takes a manual tester to spin through a regression suite and we compare that with literally no time at all to schedule a regression test suite run. We're computing the number of hours that we're saving in the testing arena. We're computing the number of lines of software that a developer creates today in hopes that we'll be able to show the productivity gains that we're realizing from automation.
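The savings metric described above reduces to simple arithmetic. The figures below are made-up placeholders for the sketch, not Paychex's numbers:

```python
# Back-of-the-envelope version of the testing-hours metric described above.
# All figures are illustrative placeholders, not Paychex data.

manual_hours_per_run = 40      # hours a tester spends on one regression pass
runs_per_month = 6             # how often the suite is executed
automation_hours_per_run = 0   # scheduled runs take no tester time

saved = (manual_hours_per_run - automation_hours_per_run) * runs_per_month
print(f"Tester hours saved per month: {saved}")  # Tester hours saved per month: 240
```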

We're very interested in looking at the HP IT Performance Suite and an Executive Scorecard. We're also very interested in tying the scorecard of the builds that we're doing in the construction and the development arena. We're very interested in tying those KPIs, those metrics, and those indicators together with the Executive Scorecard. There's a lot of interest there.

We've also done something that is very new to us, but we hope to mainstream this in the future. For the very first time, we employed an external organization from the cloud. We utilized LoadRunner and did a performance test directly against our production systems.

Why did we do that? Well, it’s a huge challenge for us to build, support, and maintain many testing environments. In order to get a very accurate read on performance and load and how our production systems performed, we picked a peak off-time period and got together with an external cloud testing firm, and they utilized LoadRunner to do performance tests. We watched the capacity of our databases, our servers, our network, and our storage systems as they throttled the volume forward.

We plan to do more of that as a final checkout, when we deliver new services into our production environment.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Wednesday, June 8, 2011

Deep-dive panel discussion on HP's new Converged Infrastructure, EcoPOD and AppSystem releases at Discover

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

This latest BriefingsDirect panel discussion on converged infrastructure and data center transformation explores the major news emerging from this week's HP Discover 2011 conference in Las Vegas.

HP has updated and expanded its portfolio of infrastructure products and services, debuted a mini, mobile data center called the EcoPOD, unveiled a unique dual cloud bursting capability, and rolled out a family of AppSystems, appliances focused on specific IT solutions like big data analytics.

To put this all in context, a series of rapidly maturing trends around application types, cloud computing, mobility, and changing workforces is reshaping what high-performance and low-cost computing is about. In just the past few years, the definition of what a modern IT infrastructure needs and what it needs to do has finally come into focus. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

We know, for example, that we’ll see most data centers converge their servers, storage, and network platforms intelligently for high efficiency and for better management and security. We know that we’ll see higher levels of virtualization across these platforms and for more applications, and that, in turn, will support the adoption of hybrid and cloud models.

We’ll surely see more compute resources devoted to big data and business intelligence (BI) values that span ever more applications and data types. And of course, we’ll need to support far more mobile devices and distributed, IT-savvy workers.

How well companies modernize and transform these strategic and foundational IT resources will then hugely impact their success in managing their own agile growth and in controlling ongoing costs and margins. Indeed, the mingling of IT success and business success is clearly inevitable.

So, now comes the actual journey. At HP Discover, the news is largely about making this inevitable future happen more safely by being able to transform the IT that supports businesses in all of their computing needs for the coming decades. IT executives must execute rapidly now to manage how the future impacts them and to make rapid change an opportunity, not an adversary.

How to execute

Please then meet the panel: Helen Tang, Solutions Lead for Data Center Transformation and Converged Infrastructure Solutions for HP Enterprise Business; Jon Mormile, Worldwide Product Marketing Manager for Performance-Optimized Data Centers in HP's Enterprise Storage Servers and Networking (ESSN) group within HP Enterprise Business; Jason Newton, Manager of Announcements and Events for HP ESSN; and Brad Parks, Converged Infrastructure Strategist for HP Storage in HP ESSN. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Tang: Last year, HP rolled out this concept of the Instant-On Enterprise, and it’s really about the fact that we all live in a very much instant-on world today. Everybody demands instant gratification, and to deliver that and meet their constituents’ needs, an enterprise really needs to become more agile and innovative, so it can scale up and down dynamically to meet these demands.

In order to get answers straight from our customers on how they feel about the state of agility in their enterprise, we contracted with an outside agency and conducted a survey earlier this year with over 3,000 enterprise executives. These were CEOs, CIOs, CFOs across North America, Europe, and Asia, and the findings were pretty interesting.

Less than 40 percent of our respondents said, "I think we are doing okay. I think we have enough agility in the organization to be able to meet these demands."

Not surprising

The number is low, but not very surprising to those of us who have worked in IT for a while. As you know, compared to other enterprise disciplines, IT is a little bit pre-Industrial Revolution. It’s not streamlined. It’s not standardized. There's a long way to go. That clearly spells out a big opportunity for companies to work on that area and optimize for agility.

We also asked, "What do you think is going to change that? How do you think enterprises can increase their agility?" The top two responses coming back were about more innovative, newer applications.

But, the number one response coming from CEOs was that it’s transforming their technology environment. That’s precisely what HP believes. We think transforming that environment and by extension, converged infrastructure, is the fastest path toward not only enterprise agility, but also enterprise success.

Storage innovation news

Parks: A couple of years ago, HP took a step back from the trajectory that we were on as a storage business and the trajectory that the storage industry as a whole was on. We took a look at some of the big trends and problems that we were starting to hear from customers around virtualization, the move to cloud computing, and this concept of really big everything.

We’re talking about data, numbers of objects, size, performance requirements, just everything at massive, massive scale. When we took a look at those trends, we saw that we were really approaching a systemic failure of the storage that was out there in the data center.

The challenge is that most of the storage deployed out in the data center today was architected about 20 years ago for a whole different set of data-center needs, and when you couple that with these emerging trends, the current options at that time were just too expensive.

They were too complicated at massive scale and they were too isolated, because 20 years ago, when those solutions were designed, storage was its own element of the infrastructure. Servers were managed separately. Networking was managed separately, and while that was optimized for the problems of the day, it in turn created problems that today’s data centers are really dealing with.

Thinking about that trajectory, we decided to take a different path. Over the last two years, we’ve spent literally billions of dollars through internal innovation, as well as some external acquisitions, to put together a portfolio that was much better suited to address today’s trends.

Common standard

At the event here, we're talking about HP Converged Storage, and this addresses some of the gaps that we’ve seen in the legacy monolithic and even the legacy unified storage that’s out there. Converged Storage is built on a few main principles. First, we're driving toward common industry-standard hardware, building on ProLiant and BladeSystem DNA.

We want to drive a lot more agility into storage in the future by using modern scale-out software layers. And last, we need to make sure that storage is incorporated into the larger converged infrastructure and managed as part of a converged stack that spans servers, storage, and networking.

When we're able to design on industry-standard platforms like BladeSystem and ProLiant, we can take advantage of the massive supply chain that HP has and roll out solutions at a much lower upfront cost from a hardware perspective.

Second, using that software layer I mentioned, one of the technologies that we bring to bear is thin provisioning. This is a technology that helps customers cut their initial capacity requirements by about 50 percent by eliminating the over-provisioning associated with some of the legacy storage architectures.
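As a rough illustration of where a figure like that 50 percent can come from (a toy model, not HP 3PAR's actual allocator; all volume sizes below are invented), thin provisioning consumes physical capacity only as data is actually written, while traditional thick provisioning reserves each volume's full size up front:

```python
# Toy model of thick vs. thin provisioning (illustrative only).

def thick_capacity_needed(volumes):
    """Physical TB reserved up front: the full provisioned size of each volume."""
    return sum(provisioned for provisioned, _written in volumes)

def thin_capacity_needed(volumes):
    """Physical TB consumed: only what applications have actually written."""
    return sum(written for _provisioned, written in volumes)

# (provisioned TB, actually written TB) for a handful of application volumes
volumes = [(10, 4), (20, 8), (5, 3), (15, 5)]

thick = thick_capacity_needed(volumes)   # 50 TB must be bought on day one
thin = thin_capacity_needed(volumes)     # 20 TB actually consumed
savings = 1 - thin / thick               # fraction of upfront capacity avoided
```

With these hypothetical volumes, thin provisioning defers 60 percent of the upfront capacity purchase; the exact savings depend entirely on how over-provisioned the environment was to begin with.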

One of the things we've seen and talked about with customers worldwide is that data just doesn't go away. It is around forever.

Then, operating expense is the other place where this really gets expensive. That's where it helps to consolidate the management across servers, storage, and networking, build as much automation into the solutions as possible, and even make them self-managing.

For example, our 3PAR Storage solution, which is part of this converged stack, has autonomic management capabilities which, when we talk to our customers, have reduced some of their management overhead by about 90 percent. It's self-managing and can load balance, and because of its wide-striping architecture, it can respond to some of the unpredictable workloads in the data center without requiring the administrative overhead.
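The wide-striping idea behind that load balancing can be sketched in a few lines (a hypothetical toy model, not 3PAR's actual chunklet layout; the sizes are invented): each volume is broken into small chunks that are spread round-robin across every disk in the array, so a suddenly hot volume never concentrates its I/O on one spindle.

```python
# Toy model of wide striping (illustrative only).

def stripe(num_chunklets, num_disks):
    """Map chunklet index -> disk index, round-robin across all disks."""
    return [c % num_disks for c in range(num_chunklets)]

# A 16-chunklet volume striped across an 8-disk array.
layout = stripe(num_chunklets=16, num_disks=8)

# Every disk ends up holding the same number of chunklets,
# so I/O against this volume spreads evenly across the array.
per_disk = [layout.count(d) for d in range(8)]
```

Because every volume touches every disk, adding a disk or absorbing a workload spike is a rebalancing problem the array can solve itself, rather than an administrator's manual placement decision.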

Converged Infrastructure

Newton: We're really excited about the AppSystems announcements. We're in a great position as HP to deliver on the promise of converging server, storage, networking, management, security, and application all into individual solutions.

So, 2009 was about articulating the definition of what that should look like and what that data center in the future should be. Last year, we spent a lot of time in new innovations in blades and mission-critical computing and strategic acquisitions around storage, network, and other places.

The result last year was what we believe is one of the most complete portfolios from a single vendor in the marketplace to deliver converged infrastructure. Now, what we’re doing in 2011 is building on that to bring it all together, simplify it into integrated solutions, and extend that strategy all the way out to the application.

If we look at what kinds of applications customers are deploying today and the ways they’re deploying them, we see three dominant new models. One is applications in a virtualized environment, on virtual machines, with very specific requirements and demands for performance, and concerns about security.

The second: we see a lot of acceleration and interest in applications delivered as a service via cloud. That model brings security concerns and also places new demands on capacity and resource planning, and on automation and orchestration of all the bits and bytes of the application and the infrastructure.

The third model we wanted to address is the dedicated application environment. These are data warehousing, analytics, and collaboration workloads, where performance is really critical, and you want them not on shared resources, but in a dedicated way. But you also want to make sure that environment supports applications in a cloud or virtual environment.

So in 2011, it's about how to bring that portfolio together in the solution to solve those three problems. The key thing is that we didn't want to extend sprawl and continue the problem that’s still out there in the marketplace. We wanted to do all that on one common architecture, one common management model, and one common security model.

Individual Solutions

What if we could take that common architecture, management, and security model, optimize it, integrate it into individual solutions for those three different application sets, and do it on the stuff that customers are already using in the legacy application environment today? Then customers would have something really special.

What we’re announcing this week at Discover is this new portfolio we call Converged Systems. For the virtual workload, we have VirtualSystems. For the dedicated application environment, specifically BI, data management, and information management, we have the AppSystems portfolio. Then, for where most customers want to go in the next few years, cloud, we announced CloudSystem.

So, those are three portfolios, where a common architecture addresses a complete continuum of customers’ application demands. What's unique here is doing that in a common way, built on some of the best-of-breed technologies on the planet for virtualization, cloud, high-performance BI, and analytical applications.

Our acquisition of Vertica powers the BI appliance. The architecture is one of the most modern architectures out there today to handle the analytics in real time.

Before, analytics in a traditional BI data warehouse environment was about reporting. Call up the IT manager, give them some criteria, and they go back, do their wizardry, and come back with a sort of status report, looking only at the dataset in whichever data store they happen to be querying.

It sort of worked, I guess, back when you didn’t need to have that answer tomorrow or next week. You could just wait till the next quarterly review. With the demands of big everything, as Brad was saying, and the speed and scale at which the economy, the business, and the competition are moving, you've got to have this stuff in real time.

So we said, "Let’s go make a strategic acquisition. Let’s get the best-in-class, real-time analytics, a modern architecture that does just that and does it extremely well. And then, let’s combine that with the best hardware underneath it, HP Converged Infrastructure, so that customers can very easily and quickly bring that capability into their environment and apply it in a variety of different ways, whether in individual departments or across the enterprise."

Real-time analytics

There are endless possibilities of ways that you can take advantage of real-time analytics with this solution. Including it into AppSystem makes it very easy to consume, bring it into the environment, get it up and running, start connecting the data sources literally in minutes, and start running queries and getting answers back in literally seconds.

What’s special about this approach is that most analytic tools today are part of a larger data warehouse or BI-centered architecture. Our argument is that in this big-everything future, where information is everywhere, you can’t just rely on the data sources inside your enterprise. You’ve got to be able to pull sources from everywhere.

By buying a monolithic, one-size-fits-all box for OLTP, data warehousing, and a little bit of analytics, you're sacrificing the real-time aspect that you need. So keep the OLTP environment, keep the data warehouse environment, bring in a best-in-class real-time analytics engine on top of them, and give your business, very quickly, some very powerful capabilities to help make better business decisions much faster.
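A toy model can hint at why column-oriented engines like Vertica suit this real-time analytics role (this is an invented illustration, not Vertica's implementation; the table and figures are made up): an aggregate query needs to touch only the one column it cares about, instead of dragging every field of every row through memory the way a row store does.

```python
# Toy contrast of row-oriented vs. column-oriented storage (illustrative only).

# Row store: each record is stored whole, so even a one-column
# aggregate walks every field of every row.
rows = [
    {"order_id": 1, "region": "EMEA", "amount": 120.0},
    {"order_id": 2, "region": "AMER", "amount": 75.5},
    {"order_id": 3, "region": "EMEA", "amount": 60.0},
]
row_total = sum(r["amount"] for r in rows)

# Column store: each column lives in its own contiguous array, so the
# same aggregate scans only the 'amount' values.
columns = {
    "order_id": [1, 2, 3],
    "region": ["EMEA", "AMER", "EMEA"],
    "amount": [120.0, 75.5, 60.0],
}
col_total = sum(columns["amount"])  # touches one column out of three
```

The answers are identical, but on wide tables with billions of rows the column layout reads a small fraction of the data, which is a large part of how such engines return analytic answers in seconds rather than batch-report timescales.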

Data center efficiency

Mormile: When you talk about today’s data centers, most of them were built 10 years ago, and a lot of analyst research says they were built almost 14-15 years ago. These antiquated data centers simply can’t support the infrastructure that today’s IT and businesses require. They are extremely inefficient. Most of them require two to three times the amount of power to run the IT, due to inefficient cooling and power-distribution systems.

In addition, these monolithic data centers are typically over-provisioned and underutilized. Because most companies cannot continually build new facilities, they have to forecast future capacity and infrastructure requirements, which are typically outdated before the data centers are even commissioned.

A lot of our customers need to reduce construction cost, as well as operational expenses. This places a huge strain on companies' resources and their bottom lines. By not changing their data center strategy, businesses are throttled and simply just can’t compete in today’s aggressive marketplace.

HP has a solution: Our modular computing portfolio, and it helps to solve these problems.

Modular computing

Our modular computing portfolio started about three years ago, when we first took a look at and modified an actual shipping container, turning it into a Performance Optimized Data Center (POD).

This was followed by continuous innovation in the space, with new POD designs, the deployment of our POD-Works facility, the world’s first assembly line for data centers, the addition of a flexible data center product, and today, our newest addition, the POD 240A, which gives all the benefits of a container data center without sacrificing the traditional data center look and feel.

Also, with the acquisition of EYP, which is now HP Critical Facilities Services, and by utilizing HP Technical Services, we are able to offer a true end-to-end data center solution, from planning and installation of the IT and the optimized infrastructure to go with it, to onsite maintenance and onsite support globally.

When you combine that with in-house rack and power engineering, delivering finely tuned solutions to meet customers’ growing power and rack needs, it all comes together. You're taking that IT and those innovations to the next level by integrating them into a turnkey solution, which could be a POD or another modular data center product.

You take the POD, and then add the Factory Express services, where we are actually able to take the IT and integrate it into a POD: you have the server, storage, and networking, you have integrated applications, and it has all been cabled and tested.

The final step in the POD process is not only that we're providing Factory Express services, but we're also providing POD-Works. At POD-Works, we take the integrated racks that will be installed in the PODs and provide power, networking, as well as chilled water and cooling, so that every aspect of the turnkey data center solution is pre-configured and pre-tested. This way, customers have a fully integrated data center shipped to them. All they need to do is plug in the power and networking, and/or add chilled water.

Game changer

Being able to have a complete data center on site, up and running, in as little as six weeks is a tremendous game changer in the business, allowing customers to be more agile and more flexible, not only with their IT infrastructure needs, but also with their capital and operational expenses.

When you bring all that together, PODs offer customers the ability to deploy fully integrated, high-performing, efficient, scalable data centers at somewhere around a quarter of the cost and up to 95 percent more efficiency, all the while doing this 88 percent faster than they can with traditional brick-and-mortar data center strategies.

Start services

Newton: There are a multitude of professional services and support announcements at this show. We have some new professional services. I call them start services. We have an AppStart, a CloudStart, and a VirtualStart service. These are the services, where we can engage with the customer, sit down, and assess their level of maturity -- what they have in place, what their goals are.

These services are designed to get each of these systems into the environment, integrated into what you have, optimized for your goals and your priorities, and get this up and running in days or weeks, versus months and years that that process would have taken in the past for building and integrating it. We do that very quickly and simply for the customer.

We have a lot of expertise in these areas that we've been building over the last 20 years. Just as we're simplifying on the hardware, software, and application side, these start services do the same thing. That extends to HP Solution Support, which then kicks in and helps you support that solution across its lifecycle.

There is a whole lot more, but those are two really key ones that customers are excited about this week.

Parks: HP ExpertONE has also recently come out with a full set of training and certification courseware to help our channel partners, as well as internal IT folks that are customers, to learn about these new storage elements and to learn how they can take these architectures and help transform their information management processes.

Tang: This set of announcements brings significant additions in each of their own markets, with the potential to transform, for example, storage, shaking up an industry that’s been pretty static for the last 20 years by offering a completely new architecture designed for the world we live in today.

That’s the kind of innovation we’ll drive across the board with our customers, and everybody who talked before me has mentioned the service offerings that we also bring along with these new product announcements. I think that’s key. The combination of our portfolio and our expertise is really going to help our customers drive that success and embrace convergence.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Talend brings unified data integration platform to public clouds

Talend, an open-source middleware provider, today announced Talend Cloud, a unified integration platform for the cloud and hybrid IT environments.

An extension of the recently announced Unified Integration Platform, Talend Cloud is designed for organizations looking to manage their data integration processes, whether on-premise, in the cloud, via software as a service (SaaS) or for hybrid environments. [Disclosure: Talend is a sponsor of BriefingsDirect podcasts.]

Talend is not providing its own public cloud offering at this time, but is making Talend Cloud available now to enable other cloud and enterprise hybrid users to manage data via a community-enhanced portfolio of data and services connectors.

For organizations with hybrid IT environments – that combine on-premise, private cloud, public cloud and SaaS – application and data integrations are difficult, yet critical to leveraging these multi-sourced models. Concerns surrounding latency, bandwidth, permissions and security are causing new forms of integration and data management challenges.

Talend Cloud provides flexible and secure integration of on-premise systems, cloud-based systems, and SaaS applications, said Talend. It also provides a common environment for users to manage the lifecycle of integration processes including a graphical development environment, a deployment mechanism and runtime environment for operations and a monitoring console for management – all built on top of a shared metadata repository.

It strikes me that these services are directly applicable to business intelligence and master data management for analytics, as the data can be cleansed, accessed, and crunched in clouds, even as it originates from multiple locations. Hybrid cloud data analytics can be very powerful, and Talend is helping to jump-start this value.

“Although cloud has become ubiquitous in today’s IT deployments, many organizations are still trying to determine how to function in hybrid environments,” said Bertrand Diard, co-founder and CEO of Talend, in a release. “Using Talend Cloud, customers can address these issues within a single platform that addresses a broad range of integration needs and technologies, ranging from data-oriented services to data quality and master data management, via a unified environment and a flexible deployment model.”

Deployment Flexibility

The new platform provides deployment flexibility for Talend’s solutions and technologies within the Unified Integration Platform, including data integration, data quality, master data management and enterprise service bus. All components can be installed transparently in the cloud, on premise, or in hybrid mode. Key features include:
  • The ability to expand and contract deployments as required
  • Support for standard systems and protocols
  • An open-source model that makes resources accessible by a variety of platforms and devices
  • Modular architecture that allows organizations to add, modify or remove functionality as requirements change over time
  • The ability to maintain security and reliability of integration, allowing organizations to meet customer service-level agreements (SLAs)
Talend Cloud provides automated deployment on popular cloud platforms such as Amazon EC2, Cloud.com, and Eucalyptus. Also included are new connectors offering native connectivity to a broad range of key cloud technologies and applications, as well as the most popular SaaS applications.

New connectors continue to be added on a regular basis, either by the open-source community or by Talend’s R&D organization. The Talend Exchange provides the latest connectors, which can be downloaded and installed directly within Talend Studio at no per-connector cost.

Talend Cloud is available immediately. More information is available at http://www.talend.com/products-talend-cloud/.

You may also be interested in:

Tuesday, June 7, 2011

HP takes plunge on dual cloud bursting: public and/or private apps support comes of age

LAS VEGAS – HP today at Discover here introduced advancements to its CloudSystem solutions with the means for cloud providers and enterprises to accomplish dual cloud bursting, one of the Holy Grails of hybrid computing.

The CloudSystem targets service providers by giving them the ability to allow their enterprise customers to extend their private cloud-based applications' bursting capabilities to third-party public clouds too. See more news on the HP AppSystems portfolio.

HP CloudSystem, announced in January and expanded in the spring with a partner program, is designed to enable enterprises and service providers to build and manage services across private, public and hybrid cloud environments. As a result, clients have a simplified yet integrated architecture that is easier to manage and can be scaled on demand, said HP. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

In a demo on stage here today at Discover, HP's Dave Donatelli, executive vice president and general manager of Enterprise Servers, Storage and Networking for the Enterprise Business at HP, showed some unique features. The HP CloudSystem demo showed heterogeneous cloud bursting with drag and drop, on HP and third-party x86 boxes. Management and setup ease seemed simple and automatic.

HP CloudSystem should appeal to both cloud providers and enterprises, because it forms a common means to get them both on the cloud options spectrum. HP dual bursting works with public clouds whether or not they use HP CloudSystem, said HP.

HP CloudSystem dual bursting also seems to allow tiered bursting, with data on the private cloud and the web tier on public clouds, and it just works, said HP. This seems quite new and impactful. And it's now available.

Based on HP Converged Infrastructure and HP Cloud Service Automation software, HP CloudSystem helps automate the application-to-infrastructure lifecycle and provides operations deployment flexibility, said HP. HP CloudSystem helps businesses package, provision, and manage cloud services to users regardless of where those services are sourced, whether from CloudSystem’s “on-premises” resources or from external clouds.

Managing applications resources as elastic compute fabrics that span an enterprise's data centers and one or more public cloud partners offers huge benefits and advantages. Businesses that depend more on customer-facing applications, for example, can hone utilization rates and vastly reduce total cost of ownership while greatly reducing the risk that those applications and their data will not always be available, regardless of seasonal vagaries, unexpected spikes or any issues around business continuity.

"Capacity never runs out," said James Jackson, Vice President for Marketing Strategy, Enterprise Servers, Storage and Networking in HP Enterprise Business.

With CloudSystem, HP is providing the management, security, and governance requirements for doing dual-burst hybrid computing, including hardware, software, and support services. Automated management capabilities help ensure that performance, compliance, and cost targets are met by allocating private and public cloud resources based on a client’s pre-defined business policies. As a result, clients can create and deliver new services in minutes, said HP.
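How a pre-defined business policy might gate such a burst decision can be sketched as follows (a hypothetical illustration; the function, field names, and thresholds are invented and are not HP CloudSystem's actual interface): demand is placed on private capacity first, and overflow bursts to a public cloud only if policy permits it and a cost or compliance cap is not exceeded.

```python
# Hypothetical sketch of policy-driven cloud bursting (illustrative only).

def place_workload(demand_units, private_free_units, policy):
    """Decide how many capacity units run privately vs. burst to public cloud."""
    private = min(demand_units, private_free_units)  # fill private capacity first
    overflow = demand_units - private
    if overflow == 0:
        return {"private": private, "public": 0}
    if not policy["allow_public_burst"]:
        raise RuntimeError("demand exceeds private capacity and bursting is disabled")
    if overflow > policy["max_public_units"]:
        raise RuntimeError("burst would exceed the policy's cost/compliance cap")
    return {"private": private, "public": overflow}

# A seasonal spike: demand for 100 units against 70 free private units.
policy = {"allow_public_burst": True, "max_public_units": 40}
placement = place_workload(demand_units=100, private_free_units=70, policy=policy)
```

Under this invented policy, 70 units stay on the private cloud and 30 burst to the public cloud; automating checks like these is what lets performance, compliance, and cost targets be enforced without an administrator in the loop.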

HP also announced HP CloudAgile, a program that spans the HP enterprise portfolio including CloudSystem. To speed time to revenue and improve financial flexibility for a broad range of service providers, the program provides participants with direct access to HP’s global sales force and its network of channel partners.

HP expects to co-sell and co-market such hybrid services with telcos, VARs, SIs, and a wide range of new and emerging service providers. I expect many of these providers to customize their offerings, but based on an HP or other cloud stack vendor foundation.

Current approaches to cloud computing can create fragmentation and address only a portion of the capabilities required for a complete cloud solution, said HP. Over time, more enterprise applications may be sourced directly to public clouds, but for the foreseeable future, private clouds and hybrid models are expected to predominate. See more news on converged infrastructure and EcoPOD developments.

HP CloudSystem is powered by HP BladeSystem with the Matrix Operating Environment and HP Cloud Service Automation. It is optimized for HP 3PAR Utility Storage, and protected by HP security solutions, including offerings from TippingPoint, ArcSight and Fortify. HP CloudSystem also supports third-party servers, storage and networking, as well as all major hypervisors, said HP.

HP said that its customers that have already invested in HP Converged Infrastructure technology can expand their current architectures to achieve private, public or complete hybrid cloud environments.

HP announced yesterday that it is making up to $2 billion available to help clients finance their way to the cloud through HP Financial Services Co., HP’s leasing and asset management subsidiary.

Furthermore, HP is offering HP Cloud Consulting Services and HP Education services for CloudSystem, including HP CloudStart, to fast track building a private cloud. HP CloudSystem Matrix Conversion Service helps transition current BladeSystem environments to CloudSystem, said HP.

HP Solution Support for CloudSystem simplifies problem prevention and diagnosis with end-to-end support for the entire environment. These services deliver solutions right-sized for the client’s environment, protect investments when transitioning from a virtual infrastructure to a private cloud solution and rapidly deploy CloudSystem in a hybrid, multi-sourced cloud environment.

HP also unveiled at Discover two new cloud security services, HP Cloud Services Vulnerability Scanning and HP Cloud Vulnerability Intelligence. Available now worldwide, these allow cloud service providers to identify and remedy missing patches or network node vulnerabilities. The second service recommends remediation to infrastructure as a service and provides actionable advice to avoid vulnerabilities before they can manifest.

You may also be interested in: