Monday, April 26, 2010

HP rolls out application modernization tools on heels of Forrester survey showing need for better app lifecycle management

Application lifecycle productivity is proving an escalating challenge in today’s enterprise. Bloated app portfolios and obsolete technologies can stifle business agility and productivity, according to a new Forrester Research IT trends survey.

A full 80 percent of IT decision makers queried cited obsolete and overly complex technology platforms as having a "significant" or "critical" impact on application delivery productivity. Another 76 percent cited the negative impact of "cumbersome software development lifecycle processes," while 73 percent said it was "difficult to change legacy applications," according to Forrester's consulting division. The study is available. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

A sharp focus on overcoming the challenges associated with improving applications quality and productivity is leading to a growing demand for applications modernization. Specifically, agility, cost reduction and innovation are driving modernization efforts, the Forrester survey concludes.

Fifty-one percent of Forrester’s respondents are currently modernizing software development lifecycle tools, including software testing processes. But are enterprises truly realizing the benefits of application modernization efforts?

On Monday, HP rolled out a set of application quality tools that focus on increasing business agility and reducing time to market to help more companies answer "yes" to that question. The new solutions are part of the HP Application Lifecycle Management portfolio, a key component of HP’s Application Transformation solutions to help enterprises manage shifting business demands.

New challenges, new tools

HP Service Test Management (STM) 10.5 and the enhanced HP Functional Testing 10.0 work to advance application modernization efforts in two ways. First, the tools make it easier for enterprises to focus on hindrances to application quality. Second, the tools improve the all-important line of sight between development and quality assurance teams.

“To maintain a competitive edge in today’s dynamic IT environment, it is critical for business applications to rapidly support changes without compromising quality or performance,” says Jonathan Rende, vice president and general manager of HP’s Business Technology Optimization Applications, Software and Solutions division.

HP STM 10.5 works to mitigate risk and improve business agility by setting the stage for more collaboration between development and quality assurance teams. Built on HP Quality Center, HP STM 10.5 helps enterprises increase testing efficiency and the overall throughput of application components and shared services.

Meanwhile, HP Functional Testing 10.0 helps ensure application quality as business demands change. It also offers a new Web 2.0 Feature Pack and Extensibility Accelerator that supports Web 2.0 apps and lets IT admins test any rich Internet application technology.

“It is critical for us, particularly in the financial industry, to react rapidly to development changes early on in the testing life cycles,” says Mat Gookin, test automation lead at SunTrust Banks. “We look to ... flexible technology that keeps application quality performance high and operations cost low so we can focus on preventing risks and providing value to our end users.”

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, April 23, 2010

Freed from data center requirements, cloud computing gives start-ups the fast-track to innovate, compete

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.

By Mike Kavis

Cloud computing is grabbing a lot of headlines these days. As we have seen with SOA in the past, there is a lot of confusion about what cloud computing is, a lot of resistance to change, and a lot of vendors repackaging their products and calling them cloud-enabled.

While many analysts, vendors, journalists, and big companies argue back and forth about semantics, economic models, and viability of cloud computing, start-ups are innovating and deploying in the cloud at warp speed for a fraction of the cost.

This raises the question, “Can large organizations keep up with the pace of change and innovation that we are seeing from start-ups?”

Innovate or die

Unlike large, well-established companies, start-ups don’t have the time or money to debate the merits of cloud computing. In fact, a start-up will have a hard time getting funded if it chooses to build data centers, unless building data centers is its core competency.

Start-ups are looking for two things: speed to market and a minimal burn rate. Cloud computing provides both. Speed to market comes from eliminating long procurement cycles for hardware and software, outsourcing various management and security functions to the cloud service providers, and automating the scaling of resources up and down as needed.

A low burn rate comes from avoiding the costs of physical data centers (cooling, rent, labor, etc.), paying only for the resources you use, and freeing up people to work on core business functions.
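
As a deliberately simplified illustration of that elasticity argument, here is a minimal Python sketch of demand-driven scaling. The CloudProvider class, instance names, and thresholds are invented stand-ins for whatever IaaS API a start-up actually uses; the point is only that capacity follows the queue, and billing follows capacity.

```python
# Hypothetical sketch: keep just enough cloud capacity running to drain the
# current workload queue, and release it when demand falls off.

import math
import random


class CloudProvider:
    """Stand-in for any IaaS API: launch and terminate billed-by-the-hour instances."""

    def __init__(self):
        self.instances = []

    def launch(self, count):
        start = len(self.instances)
        self.instances += [f"worker-{start + i}" for i in range(count)]

    def terminate(self, count):
        del self.instances[len(self.instances) - count:]


def scale_to_demand(cloud, queue_depth, jobs_per_instance=50, min_instances=1):
    desired = max(min_instances, math.ceil(queue_depth / jobs_per_instance))
    delta = desired - len(cloud.instances)
    if delta > 0:
        cloud.launch(delta)        # burst up when demand spikes
    elif delta < 0:
        cloud.terminate(-delta)    # shed capacity -- and cost -- when it falls

    return len(cloud.instances)


if __name__ == "__main__":
    cloud = CloudProvider()
    for hour in range(6):
        demand = random.randint(0, 500)   # simulated hourly workload
        running = scale_to_demand(cloud, demand)
        print(f"hour {hour}: queue={demand:3d}, instances={running}")
```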

I happen to be the CTO of a start-up. Without cloud computing, we would not even be in business. We are a retail technology company that aggregates digital coupons from numerous content providers and automatically redeems those coupons in real time at the point of sale when customers shop.

To provide this service, we need highly scalable, reliable, and secure infrastructure in multiple locations across the nation and eventually across the globe. The amount of capital required to build these data centers ourselves and hire the staff to manage them is at least 10 times the amount we are spending to build our 100 percent cloud-based platform. There are a handful of large companies that own the paper coupon industry.

You would think that they would easily be the leaders in the digital coupon industry. These highly successful companies are so bogged down in legacy systems and have so much invested in on-premise data centers that they just cannot move fast enough and build the new digital solutions cheap enough to compete with a handful of start-ups that are racing to sign up all the retailers for this service.

Oh, the irony of it all! The bigger companies have a ton of talent, well established data centers and best practices, and lots of capital. Yet the cash strapped start-ups are able to innovate faster, cheaper, and produce legacy-free solutions that are designed specifically to address a new opportunity driven by increased mobile usage and a surge in the redemption rates of both web and mobile coupons due to economic pressures.

My story is just one use case where we see start-ups grabbing accounts that used to be a honey pot for larger organizations. Take a look at the innovation coming out of the medical, education, home health services, and social networking areas to name a few and you will see many smaller, newer companies providing superior products and services at lower cost (or free) and quicker to market.

While bigger companies are trying to change their cultures to be more agile, to do “more with less” -- and to better align business and IT -- good start-ups just focus on delivery as a means of survival.

Legacy systems and company culture as anchors

Start-ups get to start with a blank sheet of paper and design solutions to specifically take advantage of cloud computing whether they leverage SaaS, PaaS, or IaaS services or a combination of all three. For large companies, the shift to the cloud is a much tougher undertaking.

First, someone has to sell the concept of cloud computing to senior management to secure funding to undertake a cloud based initiative. Second, most companies have years of legacy systems to deal with. Most, if not all of these systems were never designed to be deployed or to integrate with systems deployed outside of an on-premise data center.

Often the risk/reward of re-engineering existing systems to take advantage of the cloud is not economically feasible and has limited value for the end users. If it’s not broken, don’t fix it!

Smarter companies will start new products and services in the cloud. This approach makes more sense, but there are still issues like internal resistance to change, skill gaps, outdated processes/best practices, and a host of organizational challenges that can get in the way. Like we witnessed with SOA, organization change management is a critical element for successfully implementing any disruptive technology.

Resistance to change and communication silos can and will kill these types of initiatives. Start-ups don’t have these issues, or at least they shouldn’t. Start-ups define their culture from inception. The culture for most start-ups is entrepreneurial by nature. The focus is on speed, low cost, results.

Large companies also have tons of assets depreciating on the books and armies of people trained to manage stuff on-site. Many of these companies want the benefits of the cloud without giving up the control they are used to having. This often leads them down an ill-advised path to build private clouds within their data centers.

To make matters worse, some even use the same technology partners that supply their on-premise servers without properly evaluating the thought-leading vendors in this space. When you see people arguing about the economics of the cloud, this is why. The cloud is economically feasible when you do not procure and manage the infrastructure on-site.

With private clouds, you give up many of the benefits of cloud computing in return for control. Hybrid clouds offer the best of both worlds, but even hybrids add a layer of complexity and manageability that may drive costs higher than desired.

We see that start-ups are leveraging the public cloud for almost everything. There are a few exceptions where due to customer demands, certain data are kept at the customer site or in a hosted or private cloud, but that is the exception not the norm.

The ZapThink take

Start-ups will continue to innovate and leverage cloud computing as a competitive advantage while large, well-established companies will test the waters with non-mission critical solutions first. Large companies will not be able to deliver at the speed of start-ups due to legacy systems and organizational issues, thus conceding to start-ups for certain business opportunities.

Our advice is that larger companies create a separate cloud team that is not bound by the constraints of the existing organization and let them operate as a start-up. Larger companies should also consider funding external start-ups that are working on products and services that fit into their portfolio.

Finally, large companies should also have their merger and acquisition departments actively looking for promising start-ups for strategic partnerships, acquisitions, or even buy-to-kill strategies. This approach allows larger companies to focus on their core business while shifting the risks of failed cloud executions to the start-ups.

If you’re a Licensed ZapThink Architect and you’d like to contribute a guest ZapFlash, please email info@zapthink.com.

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.


Thursday, April 15, 2010

Information management takes aim at need for improved business insights from complex data sources

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Get a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Today's sponsored podcast discussion delves into how to better harness the power of information to drive and improve business insights.

We’ll examine how the tough economy has accelerated the progression toward more data-driven business decisions. To enable speedy, proactive business analysis, information management (IM) has arisen as an essential ingredient in making business intelligence (BI) pay off for those decisions.

Yet IM itself can become unwieldy, as well as difficult to automate and scale. So managing IM has become an area for careful investment. Where, then, should those investments be made for the highest analytic business return? How do companies better compete through the strategic and effective use of their information?

We’ll look at some use case scenarios with executives from HP to learn how effective IM improves customer outcomes, while also identifying where costs can be cut through efficiency and better business decisions.

To get to the root of IM best practices and value, please join me in welcoming our guests, Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP; John Santaferraro, Director of Marketing and Industry Communications for BI Solutions at HP; and Vickie Farrell, Manager of Market Strategy for BI Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Santaferraro: The customers that we work with tend to have very complex businesses, and because of that, very complex information requirements. It used to be that they looked primarily at their structured data as a source of insight into the business. More recently, the concern has moved well beyond business intelligence to look at a combination of unstructured data, text data, IM. There’s just a whole lot of different sources of information.

The idea that they can have some practices across the enterprise that would help them better manage information and produce real value and real outcomes for the business is extremely relevant.

If you look at the information worker or the person who has to make decisions on the front line, if you look at those kinds of people, the truth is that most of them need more than just data and analysis. In a lot of cases, they will need a document, a contract. They need all of those different kinds of data to give them different views to be able to make the right decision.

... I’d like to think of it as actually enterprise IM. It’s looking across the entire business and being able to see across the business. It’s information, all types of information: structured and unstructured data, documents, scanned documents, video assets, media assets.

... By effectively using the information they have and further leveraging the investments that they’ve already made, there is going to be significant cost savings for the business. A lot of it comes out of just having the right insight to be able to reduce costs overall. There are even efficiencies to be had in the processing of information. It can cost a lot of money to capture data, to store it, and cleanse it.

Then it’s about the management, the effective management of all of those information assets to be able to produce real business outcomes and real value for the business. ... Obviously, the companies that figure out how to streamline the handling and the management of their information are going to have major cost reductions overall.

The way to compete

Esser: This is really becoming the way that leading edge companies compete. I’ve seen a lot of research that suggests that CEOs are becoming increasingly interested in leveraging data more effectively in their decision-making processes.

It used to be fairly simple. You would simply identify your best customers, market like heck to them, and try to maximize the revenue derived from your best customers.

Now, what we’re seeing is emphasis on getting the data right and applying analytics to an entire customer base, trying to maximize revenue from a broader customer base.

We’re going to talk about a few cases today where entities got the data right; they now serve their customers better, have reduced costs at the same time, and have increased their profitability.

... We think of IM as having four pillars. The first is the infrastructure, obviously -- the storage, the data warehousing, information integration that kind of ties the infrastructure together.

The second piece, which is very important, is governance. That includes things like data protection, master data management, compliance, and e-discovery.

The third is information processes. We start talking about paper-based information, digitizing documents, and getting them into the mix. Those first three pillars taken together really form the basis of an IM environment. They’re really the pieces that allow you to get the data right.

The fourth pillar, of course, is the analytics, the insight that business leaders can get from the analytics about the information. The two, obviously, go hand in hand. A rugged information infrastructure with weak analytics isn’t any better than a poor infrastructure with solid analytics. Getting both pieces of that right is very, very important.

... [And, again,] governance processes are the key to everything I talked about earlier -- the pillars of a solid IM environment. Governance [is] about protecting data, quality, compliance, and the whole idea of master data management -- limiting access and making sure that the right people have access to input data and that the data is of high quality.

Farrell: We recently surveyed a number of data warehouse and BI users. We found that 81 percent of them either have a formal data governance process in place or they expect to invest in one in the next 12 months.

... What we’ve seen in the last couple of years is serious attention on investing in that data structure -- getting the data right, as we put it. It's establishing a high level of data quality, a level of trust in the data for users, so that they are able to make use of those tools and really glean from that data the insight and information that they need to better manage their business.

... A couple of years ago, I remember, a lot of pundits were talking about BI becoming pervasive, because tools had gotten more affordable and easier to use. Therefore, anybody with a smartphone, PDA, or laptop computer was going to be able to do heavy-duty analysis.

Of course, that hasn’t happened. There is more than the tools themselves limiting the wide use of BI. One of the biggest issues is the integration of the data, the quality of the data, and having a data foundation in an environment where the users can really trust it and use it to do the kind of analysis that they need to do.

... The more effectively you bring together the IT people and the business people and get them aligned, the better the acceptance is going to be. You certainly can mandate use of the system, but that’s really not a best practice. That’s not what you want to do.

By making the information easily accessible and relevant to the business users and showing them that they can trust that data, it’s going to be a more effective system, because they are going to be more likely to use it and not just be forced to use it.

Esser: Organizations all over the world are struggling with an expansion of information. In some companies, you’re seeing data doubling one year over the next. It’s creating problems for the storage environment. Managers are looking at processes like de-duplication to try to reduce the quantity of information.

Then, you’re getting pressure from business leaders for timely and accurate information to make decisions with. So, the challenge for a CIO is that you’ve got to balance the cost of IT, the cost of governance and risk issues involved in information, while at the same time, providing real insight to your business unit customer. It’s a tough job.

Key examples

Farrell: Well, one key example comes to mind. It’s an insurance company that we have worked with for several years. It’s a regional health insurance company faced with competition from national companies. They decided that they needed to make better use of their data to provide better services for their members, the patients as well as the providers, and also to create a more streamlined environment for themselves.

And so, to bring the IT and business users together, they developed an enterprise data warehouse that would be a common resource for all of the data. They ensured that it was accurate and they had a certain level of data quality.

They had outsourced some of the health management systems to other companies. Diabetes was outsourced to one company. Heart disease was outsourced to another company. It was expensive. By bringing it in house, they were able to save money, and they were also able to do a better job, because they could integrate the data for each patient and have one view of that patient.

That improved the aggregate wellness score overall for all of their patients. It enabled them to share data with the care providers, because they were confident in the quality of that data. It also saved them some administrative cost, and they recouped the investment in the first year.

... Another thing that we're doing is working with several health organizations in states in the US. We did one project several years ago and we are now in the midst of another one. The idea here is to integrate data from many different sources. This is health data from clinics, schools, hospitals, and so on throughout the state.

Doing this gives you the opportunity to bring together and integrate in a meaningful way data from all these different sources. Once that’s been done, it can serve not only these systems, but also some of the more real-time systems that we see coming down the line, like emergency surveillance systems that would detect terrorist threats, bioterrorism, pandemics, and things like that.

It's important to understand and be able to get this data integrated in a meaningful way, because more real-time applications and more mission-critical applications are coming and there is not going to be the time to do the manual integration.

Santaferraro: We find that a lot of our customers have very disconnected sets of intelligence and information. So, we look at how we can bring that whole world of information together for them and provide a connected intelligence approach. We are actually a complete provider of enterprise class industry-specific IM solutions.

... Probably the hottest topic that I have heard from customers in the last year or so has been around the development of the BI competency center. Again if you go to our BI site, you will find some additional information there about the concept of a BICC.

And the other trend that I am seeing is that a lot of companies want to move beyond just the BI space with that kind of governance. They want to create an enterprise information competency center, expanding beyond BI to include all of IM.

We have expertise around several business domains like customer relationship management, risk, and supply chain. We go to market with specific solutions for 13 different industries. As a complete solution provider, we provide everything from infrastructure to financing.

Obviously, HP has all of the infrastructure that a customer needs. We can package their IM solution in a single finance package that hits either CAPEX or OPEX. We've got software offerings. We've got our consulting business that comes in and helps them figure out how to do everything from the strategy that we talked about upfront and planning to the actual implementation.

We can help them break into new areas where we have practices around things like master data management or content management or e-discovery.

Esser: We have a couple of ways to get started. We can start with a business value assessment service. This is a service that sets people up with a business case and tracks ROI, once they decide on a project. The interesting piece of that is they can choose to focus on data integration, master data management, or what have you.

You look at a particular element of IM and build a project around that. This assessment service allows people to identify the element in their current IM environment that will give them the best ROI. Or, we can offer them a master planning service, which generates a really comprehensive IM plan, covering everything from data protection and information quality to advanced analytics.

Obviously, you can get details on those services and our complete portfolio for that matter at www.hp.com/go/bi and www.hp.com/go/im, as well as at www.hp.com/go/neoview. There is some specific information about the Neoview Advantage enterprise data warehouse platform there.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Access a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010


Wednesday, April 14, 2010

Fog clears on proper precautions for putting more enterprise data safely in clouds

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast homes in on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Strategies built on cloud computing security solutions should therefore be a priority, and should prompt even more enterprise data to be stored, shared, and analyzed in a cloud under strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don’ts of cloud computing and corporate data. Please welcome Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP, and Archie Reed, HP's Chief Technologist for Cloud Security and the author of several publications, including The Definitive Guide to Identity Management; he's also working on a new book, The Concise Guide to Cloud Computing. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Reed: If you look at the history that we’re dealing with here, companies have been doing those sorts of things with outsourcing models or sharing with partners or indeed community type environments for some time. The big difference with this thing we call cloud computing, is that the vendors advancing the space have not developed comprehensive service level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies offering cloud services get those capabilities, at least today, is by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns, when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs, the very limited pieces of information that you receive on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data got public in one way, form, or shape, and assess what the implications are.

As companies are required to work more closely with the rest of their ecosystem, cloud services are an easy way to do that. It’s a concept that is reasonably well-known under the label of community cloud. It’s one of those that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really understanding, on one hand, what you get into and assessing what makes sense and what doesn’t make sense, what’s really critical for you and what is less critical.

Reed: At the RSA Conference in San Francisco, we spoke about what we called the seven deadly sins of cloud. ... One of the threats was data loss or leakage. In that, you have examples such as insufficient authentication, authorization, and all that, but also lack of encryption or inconsistent use of encryption, operational failures, and data center liability. All these things point to how to protect the data.

One of the key things we put forward as part of the Cloud Security Alliance (CSA) announcement that HP was active in was to try to draw out key areas that people need to focus on as they consider the cloud and try to deliver on the promises of what cloud brings to the market.

Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the security posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources, which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third tenet, validating compliance, covers the traditional governance, risk, and compliance management aspects. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for my enterprise and what are things that, if they leak out, if they get known, it's not too bad. It's not great in any case, but it's not too bad. And, data classification is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to the public clouds.

I've seen too many companies jumping in without that step and being burnt in one way, form, or shape. It's sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go of, that are absolutely critical? On the other hand, what are the things that I quite frankly don't care too much about?" It's building that understanding that is actually critical.
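
As a rough sketch of the classification exercise described here -- not an HP or vendor methodology, just an illustration with invented categories and rules -- the Python pass below scores each data asset's sensitivity and suggests where it could reasonably live:

```python
# Illustrative data-classification sketch: score each asset's sensitivity and
# map it to a placement policy. Categories, scores, and placements are
# assumptions for illustration only.

SENSITIVITY = {"public": 0, "internal": 1, "confidential": 2, "regulated": 3}


def recommend_placement(asset):
    score = SENSITIVITY[asset["classification"]]
    if score >= 3:
        return "keep on-premise (or at the customer site)"
    if score == 2:
        return "private/community cloud, with encryption and audited access"
    return "public cloud acceptable"


assets = [
    {"name": "marketing collateral", "classification": "public"},
    {"name": "partner contracts", "classification": "confidential"},
    {"name": "core product source code", "classification": "regulated"},
]

for asset in assets:
    print(f'{asset["name"]}: {recommend_placement(asset)}')
```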

... Today, because of the term "cloud," most of the cloud providers are getting away with providing very little information, setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major infrastructure-as-a-service (IaaS) and PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers can know, and that's going to make it easier for them to hack the environment and to limit our security."

There is a point there, but that makes it difficult for people who really want to have source code, as in your example. That's relevant and important for them, because you have source code that’s not too sensitive and source code that's very critical. To put that source code in the cloud without knowing what's actually being done is probably worse than being able to make a very clear risk assessment. Then, you know what level of risk you are taking. Today, you don't know in many situations.

Reed: Also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy cloud computing infrastructure, and we created RACE, the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources to a community of users in a secure manner and they store all sorts of things in that. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. It's just that when we think about things like the visceral reaction that the cloud is insecure, it's not necessarily correct. It's insecure for certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go: evolving into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. We’re already seeing that in these examples.

Beating the competition

While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you’re in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But you also get faster access to all that stuff. You also get capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation -- and by that I mean that you, as an individual organization, are going out and looking for those cloud resources -- then you’re going to get the ability to expand well beyond what your internal IT department could provide.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, has the opportunity not only to deliver and better manage what it’s doing in terms of providing services for the organization, but also has a responsibility to do this right, understand the security implications, and represent those appropriately to the company so that it can deliver that accelerated capability.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Private cloud models: Moving beyond static grid computing addiction

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.

By Randy Clark


People don’t talk much about grid computing these days, but most application teams that require high performance from their infrastructure are actually addicted to grid computing -- whether they know it or not.

Gone are the days of requiring a massive new SMP box to get to the next level of performance. But in today’s world of tight budgets and diverse application needs, the linear scalability inherent in grid technologies becomes meaningless when there are no more blades being added.

This constraint has led grid managers and solution providers to search for new ways to squeeze more capacity from their existing infrastructures, within tight capital expenditure budgets. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

The problem is that grid infrastructures are typically static, with limited-to-no flexibility in changing the application stack parameters – such as OS, middleware, and libraries – and so resource capacity is fixed. By making grids dynamic, however, IT teams can provide a more flexible, agile infrastructure, with lower administration costs and improved service levels.

So how do you make a static grid dynamic? Can it be done in an easy-to-implement and pragmatic, gradual way, with limited impact on the application teams?

By introducing private cloud management capabilities, armed with standard host repurposing tools, any type of grid deployment can go from static to dynamic.

For example, many firms have deployed multiple grids to serve the various needs of application teams, often using grid infrastructure software from multiple vendors. Implementing a private cloud enables consolidation of all the grid infrastructures to support all the apps through a shared pool approach.

The pool then dynamically allocates resources via each grid workload manager. This provides a phased approach to creating additional capacity through improved utilization, by sharing infrastructure without impacting the application or cluster environments.

The beginning of queue sprawl

Take another example. What if the grid teams have already consolidated using a single workload manager? This approach often results in “queue sprawl,” since resource pools are reserved exclusively for each application’s queues.

But by adding standard tools, such as virtual machines (VMs) and dual-boot, resources can be repurposed on demand for high-priority applications. In this case, the private cloud platform dictates which application stack image should be running at any given time. This results in dynamic application stacks across the available infrastructure, such that any suitable physical machine in the cluster can be repurposed on demand for additional capacity.
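
The Python sketch below illustrates that repurposing decision in miniature. The host and queue structures, priorities, and the idea of "reimaging" a host are invented for illustration; a real private cloud platform would drive a VM swap or dual-boot switch rather than a dictionary update.

```python
# Illustrative repurposing sketch: when a high-priority queue backs up,
# switch hosts serving lower-priority apps to that application's stack image.
# The "reimage" step stands in for a VM swap or dual-boot; all names are invented.

def rebalance(hosts, queues):
    """hosts: {host: app}, queues: {app: (pending_jobs, priority)}"""
    # Find the highest-priority application with the deepest backlog.
    starved = max(queues, key=lambda app: (queues[app][1], queues[app][0]))
    pending, _ = queues[starved]
    if pending == 0:
        return hosts

    # Repurpose hosts currently serving lower-priority applications.
    for host, app in list(hosts.items()):
        if pending <= 0:
            break
        if app != starved and queues[app][1] < queues[starved][1]:
            hosts[host] = starved      # "reimage" this host to the starved app's stack
            pending -= 1
    return hosts


hosts = {"h1": "batch", "h2": "batch", "h3": "risk"}
queues = {"batch": (0, 1), "risk": (120, 3)}
print(rebalance(hosts, queues))   # h1 and h2 are repurposed to "risk"
```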

Once an existing grid infrastructure is made dynamic and all available capacity is put to use, grid managers can still consider other non-capital spending sources to increase performance even further.

The first step is to scavenge internal underutilized resources that are not owned by the grid team. These under-used resources can range from employee desktop PCs to VDI farms, disaster recovery infrastructure, and low-priority servers. From these, grid workloads can be launched within a VM on the "scavenged" machines, and then immediately stopped when the owning application or user resumes.

The second major step, once these higher levels of infrastructure productivity are reached, is to direct IT operating budget to external services such as Amazon EC2 and S3. A private cloud solution can centrally manage the integration with and metering of public cloud use (so-called hybrid models), providing additional capacity for “bursty” workloads or full application environments. And since access to the public cloud is controlled and managed by the grid team, application groups are provided with a seamless service experience -- with higher performance for their total workloads.
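
Here is a small, hedged sketch of such a burst-and-meter policy: internal grid capacity is consumed first, overflow is sent to a public provider, and the external usage is tallied so the grid team retains control of the spend. The capacity and rate figures are made-up assumptions, not real pricing.

```python
# Hedged sketch of hybrid bursting: fill internal grid capacity first, then
# burst the overflow to a public provider and meter the spend.
# Capacity and rate figures are illustrative assumptions only.

INTERNAL_SLOTS = 400          # assumed internal grid capacity (job slots)
EXTERNAL_RATE = 0.10          # assumed cost per external slot-hour


def place_workload(demand_slots):
    internal = min(demand_slots, INTERNAL_SLOTS)
    external = demand_slots - internal
    cost = external * EXTERNAL_RATE
    return internal, external, cost


total_external_cost = 0.0
for hour, demand in enumerate([300, 450, 700, 380]):
    internal, external, cost = place_workload(demand)
    total_external_cost += cost
    print(f"hour {hour}: {internal} internal slots, {external} burst slots, ${cost:.2f}")

print(f"metered public-cloud spend: ${total_external_cost:.2f}")
```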

While many grid professionals already consider their grid environments cloud-like, the advent of mature cloud computing models can help make grid environments more completely dynamic, providing new avenues for agility, service improvement and cost control.

And by squeezing more from your infrastructure before spending operating budget on external services, you can protect your investment while satisfying users’ insatiable appetite for more performance from the grid.

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.


Monday, April 12, 2010

Enterprise IT plus social media plus cloud computing equals the future

Two developments last week really solidified for me the collision course between social media concepts and traditional enterprise IT. This is by no means a train wreck, but rather a productive, value-add combination that is sure to make IT departments more responsive to the needs of the businesses and the customers they mutually support.

First, IT consultancy Hinchcliffe & Co. was acquired by Dachis Group. This mashes up Dachis's "social business design" professional services offerings with Hinchcliffe's Enterprise 2.0 architecture, methods and implementations.

The merger shows that social media-enabled business activities need the full involvement of core IT, and that IT has a new and increasingly important role in designing how corporations will find, reach, connect to and service their customers, partners, suppliers -- and the various communities that surround them all.

Terms of the sale for both of the privately held firms were not disclosed, but Hinchcliffe & Co. founder Dion Hinchcliffe told me he'll be helping Dachis Group harness the efficiencies and reach of social media through Enterprise 2.0 for global 2000 corporations.

As he sees it (and I agree), the ability for IT to use rich Internet application technologies, SaaS, cloud, SOA, business intelligence, and social-media-driven end-user metadata -- all leveraged via SOA-integrated, governed and automated business processes -- is changing the nature of business. Companies now know that they can (and should) do business differently, but they don't yet know how to pull all the services and parts together to do it. The same goes for marketing execs.

Time for IT and marketing to get to know each other better. IT organizations and Enterprise 2.0 methods are increasingly aligned to integrate and leverage the traditional IT strengths with the best of the web, social, and marketing. Doing an end-run around IT for advanced marketing is a stop-gap measure; the real solution is bringing IT and web/social/marketing together.

You can't have meaningful and scalable social business strategy at global 2000 firms without the firm hand of IT, newly endowed with modern architectures and tools, on the tiller. A firm like Dion's makes that essential but so far rare connection between the IT culture and the social media marketing pioneers.

"This gets us poised for what happens next: The coming half-decade is going to be a tremendously important and exciting one in the business world as organizations look to fundamentally retool for the 21st century, an era that has quite different expectations and requirements around business and how it gets done," said Hinchcliffe.

The Dachis Group, founded by Jeff Dachis (former Razorfish CEO) in 2008 and well-funded by Austin Ventures, is growing quickly and doing considerable acquiring, including Headshift Ltd. last year. Dion will join Dachis as senior engagement manager, reporting to Peter Kim, managing director of North American operations.

Another indication of this mega mashup between technology and social media: Salesforce.com's expansion of the private beta testing of Salesforce Chatter, a Facebook-style social networking platform for enterprises and SMBs. And now AppExchange 2, the next generation of Salesforce's enterprise app storefront, will include a "ChatterExchange" for social networking business apps.

I saw a demo of Chatter last month at Salesforce headquarters in San Francisco. It has the potential to do what Google Wave does, only better and more targeted at business functions. If I were Lotus, I'd be concerned.

From all this, I see a business world that will soon no longer begin and end its days in an email in-box or portal, but on the "wall" of a precisely filtered flow that defines the business process through a social-interactions lens, not a back-office application interface. And that wall can be easily adjusted based on the user's activities, policies, etc. Just about anything can be added, or not.

I'm not alone in this vision, of course. Salesforce last week in a New York press conference rolled out "Cloud 2," which has enterprise apps behaving like Twitter, Facebook, or YouTube.

[Incidentally, my old Gillmor Gang cohort and founder, Steve Gillmor, today joins Salesforce.com after leaping and hopping from a rag tag bunch of podcast and blog sites. Congrats, Steve.]

Yep, social networking meets the enterprise. Kind of like chocolate and peanut butter.

Thursday, April 8, 2010

Private cloud computing nudges enterprises closer to 'IT as a service', process orientation and converged infrastructure

So-called "private cloud computing" actually consists of many maturing technologies, a variety of architectural approaches, and a slew of IT methodologies, many of which have been in development for 20 years or more.

In many ways, the current popularity of cloud computing models marks an intersection of different elements of IT development and a convergence of infrastructure categories. That makes cloud interesting, relevant, and potentially dramatic in its impact. It also makes cloud complex, in terms of attaining the intended positive results.

Yet private cloud adoption -- which I believe is just as important as "public" cloud sourcing options -- may be challenging to implement successfully at a strategic level, or even at multiple tactical levels. Cloud concepts will most certainly enter into use in many different ways and, perhaps, uniquely for each adopting organization. So the question is how private cloud adoption can be approached intelligently, flexibly, and with a far higher chance of positive and demonstrable business benefit.

I recently had a chance to discuss the anticipated impact of private cloud models, and how enterprises are likely to implement them, with two HP executives: Rebecca Lawson, director of Worldwide Cloud Marketing at HP, and Bob Meyer, worldwide virtualization lead in HP's Technology Solutions Group. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP also recently delivered a virtual conference on cloud computing. Our discussion came in the lead-up to that conference.

Here are some excerpts:
Rebecca Lawson: Cloud is a word that's been overused and overhyped and we all know it. One of the reasons it's been so popular is because it has a connotation that any kind of cloud service is one that you can access easily over the Internet, by yourself, self-service, and pay for what you use. That's the standard definition of a cloud service.

The ideas between private and public cloud are pretty similar. You want to be able to deliver and consume a service quickly over the Internet. How they're implemented, of course, is quite different. A typical enterprise IT organization has to support different types of applications and workloads, and in the public cloud, most of the providers are pretty specialized in their requirements.

There are lots of different ways of creating, buying, or utilizing different kinds of technology-enabled services. They might be hosted. They might be cloud services. They might be mainframe-based services. They might be homegrown applications. Step one, when you think about private cloud, is to think about, "What services do I need to deliver, how should I deliver them, and how can I make sure that my consumers can have easy access to them when they need them?"

Bob Meyer: Traditionally, what IT has done is delivered built-to-order services. Somebody from a line of business comes to you and says that they need this specific application. Or, somebody in the test environment says that they need a test bed. As the IT supplier internal to the company, it's your job to get together the storage, the server, the network, the apps, and the data. You do all the plumbing yourself and provide that for that specific service.

In the private cloud or public cloud conversation, you will use an IT provider who will likely be providing a mix of services from this point out -- built-to-order, private cloud, public cloud, and managed services.

The job is to decide what's best for your organization from that mixed bag of services. Which services are right for which delivery model? Which ones make most sense for the business? So, the built-to-order will become less popular, as cloud becomes more prevalent, we believe, but they will certainly co-exist for quite a while.

Lawson: Nobody can afford to rip and replace these days, and we don't think that's really necessary. What's necessary is a shift in how you think about things. Think about all the pools of equipment you have. You've got network stuff, server, storage, people, and processes. They tend to be fairly siloed and pretty complex, because you're supporting so many services and so many apps.

In this day and age, you have to get very direct with what technology-enabled services you provide and why, and what's the most efficient means of doing so. One of the great things about the cloud is that it has allowed the whole universe of service providers to expand and specialize.

Companies that are seizing this opportunity and saying, "We're going to take advantage of technology and use it in a proactive way to help build our organization," are doing so in a very aggressive way right now, because they have more choices and can afford to pick the right service to get a certain outcome out of it.

What you want to achieve

A lot of it depends on what you want to achieve. If what you're going for is to create an environment where every service IT delivers can be easily consumed by people in the lines of business through a service catalog, there are two ways to approach it. One is from the bottom-up, from your infrastructure, your network, your compute, your storage. You need to set yourself up so your services can be sharable.

That means that instead of having dedicated infrastructure components for each application or service, you pool and converge those elements, so that anytime you want to instantiate a service, you can make it easily provisioned and you can make it sharable. That's the bottom-up approach, which is valid and required.

The top-down approach is to say, "How can we make our services consumable?" That means there's a consumer who's a business person, maybe a salesperson, people in accounting, or what have you. They're your consumers.

They want to be able to come into a menu or a portal and order something, just as they'd order something at Starbucks, where they say, "I want this. Show me what my service levels are. Show me what the options are and what the costs are." Press the button, and it automatically goes out, gets the approval, does the provisioning, and you're ready to go.

You want to be able to do that from the top-down. That's not just the automation of it, but also the cultural shift. IT and people in the lines of business have to come together, sit at a table, and say, "What will be rendered in our service catalog? What are the things that you need to accomplish? Based on that, we're going to offer these services in our catalog."

The catalog becomes that linchpin. It's almost a conversation device. It forces IT and the lines of business to align themselves around a series of services and that becomes it. That's how IT establishes itself as a service provider. What I call the litmus test is having a service catalog that defines what people can use and, by inference, what they can’t be using.
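
To make the catalog idea concrete, here is an illustrative sketch of a self-service request against a tiny catalog. The entries, service levels, costs, and approval rule are invented; they stand in for whatever a real service catalog and provisioning workflow would actually contain.

```python
# Illustrative sketch of a self-service catalog: a consumer picks an entry,
# sees its service level and cost, and an approval/provisioning step runs
# automatically. Entries, tiers, and prices are invented for illustration.

CATALOG = {
    "web-app-environment": {"sla": "99.9%", "monthly_cost": 1200, "needs_approval": False},
    "analytics-sandbox":   {"sla": "99.0%", "monthly_cost": 800,  "needs_approval": True},
}


def order(service, requester):
    entry = CATALOG[service]
    if entry["needs_approval"]:
        print(f"routing {service} request from {requester} for approval...")
    # From here, provisioning would be automated against pooled, shared infrastructure.
    print(f"provisioned {service} for {requester}: SLA {entry['sla']}, ${entry['monthly_cost']}/month")


order("web-app-environment", "sales ops")
order("analytics-sandbox", "finance")
```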

A lot of companies -- and our own company, HP, is an example -- have certain policies about what can and can't be used, based on security, corporate policies, or what have you. An implication of moving in this direction is having the right control and governance around the technology services that get used and by whom they get used. Security around certain data access, identity control, and things like that, all come into play with this.

Meyer: Building a private cloud becomes another way you look at providing the best quality services to the business at the lowest cost.

So, if you look at all the things that you're mandated to provide to the business, you now have another option that says, "Is this a better way for me to be providing these services to the business? Do I drive out risk? Do I drive out cost? Do I drive up agility?" The more choices you have on the back end, if you take that longer-term approach and look at private cloud in that context, it really does help you make smarter decisions and set up a more agile business.

Lawson: The real key there is to think not so much about whether it's going to cost us or save us money, but rather, wouldn't it be great if, for every service, you could say how much money that service helped you make, how much revenue came in the door, or how much money that service helped you save?

Unrealistic metric

In a perfect state, you would know that for every service. Of course, that's unrealistic, but for a vast majority of the services that one offers, there should be a very distinctive value metric set up against that. Usually, that value metric out in the commercial world is that you've paid money for it.

Will you save money by establishing a private cloud? Well, yeah, you should. That should be pretty obvious. There should be some savings, if you're doing it right. If you've gone through a pretty structured process of consolidating, virtualizing, standardizing, and automating, it certainly will.

But an even better bang for the buck is saying, "With my portfolio of services, which happen to execute in a shared infrastructure environment, not only might it be really efficient, but I know what the business result of it is."

Meyer: Imagine if all the physical components -- the servers and network connections, the storage capacity, even the power in the data center -- were virtualized in a way that they could be treated as a pool of resources that you could carve up on demand and assign to different applications. You could automate it in a way to connect all the moving pieces to make the best use of the capacity you have, and do that in a standardized way on top of fewer standardized parts.

That's what we mean by convergence in terms of infrastructure. Going back to the point we talked about before, rather than creating dedicated built-to-order infrastructure for every technology-enabled service, infrastructure is made available from adaptive pools that can be shared by any application, optimized, and managed as a service.

To get to that point, we mentioned the virtualization part -- not just server virtualization, but virtualizing the connections between compute, storage, and network and making sure that they can be connected, reconnected, and disconnected on demand, as the services demand. They have to be resilient. You have to build resiliency into that converged infrastructure, from disaster recovery to things like nonstop fault tolerance.

Lawson: It's a great period of opportunity for companies to really harness the various elements and the various possibilities around technology-enabled services and then put them to work. We help companies do this in any number of ways. From the process and organizational point of view, we've got a lot of ITIL expertise, COBIT, and all kinds of governance and service management expertise within HP.

We help train organizations and we, of course, have a very large services organization, where we outsource these capabilities to enterprises across the globe. We also have a really robust software portfolio that helps companies automate practically every element of the IT function and systems management, literally from the business value of a service all the way down to the bare metal.

So, we're able to help companies instrument everything, starting with where the money is coming from, and make sure that everything down the line -- the servers, the storage, the networks, and the information -- are all part of the equation. Of course, we offer companies different ways of consuming all of this.

We have products and services that we sell to our customers. We have ways of helping them get these capabilities through our managed services, through the organization previously known as EDS, which is now called Enterprise Services, as well as licensed products, software-as-a-service (SaaS) products, infrastructure as a service (IaaS), all kinds of stuff.

It really depends on each individual customer. We look at their situation and say, "Where are you today, where do you want to get to, and how can we optimize that experience and help you grow into a more efficient, responsive IT organization?"