Friday, February 26, 2010

HP rolls out data center services aimed at boosting IT ROI for global SMBs

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

In a move to tap into the small- to mid-sized business (SMB) data center market, Hewlett-Packard (HP) just rolled out a set of services aimed at helping smaller outfits drive the same IT efficiencies as larger enterprises.

The portfolio is designed to improve efficiency and increase IT budget flexibility, while mitigating risks and maximizing return on investment (ROI) from existing IT skills and assets. The services also help organizations deal with rapid change and simplify the management of multi-vendor environments. HP also launched new procurement options for custom integration, operations and improvement services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“Our new services are based on drivers that impact owners of small- to mid-sized data centers,” said Ian Jagger, worldwide marketing manager of Infrastructure and Operations for HP’s Technology Services Group. “These services help our customers deal with the challenge of managing IT complexity and sprawl, space and infrastructure limitations, and limited IT budgets and staff.”

Improving operational efficiency

Recognizing SMB organizations' requirements for speed, efficiency and 24/7 access to shared virtual IT resources, HP is delivering four new services designed to help clients gain tighter environment-wide control and broader, deeper visibility into support-related functions.

HP Multivendor Support Services works to help clients increase service levels and reduce the complexity and costs of managing heterogeneous IT environments. By exercising global buying power among vendors and suppliers, HP said it can effectively lower the cost of support contracts.

These services are entirely differentiated because only licensed engineers can deliver these services and HP’s competitors don’t have licensed engineers.



“We have been offering multi-vendor support solutions to our customers,” says Dionne Morgan, worldwide solutions marketing manager for HP’s Technology Services group. “In addition to IBM and Dell servers, we also now support Sun servers and Sun Solaris 10 for HP ProLiant servers. And for HP Integrity servers we’re now supporting Novell, SUSE Linux and Microsoft Windows Server 2008.”

On the operational efficiency front, HP also announced HP Insight Remote Support, which monitors a customer’s environment around the clock and provides remote diagnostics, troubleshooting and support; HP has also added support for VMware virtual environments. Meanwhile, HP Active Chat offers real-time Web chat support for problem resolution, and the HP Data Center Training Symposium helps companies develop a custom training plan to increase the effectiveness of IT staff.

Increasing computing capacity

HP also announced value assessment services structured for data centers up to 5,000 square feet in size. The services work to help SMBs find ways to increase computing capacity and cut energy costs.

The new services include Basic Capacity Analysis for Smaller Footprints Assessment, Infrastructure Condition and Capacity Analysis for Smaller Footprints Assessment, and Energy Efficiency Analysis for Smaller Footprints Assessment.

“These services are entirely differentiated because only licensed engineers can deliver these services and HP’s competitors don’t have licensed engineers,” Jagger says. “Our competitors have to partner with specialist companies to deliver these services. We’re also restructuring these services to be sold by our channel partners.”

Offering flexible purchase options

Finally, HP promises to make it easier for SMBs to procure value services that will help them better manage limited resources and drive business value from their technology infrastructure through HP Units of Service and HP Proactive Select Services.

“We’ve taken the customized services available from our technical services portfolio and converted them into what we call Units of Service,” Jagger says. “A Unit of Service is a deliverable at a highly granular level. Any given custom service could be made up of multiple Units of Service.”

HP Proactive Select Services let clients move to a variable budget model, acquiring expert resources on-demand to address changing data center needs.



HP Units of Service gives SMBs access to value services from HP through channel partners that aim to maximize ROI and set the stage for business growth. For example, SMBs can tap into HP custom data center consulting services such as relocation, integration, operations and improvement.

HP Proactive Select Services let clients move to a variable budget model, acquiring expert resources on-demand to address changing data center needs. HP has included Server Firmware Update Installation Service, Technical Online Seminars, Virtual Tape Library Health Check and LeftHand SAN/iQ Update Service to its portfolio.

“With these services, companies can focus their IT staff on strategic IT investments that differentiate them in the marketplace,” Jagger says. “What you’re seeing here is more and more services brought to customers at a value level through the channel that allows them to focus where they can drive the greatest ROI from staff.”

The SMB IT services and support market is ripe for greater efficiency and lower total costs, and SMBs are also prime candidates for upcoming cloud and hybrid-sourced services. Increasingly, everything delivered as a service can reach organizations of any size, anywhere.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

Thursday, February 25, 2010

Citrix Online acquires Paglo, launches GoToManage to tear down IT management boundaries for the cloud era

In a move to enter the burgeoning SaaS-based IT management market, Citrix Online announced its acquisition of Menlo Park, Calif.-based Paglo Labs on Wednesday. The first fruit of the acquisition is an integrated web-based platform for monitoring, controlling and supporting IT infrastructure.

Dubbed GoToManage, the new service lets Citrix Online tap into the growing demand for software-as-a-service (SaaS)-based IT management, a market Forrester Research predicts will reach $4 billion in 2013. Citrix Online is positioning the latest addition to its online services portfolio as an affordable alternative to premise-based software. [Disclosure: Paglo is a sponsor of BriefingsDirect podcasts. Learn more about Paglo's offerings and value.]

I expect that as more enterprises experiment with and adopt mixed-hosted services -- including cloud, SaaS, IaaS, and outsourced ecosystem solutions -- web-based management capabilities will become a requirement. In order to manage across boundaries, you need management reach that has mastered those boundaries. On-premises and traditional IT management is clearly not there yet.

Elizabeth Cholawsky, vice president of Products and Services at Citrix Online, explains the reasoning behind the acquisition:
“Our customers increasingly tell us they are interested in adding IT management services to our remote support capabilities. With the growing acceptance of SaaS and the increasing use of IT services in small- and medium-sized businesses, we decided IT management reinforced our remote support strategy.”
The Paglo puzzle piece

According to IDC, Citrix Online was the remote support market leader in 2008 with a 34.7 percent global share via its GoToAssist services. IDC also pegs Citrix Online as the third largest SaaS vendor in the world based on 2007 revenue, but Citrix Online needed Paglo-like log analysis technology in order to offer its customers the next puzzle piece in its full SaaS picture.

Paglo has made a name for itself providing SaaS-based IT search and management services. In short, Paglo helps companies harness and analyze the information explosion coming from all their computer, server, network and log data. Paglo helps companies improve operating efficiencies, gain a clearer understanding of true IT costs and meet compliance requirements.

Now, Paglo serves as the foundation for GoToManage. GoToManage creates an IT "system of record" that gives businesses the ability to discover and identify all network devices, monitor critical servers and applications in real-time, manage network usage, and track configuration changes. Like other Citrix Online products, GoToManage can be accessed from anywhere, and doesn’t require costly server infrastructure.

A seamless transition?

With GoToManage, Citrix Online is once again disrupting the traditional IT model. Brian de Haff, CEO of Paglo, expects a seamless integration for Paglo customers and GoToAssist customers that tap into the new service. With behind-the-scenes integration completed, customers can click on a link and instantly access GoToManage. De Haff also expects Paglo customers to adopt GoToAssist and use the two services in tandem.

Bringing these technologies together is a terrific win for the customers of both companies.



“When we look across the Paglo customer base, the integration of monitoring with remote support is by far the number one requested feature that customers are asking for,” de Haff says. “So bringing these technologies together is a terrific win for the customers of both companies.”

Cholawsky declined to comment on whether Citrix Online will make additional acquisitions to add to its portfolio, which also includes GoToMyPC, GoToMeeting, GoToAssist, GoToWebinar, GoToTraining and GoView. What she did say is that Citrix Online is witnessing a large growth spurt, which she expects to continue.

“We’re constantly looking at partners and acquisitions,” Cholawsky says. “With the venture capital investments in smaller companies with great technologies over the past couple of years, acquisitions are a terrific way to grow our company. But whether we develop more organically or go out and partner closely or do more acquisitions, we’ll be investing heavily in the SaaS market.”

Financial terms of the Paglo acquisition were not disclosed.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, February 23, 2010

Survey: IT executives experimenting with mostly 'private' cloud architectures

If you want a realistic view of cloud computing adoption – along with an understanding of what motivates IT executives to invest in the cloud, what concerns remain, and what initiatives are planned – you can’t limit your frame to a single industry. The full picture only becomes clear through a cross section of research, manufacturing, government and education fields.

That’s the approach Platform Computing took at a recent supercomputing conference. The company late last year surveyed 95 IT executives across a number of fields to offer insight into how organizations are experimenting with cloud computing and how they view the value of private clouds. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

The results: Nearly 85 percent intend to keep their cloud initiatives within their own firewall.

“When deploying a private cloud, organizations will need a management framework that can leverage existing hardware and software investments and support key business applications,” says Peter Nichol, general manager of the HPC Business Unit at Platform Computing. “This survey reaffirms the benefits that private clouds offer – a more flexible and dynamic infrastructure with greater levels of self-service and enterprise application support.”

Most organizations surveyed are experimenting with cloud computing – and experimenting is the key word. Eighty-two percent don’t foresee cloud bursting initiatives any time soon. This suggests an appreciation for private cloud management platforms that are independent of location and ownership, and can provide the needed security in a world of strict regulations around transparency and privacy.

Security is chief concern

Forty-nine percent cite security as a chief concern with cloud computing. Another 31 percent pointed to the complexity of managing clouds, while only 15 percent said cost was an issue. Indeed, security concerns are a force driving many IT execs toward private rather than public clouds. Forty-five percent of organizations are considering establishing private clouds as they experiment with ways to improve efficiency, increase their resource pool and build a more flexible infrastructure.

. . . The adoption of cloud computing should follow a sequence of evolutionary steps rather than an overnight revolution.



There seems to be some naïveté over the cloud. Nearly three-quarters of those surveyed don’t expect their IT organization infrastructure to change in the face of cloud computing. But that is not a realistic expectation. The move to cloud computing is an evolutionary one and IT organizations must themselves evolve to meet the demands of the organizations and their users. Ultimately, a willingness to evolve begins with an appreciation of the cloud’s value.

“Cloud computing has provided the impetus for IT to make a much needed shift, but many in the industry are still struggling to understand the value of the cloud,” says Randy Clark, chief marketing officer at Platform Computing. “As organizations continue to experiment with cloud to move toward better efficiency and cost-savings, it is best to bear in mind that to ensure success, the adoption of cloud computing should follow a sequence of evolutionary steps rather than an overnight revolution.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Complex systems engineering helps scale SOA the right way

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Ever since ZapThink published our Business Agility as an Emergent Property of SOA ZapFlash, we've been explaining in our Licensed ZapThink Architect course how SOA implementations must be complex systems in order to deliver on emergent properties like business agility. Yet even though we've expanded our treatment of Complex Systems Engineering (CSE) in the latest version of the course, the reaction of most of our students is typically one of perplexity.

Not that we're really surprised, however. Breaking away from the Traditional Systems Engineering (TSE) way of thinking is a huge leap for most technologists, as it shakes to the foundation how they think about architecture, not just SOA in particular, but even more fundamentally, the role IT plays in the enterprise.

Complex systems: Order from chaos in nature

Complex systems theory is especially fascinating because it describes how many natural phenomena occur. Whenever there is an emergent property in nature -- that is, a property of a system as a whole that the elements of the system do not exhibit -- then that system is a complex system.

Everything from the human mind to the motion of galaxies is an emergent property of its respective system. Fair enough, but those are all natural complex systems, and we're charged with implementing an artificial, human-made complex system. How we take the lessons from nature and apply them in the IT shop is a question that engenders the perplexity we see on our students' faces.

There is a fundamental flaw in this distinction, however. Making such a distinction between natural and artificial systems is basically a TSE way of thinking because it separates people from their tools. In a traditional IT system, people are the "users," but not inherently part of the system. In many complex systems, however, people aren't just part of the system, they are the system.

. . . The system includes individual people making individual decisions based upon their personal point of view within the system . . .



In fact, any large group of people behaves as a complex system. For example, take a stadium full of people doing the wave. Each individual in the crowd decides whether or not to participate based upon the behavior of other people, but the wave itself has "a mind of its own" -- in other words, the wave behavior is an emergent property of the crowd. Another example would be a traffic jam. An accident in opposing traffic will slow down your side of the freeway every time, even though each individual knows that slowing down to look will cause a jam. You and hundreds of people like you can decide not to slow down to look in order to avoid creating a jam, but the jam forms nevertheless.

In the wave example, no technology of any kind takes a role, while in the traffic example, vehicles affect the behavior of the system to a certain extent. In fact, changing the technology can have a dramatic impact on the behavior of the system: If the traffic consisted of trains instead of automobiles, your train might not slow down at all for a problem on a neighboring track. But regardless of whether it's made up of trains or automobiles, the system includes individual people making individual decisions based upon their personal point of view within the system, and emergent properties result, just as they do in a natural system with no people involved at all.
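To make the idea concrete, here is a minimal illustrative sketch (not from the ZapThink article) of the classic Nagel-Schreckenberg traffic model. Each simulated driver follows only simple local rules, yet stop-and-go jams emerge for the crowd as a whole, which is exactly the kind of emergent property that no individual component exhibits on its own.

```python
# Toy Nagel-Schreckenberg traffic cellular automaton: local driver rules
# (speed up, don't hit the car ahead, occasionally hesitate) produce
# emergent stop-and-go jams. Figures are illustrative only.
import random

random.seed(1)
ROAD_CELLS = 100      # circular road divided into cells
NUM_CARS = 35
V_MAX = 5             # max cells a car may advance per tick
P_HESITATE = 0.3      # chance a driver randomly slows down (rubbernecking, etc.)

positions = sorted(random.sample(range(ROAD_CELLS), NUM_CARS))
speeds = [0] * NUM_CARS

def tick():
    global positions, speeds
    order = sorted(range(NUM_CARS), key=lambda i: positions[i])
    new_pos, new_spd = positions[:], speeds[:]
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % NUM_CARS]
        gap = (positions[ahead] - positions[i] - 1) % ROAD_CELLS
        v = min(speeds[i] + 1, V_MAX)   # try to speed up
        v = min(v, gap)                 # never run into the car ahead
        if v > 0 and random.random() < P_HESITATE:
            v -= 1                      # random hesitation
        new_spd[i] = v
        new_pos[i] = (positions[i] + v) % ROAD_CELLS
    positions, speeds = new_pos, new_spd

for t in range(100):
    tick()
    if t % 20 == 19:
        stopped = sum(1 for v in speeds if v == 0)
        print(f"t={t+1:3d}  mean speed={sum(speeds)/NUM_CARS:4.2f}  stopped cars={stopped}")
```

Raise the car density or the hesitation probability and the jams become more persistent, even though no rule anywhere says "form a jam."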

The enterprise as a complex system

Any human organization is, in fact, a complex system, including those unwieldy beasts we refer to as enterprises. Enterprises all have policies and managers and lines of control, but the overall behavior of the enterprise emerges from the individual behaviors of the participants in it. Furthermore, the emergent behaviors of corporations and governments may depend entirely on the people who belong to such enterprises, independent of technology. But when we do include technology in our enterprises, we can dramatically affect the emergent behavior of those systems, just as switching from cars to trains changes how traffic behaves.

. . . It's certainly true that some architects are too focused on the technology, leaving people out of the equation altogether . . .



So, what do you get when you take traffic and subtract the people? A parking lot! Without the people, what was a complex system is now little more than a collection of individual, traditional systems, namely the cars themselves. Each auto is a traditional system in the sense that the properties it exhibits are the properties its manufacturer designed into it. The best you can expect with TSE, after all, is to deliver a system that does what it's supposed to do.

Too often in the enterprise, people confuse complex systems with collections of traditional systems, which is just as big a mistake as confusing a parking lot full of empty cars with a traffic jam. In fact, architects are often the first to make this mistake. Of course, it's certainly true that some architects are too focused on the technology, leaving people out of the equation altogether, but even architects who do include people in the architecture often do so from a TSE perspective rather than a CSE approach. But no matter how hard you try, designing better steering wheels and leather seats and the like won't prevent traffic jams!

Complex systems thinking and SOA

In traditional systems thinking, then, we have systems and users of those systems, where the users have requirements for the systems. If the systems meet those requirements then everybody's happy.

In complex systems thinking, we have systems made up of technology and people, where the people make decisions and perform actions based upon their own individual circumstances. They interact with the technology in their environments as appropriate, and the technology responds to those interactions based upon the requirements for the complex system as a whole. In many cases, the technology provides a feedback loop that helps the people achieve their individual requirements, just as brake lights in a traffic jam help reduce the chance of collisions.

Such complex systems thinking has been a common theme in many of ZapThink's articles for several years now. Here are some examples:
  • In Best Effort SOA and the SOA Quality Star, we discuss how the business agility requirement complicates the SOA quality challenge. Because agility is an emergent property, we have to establish continuous quality policies that ensure that the delivered system is sufficiently agile. As a result, there's always a trade-off between agility and quality we call "Best Effort SOA."

  • In The Buckaroo Banzai Effect: Location Independence, Service-Oriented Architecture, and the Cloud, we explore the "Next Big Thing" as SOA, Cloud Computing, Web 2.0, and mobile presence converge. Our conclusion? "The Next Big Thing isn't a cloud in the sense of abstracted data centers full of technology; it's a cloud of people, communicating, creating, and conducting business, where the technology is hidden in the mist."

  • In Resilience: The Missing Word in the SOA Conversation, we discuss how SOA implementations must be resilient, that is, they must have self-righting tendencies that help them recover from adverse forces in their environment. Resilience is a property of the component systems in a SOA implementation that allows the overall system to exhibit the emergent property of business agility.

  • Finally, in the more recent The Christmas Day Bomber, Moore's Law, and Enterprise IT, we introduce the concept of a "metapolicy feedback loop" that explicitly describes the relationship between humans tackling governance in the enterprise and the governance technology they leverage for the task. Only by taking a complex systems approach to the problem of governance do organizations have any chance of dealing with the explosion in the quantity and complexity of information in the enterprise over time.
The common element in all of these arguments is the feedback loop between people and technology at the component level, which enables the overall system to continue to meet requirements as those requirements change -- the essence of business agility.

The ZapThink take

If you still find yourself perplexed by this whole complex systems story, it might help to point out that complex systems aren't necessarily complicated. In fact, in a fundamental way they are really quite simple. Traffic jams may be difficult to understand, but individuals driving cars are not.

Best practices like Metadata-driven governance, the Business Service abstraction, and infrastructure and implementation variability, to name a few, are well within reach of today's SOA initiatives. And the great thing about complex systems is that if you take care of the nuts and bolts, the big picture ends up taking care of itself.

For organizations that don't take a complex systems approach to SOA, however, the risks are enormous. As traditional systems scale, they become less agile. Ask any architect who's attempted to hardwire several disparate pieces of middleware together in a large enterprise -- yes, maybe you can get such a rat's nest to work, but it will be expensive and inflexible. If you want to scale your SOA implementation so that it continues to deliver business agility even on the enterprise scale, then the complex systems approach is absolutely essential.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

Thursday, February 18, 2010

Mutual embrace of SOA and cloud computing builds into productivity waltz across the IT landscape

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

The latest BriefingsDirect podcast discussion comes in conjunction with The Open Group’s Enterprise Architecture Practitioners Conference held earlier this month in Seattle.

We assembled a panel to examine service-oriented architecture (SOA) and cloud computing -- the relationships, the inter-reliance and the realities. Three years ago, the IT transformation poster child was SOA, and now we're well into the hype curve around cloud computing, but has one actually given way to the other? Are they linear in their relationship, or perhaps mutually dependent in some ways, and to what degree?

We’ll explore now whether SOA has found new value and relevance as a foundation and perhaps catalyst for cloud computing, especially for so-called private clouds. And, we'll see how the emergence of SOA and cloud may be happening in different places inside of enterprises. Shouldn’t one hand get to quickly know what the other is up to and perhaps even work together?

Enjoy a series of podcasts from The Open Group conference on cloud computing, enterprise architecture, business architecture, Archimate, and cloud security.

Here with us now, however, to plumb the depths of how SOA and cloud computing do or don’t come together, are Dr. Chris Harding, director of the SOA Work Group at The Open Group; Stephen G. Bennett, Senior Enterprise Architect at Oracle; and Peter Coffee, Director of Platform Research at Salesforce.com. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Harding: Five years ago, when we started getting into SOA, there was a huge amount of excitement and a great deal of buzz about it. Now, we can see that the hype cycle has run its course, but we're still seeing a great deal of technical interest in SOA and we're also seeing that companies are using it and are increasing their use of it. So, there is a steady uptake in the use of SOA, although the excitement about it has died down.

It’s very interesting that service orientation is very much a business concept, and SOA has been about the application of that business concept to the technology. Cloud computing, on the other hand, is very much a technical concept. It’s about what you can do with technology over the Internet.

It is a technical concept, but it has had really a big impact on the business structure. So you can see them as complementary. SOA has been the application of business principles into the technology. Cloud is a technical concept, which has had a huge impact on the business. So, yes, there probably are different parts of the organizations looking at cloud and looking at SOA, but there is a big dynamic that says they should be working together on both of them.

Coffee: I've been covering SOA for a long time. I'd say the people who adopted SOA in the previous decade got considerable upside, but those who did not didn’t really suffer any penalty for not doing so.

In the situation we're in now, where the economics of cloud computing are becoming quite compelling, the downside of not having a SOA is becoming quite apparent. If you don’t have a service environment, then your ability to extend your current assets and integrate them with cloud services is going to be somewhat hampered.

So, people are realizing now that the wait-and-see option is more perilous than it used to be. This is accelerating the actual adoption of what we would call SOA, except that’s no longer the label du jour.

Beyond integration

It seems to me that SOA very quickly became a label for products that vendors wanted to sell. So, you saw a lot of things like enterprise service bus (ESB) products and so on.

It became dangerously easy to think that you were doing SOA, if you were buying the tools and failing to appreciate how much of a cultural and management achievement it was to get people to think of themselves not as owners of and the gatekeepers to an IT asset, but instead being publishers of and supporters of a service to other parts of the business.

It’s absolutely critical to understand that you can view SOA as simply a way of integrating the stuff you have, or you can move to the next level and start to think of it as the way you do your business. The way your business units interact with and support each other with the technology is just the enabler for that.

The same is true of the cloud. It's possible to take the existing IT model of isolated applications, each with their own data stores, and replicate that model in the cloud with elastic scalability of capacity. That would be the level of the cloud industry that’s typically called infrastructure as a service (IaaS).

Or, it's possible to use the cloud as a much more interesting and fluid medium for interaction among much more granular and business-oriented services at the level that’s traditionally been called in the industry either platform as a service (PaaS) or software as a service (SaaS). It depends on the level at which you choose to consume other people’s application work, instead of doing new application development yourself.

It’s possible to do SOA without the cloud. It’s possible to do better SOA with it. It is also possible to do an isolated silo-oriented architecture locally and also to do that in a cloud environment. Neither one necessarily implies or impels the other.

Bennett: The majority of large enterprises today are doing SOA in one fashion or another at different levels of maturity, whether that’s from the quite immature approach of seeing it as a pure integration play all the way up to seeing it more as a business agility kind of play.

So, it's becoming a norm and, therefore, we don’t need to keep hyping it or pushing it. We need to use the characteristics it offers with other supporting technology strategies such as cloud.

I actually see recession as an opportunity within IT, because it gives you opportunity to reset thinking and reset IT's approach to actually delivering IT to the business.

It's a combination of technologies that are finally ready for prime time, and an ecosystem that’s ready to support those technologies well.



Coffee: The economics of elastically scalable capacity, which lets you handle peak loads without owning peak capacity that would otherwise sit at very low utilization rates, are becoming so compelling that people are asking how they're going to take advantage of this cloud environment.

It's a combination of technologies that are finally ready for prime time, and an ecosystem that’s ready to support those technologies well -- providers of services and providers of expert assistance in using those services.

That’s a very important enabler, when your major system integration firms begin fully to understand how they can incorporate cloud services into the portfolio of technologies that they make available to their customers. When you put that all together, the downside of not moving to an SOA becomes an embarrassing inability to take advantage of these incredible economies.

... The combination of SOA, which makes your various business units able to cooperate more effectively, with cloud environments, which allow you to handle very "bursty" workloads, conduct very cost-effective pilot projects and scale the ones that work very rapidly, increases the ROI of IT spending.

The IT budget, as a line item, is not conspicuously bigger. In fact, it may actually shrink, because the IT department now is a composer and integrator of stuff that may now be getting done with the operating budget by personnel, who are on the payroll as members of a business unit, instead of members of an IT organization.

Business capability maps

Bennett: What people are talking about is the opportunity to redirect costs to areas such as business architecture, and business architecture is part of enterprise architecture (EA). That's not purely IT focused, but the wider concern -- investing in things like business capability maps to understand exactly where I should utilize SOA and cloud within my organization -- is going to be key.

This will, in turn, enable the consuming enterprises to concentrate on the things that they are particularly good at.



Harding: That certainly must be one of the factors that will enable cloud computing to make enterprises more efficient -- the elasticity and the take-up effect. It also has a major effect on the risk that an enterprise needs to take on. But, there is a bigger factor, which is meant to drive down cost, and that is competition.

If you take service orientation and cloud in combination, you’re seeing the ability of people to buy services from different suppliers, for those suppliers to compete, and for those suppliers to concentrate on the services that they are particularly good at. This will, in turn, enable the consuming enterprises to concentrate on the things that they are particularly good at.

So, you don’t need to dissipate your efforts on running an inefficient IT department, which is not your core business. You can outsource that, get a specialist to do it much better, and concentrate on what you're good at. That is the real dynamic that will improve things economically.

Now, from an Open Group perspective, there is a danger that you may become locked into a particular supplier. Part of our role in promoting open systems is to push for the standards to be in place so that that doesn’t happen. Provided we can prevent that locking, it’s altogether a very healthy situation.

Coffee: The granularity of this marketplace is quite surprising to many people who haven’t looked at it closely. We see already people building applications, in which they have shopped the marketplace and found a cloud storage proposition from one provider, a cloud application development platform from another, social networking algorithms and facilities from yet a third provider and have built some really interesting strategic business solutions. It’s quite startling to many people to realize what a supermarket of services has already come into being.

Bennett: The combination of cloud and SOA obviously brings together kind of speed and modularity. Those basic principles are going to allow us to take evolutionary technologies and approaches and probably revolutionize the way that IT actually interacts with the business.

So, in terms of IT being siloed -- "please develop and look after this application" -- it’s going to be more a move toward collaboration of how we can actually deliver business solutions to the ever-changing business dynamics.

Coffee: Finally, we have an environment in which connectivity and real-time linkage and integration of data and function, instead of being costly, brittle, and time-consuming, are now nearly free, very resilient, and can be done almost more quickly than they can be described.

This means that people are going to be doing more challenging work and working more closely with business units instead of having their time consumed by arduous, necessary, but relatively low-value tasks of infrastructure maintenance.

So the ROI will rise. The relevance to the business of IT will increase. The sophistication of the skills of the person who does IT for a living will be greater 10 years from now than it was 10 years ago or even today, but we’ll all be pretty happy with the results.
There are a series of podcasts from The Open Group conference: on cloud computing, enterprise architecture, business architecture, Archimate, and cloud security.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Wednesday, February 17, 2010

Seeing a golden lining around efficiency, HP expands cloud consulting services portfolio

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

Hewlett-Packard (HP) is pushing deeper into the cloud opportunity with new consulting services that aim to help businesses and government agencies speed cloud-based infrastructure adoption and respond more quickly to market demands for efficiency.


Dubbed HP Cloud Design Service, the new offering advises organizations on how to quickly design and deploy scalable, cloud-based infrastructures. HP's consulting services come with risk mitigation in mind and support a hybrid sourcing model that encompasses private and public cloud options. HP promises its approach will allow organizations to consume and deliver services that support varied workloads. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

"There's a lot of hype out there, and organizations just can't deal with cool, exciting cloud concepts in a vacuum," says Flynn Maloy, vice president of marketing for HP's Technology Services group. "If you even make a tiny pull of cloud services into your IT environment, it touches everything else in the environment. Our HP Cloud Design Service looks at the big picture."

Anatomy of HP Cloud Design

HP is basing the new consulting services on its own experience with demanding cloud environments, including work with the Defense Information Systems Agency to design a cloud infrastructure solution that accelerates the process of provisioning computing services for U.S. military applications.

A year ago companies were skeptical. Last year they were running pilots. Now, companies are trying to figure out how to leverage cloud innovations internally



Here's how HP's Cloud Design Service works: First, HP explores a client's business and technical requirements, as well as existing IT investments. HP then creates a customized cloud infrastructure design blueprint and implementation plan, complete with cost estimates and deployment, testing, operational management, service lifecycle management, governance and support guidelines.

HP outlines four key benefits of its cloud consulting service: access to a common, flexible framework for cloud engagements, faster time to delivery with mitigated implementation risks, reduced technology redundancies, and the ability to leverage existing HP and non-HP technology investments. The result, according to HP, is a cloud-specific infrastructure that's safe and effective – and meets business objectives.


Mapping the cloud

HP's Cloud Design Service builds on existing HP efforts in the cloud, including the Cloud Discovery Workshop and the Roadmap Service. The Cloud Design Service acts as the next step in an organization's move into the cloud. The updates this week follow earlier moves last summer on cloud consulting services.

As Maloy describes it, the new service sends HP's cloud consultants into an organization's IT environment with sleeves rolled up, ready to help design and build an architecture that leverages the benefits of a shared internal cloud while offering access to external public clouds.

The big question is, are organizations ready to move beyond private clouds to public clouds? Maloy says organizations are kicking the tires, trying to figure out how to bring public cloud innovations into the enterprise. HP, he says, has established best practices to do this safely.

"A year ago companies were skeptical. Last year they were running pilots. Now, companies are trying to figure out how to leverage cloud innovations internally," Maloy says. "Our HP Reference Architecture for Cloud is part of the Cloud Design Service. It has all of the elements we think a robust, well-designed environment takes into account."

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia Pacific, Japan - March 2
Europe, Middle East and Africa - March 3
Americas - March 4

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, February 16, 2010

HP ‘trims’ SharePoint web doc management risks, builds advanced workflow tools

Today’s enterprises are creating web-based content at breakneck speed. Much of this digital content becomes bona fide business records that demand document management with regulatory compliance and legal discovery demands in mind.

That’s why Hewlett-Packard (HP) recently rolled out a web-based records management solution specifically designed to help Microsoft SharePoint customers lower business risks. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Dubbed HP Total Records Information Management (TRIM) 7, the latest version of HP’s advanced records management solution aims to help organizations transparently manage Microsoft SharePoint Server records – including documents and information stored on SharePoint Server blogs, wikis, discussions, forms, calendars and workflows – in a single environment.

A Content 2.0 explosion

As HP explains it, TRIM 7 opens the door for consolidation and simplified management of stored content in multiple formats. Using HP TRIM 7, organizations can capture, search and manage physical and electronic files with complete transparency.

“The explosion in Content 2.0 blogs, wikis and discussions creates new information management challenges for organizations trying to meet an escalating set of regulations,” says Jonathan Martin, vice president and general manager of Information Management Solutions at HP. “HP TRIM allows customers to marry records management best practices and governance with dynamic collaboration platforms such as SharePoint.”


An end-to-end solution

HP TRIM 7 offers two modules to address the records management needs of SharePoint products and technologies: HP TRIM Records Management and HP TRIM Archiving.

HP TRIM Records Management aims to improve business records management via transparent access to SharePoint Server content held in HP TRIM directly from the SharePoint Server workspace.

The explosion in Content 2.0 blogs, wikis and discussions creates new information management challenges for organizations trying to meet an escalating set of regulations.



Since the U.S. Department of Defense has awarded HP TRIM its 5015.2 v3 certification, HP notes, organizations are assured the highest levels of records management control for enterprise content. HP has also made improvements that promise faster indexing and search capabilities, along with shorter response times for legal discovery, compliance requests and audits.

Closing the records management loop, HP TRIM Archiving works to help customers lower the risk of data loss while reclaiming storage and system resources from SharePoint Server. This module can either archive specific list objects in SharePoint Server or complete SharePoint Server sites. All this means organizations can take entire SharePoint Server sites offline without losing access to information.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Electric Cloud updates software production offerings with parallelization features

Electric Cloud has accelerated the software production management field today with improvements to two key products: ElectricAccelerator and ElectricCommander 3.5.

ElectricAccelerator boasts a new feature that provides parallel processing and subbuild technology. Dubbed "Electrify," the patented technology promises to speed development on private or public compute clouds by applying the benefits of parallelization to new development tools and tasks.

With Electrify, developers can conduct parallel testing or data modeling on their desktop, in a private cloud or on a dedicated server. Meanwhile, the subbuild technology works to help developers avoid unnecessary or broken builds by identifying only the components required for the current project. [Disclosure: Electric Cloud is a sponsor of BriefingsDirect podcasts.]
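The general principle behind that kind of parallelization can be shown with a small sketch. The Python snippet below is purely illustrative and is not Electric Cloud's API: it fans a set of independent test commands out across local workers, while Electrify applies the same fan-out idea to private or public compute clouds. The test file names are hypothetical.

```python
# Illustrative fan-out of independent test tasks across local workers.
# Not Electric Cloud's implementation; test paths are invented.
from concurrent.futures import ThreadPoolExecutor, as_completed
import subprocess

# Hypothetical, independent test commands; a real setup would discover these
# from the build system rather than hard-coding them.
TEST_COMMANDS = [
    ["python", "-m", "pytest", "tests/test_parser.py"],
    ["python", "-m", "pytest", "tests/test_codegen.py"],
    ["python", "-m", "pytest", "tests/test_linker.py"],
    ["python", "-m", "pytest", "tests/test_packaging.py"],
]

def run_test(cmd):
    """Run one independent test task and report whether it passed."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    return " ".join(cmd), result.returncode == 0

if __name__ == "__main__":
    # Fan the tasks out across workers; with more workers (or remote machines,
    # as a tool like Electrify targets) the wall-clock time shrinks accordingly.
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(run_test, cmd) for cmd in TEST_COMMANDS]
        for future in as_completed(futures):
            name, passed = future.result()
            print(f"{'PASS' if passed else 'FAIL'}: {name}")
```

The speedup comes from the same observation Electrify exploits: independent tasks can run concurrently as long as something schedules them and collects the results.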

Removing production bottlenecks

“Our goal is to remove the bottlenecks in software production wherever they exist,” explains Electric Cloud CEO Mike Maciag. “ElectricAccelerator speeds Make, NMAKE, Visual Studio, and Ant builds by 10-20x. With Electrify we are broadening the technology to enable these benefits for virtually any compute-intensive development task.”

Maciag offers the example of teams standardizing on tools like SCons. With Electrify, he says, those teams can leverage the benefits of centralization to speed builds, reduce hardware costs and curb server sprawl. The technology also makes way for developers to support multiple configurations through ElectricAccelerator’s virtualization capabilities. All this means more control for developers and fewer headaches for IT.

Commanding the cloud

Electric Cloud's ElectricCommander 3.5 offers a customizable and extensible version of its tool for automating and managing the build-test-deploy process in software development. Developers can customize ElectricCommander 3.5 to extract and display data from the defect tracker along with relevant build and test results. This lets build managers track the status of each fix and receive notification when QA has resolved the issue.

ElectricCommander 3.5 also offers user interface (UI) customization that lets development teams or managers create a custom screen to create and execute a build or test request with the appropriate parameters.

In other words, the UI is purpose-built for the developer’s role or environment. The new version also automates and manages what Electric Cloud calls “error-prone, manual pieces of the build-test-deploy process” to make software production faster and more efficient.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, February 12, 2010

UShareSoft rolls out on-demand application delivery platform

UShareSoft is working its way deeper into the cloud this week with two new software-as-a-service (SaaS) products that promise to make the lives of IT admins a little easier by cutting engineering costs and speeding time to value.

The UForge Appliance Factory helps IT pros assemble software appliances, while the Open Appliance Studio serves as a framework for automatically deploying solutions in the field. UShareSoft is hoping the two products, designed to work hand in hand, will become the means of choice for building and assembling optimized technology stacks for virtual data center and cloud offerings.

Predictable creation and cloning

UForge Appliance Factory works to let IT professionals predictably create, re-use, clone and maintain a complete software stack. UShareSoft promises its tools will simplify the delivery of software to physical, virtualized and cloud environments, including Amazon and VMware vCenter, for scale-up and scale-out computing.

France Telecom is among the customers currently testing the new products. UShareSoft expects customers to see advantages such as independence of image format. The company also expects its products to give organizations the ability to control their own software and governance processes.

UShareSoft’s automated process


How will UForge Appliance Factory deliver these benefits? By automating more of the process and relying less on manual tasks to create optimized stacks.

This approach, the company says, helps reduce errors and saves time. For example, UForge Appliance Factory offers one-click generation of many industry-standard image formats, including Amazon AMI. The Appliance Factory also offers granular construction, cloning and maintenance tools, along with a catalogue of more than 60 best-of-breed open-source projects.

Open Appliance Studio aims to take it one step further by letting IT admins turn an existing software stack into a vApp. The goal is to help independent software vendors (ISVs) better differentiate their products from the competition by giving them the ability to deliver self-contained, multi-node offerings that can be deployed in minutes to any cloud.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Thursday, February 11, 2010

Smart Grid for data centers better manages electricity to slash IT energy spending, frees up wasted capacity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Nowadays, CIOs need to both cut costs and increase performance. Energy has never been more important in working toward this productivity advantage.

It's now time for IT leaders to gain control over energy use -- and misuse -- in enterprise data centers. More often than not, very little energy capacity analysis and planning is being done on data centers that are five years old or older. Even newer data centers don’t always gather and analyze the available energy data being created amid all of the components.

Finally, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It requires a lifecycle approach that helps data centers move toward fuller automation.

And so automation software for capacity planning and monitoring has been newly designed and improved to better match long-term energy needs and resources in ways that cut total costs, while recovering available capacity from old and new data centers.

Such data gathering, analysis and planning can break the inefficiency cycle that plagues many data centers, where hotspots are mismatched with cooling capacity and underused, unneeded servers burn energy needlessly. These so-called Smart Grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

By gaining far more control over energy use and misuse, solutions such as Hewlett Packard's (HP) Smart Grid for Data Center can increase capacity from existing facilities by 30-50 percent.

This podcast features two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Now join Doug Oathout, Vice President of Green IT Energy Servers and Storage at HP, and John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: Data center transformation (DCT) is focused on three core concepts, and energy is another key focus for that all to work. The drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom-line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency in handling energy costs more effectively or addressing green IT.

DCT is really about helping customers build out a data center strategy and an infrastructure strategy. That is aligned to their business plans and goals and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model of which HP’s converged infrastructure is probably the best and most complete example of that in the marketplace today. And, it may indeed be moving to private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Problem area

Energy has definitely been a major issue for data-center customers over the past several years. The increased computing capability and demand have increased the power needed in the data center. Many data centers today weren’t designed for modern energy consumption requirements. Even data centers that were designed five years ago are running out of power, as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center as a key way of managing it effectively and dynamically.

Oathout: What we're really talking about is a problem around energy capacity in data centers. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facility. They never really concentrate on solving the energy consumption problem.

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what and then, how do they get the capacity of the data center increase so they can support the new workloads.

What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can now start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned, from a capacity standpoint, so they know where all the energy is going. Then, they can take some steps to get more capability out of their current solution or out of their installed equipment by measuring and monitoring the whole environment.

Adding resources

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently and optimally designed to continually run in an efficient way, so that you know you're getting the most productive work from the least energy and the more energy efficient equipment infrastructure sitting underneath it.

As workloads grow over time, you then have the auditing capability built into the software ... so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You must have tools, software, and hardware that are not only efficient, but can be optimized and run in an optimized way over a long period of time.

Collect information

The key to that is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where is it going.

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.

Then, you can do some analysis and use some applications like HP SiteScope to do some performance analysis, to say, "Could I match that workload to some other platform in the infrastructure or am I running it in optimal way?"

Over time, you can migrate some of your older legacy workloads to more efficient, newer IT equipment, and thereby build up a buffer in your data center, so that you can then deploy new workloads in that same data center.

You use that software to your benefit, freeing up capacity so that you can support the new workloads the business needs.
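A toy calculation, with invented wattages, shows how that buffer accumulates when legacy boxes are consolidated onto a newer, more efficient server:

```python
# Toy example: capacity freed by retiring legacy servers after consolidation.
# Wattages are invented for illustration.
legacy_servers_watts = [450, 450, 500]   # three old boxes running one ERP workload
replacement_server_watts = 600           # one newer server absorbing that workload

freed_watts = sum(legacy_servers_watts) - replacement_server_watts
print(f"freed capacity: {freed_watts} W")  # 800 W available for new workloads
```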

The energy curve today is growing at about 11 percent annually; that's the rate at which IT spending on energy in the data center is increasing.



Bennett: That's really key as a concept, Doug, because the more you do at this infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect quality of service, cause outages, and end up costing you a pretty penny if you have to retrofit or design new data centers.

Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing we do with a Data Center Smart Grid is map out everything that is hooked up to electricity in the data center, from PDUs, UPSs, and air handlers to the IT equipment: servers, networking, and storage. It's really understanding how that all works together and how the whole topology comes together.

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.
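As a hedged sketch of what that map might look like in data terms (device names and wattages are invented, and this is not HP's tooling), the power topology can be modeled as a tree from the utility feed down to individual devices, with consumption rolled up so facilities gear and IT gear appear in one picture:

```python
# Sketch: model the data-center power topology as a tree and roll up consumption,
# so facilities gear (UPSs, PDUs, air handlers) and IT gear show up in one view.
# All device names and wattages are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    watts: float = 0.0                       # draw of this device itself
    children: list["Node"] = field(default_factory=list)

    def total(self) -> float:
        return self.watts + sum(c.total() for c in self.children)

feed = Node("utility-feed", children=[
    Node("ups-a", 1200, [
        Node("pdu-1", 300, [Node("srv-01", 310), Node("srv-02", 275)]),
    ]),
    Node("crah-1", 4500),                    # air handler on the facilities side
])

def show(node: Node, depth: int = 0) -> None:
    print("  " * depth + f"{node.name:12s} {node.total():8.0f} W")
    for child in node.children:
        show(child, depth + 1)

show(feed)
```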

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we've also added the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment so that it can't use more than that amount. Not only can you monitor and manage your power envelope, you can actually get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get much more efficiency out of your data center, and you get more predictable results, which is one of the things IT really strives for, from meeting an SLA to delivering those predictable results, day in and day out.
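The planning benefit is easy to see: once every device has a cap, the worst-case draw of a rack or room is simply the sum of the caps. The sketch below is illustrative only; real caps would be set through platform management interfaces, not this code:

```python
# Sketch: with per-device power caps, the maximum possible draw is knowable
# up front, which makes capacity planning predictable. Caps are illustrative.

caps_watts = {
    "srv-01": 350,
    "srv-02": 350,
    "srv-03": 500,
    "storage-array-1": 900,
}

branch_circuit_watts = 2500          # hypothetical capacity available to this rack

max_draw = sum(caps_watts.values())
headroom = branch_circuit_watts - max_draw
print(f"worst-case draw: {max_draw} W, headroom: {headroom} W")
# With caps in place, new equipment can be added as long as headroom stays positive.
```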

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you’ve got, so you have more predictable results and you're managing it so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Tuesday, February 9, 2010

AmberPoint finally gets acquired as Oracle fills in more remaining stack holes

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Thanks go out to Oracle on Feb. 8 for finally putting us out of our suspense. AmberPoint was one of a dwindling group of still-standing software independents delivering run-time governance for SOA environments.

It’s a smart move for Oracle, as it patches some gaps in its Enterprise Manager offering, not only in SOA runtime governance but also in business transaction management and, potentially, better visibility into non-Oracle systems. Of course, that visibility will in part depend on the kindness of strangers, as AmberPoint partners like Microsoft and Software AG might not be feeling the same degree of love going forward.

We’re surprised that AmberPoint was able to stay independent for as long as it did, because the task it performs is just one piece of managing the run-time. When you manage whether services are connecting and delivering the right service levels to the right consumers, you are ultimately looking at a larger problem, because services do not exist on their own desert island.

Neither should runtime SOA governance. As we’ve stated again and again, it makes little sense to isolate run-time governance from IT Service Management. The good news is that, with the Oracle acquisition, there are potential opportunities not only for converging runtime SOA governance with application management but also, as Oracle digests the Sun acquisition, for providing full visibility down to the infrastructure level.

Transaction monitoring and optimization will become the next battleground of application performance management ...



But let’s not get ahead of ourselves here, as the emergence of a unified Oracle-on-Sun turnkey stack won’t happen overnight. And the challenge of delivering an integrated solution will be as much cultural as technical, as the jurisdictional boundary between software development and IT operations blurs. But we digress.

Nonetheless, over the past couple of years, AmberPoint itself has begun reaching out from its island of SOA runtime, extending its visibility into business transaction management. AmberPoint is hardly alone here, as we’ve seen a number of upstarts like AppDynamics and BlueStripe (typically formed by veterans of Wily and HP/Mercury) burrowing down into the space of instrumenting transactions from hop to hop. Transaction monitoring and optimization will become the next battleground of application performance management, and it is one that IBM, BMC, CA, HP, and Compuware are hardly likely to watch passively from the sidelines. [Disclosure: CA, HP and Compuware are sponsors of BriefingsDirect podcasts.]

Last one standing

As for whether run-time SOA governance demands a Switzerland-style independent vendor approach, that leaves it up to the last one standing, SOA Software, to fight the good fight. Until now, AmberPoint and SOA Software have competed for the affections of Microsoft; AmberPoint has offered an Express web services monitoring product that is a free plug-in for Visual Studio (a version is also available for Java); SOA Software offers extensive .NET versions of its service policy, portfolio, repository, and service manager offerings.

Nonetheless, although AmberPoint isn’t saying anything outright about the WebLogic (formerly BEA’s, now Oracle’s) share of its 300-customer installed base, that platform was first among equals when it came to R&D investment and presence. BEA previously OEM’ed the AmberPoint management platform, an arrangement that Oracle ironically discontinued; in this case, though, the story ends happily ever after. As for SOA Software, we would be surprised if this deal didn’t push it into a closer embrace with Microsoft.

Postscript: Thanks to Anne Thomas Manes for updating me on AmberPoint’s alliances. They are/were with SAP, TIBCO Software, and HP, in addition to Microsoft. The Software AG relationship has faded in recent years. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Of course, all this M&A activity rearranges the dance floor in interesting ways. Oracle currently OEMs HP’s Systinet as its SOA registry, an arrangement that might get awkward now that Oracle’s getting into the hardware business. That places virtually all of AmberPoint’s relationships into question.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.