Tuesday, September 29, 2009

Akamai joins industry push for rich and fast desktop virtualization services

Call it a trend – and not just a virtual one. Akamai Technologies is the latest tech firm to join the effort to push desktop virtualization into the mainstream with the salient message of swift return on investment (ROI) and lower total costs for PC desktop delivery.

Akamai joins HP, Microsoft, VMware, as well as Citrix, Desktone and a host of others in the quest to advance the cause of desktop virtualization (aka VDI) in a sour economy. Better known for optimizing delivery of web content, video, dynamic transactions and enterprise applications online, Akamai just introduced a managed Internet service that optimizes the delivery of virtualized client applications and PC desktops.

Akamai isn’t starting from scratch. The company is leveraging core technology from its IP Application Accelerator solution to offer a new service that promises cost-efficiency, scalability and the global reach to deliver applications over virtual desktop infrastructure products offered by Citrix, Microsoft and VMware. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

“We see the desktop virtualization market poised for significant growth and believe that our unique managed services model allows us to work with enterprises on large, global deployments of their virtual desktop infrastructure,” says Willie Tejada, vice president of Akamai’s Application and Site Acceleration group, in a release.

Since Akamai launched its IP Application Accelerator, Tejada reports good traction beyond browser-based applications. Now, he’s betting Akamai’s new customized offering will make room for the company to focus even more on virtualization. He’s also betting enterprise customers will appreciate the new pricing model. With IP Application Accelerator targeted at VDI, Akamai is rolling out concurrent-user-based pricing and, through professional services, customized integrations with virtual desktops.

Significant growth
Tejada is right about one thing: the expected and significant growth of virtual desktop connected devices. Gartner predicts this sector will grow to about 66 million by the end of 2014. That translates to 15 percent of all traditional professional desktop PCs. With these numbers on hand, it’s clear that enterprises are rapidly adopting virtualization as a key component of cost-containment efforts.

I think we're facing an inflection point for desktop virtualization, fueled by the pending Windows 7 release, pent-up refresh demand on PCs generally, and the need for better security and compliance on desktops. Add to that economic drivers of reducing client support labor costs, energy use, and the need to upgrade hardware, and Gartner's numbers look conservative.

Device makers are hastening the move to VDI with thin clients (both PCs and notebooks) that deliver the full PC experience in a package the size of a ham sandwich and for only a few hundred dollars. Hold the mayo!

But there are still challenges to guaranteeing the performance and scale of VDI across wide area networks. Akamai points out three in particular. First is the user’s distance from a centralized virtualization environment, which has a direct impact on performance and availability. Second, virtual desktop protocols consume large amounts of bandwidth. Third, private-WAN connections traditionally carry high costs, as well as uptime issues, in emerging territories where outsourcing and off-shoring are commonplace.

Akamai is not only promising its service will overcome all those challenges, it’s also suggesting that working with its solution on the virtualization front may eliminate the need to build out or upgrade costly private networks limited by a preset reach and scale. How does Akamai do this? By allowing for highly scalable and secure virtual desktop deployments to anyone, anywhere, across an Internet-based platform spanning 70 countries.

According to Akamai, its technology is designed to eliminate latency introduced by Internet routing, packet loss, and constrained throughput. The company also says that performance improvements can be realized through several techniques, including dynamic mapping, route optimization, packet redundancy algorithms, and transport protocol optimization.
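Akamai hasn’t published the internals of those techniques, but the basic idea behind dynamic mapping and route optimization is easy to illustrate. The sketch below is a minimal, hypothetical illustration -- in no way Akamai’s implementation -- that probes a few candidate relay nodes and steers traffic over the path with the lowest measured round-trip time. The relay host names are placeholders.

```python
# Hypothetical sketch of overlay route selection by measured latency.
# The relay host names below are placeholders, not real Akamai nodes.
import socket
import time

CANDIDATE_RELAYS = ["relay-eu.example.net", "relay-us.example.net", "relay-ap.example.net"]

def probe_rtt(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Measure a crude round-trip time by timing a TCP handshake."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.perf_counter() - start
    except OSError:
        return float("inf")  # unreachable relays lose automatically

def pick_best_relay(relays=CANDIDATE_RELAYS) -> str:
    """Return the relay with the lowest measured RTT -- a toy stand-in
    for the dynamic mapping and route optimization described above."""
    return min(relays, key=probe_rtt)

if __name__ == "__main__":
    print("Routing VDI traffic via:", pick_best_relay())
```

In practice a real overlay would also weigh packet loss and send redundant packets along secondary paths, but the route-selection idea is the same.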

That’s the story for Akamai’s IP Application Accelerator targeted at VDI. We’ll have to wait and see the case studies of customers relying on the new solution, but the promises are, well, promising. If you have a lot of PCs in call centers or manage a lot of remote locations, give VDI a look. Its time has come from a technology, network performance, cost, and long-term economics perspective.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, September 21, 2009

Part 1 of 4: Web data services extend business intelligence depth and breadth across social, mobile, web domains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

The latest BriefingsDirect podcast discussion focuses on the future of business intelligence (BI) -- and on bringing more information from more sources into the analytic process, and thereby getting more actionable intelligence out.

The explosion of information from across the Web, from mobile devices, from inside social networks, and from the extended business processes that organizations now employ provides an opportunity, but it also presents a challenge.

This information can play a critical role in allowing organizations to gather and refine analytics into new market strategies and better buying decisions, and to be first into new business development opportunities. The challenge is in getting at these Web data services and bringing them into play with existing BI tools and traditional data sets.

This is the first in a series of podcasts looking at the future of BI and how Web data services can be brought to bear on better business outcomes.

So, what are Web data services and how can they be acquired? Furthermore, what is the future of BI when these extended data sources are made into strong components of the forecasts and analytics that enterprises need to survive the recession and also to best exploit the growth that follows?

Here to help us explain the benefits of Web data services and BI is Howard Dresner, president and founder of Dresner Advisory Services, and Ron Yu, vice president of marketing at Kapow Technologies. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Dresner: BI is really about empowering end users, as well as their respective organizations, with insight, the ability to develop perspective. In a downturn, what better time is there to have some understanding of some of the forces that are driving the business?

Of course, it's always useful to have the benefit of insight and perspective, even in good times. But, it tends to go from being more outward-focused during good times, focused on markets and acquiring customers and so forth, to being more introspective or internally focused during the bad times, understanding efficiencies and how one can be more productive.

So, BI always has merit and in a downturn it's even more relevant, because we are really less tolerant of being able to make mistakes. We have to execute with even greater precision, and that's really what BI helps us do.

... The future is about focusing on the information and those insights that can empower the individuals, their respective departments, and the enterprise to stay aligned with the mission of that organization.

... If you're trying to develop [such] perspective, bringing as much relevant data or information to bear is a valuable thing to do. A lot of organizations focus just on lots of information. I think that you need to focus on the right information to help the organization and individuals carry out the mission of that organization.

There are lots of information sources. When I first started covering this beat 20 years ago, the available information was largely just internal stores, corporate stores, or databases of information. Now, a lot of the information that ought to be used, and in many cases, is being used, is not just internal information, but is external as well.

There are syndicated sources, but also the entire World Wide Web, where we can learn about our customers and our competitors, as well as a whole host of sources that ought to be considered, if we want to be effective in pursuing new markets or even serving our existing customers.

Yu: I fully agree with Howard. It's all about the right data and, given the current global and market conditions, enterprises have cut really deep -- from the line of business, but also into the IT organizations. However, they're still challenged with ways to drive more efficiencies, while also trying to innovate.

The challenges being presented are monumental. Traditional BI methods and tools provide powerful analytical capabilities, but at the same time, they're increasingly constrained by limited access to relevant data and by the difficulty of getting timely access to that data.

What we see are pockets of departmental use cases, where marketing departments and product managers are starting to look outside in public data sources to bring in valuable information, so they can find out how the products and services are doing in the market.

... Inclusive BI essentially includes new and external data sources for departmental applications, but that's only the beginning. Inclusive BI is a completely new mindset. Every application that IT or a line of business develops just creates another data silo and another information silo. You have another place where information is disconnected from the rest.

... There is effectively a new class of BI applications as we have been discussing, that depends on a completely different set of data sources. Web data services is about this agile access and delivery of the right data at the right time.

With different business pressures that are surfacing everyday, this leads to a continuous need for more and more data sources.

... Critical decision-making requires, as Howard was saying earlier, that all business information is easily leveraged whenever it's needed. But today, each application is separate and not joined. This makes decision-making very difficult for the line of business, and it's not in real time.

An easier way

As this dynamic business environment continues to grow, it’s completely infeasible for IT to keep updating existing data warehouses or building new data marts. That can't be the solution. There has to be an easier way to access and extract data exactly where it resides, without having to move data back and forth among databases, data marts, and data warehouses, which effectively become snapshots.

... Web data services provides immediate access to and delivery of this critical data into the business user's BI environment, so that the right, timely decisions can be made. It effectively takes dashboards, reporting, and analytics to the next level for critical decision-making. So, when we look deeper into how this is actually playing out, it's all about early and precise predictions.
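Kapow doesn’t publish its internals, so purely as a minimal, hypothetical sketch of the idea: a live web source is exposed to a dashboard as an on-demand call and joined with an internal figure at query time, rather than being copied into a warehouse first. The endpoint URL and field names below are placeholders.

```python
# Hypothetical sketch: expose a live web source to a BI dashboard on demand,
# rather than copying it into a warehouse. The endpoint URL is a placeholder.
import json
import urllib.request

FEED_URL = "https://example.com/api/product-mentions?window=24h"  # placeholder

def fetch_web_metric(url: str = FEED_URL) -> dict:
    """Pull the latest figures straight from the source at query time."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

def dashboard_row(internal_units_sold: int) -> dict:
    """Join a fresh external signal with an internal figure for one report row."""
    external = fetch_web_metric()
    return {
        "units_sold": internal_units_sold,
        "web_mentions": external.get("mentions", 0),
        "sentiment": external.get("avg_sentiment"),
    }
```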

Dresner: ... Some IT organizations have become pretty inflexible. They are focused myopically on some internal sources and are not being responsive to the end user.

To the extent that they can find new tools like Web data services to help them be more effective and more efficient, they are totally open to giving line of business self-service capabilities.



You need to be careful not to suffer from what I call BI myopia, where we are focused just on our internal corporate systems or our financial systems. We need to be responsive. We need to be inclusive of information that can respond to the user's needs as quickly as possible, and sometimes the competency center is the right approach.

I have instances where the users do wrest control and, in my latest book, I have four very interesting case studies. Some are focused on organizations, where it was more IT driven. In other instances, it was business operations or finance driven.

Yu: ... For example, leading financial services companies are looking at this theme of early and precise predictions. How can you leverage publicly available information sources, like weather information, to assess the precipitation, the rainfall, and even the water levels of the lakes that directly contribute to hydroelectricity?

If we can gather all that information, and develop a BI system that can aggregate all this information and provide the analytical capabilities, then you can make very important decisions about trading on energy commodities and investment decisions.
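As a purely illustrative sketch -- my own, not anything from the podcast or from Kapow -- those public signals might be folded into a single indicator an analyst could chart alongside energy prices. The field names and weights below are invented.

```python
# Hypothetical sketch: fold public weather and reservoir readings into a single
# hydro-supply indicator. Field names and weights are illustrative only.
from dataclasses import dataclass

@dataclass
class HydroSignal:
    rainfall_mm: float         # recent rainfall at the catchment
    reservoir_pct: float       # lake/reservoir level, percent of capacity
    forecast_precip_mm: float  # forecast precipitation over the next week

def hydro_supply_index(s: HydroSignal) -> float:
    """Blend the three readings into a 0-100 score; higher means more
    expected hydro generation and, all else equal, softer power prices."""
    rain_score = min(s.rainfall_mm / 50.0, 1.0)
    level_score = s.reservoir_pct / 100.0
    forecast_score = min(s.forecast_precip_mm / 75.0, 1.0)
    return 100 * (0.3 * rain_score + 0.5 * level_score + 0.2 * forecast_score)

print(hydro_supply_index(HydroSignal(rainfall_mm=32, reservoir_pct=68, forecast_precip_mm=40)))
```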

Web data services effectively automates this access and extraction of the data and metadata and things of that nature, so that IT doesn't have to go and build a brand new separate BI system every time line of business comes up with a new business scenario.

... It's about the preciseness of the data source that the line of business already understands. They want to access it, because they're working with that data, they're viewing that data, and they're seeing it through their own applications every single day.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Kapow Technologies.

See popular event speaker Howard Dresner's latest book, Profiles in Performance: Business Intelligence Journeys and the Roadmap for Change, or visit his website.

Process isomorphism: The critical link between SOA and BPM

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.

By Jason Bloomberg

ZapThink has long championed the close relationship between business process management (BPM) projects and service-oriented architecture (SOA) initiatives. As anyone who has been through our Licensed ZapThink Architect Bootcamp can attest, we have a process-centric view of SOA, where the point to building loosely coupled business services is to support metadata-driven compositions that implement business processes, what we call service-oriented business applications, or SOBAs, for want of a better term.

Nevertheless, there is still confusion on this point, among enterprise practitioners who see BPM as a business effort and SOA as technology-centric, among vendors who see them as separate products in separate markets, and even among pundits who see Services as supporting business functions but not business processes.

On the other hand, there are plenty of enterprise architects who do see the connection between these two initiatives, and who have pulled them together into "BPM enabled by SOA" efforts. This synergy, however, is not automatic, and requires some hard work both among the people focusing on optimizing business processes to better meet changing business needs as well as the team looking to build composable business services that support the business agility and business empowerment drivers for their SOA initiatives.

ZapThink has worked with many such organizations, and over time a distinct best practice pattern has emerged, one that is both fundamental as well as subtle, and as a result, has fallen through the cracks of compendia of SOA patterns: the Process Isomorphism pattern. Understanding this pattern and how to apply it can help organizations pull their BPM and SOA efforts together, and even more importantly, improve the alignment of their SOA initiatives with core business drivers.

What is Process Isomorphism?


An isomorphism is a mathematical concept that expresses a relationship between two structures that are identical in form but may differ in their respective implementations. A very simple example would be two tic-tac-toe games, one with the traditional X's and O's, and the other with, say, red dots and blue dots. The game board and the rules are the same, in spite of the difference in the symbols the players use to play the games. If two particular games follow the same sequence of moves, they would be isomorphic.

The term process isomorphism usually refers to two processes that are structurally identical, typically between two companies in the same industry. For example, if the order-to-cash process is structurally identical between companies A and B, that is, the same steps in the same order with the same process logic, those processes would be isomorphic, even if the two companies had differences in their underlying technical implementations of the respective processes.

We're using the term differently here, however. Process isomorphism in the SOA context is an isomorphism between a process on the one hand, and the SOBA that implements it on the other. In other words, if you were to model a business process, and as a separate exercise, model the composition of services that implements that process, where those two models have the same structure, then they would be isomorphic.

One conversation that helped crystallize this notion was with John Zachman, who was explaining some of the changes he has recently made to his seminal Zachman Framework. He has renamed Row 3, which had been the System Model row, to the System Logic row. People were confusing the System Model with the physical representation of the system, which resides one row down. Our discussion of process isomorphism is essentially a design practice that relates these two rows of Column 2, the How column. In essence, the process logic model is one level above the service composition model that implements the process logic. The process isomorphism pattern states that these two models should be isomorphic.
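To make the pattern concrete, here is a minimal, hypothetical sketch; the step and service names are invented, not drawn from ZapThink or BP. The process logic model and the service composition model are each reduced to an ordered sequence of steps, and the check confirms they share one structure. Real models carry branching and error paths; this only illustrates the idea.

```python
# Hypothetical sketch: the process logic model and the service composition
# model expressed as ordered step sequences, plus a test that they line up
# one-to-one. Names are invented for illustration.
process_model = ["validate order", "check credit", "reserve stock", "confirm order"]

composition_model = [
    ("validate order", "OrderValidationService"),
    ("check credit", "CreditCheckService"),
    ("reserve stock", "InventoryReservationService"),
    ("confirm order", "OrderConfirmationService"),
]

def is_isomorphic(process, composition) -> bool:
    """True when every process step maps to exactly one service, in order."""
    return [step for step, _service in composition] == list(process)

assert is_isomorphic(process_model, composition_model)
```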

Process Isomorphism in Practice

We've been using a wonderful example of Process Isomorphism on our LZA Course for a few years now, courtesy of British Petroleum (BP), who presented at our Practical SOA event in February, 2008 (more about our upcoming Practical SOA event). The presentation focused on how process decomposition is the common language between business and IT efforts, and one of the examples focused on the Well Work Performance process, one of thousands of processes in their oil drilling line of business:



BP's Well Work Performance Process

The Description column in the chart above reflects the four main subprocesses that make up this process. The Sub-Task columns represent individual sub-tasks, or steps in the process. Finally, the Supporting Service Name column indicates the Business Service that implements the corresponding sub-task. The fact that there is a one-to-one correspondence between sub-tasks and supporting Services, combined with the implied correspondence between the process logic and the composition logic, illustrates Process Isomorphism. In this simple example, the process logic is a simple linear sequence, but if the logic were more complex, say with branching and error conditions, then the process would exhibit isomorphism if the composition logic continued to reflect the process logic.

It is important to point out that the one-to-one correspondence between process sub-tasks and supporting Services is by no means a sure thing, and in practice, many organizations fail to design their compositions with such a correspondence. Frequently, the issue is that the SOA effort is excessively bottom-up, where architects specify services based upon existing capabilities. Such bottom-up approaches typically yield services that don't match up with process requirements. Equally common are BPM efforts that are excessively top-down, in that they seek to optimize processes without considering the right level of detail for those processes to enable services to implement steps in the processes. Only by taking an iterative approach where each iteration combines top-down and bottom-up design is an organization likely to achieve Process Isomorphism.

The Process Isomorphism Value Proposition


The essential benefit of Process Isomorphism is being able to use the process representation to represent the composition and vice-versa. While these concepts are fundamentally different, in that they live on different rows of the Zachman Framework, the isomorphism relationship allows us the luxury of considering them to be the same thing. In other words, we can discuss the composition as though it were the process, and the process as though it were the composition.

This informal equivalence gives us a variety of benefits. For example, if process steps correspond directly to services, then service reuse is more straightforward to achieve than when the correspondence between steps and services is less clear. Service reuse discussions can be cast in the context of process overlaps. If two processes share a sub-task, then the SOBAs that implement those processes will share the supporting service. In addition, the metadata representation of the composition logic, for example, a BPEL file, will represent the process logic itself. Without process isomorphism, the process logic the BPM team comes up with won't correspond directly to the BPEL logic for the supporting composition. This disconnect can lead directly to misalignment between IT capabilities and business requirements, and also limits business agility, because a lack of clarity into the relationship between process and supporting composition can lead to unintentional tight coupling between the two.
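The reuse point lends itself to a tiny, hypothetical illustration (process and service names invented): when two processes are modeled as sub-task-to-service maps, the overlap in sub-tasks immediately identifies the shared, reusable services.

```python
# Hypothetical sketch: shared sub-tasks across two processes reveal the
# supporting services that will be reused by both SOBAs.
order_to_cash = {"check credit": "CreditCheckService", "issue invoice": "InvoicingService"}
quote_to_order = {"check credit": "CreditCheckService", "price quote": "PricingService"}

shared_subtasks = order_to_cash.keys() & quote_to_order.keys()
reused_services = {order_to_cash[t] for t in shared_subtasks}
print(reused_services)  # {'CreditCheckService'}
```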

The ZapThink Take


Perhaps the greatest benefit of Process Isomorphism, however, is that it helps to establish a common language between business and IT. The business folks can be talking about processes, and the IT folks can be talking about SOBAs, and at a certain level, they're talking about the same thing. The architect knows they're different concepts, of course, but conversations across the business/IT aisle no longer have to dwell on the differences.

The end result should be a better understanding of the synergies between BPM and SOA. If process specialists want to think of business services as process sub-tasks, then they can go right ahead. Similarly, if technical implementers prefer to think of business processes as being compositions of services, that's fine too. And best of all, when the BPM team draws the process specification on one white board and the SOA team draws the composition specification on another, the two diagrams will look exactly alike. If that's not business/IT alignment, then what is?

This guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

Take the BriefingsDirect middleware/ESB survey now.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Friday, September 18, 2009

Caught between peak and valley -- How CIOs survive today, while positioning for tomorrow

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Download the slides. Sponsor: Hewlett-Packard.

Are CIOs making the right decisions and adjustments in both strategy and execution as we face a new era in IT priorities? The down economy, the resetting of IT investment patterns, and the need for agile business processes, along with the arrival of some new technologies, are all combining to force CIOs to reevaluate their plans.

What should CIOs make as priorities in the short, medium, and long terms? How can they reduce total cost, while modernizing and transforming IT? What can they do to better support their business requirements? In a nutshell, how can they best prepare for the new economy?

Here to help address the pressing questions during a challenging time -- and yet also a time in which opportunity and differentiation for CIOs beckons -- is Lee Bonham, marketing director for CIO Agenda Programs in HP’s Technology and Solutions Group. The interview is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bonham: We all recognize that we’re in a tough time right now. In a sense, the challenge has become even more difficult over the past six months for CIOs and other decision-makers. Many people are having to make tough decisions about where to spend their scarce investment dollars. The demand for technology to deliver business value is still strong, and it perhaps has even increased, but the supply of funding resources for many organizations has stayed flat or even gone down.

To cope with that, CIOs have to work smarter, not harder, and have to restructure their IT spending. Looking forward, we see, again, a change in the landscape. So, people who have worked through the past six months may need to readjust now.

What that means for CIOs is they need to think about how to position themselves and how to position their organizations to be ready when growth and new opportunity starts to kick in. At the same time, there are some new technologies that CIOs and IT organizations need to think about, position, understand, and start to exploit -- if they’re to gain advantage.

Organizations need to take stock of where they are and implement three strategies:
  • Standardize, optimize, and automate their technology infrastructure -- to make the best use of the systems that they have installed and have available at the moment. Optimizing infrastructure can lead to some rapid financial savings and improved utilization, giving a good return on investment (ROI).
  • Prioritize -- to stop doing some of the projects and programs that they’ve had on their plate and focus their resources in areas that give the best return.
  • Look at new, flexible sourcing options and new ways of financing and funding existing programs to make sure that they are not a drain on capital resources.
We’ve been putting forward strategies to help in these three areas to allow our customers to remain competitive and efficient through the downturn. As I said, those needs will carry on, but there are some other challenges that will emerge in the next few months.
Growth may come in emerging markets, in new industry segments, and so on. CIOs need to look at innovation opportunities. Matching the short term and the long term is a really difficult question. There needs to be a standard way of measuring the financial benefit of IT investment that helps bridge that gap.

There are tools and techniques that leading CIOs have been putting in place around project prioritization and portfolio management to make sure that they are making the right choices for their investments. We’re seeing quite a difference for those organizations that are using those tools and techniques. They’re getting very significant benefits and savings.

The financial community is looking for fast return -- projects that are going to deliver quick benefits. CIOs need to make sure that they represent their programs and projects in a clear financial way, much more than they have been before this period. Tools like Project and Portfolio Management (PPM) software can help define and outline those financial benefits in a way that financial analysts and CFOs can recognize.
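As a rough, hypothetical illustration of the kind of financial framing Bonham describes -- the figures are invented and this is not HP PPM output -- the underlying payback arithmetic is simple:

```python
# Hypothetical sketch of the payback arithmetic a PPM tool might surface.
# All figures are made up for illustration.
def payback_months(upfront_cost: float, monthly_savings: float) -> float:
    """Months until cumulative savings cover the upfront investment."""
    return upfront_cost / monthly_savings

# e.g., a $120,000 consolidation project saving $20,000 a month in energy
# and licensing pays back in 6 months.
print(payback_months(120_000, 20_000))
```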
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Download the slides. Sponsor: Hewlett-Packard.

Wednesday, September 16, 2009

Jericho Forum aims to guide enterprises through risk mitigation landscape for cloud adoption

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: The Open Group.

My latest podcast discussion comes from The Open Group’s 23rd Enterprise Architecture Practitioners Conference and associated 3rd Security Practitioners Conference in Toronto.

We're talking about security in the cloud and decision-making about cloud choices for enterprises. There has been an awful lot of concern and interest in cloud and security, and they go hand in hand.

We'll delve into some early activities among several standards groups, including the Jericho Forum. They are seeking ways to help organizations approach cloud adoption with security in mind.

Here to help on the journey toward safe cloud adoption, we're joined by Steve Whitlock, a member of the Jericho Board of Management. The interview is conducted by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Whitlock: A lot of discussions around cloud computing get confusing, because cloud computing appears to encompass any service over the Internet. The Jericho Forum has developed what it calls a Cloud Cube Model that looks at different axes, or properties, within cloud computing -- issues with interoperability, where the data is, where the service is, and how the service is structured.

The Cube came with a focus on three dimensions: whether the cloud was internal or external, whether it was open or proprietary, and, originally, whether it was insourced or outsourced. ... There are a couple of other dimensions to consider as well. The insource-outsource question is still relevant. That’s essentially who is doing the work and where their loyalty is.

They've also coupled that with a layered model that looks at hierarchical layers of cloud services, starting at the bottom with file services and moving up through development services, and then full applications.
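The Jericho Forum defines the Cloud Cube Model in prose; purely as a hypothetical sketch of my own, the dimensions and layers described above could be encoded as simple types so that a deployment under evaluation can be described consistently.

```python
# Hypothetical encoding of the Cloud Cube dimensions and service layers
# described above. This representation is mine, not the Jericho Forum's.
from dataclasses import dataclass
from enum import Enum

class Location(Enum):
    INTERNAL = "internal"
    EXTERNAL = "external"

class Ownership(Enum):
    OPEN = "open"
    PROPRIETARY = "proprietary"

class Sourcing(Enum):
    INSOURCED = "insourced"
    OUTSOURCED = "outsourced"

class Layer(Enum):
    FILE_SERVICES = 1
    DEVELOPMENT_SERVICES = 2
    APPLICATIONS = 3

@dataclass
class CloudOffering:
    name: str
    location: Location
    ownership: Ownership
    sourcing: Sourcing
    layer: Layer

offering = CloudOffering("hosted-desktop", Location.EXTERNAL,
                         Ownership.PROPRIETARY, Sourcing.OUTSOURCED,
                         Layer.APPLICATIONS)
print(offering)
```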

The Jericho Forum made its name early on for de-perimeterization, or the idea that barriers between you and your business partners were eroded by the level of connectivity you needed to do business. Cloud computing could be looked at as the ultimate form of de-perimeterization. You no longer know even where your data is.

... Similar to SOA, the idea of direct interactive services on demand is a powerful concept. I think the cloud extends it. If you look at some of these other layers, it extends it in ways where I think services could be delivered better.

It would be nice if the cloud-computing providers had standards in this area. I don’t see them yet. I know that other organizations are concerned about those. In general, the three areas concerned with cloud computing are, first, security, which is pretty obvious. Then, standardization. If you invest a lot of intellectual capital and effort into one service and it has to be replaced by another one, can you move all that to the different service? And finally, reliability. Is it going to be there when you need it?

... There are concerns, as I mentioned before -- where the data is and what is the security around the data -- and I think a lot of the cloud providers have good answers. At a really crude level, the cloud providers are probably doing a better job than many of the small non-cloud providers and maybe not as good as large enterprises. I think the issue of reliability is going to come more to the front as the security questions get answered.

... It’s very important to be able to withdraw from a cloud service, if it shuts down for some reason. If your business is relying on it for day-to-day operations, you need to be able to move to a similar service. This means you need standards on the high-level interfaces into these services. With that said, I think the economics will cause many organizations to move to clouds without looking at that carefully.

Formal relationship

The Jericho Forum is also working with the Cloud Security Alliance on their framework and papers. ... It's a very complementary [relationship]. They arose separately, but with overlapping individuals and interests. Today, there is a formal relationship. The Jericho Forum has exchanged board seats with the Cloud Security Alliance, and members of the Jericho Forum are working on several of the individual working groups in the Cloud Security Alliance, as they prepare their version 2.0 of their paper.

... In addition to the cube model, there is the layered model, and some layers are easier to outsource. For example, if it’s storage, you can just encrypt it and not rely on any external security. But, if it’s application development, you obviously can’t encrypt it because you have to be able to run code in the cloud.
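As a minimal, modern sketch of the storage case Whitlock mentions -- my illustration, not a Jericho Forum recommendation; it assumes the third-party Python 'cryptography' package and leaves key management out of scope -- data can be encrypted client-side so the provider only ever holds ciphertext.

```python
# Hypothetical sketch: encrypt data client-side before handing it to a cloud
# storage layer, so the provider never sees plaintext. Uses the third-party
# 'cryptography' package; key management is deliberately out of scope here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this inside the enterprise boundary
cipher = Fernet(key)

def protect(payload: bytes) -> bytes:
    """Encrypt locally; only the ciphertext leaves for the cloud store."""
    return cipher.encrypt(payload)

def recover(token: bytes) -> bytes:
    """Decrypt after retrieval from the cloud store."""
    return cipher.decrypt(token)

blob = protect(b"quarterly results draft")
assert recover(blob) == b"quarterly results draft"
```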

I think you have to look at the parts of your business that are sensitive to needs for encryption or export protection and other areas, and see which can fit in there. So, personally identifiable information (PII) data might be an area that’s difficult to move in at the higher application level into the cloud.

I think the interest in how to protect data, no matter where it is, is what it really boils down to. IT systems exist to manipulate, share, and process data, and the reliance on perimeter security to protect the data hasn’t worked out, as we’ve tried to be more flexible.

We still don’t have good tools for data protection. The Jericho Forum did write a paper on the need for standards for enterprise information protection and control that would be similar to an intelligent version of rights management, for example.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Download the transcript. Learn more. Sponsor: The Open Group.

Tuesday, September 15, 2009

Economic and climate imperatives combine to elevate Green IT as cost-productive priority

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Welcome to a podcast discussion on Green IT and the many ways to help reduce energy use, stem carbon dioxide creation, and reduce total IT costs -- all at the same time. We're also focusing on how IT can be a benefit to a whole business or corporate-level look at energy use.

We'll look at how current IT planners should view energy concerns, some common approaches to help conserve energy, and at how IT suppliers themselves can make "green" a priority in their new systems and solutions.

[UPDATE: HP on Wednesday released a series of products that help support these Green IT initiatives.]

[UPDATE 2: HP named "most green" IT vendor by Newsweek.]

Here to help us better understand the Green IT issues, technologies, and practices impacting today's enterprise IT installations and the larger businesses they support, we're joined by five executives from HP: Christine Reischl, general manager of HP's Industry Standard Servers; Paul Miller, vice president of Enterprise Servers and Storage Marketing at HP; Michelle Weiss, vice president of marketing for HP's Technology Services; Jeff Wacker, an EDS Fellow; and Doug Oathout, vice president of Green IT for HP's Enterprise Servers and Storage. The panel was moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Oathout: The current cost of energy continues to rise. The amount of energy used by IT is not going down. So, it's becoming a larger portion of their budget. ... [Executives] want to look at energy use and how they can reduce it, not only from a data center perspective, but also from consumption of the monitors, printers, and desktop PCs as well. So, the first major concern is the cost of energy to run IT.

[They also] want to extend the life of their data center. They don't want to have to spend $10 million, $50 million, or $100 million to build another data center in this economic environment. So, they want to know anything possible, from best practices to new equipment to new cooling designs, to help them extend the life of the data center.

Lastly, they're concerned with regulations coming into the marketplace. A number of countries already demand reduced power consumption from most of their major companies. Europe has a Code of Conduct that's optional for data centers, and the U.S. now has regulations in front of Congress to start a cap-and-trade system.

IT can multiply the effects of intelligence being built into the system. IT is the backbone of digitization of information, which allows smart business people to make good, sound decisions. ... This is a must-do. The business environment is saying, "You've got to reduce cost," and then the government is going to come in and say, "You're going to have to reduce your energy." So, this is a must-do.

Miller: One of the key issues is who owns the problem of energy within the business and within the data center. IT clearly has a role. The CFO has a role. The data center facilities manager has a role. ... You can't manage what you can't see. There are very limited tools today to understand where energy is being used, how efficient systems are, and how making changes in your data center can help the end customer.

Our expertise is in knowing where and how changes to different equipment, different software models, and different service models can drive a significant impact on the amount of energy that customers are using, while also helping them grow their capacity at the same time.

... Everyone needs an ROI that's as quick as possible. It's gone from 12 months down to 6 months. With our new ProLiant G6 servers, the cost and energy savings alone are so significant that, when you tie in technologies like virtualization and the power and performance we have, we're seeing ROI in as little as three months over older servers, as companies save on energy plus software costs.

Reischl: Well, we have been investing in that area for several years now. We will have an energy power cooling roadmap and we will continuously launch innovation as we go along. We also have an overall environment around power and cooling, which we call the Thermal Logic environment. Under this umbrella, we are not only innovating on the hardware side, but on the software side as well, to ensure that we can benefit on both sides for our customers.

In addition to that, HP ProCurve, for example, has switches that now use 40 percent less energy than industry average network switches. We also have our StorageWorks Enterprise Virtual Array, which reduces the cost of power and cooling by 50 percent using thin provisioning and larger capacity disks.

Weiss: IT tends to think in terms of a lifecycle. If you think about ITIL and all of the processes and procedures most IT people follow, they tend to be more process oriented than most groups. But, there is even more understanding now about that latter stage of the lifecycle and not just in terms of disposing of equipment.

The other area that people are really thinking about now is data -- what do you do at the end of the lifecycle of data? How do you keep the data around that you need to, and what do you do about data that you need to archive and maybe put on less energy-consuming devices? That's a very big area.

Wacker: [At EDS] we look for total solutions, as opposed to spot solutions, as we approach the entire ecology, energy, and efficiency triumvirate. It's all three of those things in one. It's not just energy. It's all three.

We look from the origination all the way through the delivery of the data in a business process. Not only do we do the data centers, and run servers, storage, and communications, but we also run applications.

Applications also rank high when it comes to whether IT is green or not. First of all, that means reconciling the application portfolio, so that you're not running three applications in three different places. That would mean three different server platforms and therefore more energy.

It's being able to understand the inefficiencies with which we've coded much of our application services in the past, and understanding that there are much more efficient ways to use the emerging technologies and the emerging servers than we've ever used before. So, we have a very high focus on building green applications and reconciling existing portfolios of applications into green portfolios.

How you use IT

Moving on to business processes: the best data delivered into the worst process will not improve that process at all. It will just extend it. Business process outsourcing, business process consulting, and understanding how you use IT in the business continue to have a very large impact on environmental and green goals.

You've already identified the major culprit in this: the cost of energy is going to continue to accelerate, becoming higher and higher and therefore a major component of your cost structure in running IT. So everybody is looking at that.

Cloud is, by its definition, moving a lot of processes into a very few number of boxes -- ultra virtualization, ultra flexibility. So it's a two-sided sword and both sides have to be looked at. One, is for you to be able to get the benefits of the cloud, but the other one is to make sure that the cost of the cloud, both in terms of capabilities as well as the environment, are in your mindset as you contract.

One of the things about what has been called cloud or Adaptive Infrastructure is that you've got to look at it from two sides. One, if you know where you're getting your IT from, you can ask that supplier how green its IT is, and hold that supplier to a high standard of green IT.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Active Endpoints debuts ActiveVOS 7.0 with BPMN 2 support, improved RIA interfaces

Take the BriefingsDirect middleware/ESB survey now.

In a move to meet the growing demand for business process agility, Active Endpoints is readying the next release of its business process management (BPM) suite. The Waltham, Mass.-based modeling tool and process execution firm is rolling out ActiveVOS 7.0 later this month, and I got a sneak peek last week.

Active Endpoints' value has long been modeling, testing, deploying, running and managing business process applications – both system and human tasks. But CEO Mark Taber says version 7 pioneers a new approach to BPM. [Disclosure: Active Endpoints is a sponsor of BriefingsDirect podcasts.]

“Enterprises are looking to a new generation of process applications to increase agility and improve efficiency. As attractive as building business process applications is, it has been hard for many organizations to do so because the tools have, until now, been too cumbersome, proprietary and expensive,” Taber said. “ActiveVOS 7.0 overcomes these challenges by being innovative, lean, open and affordable.”

What’s New in 7.0?

ActiveVOS 7.0 looks and feels different than its predecessors. For starters, the software has a new design canvas that uses the Business Process Modeling Notation (BPMN) 2.0 specification to create executable BPEL processes. On the innovation front, Active Endpoints points to “structured activities” that accelerate process modeling by offering time-saving drag-and-drop constructions.

In viewing a demo of ActiveVOS 7.0, I was struck by how the business analyst’s needs are targeted visually, with a rich and responsive interface via the AJAX-based forms designer. The latest version uses a "fit" client approach, leveraging the better graphics and performance of a RIA. I also liked the ease of process simulation and the improved dashboards and auditing.

Moving presentation-tier power from the server to the client gives process designers more flexible access to services directly from forms. These forms can issue standard SOAP calls to access services. The result: end users have direct access to information critical to decision-making.
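The forms themselves run in the browser, but purely as an illustration of the kind of standard SOAP call involved, here is a hypothetical sketch in Python; the endpoint, namespace, and operation names are placeholders, not ActiveVOS APIs.

```python
# Hypothetical sketch of a standard SOAP request/response round trip.
# Endpoint, namespace, and operation names are placeholders.
import urllib.request

ENDPOINT = "https://bpms.example.com/services/OrderLookup"  # placeholder

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:ord="http://example.com/orders">
  <soapenv:Body>
    <ord:getOrderStatus>
      <ord:orderId>12345</ord:orderId>
    </ord:getOrderStatus>
  </soapenv:Body>
</soapenv:Envelope>"""

req = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "getOrderStatus"},
)
with urllib.request.urlopen(req, timeout=10) as resp:
    print(resp.read().decode("utf-8"))
```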

Finally, Active Endpoints’ latest effort debuts ActiveVOS Central, a customizable application that consolidates user interaction with the BPMN into a single user interface. There’s also support for continuous integration and permalinks for ActiveVOS forms.

Active Endpoints isn’t introducing bells and whistles for the sake of rolling out a new iteration. The company points to key benefits for companies that use version 7: reduced dependence on consultants, application delivery on schedule, and more protection for your investment. All of these features aim to improve productivity and quicken results.

As I told the crew at Active Endpoints: Gone are the days when productivity gains could be realized with a new, faster chip -- or a better, faster database. Instead, a "new" Moore’s Law has begun to take hold.

This new-era law declares that productivity today is better gained from improving business processes and the way human tasks and machine tasks are combined to rapidly improve results. Productivity needs to come from ongoing process innovation and refinement.

ActiveVOS 7.0 ships this month.

Take the BriefingsDirect middleware/ESB survey now.

Monday, September 14, 2009

Open Group ramps up cloud and security activities as extension of boundaryless organization focus

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.

Standards and open access are increasingly important to users of cloud-based services. Yet security and control also remain top-of-mind for enterprises. How to make the two -- cloud and security -- work in harmony?

The Open Group is leading some of the top efforts to make cloud benefits apply to mission critical IT. To learn more about the venerable group's efforts I recently interviewed Allen Brown, president and CEO of The Open Group. We met at the global organization's 23rd Enterprise Architecture Practitioners Conference in Toronto.

Here are some excerpts:
Brown: We started off in a situation where organizations recognized that they needed to break down the boundaries between their organizations. They're now finding that they need to continue that, and that investing in enterprise architecture (EA) is a solid investment in the future. You're not going to stop that just because there is a downturn.

In fact, some of our members who I've been speaking to see EA as critical to ready their organization for coming out of this economic downturn.

... We're seeing the merger of the need for EA with security. We've got a number of security initiatives in areas of architecture, compliance, audit, risk management, trust, and so on. But the key is bringing those two things together, because we're seeing a lot of evidence that there are more concerns about security.

... IT security continues to be a problem area for enterprise IT organizations. It's an area where our members have asked us to focus more. Besides the obvious issues, the move to cloud does introduce some more security concerns, especially for the large organizations, and it continues to be seen as an obstacle.

On the vendor side, the cloud community recognizes they've got to get security, compliance, risk, and audit sorted out. That's the sort of thing our Security Forum will be working on. That provides more opportunity on the vendor side for cloud services.

... We've always had this challenge of how do we break down the silos in the IT function. As we're moving toward areas like cloud, we're starting to see some federation of the way in which the IT infrastructure is assembled.

As far as the information, wherever it is, and what parts of it are as a service, you've still got to be able to integrate it, pull it together, and have it in a coherent manner. You’ve got to be able to deliver it not as data, but as information to those cross-functional groups -- those groups within your organization that may be partnering with their business partners. You've got to deliver that as information.

The whole concept of Boundaryless Information Flow, we found, was even more relevant in the world of cloud computing. I believe that cloud is part of an extension of the way that we're going to break down these stovepipes and silos in the IT infrastructure and enable Boundaryless Information Flow to extend.

One of the things that we found internally, in moving from the business side of our architecture -- which the stakeholders understand -- to something the developers can understand, is that you absolutely need the skill of being the person who does the translation. You can deliver to the business guys what it is you're doing in ways that they understand, but you can also interpret it for the technical guys in ways that they can understand.

As this gets more complex, we've got to have the equivalent of city-plan type architects, we've got to have building regulation type architects, and we've got to have the actual solution architect.

... We've come full circle. Now there are concerns about portability around the cloud platform opportunities. It's too early to know how deep the concern is and what the challenges are, but obviously it's something that we're well used to -- looking at how we adopt, adapt, and integrate standards in that area, and how we would look for establishing the best practices.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: The Open Group.