Friday, December 3, 2010

Case study: Enel Green Power uses PPM to gain visibility, orchestrate myriad energy activities across 16 countries

Listen to the podcast. Find it on iTunes/iPod and read a full transcript or download a copy. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast series coming to you from the HP Software Universe 2010 Conference in Barcelona.

We're here in the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

This customer case study from the conference focuses on Enel Green Power and how the Italian renewables utility has benefited from improved management of core business processes, gained visibility into new energy projects, and strengthened compliance through better planning and the ability to scope new projects comprehensively. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

To learn about Enel Green Power’s innovative use of project and portfolio management (PPM), I interviewed Massimo Ferriani, CIO of Enel Green Power in Rome.

Here are some excerpts:
Ferriani: Enel Green Power is one of the leaders in the renewables market ... We're in all the most mature technologies such as hydro, geothermal, wind, and solar.

If you think about a matrix crossing technologies and countries, it gets complicated, because we operate four technologies in 16 countries.

It's difficult because we have more than 300 plants all around the world. So, it's an asset portfolio that we have to operate, and we have to reduce the risks.

When we decided to deploy IT platforms, we didn’t think it was a good idea to deploy conventional-generation IT platforms, but to build up new platforms better fitted to renewables' needs.

We thought about the main objective in deploying these platforms and said, "Okay, maybe we have to deploy platforms that permit us to minimize the portfolio risk, in order to know exactly what production should be." For us, knowing our production is a precondition.

We have to know production, and we have to know exactly the production that we're promising to sell to the market.

The business strategy is to manage centrally and operate locally. IT had to follow the strategy. Our main IT platforms are developed with the objective of being global. Global doesn’t mean managing everything centrally, but managing the IT platform centrally, because it's better for synergies and in terms of costs. But, because we have to fit local needs, we have to localize these platforms in 16 countries.

For PPM, as well, we decided to have a global, centralized, unique platform, in order to gather and collect all the data that we get from the field. This is one of the problems that we frequently have because, in effect, the operation is located everywhere. And, it’s not easy to collect information from each field operation.

We have a lot of plants in the middle of nowhere -- in the middle of the Nevada desert and in the middle of the Mato Grosso in Brazil. We have to gather information from these plants. So, it’s important to have global IT platforms, because one of our main objectives is that all our people work in the same way.

It’s also important to set the main goal of the PPM solution. The PPM solution now lets Enel Green Power manage its worldwide portfolio of initiatives, both on the business development side and in the plant construction phase, because we have to remember that business development hands the project over to construction.

We had to do it by building a unique, centralized, integrated platform, available to all the countries and designed to certify the market value of the pipeline and the potential future production related to that pipeline. For us, it's absolutely important to forecast better, to make budgets, and so on. It had to be designed to support our colleagues in activities like planning, project development, reporting, document management, and so on.

Setting the main goals

So when we decided to deploy this platform, we had a lot of work ahead of us, and we weighed two options.

The first was to develop an integrated in-house platform, in order to map the ... core processes of the project and, at the same time, to implement algorithms for portfolio evaluation.

The second was to adopt a standard solution available on the market that would allow us, with little customization, to fit the needs of the business. It's important to underline that when we started this project, it was the end of May, 2010. We already knew we were going to have an IPO. We didn’t know the exact timing, but we had to be ready for the end of October, the estimated date of the IPO.

We adopted the HP solution, because the HP people convinced us that with a minimal set of customizations we would be ready by the end of October -- and we did it.

We chose HP because of the ... strong automation in the collection of the data. As I said before, simplicity and flexibility were also important for us. And, given our geographical distribution, a solution backed by global support was another constraint, and absolutely important.

We needed a standard technology accessible from many countries and able to integrate with the other applications we have, for example Microsoft Project. We also required scalability and room for the platform to grow -- and HP has a strength on this point -- because we are adopting a web-service architecture. And, we wanted a unique, homogeneous view of the mandated KPIs.

We're only in the first phase in order to support the IPO and to support the certification of the market value of the pipeline. But, the main benefits of this platform for the business are acquisition and centralization of the data.

For us, the flexibility was maybe one of the three main strengths of this platform and one of the reasons we chose HP. But, the best one, as I said before, was the minimal customization we needed for the first phase. It’s not easy to set up 64 workflows in only three months, because each local business development team wants the workflows fitted to its needs.

It’s important for the automation to monitor all the steps of the workflow, to manage the authorization of the individual steps, and to track the progress of the individual steps. All these data have to support us in planning the strategy. So, there are plenty of benefits, and maybe more in the future with the evolution of this platform.
Listen to the podcast. Find it on iTunes/iPod and read a full transcript or download a copy. Sponsor: HP.

Wednesday, December 1, 2010

HP Software GM Jonathan Rende on how ALM enables IT to modernize businesses faster

Listen to the podcast. Find it on iTunes/iPod and read a full transcript or download a copy. Sponsor: HP.

Barcelona -- Welcome to a special BriefingsDirect podcast from the HP Software Universe 2010 Conference in Barcelona, an interview with Jonathan Rende, Vice President and General Manager for Applications Business at HP Software.

We're here the week of November 29, 2010 to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Dana Gardner, Principal Analyst at Interarbor Solutions, moderated the discussion just after the roll-out of HP’s big application lifecycle management (ALM) news, the release of ALM 11. [See more on HP's new ALM 11 offerings.]

Here are some excerpts:
Rende: Over the last 25 years that I've been in the business, I've seen two or three such waves [of applications refresh] happen. Every seven to 10 years, the right combination of process and technology changes come along, and it becomes economically the right thing to do for an IT organization to take a fresh look at their application portfolio.

What’s different now, compared with the previous couple of cycles, is that there is no lack of business applications out there. With those kinds of impacts, requirements, and responsibilities on the business, the agility and innovation of the business is now synonymous with the agility and innovation of the applications themselves.

It’s not really the case that the people building, provisioning, testing, and defining the applications are lacking or don’t know what they're doing. It’s mostly that the practices and processes they're engaged in are antiquated.

What I mean by that is that today, acquiring or delivering applications in a much more agile manner requires a ton more collaboration and transparency between the teams. Most processes and systems supporting those processes just aren’t set up to do that. We're asking people to do things that they don’t have the tools or wherewithal to complete.

Lifecycle roles

Not only are we bringing together -- through collaboration, transparency, linking, and traceability -- the core app lifecycle roles of business analysts, quality and performance professionals, security professionals, and developers, but we're extending that upstream to the program management office and project managers. We're extending it upstream to architects. Those are very important constituents upstream who are establishing the standards and the stacks and the technologies that will be used across the organization.

Likewise, downstream, we're extending this to the areas of service management and service managers who sit on help desks and need to connect. Their lifeblood is the connection with defects. Similarly, people in operations who monitor applications today need to be linked into all the information coming upstream, along with those dealing with change and the new releases happening all the time.

[ALM advances] extend upstream much further to a whole group of people -- and also downstream to a whole group of audiences.

Number one, they need to be able to share important information. There’s so much change that happens from the time an application project or program begins to the time that it gets delivered. There are a lot of changing requirements, changing learnings from a development perspective, problems that are found that need to be corrected.

All of that needs to be very flexible and iterative. You need those teams to be able to work together in very short cycles, so that they can effectively deliver, not only on time, but many times even more quickly than they did in the past. That’s what’s needed in an organization.

On top of that, there isn’t a single IT organization in the world that doesn’t have a mixed environment, from a technology perspective. Most organizations don’t choose just Visual Studio to write their applications in -- or just Java. Many have a combination of either of those, or both of those, along with packaged applications off-the-shelf.

So, one of the big requirements is heterogeneity for those applications, and the management of those applications from a lifecycle approach should be accommodating of any environment. That’s a big part of what we do.

You have to be able to maintain and manage all of the information in one place, so that it can be linked, and so you can draw the right, important information in understanding how one activity affects another.

But that process, that information that you link, has to be independent of specific technology stacks. We believe that, over the past few years, not only have we created that in our quality solutions, in our performance solutions, but now we have added to that with our ALM 11 release -- the same concepts but in a much broader sense.

Integrating to other environments

By bringing together those core roles that I mentioned before, we've been able to do that from a requirements perspective, independent of [deployment] stack -- and from a development environment perspective. We integrate with other environments, whether it’s a Microsoft platform, a Java platform, or CollabNet. The use cases that we've supported work in all of those environments very tightly -- between requirements and tests -- and pull that information all together in one place.

A business analyst or a subject matter expert who is generating requirements captures all that information -- what he hears is needed, the business processes that need to be built, the application, and the way it should work. He captures all of that, and it needs to reside in one single place. However, if I'm a developer, I need to work off a list of tasks that build to those requirements.

It’s important that I have a link to that. It’s important that my priorities that I put in place then map to the business needs of those requirements. At the same time, if I'm in quality-, performance-, and security-assurance, I also need to understand the priority of those.

So, while those requirements will fit in one place, they'll change and they'll evolve. I need to be able to understand how that impacts my test plans that I am building.

With ALM 11, we're already seeing returns where organizations are able to cut the delivery time, the time from the inception of the project to the actual release of that project, by 50 percent.

If you look at some of the statistics thrown around by third parties that do this research on an annual basis, almost two-thirds of application projects today still fail. Then, you look at what benefits can be realized if you put together the right kind of approach, system, and automation to support that approach.

Cutting cost of delivery

We're seeing organizations similarly cut the cost of releasing an application -- that whole delivery process -- in half. And that’s not to mention side benefits that have a far more reaching impact later on: identifying and eliminating, at creation, up to 80 percent of the defects that would typically be found in production.

As a lot of folks who are close to this will know, finding a defect in production can be up to 500 times more expensive to fix than if you address it when it’s created during the development and the test process. Some really huge benefits and metrics are already coming from our customers who are using ALM 11.

Again, if you go back to the very beginning topic that we discussed, there isn’t a business, there isn’t a business activity, there isn’t a single action within corporate America that doesn’t rely on applications. Those applications -- the performance, the security, and the reliability of those systems -- are synonymous with that of the business itself.

If that’s the case, allowing organizations to deploy business critical processes in half the time, at half the cost, at a much higher level of quality, with a much reduced risk only reflects well on the business, and it’s a necessity, if you are going to be a leader in any industry.

There are so many different options for how people can deploy or choose to operate and run an application -- and those options are also available in the creation of the applications themselves. ALM 11 runs on-premise or through our software as a service (SaaS) offering, so it allows that flexibility.

Deep software DNA

Software and our software business are increasingly important. If you look at the leadership within the company today, our new CEO has very deep software DNA. Bill Veghte, who came in from Microsoft, has 20-plus years. The rest of the leadership team here also has 20-plus years in enterprise software.

Aside from the business metrics that are so beneficial in software versus other businesses, there is just a real focus on making enterprise software one of the premier businesses within all of HP. You're starting to see that with investments and acquisitions, but also the investment in, more importantly, organic development and what’s coming out.

So, it’s clearly top of list and top of mind when it comes to HP. Our new CEO, Leo Apotheker, has been very clear on that since he came in.
Listen to the podcast. Find it on iTunes/iPod and read a full transcript or download a copy. Sponsor: HP.

Tuesday, November 30, 2010

HP's new ALM 11 helps guide IT through shifting landscape of modern application development and service requirements

Listen to the podcast. Find it on iTunes/iPod and read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Barcelona the week of November 29, 2010. We're here to explore some major enterprise software and solutions, trends and innovations, making news across HP’s ecosystem of customers, partners, and developers. [See more on HP's new ALM 11 offerings.]

To learn more about HP’s application life-cycle management (ALM) news -- and its customer impact from the conference -- please welcome Mark Sarbiewski, Vice President of Product Marketing for HP applications. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Sarbiewski: The legacy approach is not going to be the right path for delivering modern applications. We’ve been hard at work for a couple of years now, recasting and re-inventing our portfolio to match the modern approach to software, going through them one-by-one.

You’ve got changes in how you are organized. You’ve got changes in the approach that people are taking. And, you’ve got brand-new technology in the mix and new ways of actually constructing applications. All of these hold great promise, but great challenges too. That's clashing with the legacy approach that people in the past took in building software.

We talk to our customers about this all of the time. It boils down to the same old changes that we see sort of every 10 years. A new technology comes into play with all its great opportunity and problems, and we revisit how we do this. In the last several years, it’s been about how do I get a global team going, focused on potentially a brand-new process and approach.

What are the new technologies that everybody is employing? We’ve got rich Internet technologies and Web 2.0 approaches, and our technology is there. For composite applications, we’ve built a variety of capabilities that help people get the performance right with those technologies and keep the security and the quality high, while keeping the speed up.

So we cover everything from performance testing in that environment to testing things that don’t have interfaces, and understanding the impact of change on systems like that. We’ve built capabilities that help people move to Agile as a process approach -- things like fundamentally changing how they can do exploratory testing, and how they can bring automation into performance, quality, and security much sooner in the process.

Lastly, we’ve been very focused on creating a single, unified system that scales to tens of thousands of users. And, it’s a web-based system, so that wherever the team members are located, even if they don’t work for you, they can become a harmonious part of the overall team, 24-hour cycles around the globe. It speeds everything up, but it also keeps everyone on the same page. It’s that kind of anytime, anywhere access that’s just required in this modern approach to software.

How is software really supported?

When I talk to customers, I ask them how they're supporting software. If we talk about software delivery, it's fundamentally a team sport. There isn't a single stakeholder who does it all. They all have to play and do their part.

When they tell me they’ve got requirements management in Microsoft Word, Excel, or maybe even a requirements tool, and they have a bug database for this, test management for that, and this tool here -- on the surface it looks like they've fitted everybody with a tool, and it must be good. Right?

The problem is that the work is not isolated. You might be helping each individual stakeholder out a little bit, but you're not helping the team. The team’s work is interrelated. When requirements get created or changed, there's a ripple effect. What tests have to be modified or newly created? What code then has to be modified? When that code gets checked in, what tests have to be run? It’s that ripple effect of the work; we talk about it as workflow automation. It's also the insight to know exactly where you are.

When the real questions -- how far along am I on this project, what quality level am I at, am I ready to release -- need to be answered in the context of everyone’s work, I have to understand how many requirements are tested, and whether my highest-priority functionality is working, against what code.

So, you see the team aspects of it. There is so much latency in a traditional approach, even if each player has their own tool. It's about getting that latency out, along with the finger-pointing and the miscommunication that also result. We take all of that out of the process and, lo and behold, we see our customers cutting their delivery times in half, dropping their defect rates by 80 percent or more, and actually doing this more cheaply with fewer people.

In requirements management, one of the big new things we’ve done is allow the import of business process models (BPMs) into the system. Now, the whole business process flow is pulled right into the system. It can be pulled from systems like ARIS, or anything that produces the standard business process modeling language (BPML), right into the system.

Business processes-focused

Now, everyone who accesses ALM 11 can see the actual business process. We can start articulating that this is the highest priority flow. This step of the business process, maybe it's check credit or something like that, is an external thing but it's super-important. So, we’ve got to make sure we really test the heck out of that thing. [See more on HP's new ALM 11 offerings.]

Everyone is aligned around what we’re doing, and all the requirements can be articulated in that same priority. The beautiful thing now about having all this in one place is that work connects to everything else. It connects to the test I set up, the test I run, the defects I find, and I can link it even back to the code, because we work with the major development tools like Visual Studio, Eclipse, and CollabNet.

It's hugely important that we connect into the world of developers. They're already comfortable with their tools. We just want to integrate with that work, and that’s really what we’ve done. They become part of the workflow process. They become part of the traceability we have.

What we hear from our customers is that the coolest new technology they want to work with is also the most problematic from a performance standpoint.

Modern requirements

We went back to the drawing board and reinvented how well we can understand these great new Web 2.0 technologies, in particular Ajax, which is really pervasive out there. We now can script from within the browser itself.

The big breakthrough is that if the browser can understand it, we can understand it. Before, we were sort of on the outside looking in, trying to figure out what a slider bar really did, and what it meant when the slider bar was moved.

Now, we can generate a very readable script. I challenge anybody: even a businessperson can understand what gets created for the performance-testing script as they click through an application.

We parameterize it. We can script logic there. We can suggest alternate steps. The bottom line is that the coolest new Web 2.0 front ends can now be very easily performance tested. So we don't end up in that situation where you did a beautiful, rich job and it's such a compelling interface, but it only works when 10 people are hitting the application. We've got to fix that problem.

It speeds everything up, because it's so readable and quick. And it just works seamlessly. We've tested against the top 40 websites, which are out there using all this great new technology, and it's working flawlessly.

Lots of pieces

If you think about a composite application, it's really made up of lots of pieces -- application services or components. The idea is that if I’ve got something that works really well, I can reuse it, combine it with a few other things or a couple of new pieces, and get new capability. I've saved money, I’ve moved faster, and I'm delivering innovation to the business in a much better, quicker way. And it should be rock-solid, because I can trust these components.

The challenge is that I'm now making software out of lots of bits and pieces. I need to test every individual aspect of it, test how the pieces communicate together, and do end-to-end testing.

If I try to create composite apps and reuse all this technology, but it takes me ten times longer to test, I haven’t achieved my ultimate goal, which was cheaper, faster, and still high quality. So Unified Functional Testing addresses that very challenge.

We've got Service Test, which is actually an incredible visual canvas for testing things that don't have an interface. One of the big challenges with something that doesn't have an interface is that I can't test it manually, because there are no buttons to push. It's all under the covers. But we have a wonderful, easy, newly reinvented tool called Service Test that takes care of all that. [See more on HP's new ALM 11 offerings.]

That’s connected and integrated with our functional testing product, which allows you to test everything end-to-end at the GUI level. The beautiful thing about our approach is that you get to do that end-to-end, GUI-level testing and the non-GUI testing all from one solution, and you report out all the testing that gets done.

So again, bring in a lot of automation to speed it up, keep the quality high and the time down low and you get to see it all kind of come together in one place.

Sprinter is not even a reinvention. It's brand-new thinking about how we can do manual testing in an Agile world. Think of that Instant-On world. It's such a big change when people move to an Agile delivery approach. Everyone on the team now plays a role that's kind of a derivative of what they used to do. Developers take on part of the testing, and quality folks have to jump in super-early. It's just a huge change.

What Sprinter brings is a toolset for that tester, the person who is jumping in and getting right after the code to give immediate feedback. It's a toolset that lets the tester automatically step through the screens a test is supposed to cover, dropping in data instead of typing it in. I don't have to type it anymore. I can just use an Excel spreadsheet and start ripping through screens and tests really fast, because I'm not testing whether the application can take the input; I'm testing whether it processes it right. [See more on HP's new ALM 11 offerings.]

Cool tools

And when I come across an error, there's a tool that allows me to capture those screens, annotate them, and send that back to the developer. What’s our goal when we find a defect? The goal is to explain exactly what was done to create the defect and exactly where it is. There are a whole bunch of cool tools around that.

The last point I’d make about this is called Mirror Testing. It’s super-important. It’s imperative that things like websites actually work across a variety of browsers and operating systems, but testing all those combinations is very painful.

Mirror Testing allows the system to work in the background while someone is testing, say, on XP and Internet Explorer: five other systems, with different combinations, are driven through the exact same test. I'm sitting in front of it, doing my testing, and in the background Safari or Firefox is being tested. [See more on HP's new ALM 11 offerings.]

If there is an error on one of those systems, I see it, I mark it, and I send it right away, essentially turning one tester into six. It's really great breakthrough thinking on the part of R&D and a huge productivity bump.

What we hear from our customers is that they really do want their lives simplified, and the conclusion many of them have come to is Post-it notes, emails, and Word docs. That seems simpler at first, but it quickly falls apart at scale. Conversely, if you have tools that only work in one particular environment -- and most enterprises have a lot of those -- you end up with a complex mess.

Companies have said, "I have a set of development tools. I probably have some SAP, maybe some Oracle. I’ve built in .NET, with Microsoft. I do some Eclipse stuff, and I do Java. I’ve got those, but if you can work with those -- if you can help me get a common approach to requirements, to managing tests, functional performance, and security, and to managing my overall project -- and integrate with those tools, you’ve made my life easier."

When we talk about being environment agnostic, that’s what we mean. Our goal is to support better than anyone else in the market the variety of environments that enterprises have. The developers are happy where they are. We want them as part of the process, but we don’t want to yank them out of their environment to participate. So our goal again is to support those environments and connect into that world without disrupting the developer.

And, the other piece you mentioned is just as important. Most customers aren’t taking one uniform approach to software. They know they’ve got different types of projects. I’ve got some big infrastructure software projects that I'm not going to do all the time and am not going to release every 30 days, and a waterfall or sequential approach is perfect for those.

Rock solid

I want to make sure it’s rock solid, and I can afford to take that type of approach; it's the right approach. For a whole host of other projects, I want to be much more agile. I want to do 60-day releases or 90-day releases, and that makes sense for those projects. What they tell us they don’t want is every team inventing its own approach for Waterfall, Agile, or custom methods. They want to help the teams follow a best-practice approach.

As for the workflow, they can customize it. They can have an Agile best practice, a Waterfall best practice, and even another one if they want. The system helps the team do the right thing and establishes a common language and a common approach. That’s the process-agnostic belief we have.

The great news is that today you can download all the solutions that we’ve talked about for trials. We have some online demos that you can check out as well. There are a lot of white papers and other things. You can literally pull the software 30 minutes from now and see what I'm talking about.

On the licensing side, we believe that the simplest approach is a concurrent license, which we have on most of the products that we’ve got here. For all the modules that we’ve been talking about, if you have a concurrent license to the system, you can get any of the modules. And, it’s a nice floating license. You don’t have to count up everybody in your shop and figure out exactly who is going to be using what module.

The concurrent license model is a very flexible, nice approach. It's one we've had in the past. We're carrying it forward, and we'll continue to simplify and make it easier for customers to understand all the great capabilities and how to license them simply, so they can get their teams the modules for the capabilities they need.
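The floating model described here can be pictured as a shared pool of seats that any user may check out and return. This is only an illustrative sketch of the general concurrent-licensing idea, not HP's actual licensing mechanism:

```python
import threading

class FloatingLicensePool:
    """A pool of N concurrent seats; any user gets any module while a seat is free."""
    def __init__(self, seats):
        self._sem = threading.Semaphore(seats)

    def acquire(self):
        # Non-blocking: returns True if a seat was free, False if the pool is exhausted.
        return self._sem.acquire(blocking=False)

    def release(self):
        # Returning a seat makes it available to the next user, whoever that is.
        self._sem.release()

pool = FloatingLicensePool(seats=2)
print(pool.acquire())  # True  -- first user checks out a seat
print(pool.acquire())  # True  -- second user
print(pool.acquire())  # False -- pool exhausted; no per-user counting needed
pool.release()
print(pool.acquire())  # True  -- seat freed, next user gets it
```

The point of the model is visible in the last two calls: seats are not tied to named users, so capacity is counted, not identity.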
Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

HP rolls out ALM 11 in Barcelona to expand managed automation for modern applications

Barcelona -- In the midst of what it calls a new wave of application modernization in the enterprise, HP on Tuesday rolled out the latest version of its application lifecycle management (ALM) platform here at the Software Universe conference.

The Application Lifecycle Management 11 platform works to automate application modernization from requirements management through quality and performance. HP sees this as an important innovation in a market where Forrester Consulting finds that 69 percent of IT decision-makers have earmarked 25 percent of their annual IT budget for application modernization—and 30 percent will dedicate over half their budget to the cause. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“Sixty-seven percent of organizations that have kick-started application modernization projects are failing,” says Jonathan Rende, vice president and general manager of the Applications Solutions business for the HP Software & Solutions division. “Application teams that have to build, provision and create new critical business processes can’t keep up because they are relying on the old ways of doing things instead of the new way.”

HP application transformation

The ALM 11 platform and software solutions are part of that “new way.” Components of the HP Application Transformation solutions, these tools work to help enterprises gain control over aging applications and inflexible processes that challenge innovation and agility -- by governing their responsiveness and pace of change. It’s all part of the Instant-On Enterprise that embeds technology into everything it does. ALM 11 essentially automates workflow processes across multiple teams. [See more on HP's new ALM 11 offerings.]

“Applications are central to everything CIOs are doing right now,” Rende says. “It’s literally how companies are differentiating themselves -- and doing so in more efficient and effective ways with more value added. ALM 11 creates a single, unified system that allows business analysts, developers, security professionals, quality professionals and performance professionals to collaborate.”

By establishing this set of criteria, everybody can see what is coming and what the status is, and why there are changes if there are changes.

[Read an interview with HP's Mark Sarbiewski on the uses and benefits of the new ALM portfolio.]

Rende also points to benefits such as risk-based decisions of application releases via ALM Project Planning and Tracking capabilities, rapid application delivery with HP Agile Accelerator 4.0, reduced business risk from application failures, and automatic import of business process models (BPM) into ALM’s Requirements Management to visualize business process flows and augment textual requirements.

Rende notes that HP isn’t working in a vacuum, either. “Everybody has a mix of different applications and environments. Many times they are cobbling them together and integrating because the business processes that are critical cut across many different systems,” Rende says. “Our solution is agnostic to the technologies.”

Release Management

Another major focus of ALM 11 is Release Management -- the ability for program and project managers to establish milestones, criteria, and measurements in real time. The module works to answer the questions, “What’s coming?” and “Is it ready?” or “Has it been tested successfully?”

“Many requirements for new apps come from production, and DevOps sits on that line between operations and applications,” Rende says. “By establishing this set of criteria, release milestones, and Gantt charts, everybody can see what is coming and what the status is, and why there are changes if there are changes.”

The HP ALM platform also offers new versions of HP Quality Center and Performance Center 11. These solutions work to simplify and automate application quality and performance validation to lower operational costs, freeing up investment for application innovation in the delivery phase.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
You may also be interested in:

The adaptive web: Helping to bridge the CIO-CMO divide

This guest post comes courtesy of Dr. Scott Brave, co-founder and CTO of Baynote, a provider of digital marketing optimization solutions.

By Dr. Scott Brave

CIO and CMO. Until very recently, many still believed these two roles couldn’t be more extreme in their differences. The stereotypical CMO was creative and guided by gut feel, whereas the CIO was steadfast, risk averse and driven by empirical evidence.

The emergence of the real-time web and its impact on customer expectations has pulled these two seemingly polar opposite disciplines much closer together. Real-time services like Twitter, Facebook and improved behavioral targeting technologies are pushing consumer expectations for instant, extremely personalized experiences to an all-time high.

This trend has forced the CIO and CMO to work in lockstep on digital marketing initiatives aimed at staying as close as possible to what the customer wants.

However, the reality is that while both sides understand their shared goals depend on working with the other, the relationship between the CIO and CMO is often not a happy marriage. According to the CMO Council’s recent CMO-CIO Alignment Imperative report, there is a good amount of consensus among CMOs and CIOs on the central role of technology in improving the customer experience, but neither group feels like they are getting the job done.

Biggest struggle

Their single biggest struggle has become all about figuring out ways to adapt the experience – across the web as well as via mobile and email – to seemingly insatiable user expectations. This sentiment is consistent with recent M&A activity that signals the importance of web optimization technology: Adobe acquired Omniture last September; and IBM has gobbled up CoreMetrics, Unica, and most recently, Netezza for $1.7B.

The reality, however, is that current optimization approaches are still very manual and provide a rear-view mirror look at customer intent, making it impossible to target the customer in an accurate and scalable way. Alas, the CIO/CMO dilemma continues.

What we need is to build a smarter approach that allows companies to adapt to their customers’ needs in real-time.

What we need is to build a smarter approach that allows companies to adapt to their customers’ needs in real-time. The concept of collective intelligence, which I’ll address below, will be critical to achieving this vision -- something I like to think of as an “adaptive web.” That is, a digital experience that is always relevant and based on users’ current intent and interests. It also must be device-agnostic, especially important given the increased mobility of the online experience -- a challenge analyst firm Forrester calls “the Splinternet.”

The adaptive web is in fact central to what Gartner calls “Context-Aware Computing”, the idea that social analytics and computing will produce knowledge about individual context and preferences, allowing companies to predict and serve them what they want. According to Gartner, this model adapts interactions with the customer based on context, in contrast to today’s experience which is very reactive.

So, how close are we to building a truly end-to-end adaptive web?

To no one's surprise, there are numerous technical and psychological challenges to building an adaptive web. Namely, I see three primary roadblocks:
  • Privacy Issues: To deliver adaptive experiences, we have to pay attention to what people are doing online in the first place. Different users have varying levels of comfort. We’ll have to find some sort of middle ground where the value of an adaptive experience greatly outweighs users’ privacy concerns.

  • Deciding on the Method: Second, there’s determining the approach itself. Do we need a “metalayer” over the web? Some sort of toolbar or plug-in that could connect users’ entire web experiences across devices? Do ISPs need to get involved at the network level to watch every site users visit and how they engage with it? These are all options to consider -- some more realistic than others -- but the path is murky at best at this point.

  • Determining Users’ Intents: The third obstacle is the biggest obstacle of all: pure science. It’s not a trivial problem to automatically pinpoint and serve up an experience based on a user’s current intent and context. As someone who has devoted his life’s work to studying human/computer interaction, I can’t emphasize this enough. Predicting what people want and need, and adapting their web experience in real-time is perhaps one of the remaining “big picture” challenges facing technologists.

Collective intelligence

Let’s revisit collective intelligence and its role in making the adaptive web a reality. Collective intelligence refers to the process of gathering insight from a group of like-minded individuals online, often implicitly, based on their shared navigation and engagement patterns. A central concept of collective intelligence is to aggregate behaviors of the silent majority of visitors across the spectrum of digital channels, augment that information with the expertise of super-users and provide the most relevant information that meets every individual user’s goals.
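The aggregation idea behind collective intelligence can be illustrated with a toy sketch: pool the navigation behavior of many visitors, then surface the pages favored by visitors whose behavior overlaps the current user's. This is a minimal, hypothetical illustration of the general concept; the page names and data are made up, and Baynote's actual algorithms are not described here.

```python
from collections import Counter

# Hypothetical clickstream: visitor id -> set of pages they engaged with.
sessions = {
    "v1": {"pricing", "demo", "whitepaper"},
    "v2": {"pricing", "demo", "signup"},
    "v3": {"blog", "careers"},
    "v4": {"pricing", "signup"},
}

def recommend(current_pages, sessions, top_n=2):
    """Suggest pages favored by visitors whose behavior overlaps the current user's."""
    scores = Counter()
    for pages in sessions.values():
        overlap = len(current_pages & pages)  # shared-interest signal
        if overlap == 0:
            continue
        for page in pages - current_pages:    # only pages the user hasn't seen
            scores[page] += overlap           # weight by degree of similarity
    return [page for page, _ in scores.most_common(top_n)]

print(recommend({"pricing"}, sessions))  # ['demo', 'signup']
```

A visitor reading the pricing page is steered toward what similar visitors did next, without anyone ever being asked an explicit question -- which is the implicit, behavioral character of collective intelligence described above.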

Not doing so will have profound implications for their organizations, most notably lost revenues and customer loyalty.

An obvious benefit to using collective intelligence is one of mere scale: it enables machines to draw conclusions about an individual's current intent based on the knowledge and experiences of the larger community. It also gives us the power to efficiently deliver automated, real-time experiences to users -- something that would be very difficult in any user-by-user scenario, which poses enormous difficulties of scale.

The CIO and CMO understand why their success depends on better IT/marketing alignment. Now, the challenge will be for them to deliver. While there’s no silver bullet, I believe collective intelligence has the potential to help them form a much more harmonious and strategic partnership. CIOs and CMOs must formulate their strategies for collective intelligence, context-aware computing and other technologies enabling the adaptive web right away.

Not doing so will have profound implications for their organizations, most notably lost revenues and customer loyalty.
This guest post comes courtesy of Dr. Scott Brave, co-founder and CTO of Baynote, a provider of digital marketing optimization solutions.
You may also be interested in:

ThinPrint works to take cloud printing to mainstream

With companies putting more applications and data into Internet clouds, cloud printing is gaining momentum in the enterprise.

Vendors large and small are getting into the game. HP has made major announcements while Google has hinted at the future. Apple has begun services for iOS devices. Smaller companies like HubCast and ThinPrint have entered the fray. Yet, for all the attention, cloud printing is still not mainstream.

BriefingsDirect recently caught up with Thorsten Hesse, manager of Innovative Products for ThinPrint, to discuss the business drivers of cloud printing, the various options available, and the obstacles to wider adoption of the technology.

BriefingsDirect: What are the business drivers of cloud printing adoption?

Hesse: In general, talking about printing is quite boring for most people. But people want to print. They need to print. They don’t want to talk about it, but they want to use it. They just want it to work.

Companies spend a lot of money on new printers, on printer management and print-driver administration, on unused printouts and unnecessary paper and toner consumption, and on support and the help desk. Printing is one of the most cost-intensive things in IT. Many companies also don’t want to be locked in with a specific vendor.

Increasing use

Another aspect is the increasing use of cloud applications and services. How do you print from cloud offerings like Salesforce or Google Apps? Mostly you create a PDF. Well, then you need a device that can print PDFs. Additionally, the use of smartphones, tablets, and other mobile devices is becoming more and more common, and these devices can't do that, or can do it only at limited quality.

Altogether, there are at least six business drivers for cloud printing:
  • Printing is one of the most cost-intensive IT services—and cloud printing can save costs and enhance productivity at the same time.
  • Printing technology today depends heavily on printer manufacturers.
  • Companies want print on demand.
  • Companies use cloud applications, often in unplanned ways.
  • Employees are becoming increasingly mobile.
  • Employees use new types of devices.
BriefingsDirect: What are the different options for cloud printing in terms of delivery?

Hesse: There are three delivery models. The first is private cloud software: we sell software to our customers that they install in their own environment, for example in their data center or on an Amazon server in the cloud.

This might sound far off, but as soon as customers manage their internal desktops from the cloud with Microsoft Intune, it will be a logical step to do the same with the printers.

They buy, own, and control the software. The other end of the spectrum is a pure cloud printing service. And then in the middle we've got the hybrid cloud, where some parts are run internally in the private cloud and others in the public cloud.

BriefingsDirect: Is cloud printing secure? What makes it secure?

Hesse: First of all, the user can print content without needing to store it on the device, which brings all the advantages of central data storage -- secure and updated data in one place, no files lost when a device is lost, and availability of service. The user can trigger the print job to the printer and can also identify the printer.

BriefingsDirect: How is cloud printing evolving?

Hesse: Our solution is evolving in many directions. On top of offering print management as a software product that the customer can purchase and install internally, we’ll offer it as a cloud service. This will be a public cloud service. Customers can run it from the cloud. They can then control their internal printing environment from the cloud.

This might sound far off, but as soon as customers manage their internal desktops from the cloud with Microsoft Intune, it will be a logical step to do the same with the printers. This will evolve into a complete print management solution that can then be used not only to control the printing environment, but to build in policies to enhance it along the way.

BriefingsDirect: What is holding businesses back from adopting cloud printing?

Hesse: They mostly don’t know what’s possible, as the discussion is fogged by limited public cloud printing solutions.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.
You may also be interested in:

Sunday, November 28, 2010

InfoBoom seeks US IT pros to take telephone survey, get stipend

The Infoboom, a site to which I regularly contribute for pay, is doing some research to learn more about how IT and business technology professionals meet their information needs.

The research involves a one-hour telephone interview and they are offering $100 American Express gift certificates to people who complete the survey.

Apply to be interviewed here.

You must be based in the U.S. and you must have significant involvement with business technology, but other than that they are pretty flexible.

This is not a marketing pitch and nobody's going to try to sell you anything. IBM, which underwrites the site, just really wants to know more about what its audience needs. Thanks!

Friday, November 26, 2010

How to automate ALM: Conclusions from new HP book on gaining improved business applications as a process

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: HP.

For more information on Application Lifecycle Management and how to gain an advantage from application modernization, please click here.

The latest BriefingsDirect podcast discussion examines a new book on application lifecycle management (ALM) best practices, one that offers new methods and insights for dramatic business services delivery improvement.

The topic of ALM will be a big one at next week's HP Software Universe conference in Barcelona. In anticipation, join us as we explore application lifecycle management (ALM) best practices for overall business services delivery improvement.

In this discussion, the last in a series of three, we underscore the conclusions from the forthcoming book and explain how organizations can begin now to change how they deliver and maintain applications in a fast-changing world.

Complexity, silos of technology and culture, and a shifting landscape of application delivery options have all conspired to reduce the effectiveness of traditional applications approaches. In the forthcoming book, called The Applications Handbook: A Guide to Mastering the Modern Application Lifecycle, the authors evaluate the role and impact of automation and management over an application's lifecycle, as well as delve into the need to gain better control over applications through a holistic governance perspective.

In our first podcast, we focused on the role and impact of automation and management of applications, and emphasized the need to gain control over applications through a holistic lifecycle perspective.

The second discussion in the series looked at how an enterprise, Delta Air Lines, moved successfully to improve its applications’ quality, and gain the ability to deliver better business results from those applications.

Finally, we're here now with the book’s authors to explore their conclusions. Please join me in welcoming Mark Sarbiewski, Vice President of Marketing for HP Applications, and Brad Hipps, Senior Manager of Solution Marketing for HP Applications. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Sarbiewski: The life of an application is generally the same for all companies. There is a spark of an idea: "We need this. We need software to help us do something in the business."

We make an investment decision somehow. We may do this ad hoc. We may do it based on who screams the loudest. But somehow a decision gets made. We build something somehow. We spec it, build it, release it, run it, poorly or not, and hopefully, although certainly not always, eventually we replace it, retire it, and so forth.

We wanted to take a slightly different approach to how we thought about maturity models. There are lots of them in the industry, not so much around ALM, but in sub-disciplines or in different areas. Our focus was the business outcomes that you see at different levels.

We built out a model for ALM maturity, and it’s in the book.

... We see pressure from the business to change how we do things and the technologies we use. From the business side, you see it in a variety of ways. You see, "Oh, it’s the consumerization of IT, and what I see in my consumer world I want in IT. I see this all moving fast and I don’t feel my business moving." You see that pressure.

But, you absolutely see pressure to change from the bottom-up, from the teams themselves. We want to work in a different way. We want to be able to execute faster. The whole move of agile has been, in large part, if not primarily built, then driven from development and delivery teams up. So, there is a huge motivation there.

If you can understand the results that you are seeing, that ought to help you figure out where you could be. What we've seen is a progression from the spectrum of companies, ... [many] have fairly immature processes.

We see people just getting started, and they have a relatively ad hoc, narrow, point tool, with lots of manual work. It doesn’t mean they are never successful, but results vary highly. They're very mixed. Some project teams are great, and it all depends on the project team, and the next one may stink.

So our idea around maturity -- and tying it to outcomes -- is the results that we see. ... It all comes back to the results. What kind of results am I seeing? If you look at the model in the book, it’s pretty easy to peg yourself as to where you are and the kinds of benefits you'd see from moving up that maturity curve.

There’s a lot of pride when you see the metrics go in the right way. The feedback that I've seen for our clients that do this really well is where the business comes back and says, "Oh, my God. The responsiveness is incredible. Even if I'm not getting the massive stuff that I used to get once every two years, I'm seeing movement on a regular basis, and I love it." And lot of clients that we talk to are really fired up about that.

What we hear from our clients is that things are hyper-competitive and that technology, in particular software and applications, is a huge competitive advantage. So, our ability to move fast and beat the competitors to the punch with capability is enormously important.

More of a scorecard

Hipps: We configured this model deliberately trying not to be ultra-prescriptive. There are many heavy-duty models that exist, and people can dig into those to their heart’s content. This is as much a maturity scorecard as anything.

One of the examples you might see, or one of the ways you might begin to gauge yourself, is something like defect leakage. Defect leakage refers to the number of defects you discover in the live application that you could have caught earlier.

We have some figures showing that, on average, in the neighborhood of 40 percent of application defects leak into production and are discovered in the live application. They could have been caught earlier. It may be a little higher than 40 percent, which is a fairly shocking number.

But on the high end, the world-class customers we worked with see less than 5 percent of defects working their way into production. So right off the bat, you're talking an 80 percent-plus drop in the number of defects that you're experiencing in a live environment, with all the attendant cost savings, brand improvement, and goodwill in the business that you would expect.

That’s one example of the kind of thing you can look at, tease out, and begin to get a sense of where you might sit, maturity-wise. From that, you can take a cue as to where you want to start and where you want to make the biggest investment as you look to become more mature.
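The defect-leakage figures above reduce to simple arithmetic. A sketch of the calculation, using made-up defect counts that match the percentages quoted (this is an illustration, not an HP metric definition):

```python
def defect_leakage(found_in_production, found_total):
    """Share of all defects that escaped pre-production testing."""
    return found_in_production / found_total

# Hypothetical counts for one release cycle.
average_shop = defect_leakage(40, 100)   # the ~40 percent industry average cited
world_class = defect_leakage(5, 100)     # world-class shops: under 5 percent

# Relative reduction in live defects when moving to world-class practice.
improvement = 1 - world_class / average_shop
print(f"leakage drops from {average_shop:.0%} to {world_class:.0%} "
      f"({improvement:.1%} fewer live defects)")
```

Running this shows why the text says "80 percent-plus": going from 40 percent leakage to 5 percent is actually an 87.5 percent reduction in defects reaching production.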

There are hosts of sophisticated KPIs we can design for ourselves, but one of the key ones was, "I want to know what the business thinks of us, and whether we are trending in the right direction."

Speaking from the application domain, our friends in the agile communities have been the leading champions of this notion. Our default stance [as development teams] was one of being change-averse.

By that, I mean that there was this whole contractual relationship with business. You tell us what you need, and we're going to document it as best as we can, down to having all the semicolons in the right place.

"We're going to break out the quill pens and ink our signatures. Forever shall it be, and if you change anything here, we're going to hit you with a request for change, and it will go through a cycle of six weeks, and maybe we'll agree to it," etc., etc. For the longest time, that was the mindset. You can look at that and say it's awful, but when I had far fewer applications, and they took far longer to build, it was just the way of the world.

The recognition today, for all of the reasons we've talked about in this podcast and others, is that our applications are everywhere. They're always on. There is nothing I can do in a business that isn't going to touch an application. It fundamentally means we need to sweep that notion of being change-averse from the table. Instead, we need to embrace change. We need to be change-ready.

The leading traits

As Mark said, we need to be architected and engineered, from a people-process-technology perspective, to put ourselves in a position to be that way. In the book, we talk a bit about some of the principles we think come into play for change-ready organizations. That's why it is one of the leading traits, the leading principles, in world-class organizations.

This could be a mantra of sorts: Think big, start small, scale quickly. The basic idea of think big is that you want to spend some time making sure you've all got a shared vision of where you want to be, and we talk a bit about that, whether through the maturity model or these principles of predictability, repeatability, etc.

Hopefully we've set at least some suggested guidelines for constructing what your end state might look like. But the point about thinking big is that, as we all know, certainly in IT but probably anywhere, it's very easy to fall into a state of analysis paralysis. We've got to figure out exactly the right metrics to decide exactly what we're going to be. We've got to figure out precisely what our timeline is.

We can sort of borrow from our friends in agile, who have said that you've got to understand the perimeter of what it is you want to accomplish, but still it's bound to change. Those perimeters are bound to shift. You're bound to discover things about yourselves, your organizations, what's feasible, and what's not in the process of actually trying to get there.

It's important to set yourself an objective and make sure it's a shared objective. It's just as critical to get going to not fall into a trap of endless planning and reconsideration of plans.

So, it's important to set yourself an objective and make sure it's a shared objective. It's just as critical to get going to not fall into a trap of endless planning and reconsideration of plans.

If you then pluck the low-hanging fruit, the easy things we could do starting this week, starting tomorrow, to advance us at least generally toward this end objective, that's great. Then, it becomes a matter of just continuing to move, scale, and adapt.

Somewhere, we make the point that, certainly as an application team member, I cared a lot more about measurable progress, seeing things actually advancing and getting better, than I cared about how shiningly brilliant the end-state was going to be or exactly how we were going to get there.

Unconscious sabotage

Sarbiewski: I spent a number of years in a former life doing process change for companies. There were some trade secrets in the firm I worked with. They recognized some unchanging facts: that people can consciously or unconsciously sabotage the greatest plans, any process you want, or any kind of change.

You have to start with people. It does involve all the people-process-technology in that order, but it's the people considerations. Do we have that shared vision? Who are the skeptics? Where do we think this could go wrong? Are we committed to getting there?

There were some questions we'd ask as we were embarking on making this change. First of all, we asked: what project or what pilot -- if we made these changes on it -- would lead people in the organization to say, "If it works for that project, it will work for us as an organization"?

So, find that visible pilot project, not one that’s an exception. Don’t find one where there are four developers and they are in the same room. If you try something new, people can say, "Well, of course, it worked for that, but that’s so atypical." So, find that project.

Beyond that, find the champion who is really respected in the organization, but skeptical of the change. We would go looking for one or two people who were open-minded enough to really give it a go, but maybe steeped in how we’ve done it, and have been very successful in how we’ve done it. Then, people can say, "That’s the kind of project we do, so you need to be able to make it work there. If Joe or Mary or whoever it is, if they buy into and it works for them, I believe."

The one other thing I’d say is start thinking about those types of metrics, those cross-silo and lifecycle-oriented goals and metrics.

Maybe we reward the operations and dev teams jointly if they've met those customer satisfaction goals, those service-level agreements (SLAs), and those low counts of defects in production. You start to create a different dynamic when you think more about lifecycle goals and cross-team goals.

Hipps: The spirit of this book, and probably the spirit of a lot of these kinds of books, ... If I have one hope, it’s that we haven’t been so pie-in-the-sky in our thinking that somebody reads this and says, "Yeah, nice idea, but it will never happen here."

So, that would be my hope -- that somebody takes away one single thing that's implementable in the near term within their organization.

Sarbiewski: What I’m hoping is that in these hundred-odd pages, executives in the enterprises we're talking to have the opportunity to take just a couple of hours and have somebody give them a chance to think about how important software is, and what the true life of an application is.

Once you start to go down that path, you start to say, wait a minute, 10 to 15 years of evolving this capability, what does that mean? When things are live and I’ve got a hot request from the business to make a change, what needs to happen? How much money will I spend on that?

The one "aha" moment is seeing that those 10 to 15 years matter, when I’m delivering value to the business and innovating for the business. In order to be successful during those 10 to 15 years, I will make different decisions when I build this thing. I will focus on a process.

I will build the automation to a different level, because I’ve stopped thinking that my job is done when I go live. If that’s truly the job, you’ll make a lot of shortcut decisions to get to go live. But, if you think bigger, you think about the full life of an application and what it delivers to the business.

All of a sudden, it makes a whole lot more sense to do things a bit differently, to set myself up for 10 years or 15 years of success with the business, as opposed to a moment when I can say, "Yup, I achieved a milestone."

For more information on Application Lifecycle Management and how to gain an advantage from application modernization, please click here.

Listen to the podcast. Find it on iTunes/iPod and Read a full transcript or download a copy. Sponsor: HP.

Transcript of a sponsored BriefingsDirect podcast, the third in a series discussing a new book on ALM and its goal of helping businesses become change-ready. Copyright Interarbor Solutions, LLC, 2005-2010. All rights reserved.

You may also be interested in: