Monday, June 28, 2010

Ariba Live discussion: How cloud alters landscape for ecommerce, procurement and supply chain management

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Ariba.

Welcome to a special BriefingsDirect podcast coming to you from the Ariba LIVE 2010 Conference in Orlando.

This podcast is a presentation of a May 25 stage-based panel event on the implications of cloud computing for procurement, supply-chain management, and a host of other business functions. For those of you unable to attend the actual conference, please now listen to this lively and informative panel by a group of noted industry analysts.

Here is the moderator of our discussion, Tim Minahan, Chief Marketing Officer at Ariba.

Minahan: When discussing heady topics like the cloud, procurement, and finance, and looking at the future of business-to-business (B2B) commerce, we thought it important for you to hear from the experts. So we have assembled a panel of the leading analysts -- the folks that you turn to to benchmark your performance, uncover best practices, and make IT buying decisions.

I'd like to welcome our panelists: Mickey North Rizza from AMR Research (a Gartner company), Chris Sawchuk from The Hackett Group, Robert Mahowald from IDC, and Bruce Guptill from Saugatuck Technology.

Here are some excerpts from the discussion:
Guptill: The first thing is to figure out how to handle this cloud thing. It's the single most disruptive influence that we've seen in not just IT, but in how IT is bought, used, paid for, and how that affects how everybody does business. So how is it accounted for? Who has responsibility for managing what aspects?

If you have some of it on-premise and some of it out in the cloud, who is responsible? How is it managed? How is that budgeted for? It changes the way we operate as a business, because it changes the way we spend, the way we buy, and the way we manage. It's very, very disruptive, and policies and practices really haven’t caught up yet to the reality, and we're not getting a breather. The change is accelerating.

Sawchuk: When we ask procurement executives what they are focused on going into 2010 from a technology standpoint, the number one area is simply making better use of the technology investments they have already made -- digesting them. So, it's a lot of the basics -- cleaning up our master data and just getting more utilization out of our eProcurement and eSourcing types of tools in the organization.

But there are a couple of emerging trends that are occurring in the most progressive procurement organizations, in three areas. One is around collaborative technologies. Why is it so difficult to do this in business, when it's so easy with Facebook and all that type of stuff in the non-business type of world? It's not just externally that this applies, but internally as well.

Number two, around better management of the knowledge and intelligence across the organization, structured, unstructured, internal, and external types of information.

And lastly, driving more agility into the procurement service delivery model, which includes the technology tools.

Mahowald: For the last 10 years or so, we have seen lines of business get more acclimated to using software-as-a-service (SaaS) offerings. Some of the lessons about how those services are delivered are now filtering back to IT.

Virtualization, automation, and standardization are finding their ways into our IT departments and they're finding ways to do things like reduce the number of physical assets they spend their time counting, and keep them up and running, and rely more and more on external services that can safely provide the functionality that their users require.

And the typical scenario is that, if I am in the line of business and I want to build an application, or I need to have access to an IT service, I've got to go to my IT team. It can often be long and time-consuming to get that thing spun up and tested, kick all the tires, and get it up and running in the environment that is being used.

The cloud offers a way to do that a lot more quickly, for less cost, in a way that is still as secure and authenticated as it would be in my IT shop, and probably done in a way that is much, much more service enabled, for the ultimate constituency I want to serve, my user, the internal user. So, it's a big opportunity.

North Rizza: Basically, what we're seeing is that companies have a lot of pent up demand over the last couple of years. They haven't been able to change some of their business processes and automate them the way they would like to. What they've been doing is standing back, trying to get more out of their ERP systems or basic business processes. They've had to make a lot of cuts and they're not getting everything they need. What we're finding now is that spending is starting to pick up.

We're also finding that companies are looking for alternative deployment models. They're starting to say, "What can I do above and beyond just the technology application? Where else can I look for services and other opportunities that are, one, going to quickly drive value to my line of business buyer, because those are the folks that do the business day in and day out? They're the ones that need to make a difference. And finally, how do I do it quickly, without a lot of disruption, very flexible, and a great investment, but a really quick return on that investment?"

Sawchuk: When we asked CFOs in the broader enterprise, coming into 2010, what was the number one area of focus for them, it was cash. When we asked the same question to the procurement executives and community, it was cost. Cash was number 10. So the question is, are we misaligned or do we feel that we have done everything we can over the last 18-24 months and there’s nothing more to do?

When you look at the data, the notion of procurement as being just cost focused is fading. We've got to get much more balanced in the way we actually deliver our value -- not just cost, but also working capital and other areas as well.

You wanted some examples of what these world-class organizations do around working capital and how they do it well. Number one, they measure it. They bring visibility to it. They put it on their scorecards. They track metrics like cash conversion cycle time, DPO, DIO, and so on (a quick worked example of these metrics follows below).

Number two, they manage it across the source-to-settle, purchase-to-pay process.

Number three, they create collaborative communities with procurement, with the business, finance, and treasury, around working capital strategies and objectives.

And, fourth, they actually compensate for it. We see organizations out there where some of the procurement folks, and the folks on these collaborative communities, are compensated on it. Up to one-third of their compensation is based on their achievement of working capital objectives.
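As a rough illustration of the working capital metrics mentioned above, the cash conversion cycle is conventionally computed as days inventory outstanding (DIO) plus days sales outstanding (DSO) minus days payable outstanding (DPO). The figures in this short sketch are hypothetical, not from the panel:

```python
# Hypothetical illustration of the working capital metrics the panel mentions:
# DIO (days inventory outstanding), DSO (days sales outstanding),
# DPO (days payable outstanding), and the cash conversion cycle (CCC).

def cash_conversion_cycle(dio: float, dso: float, dpo: float) -> float:
    """CCC = DIO + DSO - DPO; a lower number ties up less working capital."""
    return dio + dso - dpo

# Example figures (invented for illustration only).
dio = 40.0   # inventory sits roughly 40 days before it is sold
dso = 35.0   # customers pay roughly 35 days after invoicing
dpo = 45.0   # suppliers are paid roughly 45 days after invoicing

print(cash_conversion_cycle(dio, dso, dpo))        # 40 + 35 - 45 = 30 days
print(cash_conversion_cycle(dio, dso, dpo=60.0))   # 40 + 35 - 60 = 15 days
```

Extending supplier payment terms from 45 to 60 days, as in the example Sawchuk gives a little later, would shave 15 days off the cash conversion cycle, all else being equal.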

Mahowald: In many IT organizations, as much as 55 percent of the budget is spent on keeping systems running, and that involves paying for the ongoing license, maintenance, and support of software and hardware, and all the power and pipe costs it takes to run an IT center.

The ability to reduce some of those costs by outsourcing them to lower-cost subscription models that count as operating costs is an enormously helpful transition for many customers. CIOs that we talk to are excited about introducing cloud services, and also what we call naked compute services or offsite storage to improve the efficiency of certain widely used applications, or offsite development platforms, where they can actually build applications.

It's a major activity for many IT organizations to build new applications, objects, and customizations on-site. If they can move that offsite and not have to pay application licenses or infrastructure costs, that's a big help to them in lowering their fixed-cost structure. Ultimately, it's a big help in making IT organizations much leaner and more responsive to business needs.

Guptill: If you can take the software and put it in the cloud, and if you can take the hardware and the infrastructure service, the IT, and put it in the cloud and take advantage of that, we have all these vendors -- let's take Ariba for an example -- that have these terrific technologies, applications, and the expertise to use them. Why can’t that be delivered and used as a service, as a utility, cloud-based or otherwise?

Then, we have the business logic, we have the software, the applications, the functionality, and the technology, to make it happen. We can do that as an as-needed, on-demand, or subscription basis. It removes a lot of the fixed cost that we've been talking about. It reduces our reliance on fixed assets or fixed cost for what could be cyclical or temporary needs in terms of functionality. It's basically outsourcing business tasks, business functions, or business processes to the cloud. It's "cloud temping" basically.

Over time, these things start from very simple, straightforward, and standardized capabilities, similar to what SaaS or infrastructure as a service (IaaS) started as, but we are seeing them start to evolve into more configurable or more customizable capabilities.

Pool of functionality

So that we can now -- it's just starting now, but will be much more over the course of the next four or five years -- take advantage of a large pool of business functionality that we don't want to buy. It's not just technology. It's not just software. But it's the business tasks that we don't want to buy, we don't want to train, and we don't want on our books. We can rent those as we need them, and when the work is done, they retire back to the cloud.

North Rizza: We found that 96 percent of those in our studies are using cloud-based solutions, but out of that 96 percent, 46 percent are geared into a hybrid cloud solution. And by hybrid we mean that they're actually using cloud technology applications. They're optimizing those against their IT on-premise investments, and further, they're extending the capabilities into cloud services technology. So they're looking at the whole gamut.

When it's executed well and done well, it allows you to execute on your working capital and supplier payment types of strategies.



The second part of that is the next leading area, and that’s 41 percent around a private cloud. The difference there is that they're looking at technology capabilities from the cloud and they're putting that with their ERP or on-premise IT investments, but they're not necessarily extending those capabilities.

... We found that those that actually deployed cloud solutions, technologies, and services and put them out there, found anywhere from 5-7 percent difference in greater value, just by deploying, versus those that are thinking about it or trying to get into the mode of, "We want to go down that path and we are thinking about that investment process."

What were the benefits? It's really interesting. The first is that they were able to drive more revenue. Understandably, if we get those cloud-based solutions, we're going to drive more revenue. If you think about that gap of 5-9 percent, that's huge from a revenue standpoint.

Two other points: the cost-to-serve model. They're able to look at what their costs are -- what it's costing them to serve, from the enterprise all the way through their trading partners, and all the way back out to where the demand cycle begins, from a supply chain perspective. They get more savings, and those two go hand in hand. Then lastly, it's around that business cycle time improvement aspect.

... So, while we see this as a big area, and companies keep going down this path, one of the things we also find is that it really means a sharper focus on master data management (MDM), your business processes, how that’s orchestrated, both inside the enterprise and externally into your trading partners, and understanding your governance structure. We'll see more and more of that come out, as time goes on here.

Sawchuk: We've been talking about the cloud. How does it help? First of all, and you've heard a lot about this, cloud gives you much faster, easier, and more economical access to technology solutions. Now that you're connected, you can speed the transactions across your supply base, etc.

More importantly, it gives you much more predictability in your ability to execute. For example, a lot of us say we moved our terms. We moved our terms from 45 to 60 days. When we do that, the suppliers say, "When we were on 45, you couldn't pay me on time. You moved it to 60. Can you pay me now on time?" It gives you some predictability in the execution. That's important to them.

Number two is, if you negotiate early pay discounts, you have the ability to execute and take advantage of those kinds of things that you have in your commercial agreement.

The cloud also does a couple of things. It certainly brings much more visibility to the overall activities that are occurring across the entire source-to-settle process. But also, once you are connected in this whole cloud environment, it certainly gives you access to intelligent services that exist out there. I'm talking about working capital, things like information about the financial health of your suppliers, their historical performance, the cost of capital, etc.

Mahowald: We talked about lower cost, leaner IT organizations, because they are able to source outside of the organization, and get lower cost services. We think that kind of collision between outside the cloud and inside the organization is going to change and it could change business pretty dramatically.

Where business happens

Another thing is that, when you've got solutions that are brought in by business users -- maybe it's a salesforce.com or some other SaaS application -- it's important to them, and important for them, to get agility and speed to that functionality. But there are going to be many places where you are going to be brought outside of your organization, because that's where business happens.

Whether it's in a commerce cloud or another forum or marketplace for the exchange of products, you will be forced there essentially to do business, to maintain your presence in the game, see that transparency, and have it help your business. We think that's probably the most likely place for that collision to occur.

Guptill: We've researched, interviewed, and surveyed a little over 7,000 executives worldwide -- finance, procurement, HR, IT, line of business -- over the last six or seven years about what it is that they want to do with cloud IT, whether it's SaaS or IaaS, platform as a service (PaaS) or whatever. In every single case so far, they're using it to add to what they have. It's filling in the gaps. It's enabling better efficiencies, better cost. It's delivering benefits that they could not get earlier cost effectively.

When you think about it, that's been the pattern of IT investment over the last 50-60 years. It's very, very rare that we replace what we have with whatever new is coming in. There's all this hype that the new stuff is coming and it's going to change everything; it's going to get rid of this; we're going to dump that.

Our latest survey research, which we are just in the process of publishing right now, very strongly indicates that within four to five years, by year end 2015, more than 50 percent of new IT spending will be in the cloud for the first time. That’s within four or five years. But, that means that about 50 percent, or a little less than half, is still going to be on-premise, so that stuff is not going away.

So, over time, what's going to happen is that we have a series of decisions to make. What costs are we trying to control? How are we going to change our purchasing, procurement, management, payment, relationship management, and so on?

Then, as our traditional on-premise systems, not all of them, but as each one comes up, as they reach the end of their useful life, what do we do? Because traditionally, we would add to them, we would just build out around them, until they take over the entire data center, or we would outsource. Now, we have a combination. We can put some in the cloud and some on-premise.

Those are the decisions that we're going to have to face, as we go ahead. What goes out there? What stays in here? What goes in between? The stuff has to be made to work together. Who has that responsibility? What's it going to cost? How is that going to be budgeted? And how are we going to manage all this?
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Ariba.

Thursday, June 24, 2010

HP unleashes barrage of offerings to add more 'oomph' to Converged Infrastructure benefits

Building on the momentum of its Converged Infrastructure strategy, HP this week announced new server, storage, network and power management technologies to enable clients to shift millions of dollars in operational costs to activities that drive business innovation.

HP’s Converged Infrastructure provides a blueprint for clients that want to eliminate sprawl, complexity and excess maintenance costs. The introductions at HP Tech Forum in Las Vegas include advancements in HP BladeSystem, including several new servers, as well as innovations in HP Virtual Connect and HP BladeSystem Matrix.

Also announced were power management technologies to automate energy awareness and control of IT systems across the data center, as well as storage software that provides new levels of simplicity and automation through a single, unified architecture for data deduplication. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The Tech Forum announcements come on the heels of the previous week's HP Software Universe news.

Among the new offerings are:
  • Three new HP ProLiant scale-up servers, which offer several industry firsts with memory footprints of up to 2 terabytes (TB) and “self-healing” memory capabilities that maximize application uptime with a 200 percent boost in availability. Optimized for data-intensive workloads, the servers reduce data center footprint, complexity, and costs with a consolidation ratio up to 91-to-1.

  • Seven new HP ProLiant G7 server blades with 1 TB of memory and integrated 10Gb Virtual Connect FlexFabric technology for I/O scalability. These systems can support up to four times more virtual machines, while requiring 66 percent less hardware.

  • The new HP Virtual Connect FlexFabric module, a single device that connects servers to any Fibre Channel, Ethernet and iSCSI network. This eliminates the need for multiple interconnects.

  • HP BladeSystem Matrix software, which offers one-touch, self-service provisioning of applications. The “all inclusive” converged infrastructure offering enables private clouds by allowing clients to deploy complex IT environments in minutes. As a result, HP claims that clients can reduce their total cost of ownership up to 56 percent compared to traditional IT infrastructures.

  • HP Intelligent Power Discovery, which creates an automated, energy-aware network between HP ProLiant servers, third-party facility management tools, and data center power grids. The software provides greater transparency and insight into power usage by creating a real-time, graphical map of energy usage across servers and facilities. By accurately provisioning energy, HP estimates that clients can extend the lives of their data centers and save up to $5 million for every 1,000 servers in one year.

  • HP StoreOnce, a solution to automate data deduplication across the enterprise with a single unified architecture. It is built on patented innovation and features designed by HP Labs, the company's research arm, and reduces costs by eliminating multiple stored versions of the same data.

Wednesday, June 23, 2010

I collaborate, therefore I think, therefore I am ... an enterprise

They say we have big brains because we have all needed to work better together over the past 150,000 years. The more people work together, the more tools they need to make collaboration a productive art, rather than a befuddled mess.

Rather than wait for human evolution to keep up the pace, Salesforce.com yesterday delivered its Chatter cloud-based collaboration service, based on social networking methods more common on Facebook or Twitter.

Directed first at enterprise help desk and customer service processes, Chatter has the strong potential to become a company-wide collaboration accelerant. And if that happens, more data and insights into what people do to solve problems can be identified, refined, repeated, automated, and extended. The cloud model makes this easy to afford, to get to and to expand.

Chatter, and its ilk quickly sprouting up elsewhere, can foster better, targeted, and self-directing collaboration; can spur and capture the data about processes in progress; and can become a service feature within nearly any business application, process, or ecosystem.

Email cannot do this. Instant messaging, no. Portals, not quite. Chatter shows that email's role is overextended, counter-productive, and in need of a replacement.

But what caught me by surprise in watching Salesforce.com's Chairman and CEO Marc Benioff introduce Chatter yesterday in San Jose, CA, was not that consumer-focused social media motifs have a place in the productivity enterprise portfolio. What screams to me of "killer app" is that the data from social interaction are freer in the enterprise than they are on the open Web.

The data and analytics derived from Chatter are not defined by privacy boundaries, or the attempt to define and maintain them. Any user company controls the data, and so the data is free to be cultivated, consumed, analyzed, reused, extended, captured, codified, integrated, innovated from.

The data from social media and network activities in the enterprise therefore is far more free and open to the enterprise needs than Facebook, Twitter, Yahoo or Google are free to use or share the data they have about what their (your) users do.

Virtual scrum for the rest of us

Chatter helps to sort out what comes next when processes and people collide. Like with our communicative ancestors as they faced challenges in the dynamic wild, self-selecting groupings, pairings and open-ended dialogue about how to react to a situation can arise and amend via Chatter. This is a tool that entices and abets collaboration, rather than confines or stifles it. Too often machine-made silos confine today's online interactions into point-to-point email threads that swiftly run aground.

The sweet spots to try out these cloud- and participant-accelerated cooperative scrums are help desk, service desk and IT. But it can and will go much further. Already more than 6,000 of Salesforce.com's customers have adopted the new social collaboration application, out of 77,300 potential customers for the San Francisco-based SaaS business applications and services development platform provider.

HP has seen the powerful confluence of IT functions and social networking tools and UIs, as evidenced by its limited-beta 48Upper SaaS collaboration tool. I received a demo of 48Upper last week, and all the things that make Chatter powerful work there as well. I hope HP targets 48Upper beyond the IT department. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

For while such uses as software development and IT support are no-brainers for Chatter and 48Upper, as they align well with agile and scrum methods, this is but the beginning. In a fast-paced business world, these app dev principles now have huge relevancy across many more business functions and processes. And Chatter can be the catalyst for extending them. It fits in well with Japanese kaizen and Deming-derived thinking too.

What's more, Metcalfe's Law has a supporting role, in that the more people that use Chatter, the more valuable it is; and there's a qualitative branch to the support -- the better the dialogue and sharing, the higher quality the thinking in the Chatter ongoing scrum, the more everyone benefits. This is the 100,000-year-old self-reinforcing frontal lobes cognition that makes us our human best ... together.
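For readers who want the formula behind that claim, Metcalfe's Law holds that a network's potential value grows with the number of possible connections among its n users, i.e. roughly with the square of the user count (this is the standard statement of the law, not anything specific to Chatter):

$$ V \;\propto\; \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2} $$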

HP's Anton Knolmar recaps highlights of Software Universe conference, looks to future

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast, an interview with Anton Knolmar, Vice President of Marketing for HP Software & Solutions, conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

The one-on-one discussion comes to you from the HP Software Universe 2010 Conference in Washington D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

Here are some excerpts:
Knolmar: I'm really excited about having so many customers here. We've been sold-out, which is a good sign. Customers are also really interested in sharing their solutions and sharing their information with us. At the end of the day, where we are totally committed is providing value to those customers.

We kicked it off the first day on the main stage, with our new Executive Vice President, Bill Veghte, talking about IT being at an inflection point and how, with our solution portfolio, we can help our customers provide even greater value for their organizations. That was a good lead-in.

I was even more excited, when we had customers on stage. Delta Air Lines’ Theresa Wise did a fantastic job explaining the challenges they were facing with integrating and acquiring Northwest Airlines, and getting those two companies together using our portfolio.

We got compliments and feedback about Dara Torres and what she showed on stage here -- how you can compete, regardless of age, if you try to give your best in your personal, private, and business life. This was a good learning experience for all of our customers.

Then, we moved to the next event, our blockbuster product announcement, BSM 9.0, rolling this one out across the world, bringing different solutions into a single pane of glass, with automation and simplification.

The feedback we received from our customers is that this is exactly what they've been looking for. And, they are even looking forward to more simplification. The simpler we can make it for them in their complex life, in their complex environment, whatever comes in from cloud, from virtualization, from new technologies, the better they all feel and the better we can serve them.

It wasn't just one customer who had one story to tell. We had to set aside an executive track, where we had different levels of customers talking about the problems they have and how they're facing them. It's not "one solution fits all," and that's what we are trying to do with our customers as well -- a really customized solution approach.

What they're telling us in terms of this broad range of delivery is that it's a huge opportunity for everyone in the cloud. Also, everyone is saying, "We hate the word cloud," but that's the word everyone uses. The delivery models that are out there at the moment, the new technology, the mobility factor, the growth of smartphones and mobile devices -- that is a big thing, and will be even more so in the future.

Being future-ready

Our customers are still challenged with their current environment, with their legacy environment. They say, "We still have mainframes to manage, and all this new technology is coming in here." What they're trying to do -- and what we're trying to equip them for with the current portfolio that we have -- is to manage, monitor, and make the best of their current investments, but also, with our solutions portfolio, to be future-ready.

So whatever new technology comes out, they're equipped and they can adopt it immediately in their current environment. They should be really happy with what we announced this week, being future-ready for their future investments, as well as whatever comes up.

This was an exciting moment for us, getting our blockbuster out. A new blockbuster is coming, so stay tuned for that. That happens in September. We will also take Software Universe on the road. The next event is happening in Israel in a few weeks. We have a big crowd coming in, 1,500 customers, which is a huge gathering for Israelis.

The other piece is that we have HP TechForum, our sister conference for the enterprise business, going on in Las Vegas this week. We're definitely excited. Stay tuned here. We're in Europe, in Barcelona, at the end of November, with our next Software Universe event.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Tuesday, June 22, 2010

HP's Anand Eswaran on pragmatic new approaches to IT solutions and simplicity for 'everything as a service' era

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast, an interview with Anand Eswaran, Vice President of Professional Services for HP Software & Solutions, conducted by Dana Gardner, Principal Analyst at Interarbor Solutions.

The one-on-one discussion comes to you from the HP Software Universe 2010 Conference in Washington D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

Here are some excerpts:
Eswaran: When a customer is thinking about a solution, they make a buying decision. The next step for them is to deploy the products they buy as a result of that solution, which they committed to from a roadmap standpoint. Once they finish that, they have to operate and maintain the solution they put in place.

The classic problem in the industry is that, when the customer has a problem, after they have deployed the solution, they call the support organization. The support organization, if they determine the problem is actually with the project and the customizations, cannot support it. Then, the customer will be punted back to the consulting organization, whoever they used. In some ways, the industry plays a little bit of ping-pong with the customer, which is a really bad place to be.

What we're trying to do is get to the heart of it and say that we cannot introduce our organizational complexity to the customer. We want to make it simple. We want to make it transparent to them.

Everybody talks about business outcomes, but if there are multiple organizations responsible for the same business outcome for the customer, then, in my view, nobody is responsible for the business outcome for the customer. That’s the second thing at the heart of the problem we're trying to solve.

At Software Universe we announced the launch of a new portfolio element called Solution Management Services, and I'll refer to it as SMS through this conversation.

Very briefly, what it does is offer the ability for us to support the entire solution for the customer, which is different from the past, where software companies could only support the product. That’s the heart of what it means. But, it’s the first step in a very large industry transformation we are ushering in.

Services convergence

Where we're going with this is that we're looking at what we call the concept of services convergence, where we're trying to make sure that we support the full solution for the customer, remove internal organizational complexity, and truly commit to, and take accountability for, the business outcome for the customer.

Specifically, what it means is that we've put up an 18-month roadmap to fuse the services and the support organizations into one entity. We basically take care of the customer across the full lifecycle of the solution -- build the solution, deploy the solution, and maintain the solution. Then they have one entity, one organization, one set of people to go to across the entire lifecycle. That's what we're doing.

To put it back in the context of what I talked about at the new portfolio launch, SMS is the first step and a bridge to get to eventual services convergence. SMS is a new portfolio with which the consulting organization is offering the ability to support the solution, until we get to one entity as true services in front of the customer. That’s what SMS is. It’s a bridge to get to services convergence.

Our goal is to support the full solution, no matter what percentage of it is not HP Software products.



The cool part is that this is industry leading. You don't see services convergence anywhere else; that's what makes it industry leading.

Just as SMS is the first step toward services convergence, services convergence is the critical step to offering "Everything as a Service" for the customer. If you don’t have the organizations aligned internally, if you don’t have the ability to truly support the full lifecycle for the customer, you can never get to a point of offering Everything as a Service for the customer.

If you look at services as an industry, it hasn't evolved for the last 40 or 50 years. It’s the only industry in technology which has remained fairly static. Outside of a little bit of inflection on labor arbitrage, offshoring, and the entire BPO industry, which emerged in the 1990s, it's not changed.

Moving the needle

Our goal is to move the needle to have the ability to offer Everything as a Service. Anything that is noncompetitive, anything that is not core to the business of an organization, should be a commodity and should be a service. Services convergence allows us to offer Everything as a Service to the customer. That’s where we are heading.

As we look at it, we see the biggest value in first treating it as a horizontal. Because this is going to be such an inflection point in how technology is consumed by the customers, we want to get the process, we want to get the outcomes, and we want to get what this means for the customer right the first time.

Once we get there, the obvious next step is to overlay that horizontal process of offering Everything as a Service, with vertical and industry taxonomies.

When you talk about inflection points in the history of technology, the Internet probably was the biggest so far. We're probably at something that is going to be as big, in terms of how consumption happens for customers. Everything non-core, everything noncompetitive is a service, is a commodity.

There are many different mechanisms of consumption. Cloud is one of them. It’s going to take a little bit of maturity for customers to evolve to a private cloud, and then eventually consume anything non-core and noncompetitive as part of the public cloud.

We're getting geared, whether it’s infrastructure, data centers, software assets, automation software, or whether it is consulting expertise, to weave all of that together. We've geared up now to be able, as a best practice, to offer multi-source, hybrid delivery, depending on, one, the customer appetite, and two, where we want to lead the industry, not react to the industry.

A different approach

If you look at the last few years and at the roadmap which HP has built, whether it is software assets, like Mercury, Peregrine, Opsware, and all of it coming together, whether it is the consulting assets, like the acquisition of EDS, which is now called HP Enterprise Services (ES), there was a method to the madness.

We want to approach [the market] in a very different way. We want to tell the customer, "You have a 5 percent defect level across the entire stack, from databases and networks, all the way up to your application layer. And that’s causing you a spend of $200 million to offer true business outcomes to your customer, the business."

Instead of offering a project to help them mitigate the risk and cost, our offer is different. We are saying, "We'll take a 5 percent defect level and take it to 2.5 percent in 18 months. That will save you north of $100 million in cost." Our pricing proposal at that point is a percentage of the money we save you. That's truly getting to the gut of business outcomes for the customer.

It also does one really cool thing. It changes the pattern of approvals that anybody needs to get to go do a project, because we are talking about money and tangible outcomes, which we will bring about for you.

The last five years is the reason we're at the point that we are going to lead the industry in offering Everything as a Service.



That's not going to be possible without the assets we have consolidated from a software, hardware, or ES standpoint. All of this comes together and that makes it possible.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Fiberlink Communications rolls out cloud-based patch-management service

Keeping mobile devices patched and protected -- and making it safer for enterprise employees to work on the Web -- is giving IT admins plenty of headaches. Fiberlink Communications is offering an aspirin, of sorts, with a new cloud-based patch management service.

Dubbed MaaS360 Patch Management from the Cloud Service, the service works to mitigate the risks of the mobile, Internet-connected workforce by streamlining management and deployment of security patches to PCs, laptops, and mobile devices that connect to external wireless networks. The new service promises to protect against data breaches while battling rising help desk costs and slowed employee productivity.

“More than ever, patch management is a critical part of IT operations. Enterprises cannot just rely on Microsoft’s monthly patch updates for their entire patch maintenance strategy,” says John Nielsen, a product manager at Fiberlink. Nielsen says the service also covers common applications from vendors like Apple, Adobe and Sun.

The case for cloud-based patch deployment

IT administrators are already aware of how dangerous it is not to keep security software and patches up to date, but Fiberlink is nonetheless hammering home the message about the perils of inadequate patch management because it sees a disconnect between the knowledge of the danger and actual IT practices.

At issue may be enterprise IT policies that only focus on operating system patches and fail to take into account Java, QuickTime and other common apps in the enterprise today. But when malware infects those applications, it can send a ripple throughout the enterprise. Fiberlink is pointing to industry research to bolster its case for keeping software and patches current.

For example, the Ponemon Institute reports that the cost of a data breach increased to $6.75 million in 2009. And the Quant Patch Management Survey reveals that 50 percent of enterprises do not have a formal patch-management process, 54 percent do not measure compliance with patch-management policies and 68 percent do not track patch time-to-deployment.

MaaS360 in action


Fiberlink is aiming to make it so convenient to keep systems and software up to date with the MaaS360 Patch Management from the Cloud Service that the enterprise will take notice. The service not only tracks and pushes patches for operating systems, applications and vulnerabilities, it also uses analysis techniques to make sure the patches are applied properly and that all files are current. The service offers up reporting and analytics so IT admins can monitor what is going on.

“Prior to MaaS360 we had to use four different consoles to check the AV, firewall and patch compliance of our corporate and remote users,” says Bill Dawson, Technical Services Manager, Mizuno USA. “The MaaS360 portal brings all that data together so we can quickly assess our compliance level and zoom in on problem areas with the drill-down function. For the first time in my memory we don't have to jump through hoops to track our software.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, June 21, 2010

Aster Data delivers 30 analytic packages and MapReduce functions for mainstream data analytics

Aster Data, which provides a data management and data processing platform for big data analytic applications, today announced the delivery of over 30 ready-to-use advanced analytic packages and more than 1,000 MapReduce-ready functions to enable rapid development of rich analytic applications on big data sets.

The solution is a massively parallel database with an integrated analytics engine that leverages the MapReduce framework for large-scale data processing and couples SQL with MapReduce.

The expanded suite of pre-packaged SQL-MapReduce and MapReduce-ready functions accelerates the ability to build rich analytic applications that process terabytes to petabytes of data. [Disclosure: Aster Data, San Carlos, Calif., is a sponsor of BriefingsDirect podcasts.]

Traditional data management platforms and analytic solutions do not scale to big data volumes and restrict business insight to views that only represent a sample of data, which can lead to undiscovered patterns, restricted analysis and missed critical events. MapReduce is emerging as a parallel data processing standard, but often requires extensive learning time and specialized programming skills.

Coupling the SQL language with MapReduce eliminates the need to learn MapReduce programming or parallel programming concepts (a brief sketch of the kind of hand-coded logic this avoids appears after the list below). Other benefits of this coupling include:
  • Making MapReduce applications usable by anyone with a SQL skill-set.
  • Enabling rich analytic applications to be built in days due to the simplicity of SQL-MapReduce and Aster Data’s suite of pre-built analytic functions.
  • Delivering ultra-high performance on big data, achieved by embedding 100 percent of the analytics processing in-database, eliminating data movement.
  • Automatically parallelizing both the data and application processing with SQL-MapReduce for extremely high performance on large data sets.
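To make that concrete, here is a minimal, hypothetical sketch of the kind of map/reduce-style logic (a simple tokenize-and-count over documents) that developers would otherwise hand-code and schedule themselves. It is illustrative only and does not use Aster Data's actual SQL-MapReduce syntax or APIs:

```python
# Minimal, illustrative map/reduce-style word count in plain Python.
# This is NOT Aster Data's SQL-MapReduce API -- just a sketch of the
# parallelizable tokenize-and-count pattern such pre-built functions package up.
from collections import Counter
from itertools import chain

def map_phase(document):
    """Emit (token, 1) pairs for each word in a document."""
    return [(token.lower(), 1) for token in document.split()]

def reduce_phase(pairs):
    """Sum the counts for each token."""
    counts = Counter()
    for token, n in pairs:
        counts[token] += n
    return counts

documents = [
    "big data needs parallel processing",
    "SQL plus MapReduce makes parallel processing approachable",
]

# In a real cluster the map phase runs on many nodes in parallel;
# here it runs sequentially for simplicity.
mapped = chain.from_iterable(map_phase(doc) for doc in documents)
print(reduce_phase(mapped).most_common(3))
```

In the SQL-MapReduce model described above, logic like this would be invoked from an ordinary SQL statement and parallelized by the database, rather than written, deployed, and scheduled by hand.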
New functions

Aster Data also announced today a significant expansion in the library of MapReduce-ready functions available in Aster Data nCluster. The Aster Data nPath function is only one example of more than 1,000 functions now delivered through over 40 packages available with the Aster Data Analytic Foundation for Aster Data nCluster 4.5 and above.

These new functions cover a wide range of advanced analytic use cases, from graph analysis to statistical analysis to predictive analytics, bringing high-value business functions out of the box that accelerate application development. Examples include:
  • Text Analysis: Allows customers to tokenize text and count the occurrences of words, as well as track the positions of words and multi-word phrases.
  • Cluster Analysis: Includes segmentation techniques, like k-Means, which groups data into naturally occurring clusters (a minimal sketch of the idea follows this list).
  • Utilities: Includes high-value data transformation computations. For example, developers can now simply pack and unpack nested data, as well as anti-select, that is, return all columns except those that are specified.
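As a rough illustration of the segmentation idea named above, here is a minimal, self-contained k-Means sketch in Python. It is illustrative only; Aster Data ships k-Means as a pre-built in-database function rather than client-side code like this, and the data points below are invented:

```python
# Minimal k-Means clustering on 2-D points, for illustration only.
import random
import math

def kmeans(points, k, iterations=20, seed=0):
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for i, cluster in enumerate(clusters):
            if cluster:
                centroids[i] = tuple(sum(dim) / len(cluster) for dim in zip(*cluster))
    return centroids, clusters

# Two obvious groups of points; k-Means should recover their centers.
data = [(1.0, 1.1), (0.9, 1.0), (1.2, 0.8), (8.0, 8.2), (7.9, 8.1), (8.3, 7.8)]
centers, groups = kmeans(data, k=2)
print(centers)
```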
Aster also revealed new partners that are working closely with Aster Data’s data-analytics server, nCluster, to simplify development of highly-advanced and interactive analytic applications that process extremely large data volumes. Partners using SQL-MapReduce for rich analytics include Cobi Systems, Ermas Consulting, and Impetus Technologies.

Amiya Mansingh of Cobi Systems said, “There’s no question that Aster Data’s solution and SQL-MapReduce offers a powerful, yet easy-to-use framework to build rich, high performance applications on big data sets. We have found that analytics that previously took weeks to months of SQL coding can now be built in days with richer analytic power than what is possible with SQL alone. ”

Friday, June 18, 2010

Motorola shows dramatic savings in IT operations costs with 'ERP for IT' tools based on HP PPM

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series, coming to you from the HP Software Universe 2010 Conference in Washington, D.C. We're here the week of June 14, 2010, to explore some major enterprise software and solutions trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

Our next customer case study focuses on Motorola in the area of productivity, cost optimization, and their IT efficiency efforts -- a winner of HP's Excellence Award this year.

We're going to hear more about that from Judy Murrah, Senior Director of IT at Motorola. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Murrah: We sat down with our business partners, top leadership on both sides -- our CIO and the business presidents and executive teams -- and talked through every business function. That’s the place where we started and where we saw the magic unfold.

We looked at it on a scale of business competitiveness and how important that particular business function is to the business. Then, on the other axis, if you picture the famous 2×2 matrix, we looked at the complexity and cost of that business function.

We did that for every business function we have. We laid it out and then talked through where we would like those functions to move in the future. Mapping it out visually helped us see that some areas were simply costing more money than the value they brought to the business. When you put the data on a piece of paper and have a visual, it is a very good way to align business and IT around a common goal.
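As a purely hypothetical sketch of the kind of 2x2 mapping Murrah describes, the snippet below scores a few made-up business functions on the two axes she mentions (business competitiveness and complexity/cost) and buckets them into quadrants. The function names, scores, and quadrant labels are invented for illustration, not Motorola's actual data:

```python
# Hypothetical 2x2 portfolio mapping: business competitiveness vs. complexity/cost.
# All names, scores (1-10), and quadrant labels are invented for illustration.
functions = {
    "Engineering":       {"competitiveness": 9, "complexity_cost": 8},
    "Order management":  {"competitiveness": 6, "complexity_cost": 7},
    "Travel & expense":  {"competitiveness": 2, "complexity_cost": 6},
    "Payroll":           {"competitiveness": 2, "complexity_cost": 3},
}

def quadrant(scores, threshold=5):
    high_value = scores["competitiveness"] > threshold
    high_cost = scores["complexity_cost"] > threshold
    if high_value and high_cost:
        return "critical but costly: invest to simplify"
    if high_value:
        return "differentiator: protect and extend"
    if high_cost:
        return "costs more than the value it brings: simplify or eliminate"
    return "commodity: maintain at low cost"

for name, scores in functions.items():
    print(f"{name}: {quadrant(scores)}")
```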

... You don’t really think too much about change and cost optimization being related, but we have had, over time, a very complex IT environment grow. We have thousands of systems in a company that has grown organically and through mergers, acquisitions, and divestitures.

Just to give you an example, if we talk about engineering as a business function, to Motorola, which is a technology company, that’s a critical competitive differentiator, very important, high on the scale of competitiveness. If we look at the complexity and cost of running that today, in Motorola, we have a lot of systems and it’s a high-cost area.

We have somewhere in the neighborhood of 1,800 systems in the company. We manage about 1,000 projects per year that flow out of these decisions. We have about 1,500 employees in the IT organization and are very heavily outsourced in some of the functions. So, we have another few thousand folks who we consider a part of the team, and that’s who have all made this happen.

In order to really be part of the business imperatives to move forward in next-generation business processes, it was too complex to make changes. So, we focused on reducing those systems and doing it in a way that was directly aligned to business change and the directions they would like to go into.

Cost optimization is top of mind

My role at Motorola IT is in what we call CIO Operations. I'm responsible for our project management office (PMO) portfolio, quality, communications, and other activities that support our IT operations. Cost optimization is on everybody’s mind these days, especially with the economy the way it is, and with many business initiatives out there.

The only way we could have managed this is through our implementation of one tool and one process that's used across the whole Motorola IT environment -- HP's Project and Portfolio Management Center (PPM). It gives us one place where we contain our "source of truth" for our investment dollars, for the priorities of the business requests coming through, and for the things that we've decided to work on.

In that tool, we have every one of our people resources named, as well as what they're working on, and we look at their utilization and movement to the most critical areas. We also manage our project execution to the timelines, schedules, and budgets that we commit to our business partners.

Dashboards and reporting

What’s very important then is that all of this underlying data and management process that we use can be presented back to the business in very good dashboards and reporting, so that we all stay on top of where we are and can be proactive on change, if it’s needed.

About a year ago we moved from a hosted environment, internal to Motorola, to the HP software-as-a-service (SaaS) environment. It works like a charm. No issues with performance. We have had great responsiveness from HP. It has also helped reduce our support costs by somewhere around 40 to 50 percent.

Moving from hosted to SaaS didn’t affect usability, adoption, or anything. That really was almost seamless. We were using the same application before and after.

I always talk about how IT is sometimes like the cobbler’s children, as the old saying goes. It’s very difficult to justify the investment in IT tools at some points in time, unless you have ones like this, that are showing payback to the business and you use them in a way that everyone is now depending on it. It does become the enterprise resource planning (ERP) system of the IT organization.

In the last two years we have reduced our cost structure by about 40 percent. That is a big number to do while the business is operating. We have also, on our large projects that we run through the system, shown about a 150 percent payback or return on investment (ROI) for those. That means that the value of the investment for us was placed in the right places.

We've been able to reduce IT support costs by about 25 percent. Previous to this more consolidated system, we were operating in such silos that there were many people doing the same things. So by consolidating, we eliminated about 25 percent of the wasted work.

I think a couple of areas we need to work on going forward are in application support: bringing the tool to bear on managing resources, activities, and support operations; tying it a little more tightly into our financial management; and getting a little more granular on skills and our ability to move our resources around from place to place.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.
