Wednesday, October 7, 2009

Survey says slow, kludgy business processes hamper competitiveness

Corporations, are your business processes slowing you down? If so, you are in good company. Seventy-two percent of organizations say their business processes take too long and need to be streamlined.

So says a new independent survey conducted by Vanson Bourne for Progress Software.

The survey had a single goal: to determine the tools and processes large companies have put in place to support operational responsiveness and the ability to make "real-time" decisions. Vanson Bourne surveyed 400 large companies in the United States and Western Europe to develop its findings.

The bottom line: An overwhelming majority of businesses still feel they have a ways to go before they are equipped to respond to market or customer changes quickly enough to compete well in a global marketplace.

“The quest for faster operational responsiveness is becoming more urgent now that external factors such as social networking have boosted speed of response,” says Dr. Giles Nelson, senior director of strategy at the Apama division of Progress Software. “If organizations can’t keep up with the pace of customer feedback, they will find themselves exposed to competitive threats.”

I recently reached a similar conclusion in a podcast discussion with IT analyst Howard Dresner, with an emphasis on business intelligence (BI) in the stew of real-time requirements. Other firms I've worked with, such as Active Endpoints and BP Logix, call the value "nimble," or the ability to quickly orchestrate and adapt processes.

[UPDATE: TIBCO today delivered its iProcess Spotfire product for real-time BI aligned to business process management.]

Sure is a lot of emphasis on real-time data, analysis and process reactivity nowadays! No process like the present, I always say. [Disclosure: TIBCO and Progress are sponsors of BriefingsDirect podcasts.]

On average, 22 percent of U.S. companies surveyed by Vanson Bourne admitted that, by the time they noticed it, they had missed the opportunity to react competitively to a change or trend affecting one of their processes. A lack of information seems to be fueling the problem. More than half of companies identified information gaps in decision-making as a cause.

The good news is that surveyed companies have solutions to the information gap in mind, namely access to real-time data. Ninety-four percent of companies cited the importance of real-time data – and the majority of those companies are making moves to gather it. Some 82 percent are planning to invest in real-time technology by mid-2010 in an effort to speed up internal processes, they said.

As Nelson at Apama sees it, bad news now travels very quickly – and companies need to make sure they’re not stuck in the slow lane when it comes to responding to customer issues.

“The overwhelming majority of people we spoke to recognize the importance of responding quickly to customers and to be much more responsive to changes in market conditions. Unfortunately, in most cases at present the process and information reporting infrastructure can’t match that vision,” Nelson says. “Business Event Processing is becoming the way of dealing with this decision-making lag.”

I'd add a bit more. What we're actually seeing is that corporations now see that they must be able to analyze and act in Internet time. Many of us webby and social-media types have known that for some time, but the urgency has now hit the mainstream bricks (not just the clicks).

Furthermore, the payoffs from becoming a real-time-oriented organization will go far beyond knowing what's being said about you on Twitter. As the economy has shown in the last year, those who can move fast and move well will survive and thrive. The others will find themselves in a downward spiral.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.

Monday, October 5, 2009

HP roadmap dramatically reduces energy consumption across data centers

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Producing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

The latest BriefingsDirect podcast discussion therefore targets significantly reducing energy consumption across data centers strategically. In it we examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help learn more about significantly reducing energy consumption across data centers, we welcome two experts from HP: John Bennett, worldwide director, Data Center Transformation Solutions, and Ian Jagger, worldwide marketing manager for Data Center Services. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers make is that they have this laundry list and, without any further insight into what will matter the most to them, they start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

... We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

... If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward reducing energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.

These savings are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used, dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play. So, it's a profound impact across the infrastructure environment.
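
To make those consolidation ratios concrete, here is a minimal back-of-the-envelope sketch in Python. The wattage figures and the target utilization are illustrative assumptions; only the roughly 10 percent utilization and the 10-to-20-server starting point come from the discussion above.

# Hypothetical consolidation math, not HP-published benchmarks.
legacy_servers = 20            # older x86 boxes, one workload each
avg_utilization = 0.10         # ~10 percent utilization cited above
watts_per_legacy_server = 400  # assumed draw per legacy server

target_utilization = 0.60      # assumed post-virtualization target
watts_per_modern_host = 500    # assumed draw per modern host

# Useful work being done today, expressed in "fully used server" equivalents.
useful_capacity = legacy_servers * avg_utilization
modern_hosts = max(1, round(useful_capacity / target_utilization))

legacy_power_kw = legacy_servers * watts_per_legacy_server / 1000
modern_power_kw = modern_hosts * watts_per_modern_host / 1000

print(f"Hosts after consolidation: {modern_hosts}")
print(f"Power before: {legacy_power_kw:.1f} kW, after: {modern_power_kw:.1f} kW")

With these assumed numbers, 20 lightly used servers collapse onto roughly three virtualized hosts, in line with the factor-of-5-to-10 reductions described above.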

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

... Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

... If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility, where you have within the facility specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and retain specific PODs just for those applications that do require the highest levels of availability.

Just by doing that, by converging the facility design with application modernization, you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs derived from burning energy to cool it at the end of the day.

... One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action on reducing your energy not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Bennett: What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio. ... It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

By considering them comprehensively and working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings to the organization.

... For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop.

Jagger: The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.

Then, you need a plan that shows those costs and savings, the priorities in terms of structure and infrastructure, how that work proceeds in a converged way with IT, and of course the payback on the investment that's required to build it in the first place.
Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Part 2 of 4: Web data services provide ease of data access and distribution from variety of sources, destinations

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Kapow Technologies.

As enterprises seek to gain better insights into their markets, processes, and business development opportunities, they face a daunting challenge -- how to identify, gather, cleanse, and manage all of the relevant data and content being generated across the Web.

As the recession forces businesses to identify and evaluate new revenue sources, they need to capture such web data services to make business intelligence (BI) work better and more fully. In Part 1 of our web data series, we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications in recent years.

Enterprises need to know what's going on in their markets and what's being said about them. They need to share those web data service inferences quickly and easily with their internal users. The more relevant and useful content that enters into BI tools, the more powerful the BI outcomes -- especially as we look outside the enterprise for fast-shifting trends and business opportunities.

In this podcast, Part 2 of the series with Kapow Technologies, we identify how BI and web data services come together, and explore such additional subjects as text analytics and cloud computing. So, how do you get started, and how do you affordably manage web data services so that BI tools and business consumers get the intelligence and insights they need?

To find out, we brought together Jim Kobielus, senior analyst at Forrester Research, and Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Kobielus: The more relevant content you bring into your analytic environment the better, in terms of having a single view or access in a unified fashion to all the information that might be relevant to any possible decision you might make. But, clearly, there are lots of caveats, "gotchas," and trade-offs there.

One of these is that it becomes very expensive to discover, to capture, and to do all the relevant transformation, cleansing, storage, and delivery of all of that content. It becomes very expensive, especially as you bring more unstructured information from your content management system (CMS) or various applications from desktops and from social networks.

... Filtering the fire hose of this content is where this topic of web data services for BI comes in. Web data services describes that end-to-end analytic information pipelining process. It's really a fire hose that you filter at various points, so that the end users turn on their tap and they're not blown away by a massive stream. Rather, it's a stream of liquid intelligence that is palatable and consumable.

Andreasen: There is a fire hose of data out there. Some of that data is flowing easily, but some of it might only be dripping, and some might be inaccessible.

Think about it this way. The relevant data for your BI applications is located in various places. One is in your internal business applications. Another is your software-as-a-service (SaaS) business application, like Salesforce, etc. Others are at your business partners, your retailers, or your suppliers. Another one is at government. The last one is on the World Wide Web in those tens of millions of applications and data sources.

Accessible via browser

Today, all of this data that I just described is more or less accessible in a web browser. Web data services allow you to access all these data sources, using the interface that the web browser is already using. It delivers that result in a real-time, relative, and relevant way into SQL databases, directly into BI tools, or even to service-enable and encapsulate the data. It delivers the benefit that IT can now better serve the analysts' need for new data, which is almost always the case.

What's even more important is the incremental, daily improvement of existing reports. Analysts sit there, they find some new data source, and they say, "It would be really good if I could add this column of data to my report, maybe replace this data, or if I could get this amount of data in real time rather than just once a week." It's those kinds of improvements that web data services can also really help with.
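
To ground what accessing data through the browser interface and landing it in a SQL database can look like in practice, here is a minimal, generic Python sketch. It is not the Kapow product; the URL, table layout, and column names are hypothetical, and it assumes the third-party requests and BeautifulSoup libraries are installed.

# Generic sketch: pull a table off a web page and land it in a SQL database
# so a BI tool can query it. The URL and column layout are hypothetical.
import sqlite3

import requests
from bs4 import BeautifulSoup

URL = "https://example.com/partner-portal/price-list"  # hypothetical source

html = requests.get(URL, timeout=30).text
soup = BeautifulSoup(html, "html.parser")

rows = []
for tr in soup.select("table#prices tr")[1:]:  # skip the header row
    cells = [td.get_text(strip=True) for td in tr.find_all("td")]
    if len(cells) == 3:  # expected columns: sku, region, price
        rows.append((cells[0], cells[1], float(cells[2])))

conn = sqlite3.connect("web_data.db")
conn.execute("CREATE TABLE IF NOT EXISTS prices (sku TEXT, region TEXT, price REAL)")
conn.executemany("INSERT INTO prices VALUES (?, ?, ?)", rows)
conn.commit()
conn.close()

A BI tool pointed at the resulting prices table can then treat the scraped page like any other relational source, and rerunning the script refreshes the feed.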

Kobielus: At Forrester, we see traditional BI as a basic analytics environment, with ad-hoc query, OLAP, and the like. That's traditional BI -- it's the core of pretty much every enterprise's environment.

Advanced analytics -- building on that initial investment and getting to this notion of an incremental add-on environment -- is really where a lot of established BI users are going. Advanced analytics means building on those core reporting, querying, and those other features with such tools as data mining and text analytics, but also complex event processing (CEP) with a front-end interactive visualization layer that often enables mashups of their own views by the end users.

... We see a strong push in the industry toward smashing those silos and bringing them all together. A big driver of that trend is that users, the enterprises, are demanding unified access to market intelligence and customer intelligence that's bubbling up from this massive Web 2.0 infrastructure, social networks, blogs, Twitter and the like.

Andreasen: Traditionally, for BI, we've been trying to gather all the data into one unified, centralized repository, and accessing the data from there. But, the world is getting more diverse and the data is spread in more and different silos. What companies realize today is that we need to get service-level access to the data, where they reside, rather than trying to assemble them all.

...Web data services can encapsulate or wrap the data silos that were residing with their business partners into services -- SOAP services, REST services, etc. -- and thereby get automated access to the data directly into the BI tool.

... So, tomorrow's data stores for BI, and today's as well, are really a combination of accessing data in your central data repositories and then accessing them where they reside. ... Think about it. I'm an analyst and I work with the data. I feel I own the data. I type the data in. Then, when I need it in my report, I cannot get it there. It's like owning the house, but not having the key to the house. So, breaking down this barrier and giving them the key to the house, or actually giving IT a way to deliver the key to the house, is critical for the agility of BI going forward.
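
As a rough sketch of what service-enabling a data silo means, the following Python example exposes a small data set as a REST/JSON endpoint that a BI tool or feed reader could poll. The endpoint path and payload shape are hypothetical, and a real deployment would front the silo's actual database or application rather than an in-memory list.

# Minimal sketch of "service-enabling" a data silo: expose it as a REST/JSON
# endpoint a BI tool can poll. Path and payload shape are hypothetical.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for data that actually lives in a partner system or legacy silo.
INVENTORY = [
    {"sku": "A-100", "on_hand": 42, "site": "Dublin"},
    {"sku": "B-220", "on_hand": 7, "site": "Austin"},
]

class FeedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/feeds/inventory":
            body = json.dumps(INVENTORY).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FeedHandler).serve_forever()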

Tools are lacking

Today, the IT department often lacks tools to deliver those custom feeds that the line of business is asking for. But, with web data services, you can actually deliver these feeds. The data that IT is asking for is almost always data they already know, see, and work with in the business applications, with the business partners, etc. They work with the data. They see them in the browsers, but they cannot get the custom feeds. With the web data services product, IT can deliver those custom feeds in a very short time.

Kobielus: The user feels frustration, because they go on the Web and into Google and can see the whole universe of information that's out there. So, for a mashup vision to be reality, organizations have got to go the next step.

... It's good to have these pre-configured connections through extract, transform and load (ETL) and the like into their data warehouse from various sources. But, there should also be ideally feeds in from various data aggregators. There are many commercial data aggregators out there who can provide discovery of a much broader range of data types -- financial, regulatory, and what not.

Also, within this ideal environment there should be user-driven source discovery through search, through pub-sub, and a variety of means. If all these source-discovery capabilities are provided in a unified environment with common tooling and interfaces, and are all feeding information and allowing users to dynamically update the information sets available to them in real-time, then that's the nirvana.

Andreasen: This is where Kapow and web data services come in, as a disruptive new way of solving a problem of delivering the data -- the real-time relevant data that the analyst needs.

The way it works is that, when you work with the data in a browser, you see it visually, you click on it, and you navigate tables and so on. The way our product works is that it allows you to instruct our system how to interact with a web application, just the same way as the line of business user.

...The beauty with web data services is that it's really accessing the data through the application front end, using credentials and encryptions that are already in place and approved. You're using the existing security mechanism to access the data, rather than opening up new security holes, with all the risk that that includes.

... This means that you access and work with the data in the world in which the end users see the data. It's all with no coding. It's all visual, all point and click. Any IT person can, with our product, turn data that you see in a browser into a real feed, a custom feed, virtually in minutes or in a few hours for something that would typically take days, weeks, or months -- or may even be impossible.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Kapow Technologies.

Thursday, October 1, 2009

Cloud computing by industry: Novel ways to collaborate via extended business processes

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Welcome to a podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.

As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?

Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.

We'll learn about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.

Here to help explain the benefits of cloud computing and vertical business transformation, we're joined by Mick Keyes, senior architect in the HP Chief Technology Office; Rebecca Lawson, director of Worldwide Cloud Marketing at HP; and Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Lawson: Everyone knows that "cloud" is a word that tends to get hugely overused. We try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.

Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the implicated integrations and such?"

As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place or a dynamic that has technology-enabled services, cloud services that are accessible and sharable and help the collaboration and sharing across different constituents in that vertical market.

We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.

A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.

Keyes: If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere up to 15-20 different entities involved in a supply chain.

In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.

Coughlan: As a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life and death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.

As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.

Keyes: In the traditional way we looked at how that supply chain has traceability, they would have the infamous -- I would call it -- "one step up, one step down" exchange of data, which meant really that each entity in the supply chain exchanged information with the next one in line.

That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.

What we are saying to industry at the moment -- and this is our thesis here that we are actually developing -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What a cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and to give greater visibility into the supply chain itself.

... We offer SaaS now, not just to any individual entity in the supply chain, but to anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain. So we really look at it from a positive view also, about how this is creating benefits from a business point of view.

So, depending on what type of industry you're in, we're looking at this platform as being almost a repeatable type of offering, and you can start to lay out individual or specific industry services around this.

We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or get involved in information sharing to a certain degree. We see that as a cool component also that we can perhaps do some BI around and be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.
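
As a sketch of the hub model Keyes describes, each entity in the supply chain might publish a small, structured traceability event to the shared platform rather than passing data only one step up or down. The field names and identifier formats below are illustrative assumptions, not the GS1 specification or HP's actual schema.

# Illustrative traceability event a supply-chain participant might post to a
# shared cloud hub. Field names and identifiers are hypothetical.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TraceEvent:
    gtin: str       # product identifier
    lot: str        # batch or lot number
    event: str      # e.g. "shipped", "received", "processed"
    actor: str      # which entity in the chain reported it
    location: str
    timestamp: str

event = TraceEvent(
    gtin="00614141123452",
    lot="LOT-2009-0142",
    event="shipped",
    actor="Acme Processing Ltd.",
    location="Toronto DC-3",
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# In the hub model, a recall query becomes "find every event for this lot"
# across all participants, instead of reconstructing the chain link by link.
print(json.dumps(asdict(event), indent=2))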

Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.

You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.

Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. ... It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Kapow and StrikeIron team up to offer web data services capabilities to SMBs

Kapow Technologies has joined forces with StrikeIron to give small and medium-sized businesses (SMBs) a leg up in accessing, using, and sharing Web-based data.

Kapow's Web Data Services 7.0.0 will allow SMBs to wrap any Web site or Web application into RSS feeds or REST Web services. [Disclosure: Kapow is a sponsor of BriefingsDirect podcasts.]

Under Kapow's strategic partnership with StrikeIron, Web Data Services 7.0.0, which is available immediately, will be offered on StrikeIron's Web Services Catalog. The software-as-a-service (SaaS) distribution engine allows developers and business users to integrate live data from private and public Web applications and Web sites.

By using Kapow's latest offering, SMBs that need enterprise-class Web data services access and quality will have automated and structured access without resorting, as they did previously, to cutting and pasting the data from a Web browser. [Learn more about Web data services and business intelligence.]

Kapow's “no coding” technology enables companies to rapidly build, test, and deploy standard RSS data feeds and REST web services that deliver real-time web data directly into common business applications such as Microsoft Excel, NetSuite, or Salesforce, as well as into any RSS feed reader.

Kapow can also deliver feeds and services directly to any application builder that can access data in standard RSS, JSON, and XML formats, including IBM Mashup Center, IBM Rational EGL, JackBe, and WaveMaker.

The feeds and services are constructed with a visual point-and-click desktop tool that enables users to create “robots” that automate navigation of, and interaction with, any Web application or Web site, providing secure and reliable access to the underlying data and business logic. This enables the collection of web intelligence and market data in real time.
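
To illustrate how such an RSS data feed might be consumed on the receiving end, here is a small, generic Python sketch that flattens feed items into a CSV file that Excel or a BI loader can open. The feed URL is hypothetical, and the example uses only the standard library.

# Generic sketch: read an RSS data feed and flatten it to CSV so it can be
# opened in Excel or loaded by a BI tool. The feed URL is hypothetical.
import csv
import urllib.request
import xml.etree.ElementTree as ET

FEED_URL = "https://example.com/feeds/competitor-prices.rss"  # hypothetical

with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
    root = ET.fromstring(resp.read())

with open("feed_items.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    writer.writerow(["title", "link", "published"])
    for item in root.iter("item"):  # every <item> in the RSS channel
        writer.writerow([
            item.findtext("title", default=""),
            item.findtext("link", default=""),
            item.findtext("pubDate", default=""),
        ])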

Under the terms of the agreement, Kapow will maintain full technical and operational responsibility for Kapow Web Data Services, including enhancements and upgrades. StrikeIron will provide the commercialization capabilities, handling all customer relationship management functions, including sales, billing, and account support.