Wednesday, October 14, 2009

CEO interview: Workday’s Aneel Bhusri on advancing SaaS and cloud models for improved ERP

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Workday.

The latest BriefingsDirect podcast is an executive interview with software-as-a-service (SaaS) upstart Workday, a provider of human capital management (HCM), financial management, payroll, worker spend management, and benefits network solutions.

I had the pleasure to recently sit down with Workday’s co-founder and co-CEO, Aneel Bhusri, who is responsible for the company’s overall strategy and day-to-day operations.

Bhusri, who also helped bring PeopleSoft to huge success, explains how Workday is raising the bar on employee life-cycle productivity by lowering IT costs through the SaaS model for full enterprise resource planning (ERP).

More than that, Workday is also demonstrating what I consider a roadmap to the future advantages of cloud computing. The interview is conducted by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bhusri: We're very similar to PeopleSoft in some areas, and in other areas, quite different. We have the same culture -- focused on employees first and customers second. We focus on integrity. We focus on innovation. We brought that same culture to Workday, and our customers are very happy.

The pedigree of the team starts with my co-founder, Dave Duffield. He's an icon in the software industry. He's known for high integrity, innovation, and customer service. Many of us, like me, have been with him for 17 years now and we share that vision and that culture with him. We have set out to build the next great software company.

Much like PeopleSoft, we are taking advantage of a technology shift. PeopleSoft benefited from the shift from mainframe to client-server. When Workday started, people weren’t as focused on how big the shift was from client-server or on-premise computing to what is now called cloud computing or, back then, SaaS.

It now seems like it's even bigger than the shift from mainframe to client-server. This is a massive shift and you see it all across. That's the big difference. We are obviously leveraging a very different technology base.

The thing that Dave and I both took away from PeopleSoft is that you have to stay on top of innovation, and that's what Workday is doing. We are innovating where the large ERP vendors have stopped.

... One of the reasons why the margins are so high for the [legacy ERP vendors] is that they are at the tail end of the technology life cycle. They are not really innovating. They are collecting maintenance payments. We all know that maintenance is very, very profitable. Well, when you start in a new technology, it's mostly investing. Usually, when the profitability rates get that high, it means that there is a new technology around the corner that will start cutting into those profitability rates.

... ERP is now 15 years old and just needs to be rewritten. The world has changed so dramatically since the original ERPs were written.

Back then, companies were thinking about being global. Now, they are global. People were not even thinking about the Internet, and now the Internet exists. That was before Sarbanes-Oxley and before the emergence of the iPhone and BlackBerry. All these things pile together to say that it's time to go back and rewrite core ERP. It's no longer valid in today’s world.

... These last nine months have been challenging for everyone. We, as a system-of-record vendor, saw fewer projects out there. At the same time, because of our new model and the cost benefits of the SaaS solutions, we were probably more relevant than we might have been without the economic downturn.

... As the Workday system has gotten more robust, we've really focused on the Fortune 1000 companies, our biggest being Flextronics. Those large, complex organizations with global requirements have a great opportunity for cost savings.

We had companies that were planning on implementing the traditional legacy systems, but could not afford it. A great example is Sony Pictures Entertainment. They already owned the licenses to the SAP HR system, and yet, after careful consideration, determined they didn't have the budget to implement it.

... They will be live in five months, and they will get the benefit of about a 50 percent cost savings, if not more. They basically quoted it as one-half the time at one-third the cost.

... When you add it altogether, really do it on an apples-to-apples basis, and look at what we have taken over for the customers, it averages out consistently to about a 50 percent cost saving over a five-year period.

... The data we have now is not theoretical. It's now based on 60 of our 100-plus customers being in production, and we have been able to go back and monitor it. The good news about our cost is that it's an all-in-one subscription cost, so we know exactly what the costs were for running the Workday system.
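
To make the apples-to-apples, five-year comparison concrete, here is a minimal sketch of the kind of arithmetic involved. Every figure in it is a hypothetical placeholder, not Workday or customer data; the point is only that a subscription folds hosting, upgrades, and maintenance into one line item, while the on-premise column has to add those up separately.

```python
# Hypothetical five-year cost comparison between an on-premise ERP
# deployment and a SaaS subscription. Every number is an illustrative
# assumption, not vendor or customer data.
YEARS = 5

on_premise = {
    "licenses": 1_000_000,          # one-time license purchase
    "implementation": 1_500_000,    # one-time integration and services
    "hardware": 400_000,            # servers, storage, refresh reserve
    "annual_maintenance": 220_000,  # roughly 22% of license cost per year
    "annual_it_staff": 300_000,     # DBAs, admins, upgrade labor
}

saas = {
    "implementation": 500_000,      # shorter deployment
    "annual_subscription": 450_000, # all-in: hosting, upgrades, support
    "annual_it_staff": 60_000,      # residual administration
}

on_prem_total = (
    on_premise["licenses"]
    + on_premise["implementation"]
    + on_premise["hardware"]
    + YEARS * (on_premise["annual_maintenance"] + on_premise["annual_it_staff"])
)
saas_total = saas["implementation"] + YEARS * (
    saas["annual_subscription"] + saas["annual_it_staff"]
)

print(f"On-premise, {YEARS}-year total: ${on_prem_total:,}")
print(f"SaaS, {YEARS}-year total:       ${saas_total:,}")
print(f"Relative saving:                {1 - saas_total / on_prem_total:.0%}")
```

With these made-up inputs the gap lands in the same general range as the 50 percent figure quoted above, but the real exercise is substituting an organization's own numbers for each line.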

... [Many customers] decided that they were not going to take the major upgrade from one of those ERP vendors. A major upgrade is much like a new implementation and it's cost prohibitive.

With our focus on continuing innovation, they are not stuck in time. Every customer gets upgraded every four months to the most current version of the system. So as we are innovating, they are all taking advantage of that innovation, whether it's in usability, functionality, or a new business model.

I like to think about it as building at web speed, and that's how Google, Amazon, and eBay think about it. New features come out very quickly. There are no old versions of Amazon and eBay that they have to worry about supporting. It's one system for all users. We're able to leverage those same principles that they are and bring out capabilities very quickly, so a customer can identify something that's important to them.

... I think we are a lot like Salesforce. Dave and I have a very good relationship with Marc Benioff. They're focused on CRM, and we're focused on ERP. I think the big difference is that they are focused on becoming a platform vendor, and we are really very focused on staying as an application vendor.

... If you can get your administrative applications, your non-mission critical applications -- CRM, HR, payroll, and accounting -- delivered from a vendor, and you can manage them to service-level agreements (SLAs), why not focus your resources on the core enterprise apps you have?

More and more CIOs are getting that. It does free up data-center space. It also frees up human resources and IT to focus in on what's core to their business. HR and accounting don't have to be specialized in running that system. They have to know HR and accounting, but they don't have to be specialized in running those systems.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Workday.

Tuesday, October 13, 2009

Engine Yard draws funding as it ushers more developers onto the Ruby services train

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Developers are a mighty stubborn bunch. Unlike the rest of the enterprise IT market, where a convergence of forces has favored a "nobody gets fired for buying IBM, Oracle, SAP, or Microsoft" mentality, developers have no such herding instincts. Developers do not always get with the [enterprise] program.

For evidence, recall what happened the last time that the development market faced such consolidation. In the wake of web 1.0, the formerly fragmented development market – which used to revolve around dozens of languages and frameworks – congealed down to Java vs .NET camps. That was so 2002, however, as in the interim, developers have gravitated toward choosing their own alternatives.

The result was an explosion of what former Burton Group analyst Richard Monson-Haefel termed the Rebel Frameworks (that was back in 2004) and, more recently, a resurgence of scripting languages. In essence, developers didn't take the future as inevitable, and for good reason: the so-called future of development circa 2002 was built on the assumption that everyone would gravitate to enterprise-class frameworks.

Java and .NET were engineered on the assumption that the future of enterprise and Internet computing would be based on complex, multitier distributed transactional systems. It was accompanied by a growing risk-aversion: Buy only from vendors that you expect will remain viable. Not surprisingly, enterprise computing procurements narrowed to IOSM (IBM, Oracle, SAP, Microsoft).

Different dynamic

But the developer community lives by a different dynamic. In an age of open source, expertise for development frameworks and languages gets dispersed; vendor viability becomes less of a concern. More importantly, developers only want to get the job done, and anyway, the tasks that they perform typically fall under the enterprise radar.

Whereas a CFO may be concerned over the approach an ERP system employs to manage financial or supply chain processes, they are not going to care about development languages or frameworks.

The result is that developers remain independent minded, and that independence accounts for the popularity of alternatives to enterprise development platforms, with Ruby on Rails being the latest to enter the spotlight.

In one sense, Ruby's path to prominence parallels Java's in that the language was originally invented for another purpose. But there the similarity ends as, in Ruby's case, no corporate entity really owned it. Ruby is a simple scripting language that became a viable alternative for web developers once David Heinemeier Hansson invented the Rails framework. The good news is that Rails makes it easy to use Ruby to write relatively simple web database applications. Examples of Rails' simplicity include:
  • Eliminating the need to write configuration files for mapping requests to actions

  • Avoiding multi-threading issues because Rails will not pool controller (logic) instances

  • Dispensing with object-relational mapping files; instead, Rails automates much of this and tends to use very simplified naming conventions.
The bad news is that there are performance limitations and difficulties in handling more complex distributed transaction applications. But the good news is that when it comes to web apps, the vast majority are quite rudimentary, thank you.

The result has propelled a wave of alternative stacks, such as LAMP (Linux-Apache web server-MySQL-and either PHP, Python, or Perl) or, more recently, Ruby on Rails. At the other end of the spectrum, the Spring Framework takes the same principle – simplification – to ease the pain of writing complex Java EE applications – but that’s not the segment addressed by PHP, MySQL, or Ruby on Rails. It reinforces the fact that, unlike the rest of the enterprise software market, developers don’t necessarily take orders from up top. Nobody told them to implement these alternative frameworks and languages.

The latest reminder of the strength of grassroots markets in the developer sector is Engine Yard's securing of $19 million in C funding last week. The backing comes from some of the same players that also funded SpringSource (which was recently acquired by VMware). Some of the backing also comes from Amazon, whose Jeff Bezos is the outright owner of 37Signals, the Chicago-based provider of project management software that employs Heinemeier Hansson. For the record, there is plenty of RoR presence in Amazon Web Services.

Engine Yard is an Infrastructure-as-a-Service (IaaS) provider that has optimized the RoR stack for runtime. Although hardly the only cloud provider out there that supports RoR development, Engine Yard's business is currently on a 2x growth streak. The funding stages the company either for an IPO or a buyout.

At this point the script sounds similar to SpringSource whose new owner, VMware, is launching a development and runtime cloud that will eventually become VMware’s Java counterpart to Microsoft Azure.

It’s tempting to wonder whether a similar path will become reality for Engine Yard. The answer is that the question itself is too narrow. It is inevitable that a development and runtime cloud paired with enterprise plumbing (e.g., OS, hypervisor) will materialize for Ruby on Rails. With its $19 million funding, Engine Yard has the chance to gain critical mass mindshare in the RoR community – but don’t rule out rivals like Joyent yet.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Friday, October 9, 2009

IT architects seek to bridge gap between cloud vision and reality

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

The popularity of the concepts around cloud computing has caught many IT departments off-guard.

While business and financial leaders have become enamored of the expected economic and agility payoffs from cloud models, IT planners often lack structured plans or even a rudimentary roadmap of how to attain cloud benefits from their current IT environment.

New market data gathered from recent HP workshops on early cloud adoption and data center transformation shows a wide and deep gulf between the desire to leverage cloud methods and the ability to dependably deliver or consume cloud-based services.

So, how do those tasked with a cloud strategy proceed? How do they exercise caution and risk reduction, while also showing swift progress toward an "Everything as a Service" world? How do they pick and choose among a burgeoning variety of sourcing options for IT and business services and accurately identify the ones that make the most sense, and which adhere to existing performance, governance and security guidelines?

It's an awful lot to digest. As one recent HP cloud workshop attendee said, “We're interested in knowing how to build, structure, and document a cloud services portfolio with actual service definitions and specifications.”

Here to help better understand how to properly develop a roadmap to cloud computing adoption in the enterprise, we're joined by three experts from HP: Ewald Comhaire, global practice manager of Data Center Transformation at HP Technology Services; Ken Hamilton, worldwide director for Cloud Computing Portfolio in the HP Technology Services Division, and Ian Jagger, worldwide marketing manager for Data Center Services at HP. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Comhaire: Independent of how we define cloud -- and there are obviously lots of definitions out there -- and also independent of what value cloud can bring or what type of cloud services we are discussing, it's very clear that the cloud service providers are basically setting a new benchmark for how IT specific services are delivered to the business.

Whether it's from a scalability, a pay-per-use model, or a flexibility and speed element or whether it's the fact that it can be accessed and delivered anywhere on the network, it clearly creates some kind of pressure on many IT organizations.

... These companies will get tremendous benefits from that thinking model, from organizing for a service-centric delivery model, but they may need to work a little bit on the architecture. For example, how can they address scalability and the way that supply and demand are aligned to each other, or maybe how they charge back for some of these services in a more pay-as-you-go way versus an allocation-based way.

These companies will already have a big head start. Of course, if you're working on an internal cloud, having things like virtualization in place, having consolidated your environment, and having put more service management processes in place around ITIL and service management will benefit the company greatly. We'll want to have the cloud strategy rolling out in the near future.

Jagger: ... If there are critical applications that you seek for your business, and they're available through the cloud, either from a service provider or through the shared services model, that's going to be far more efficient and cost-effective, subject to terms of ... pay-per-use and security. But once security is addressed, there are definite cost and efficiency advantages.

Hamilton: We're seeing a growing interest in cloud specifically around cost savings. Certainly, in this economy, cost savings and switching from a capital-based model to an operational model, with the flexibility that implies, is something that a number of companies are interested in.

But, I'd also like to underscore that, as we've discussed, the definition of cloud and the variety of different, and sometimes confusing possibilities around cloud, are things that customers want to get control of. They want to be able to understand what the full range of benefits might be.

In a typical internal environment it may take weeks or months to deploy a server populated in a particular fashion. In that same internal cloud environment that time to market can be as little as hours or minutes, along with some of the increased functionality.

So, cost savings as well as agility and new business capabilities really are the three main types of benefits that we are seeing customers go after.

Because of the service orientation, this puts a greater emphasis on understanding not just the technological underpinnings, but the contractual service level elements and the virtual elements that go with this.

Comhaire: We often talk about all the benefits, but obviously, specifically for our enterprise customers, there's also an interesting list of inhibitors. In every workshop that we do, we ask our participants to rank what they believe are the biggest inhibitors, either for themselves to consume cloud services or, if they want to become a provider, what they believe will inhibit their potential customers from acquiring or consuming the services that they are looking for. Consistently, we see five key themes coming up as major inhibitors:

A lot of companies have value chains that they've built, but what if some of the parts of that value chain are in the cloud? Have I lost too much control? Am I too dependent?


  • Loss of control. That means I am now totally dependent on my cloud-service provider in my value chain.
  • Lack of trust in your cloud service provider. That could have to do with the question of whether they'll still be in business five years from now, and also things like price hikes.
  • Security and vulnerability. Some of that is perceived. If you architect it well, best-practice cloud-service providers can do a great job of actually being more secure than a traditional dedicated enterprise environment. Difficulties around identity management and all of the work needed to integrate security between the consumer and the provider add complexity there.
  • Confidentiality concerning data, because what guarantees do we have, for example, that an employee at a service provider can't take that data and sell it to a government or some other third party?
  • Reliability -- is the service going to be up enough of the time? Will it be down at moments that are not convenient?
Hamilton: [To get started], the most important thing is to make sure that the executive decision makers have a common understanding of what they might want to achieve with cloud. To that end, we've developed a Cloud Discovery Workshop, which is really a way of being able to frame the cloud decision points and to bring the executive decision makers together.

This Cloud Discovery Workshop does a great job of engaging the executive team in a very focused amount of time, as little as an afternoon, to be able to walk through the key steps around defining a common definition for their view of cloud. It's not just our view or some other vendor's view, but their definition of cloud and the benefits that they might be able to accrue.

They specifically drill that down into particular areas with a return on investment (ROI) focus: the infrastructure capabilities that might be required, as well as the service management, operational, and some of the more esoteric capabilities that go hand in hand, addressing security, privacy, and other areas of risk. It's just making sure that they've got a very clear way of being able to document that, and then move forward into more detailed design, if that's the direction they want to move in.

Comhaire: From the workshop customers basically get a better view of the strategy they want to go for. We have an initial discussion on the portfolio and we talk also a little bit about the desired state. In the roadmap service, we actually take that to the next level. So we really start off with that desired state.

We have defined a capability model with five levels of capability. We don't want to call it a maturity model, because for every company, the highest maturity isn't necessarily their desired state or their end state. So, it's unfair to name it "maturity." It's more a capability or an implementation model for the cloud. We have five capability levels and then six domains of capabilities.

... One piece of core advice we always give is, "Keep it simple." Rather than bring out a whole portfolio of cloud services, start with one. That one service may not have all the functionality that you're dreaming of, but become good at doing a more simplified thing faster, rather than trying to overdo it and ending up with a five- or six-year project, when the whole market will have changed by the time you can roll out. A lot of the best practice in building the roadmap is to simplify it, so it does not become this four- or five-year project that takes way too long to execute.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Wednesday, October 7, 2009

Successful data center transformation usually requires overdue rethinking of the network

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Special Offer: Gain insight into best practices for transforming your data center by downloading three new data center transformation whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Most enterprise networks are the result of a patchwork effect of bringing in equipment as needed over the years to fight the fire of the day, with little emphasis on strategy and the anticipation of future requirements. That's why it's necessary to reevaluate network architectures in light of newer and evolving IT demands, and overall moves to next-generation data centers.

Nowadays, we see that network requirements have shifted, and are still shifting, as IT departments adopt improvements such as virtualization, software as a service (SaaS), cloud computing, and service-oriented architecture (SOA).

The network loads and demands continue to shift under the weight of Web-facing applications and services, security and regulatory compliance, governance, ever-greater data sets, and global-area service distribution and related performance management.

It doesn't make sense to embark upon a data-center transformation journey without a strong emphasis on network transformation as well. Indeed, the two ought to be brought together, converging to an increasing degree over time.

I recently interviewed three thought leaders at HP on network transformation to help explain the evolving role of network transformation and to rationalize the strategic approach to planning and specifying present and future enterprise networks. They are Lin Nease, director of Emerging Technologies, HP ProCurve; John Bennett, worldwide director, Data Center Transformation Solutions, and Mike Thessen, practice principal, Network Infrastructure Solutions Practice in the HP Network Solutions Group.

Here are some excerpts:
Bennett: Data-center transformation is really about helping customers build out a next-generation data center, an adaptive infrastructure, that is designed to not only meet the current business needs, but to lay the foundation for the plans and strategies of the organization going forward.

In many cases, the IT infrastructure, including the facilities, the servers, the network, and storage environments can actually be a hindrance to investing more in business services and having the agility and flexibility that people want to have, and will need to have, in increasingly competitive environments.

When we talk about that, very typically we talk a lot about facilities, servers, and storage. For many people, the networking environment is ubiquitous. It's there. But, what we discover, when we lift the covers, is that you have an environment that may be taking lots of resources to manage and keep up-to-date.

... The networking infrastructure becomes key, as an integration fabric, not just between users in business services, but also between the infrastructure devices in the data center itself.

That's why we need to look at network transformation to make sure that the networking environment itself is aligned to the strategies of the data center, that the data center infrastructure is architected to support those goals, and that you transform what you have and what you have grown historically over decades into what hopefully will be a "lean, mean, fighting machine."

Nease: The network has basically evolved as a result of the emergence of the Internet and all forms of communications that share the network as a system. The server side of the network, where applications are hosted, is only one dimension that tugs at the network design in terms of requirements.

You find that the needs of any particular corner of the enterprise network can easily be lost on the network, because the network, as a whole, is designed for multiple constituencies, and those constituencies have created a lot of situations and requirements that are in themselves special cases.

In the data center, in particular, we've seen the emergence of a formalized virtualization layer now coming about and many, many server connections that are no longer physical. The history of networking says that I can take advantage of the fact that I have this concept of a link or a port that is one-to-one with a particular service.

That is no longer the case. What we’re seeing with virtualization is challenging the current design of the network. That is one of the requirements that are tugging at a change or provoking a change in overall enterprise network design.

... Too often people are compelled by a technology approach to rethink how they are doing networking. IT professionals will hear the overtures of various vendors saying, "This is the next greatest technology. It will maybe enable you to do all sorts of new things." Then, people waste a lot of time focusing on the technology enablement, without actually starting with what the heck they're trying to enable in the first place.

Thessen: In years past, you were effectively just providing local area network (LAN) and wide area network (WAN) connectivity. Servers were on the network, and they got facilities from the network to transport their data over to the users.

Now, everything is becoming converged over this network -- "everything" being data, storage, and telephony. So, it's requiring more towers inside of corporate IT to come together to truly understand how this system is going to work together.

Nease: [Service orientation] is the only way out. With the new complexity that has emerged, and the fact that traditional designs can no longer rely on physical barriers to implement policies, we have reached a point, where we need an architecture for the network that builds in explicit concepts of policy decisions and policy enforcement.

The only way out is to regard the network itself as a service that provides connectivity between stations -- call them logical servers, call them users, or call them applications. In fact, that very layering alone has forced us to think through the concept of offering the network as a service.

Bennett: ... In parallel with that, we see an increasing drive and demand for virtualizing storage to have it both be more efficiently and effectively used inside the data center environment, but also to service and support the virtualized business services running in virtualized servers. That, in turn, carries into the networking fabric of making sure that you can manage the network connections on the fly.

Virtualization is not only becoming pervasive, but clearly the networking fabric itself is going to be key to delivering high quality business services in that environment.

Thessen: ... Networks need to be prepared for the convergence of the communication paths for data and storage connectivity inside the data center. That's the whole convergence -- enhanced Ethernet, Fiber Channel over Ethernet. That's the newest leg of the virtualization aspect of the data center.

Bennett: Fundamentally, convergence is about better integration across the technology stacks that help deliver business services. We're saying that we no longer need connections between servers for high availability that are separate and dedicated from the connections we use to the storage devices for high-volume and high-frequency access to data for the business services, or from the connections between the network devices themselves that form the topology of the networking environment.

Rather, we are saying that today we can have one environment capable of supporting all of these needs, architected properly for a particular customer's needs, and we can bring the separate communications infrastructure for voice into that environment as well.

So, we're really establishing, in effect, a common nervous system. Think about the data center and the organization as the human body. We're really building up the nervous system, connecting everything in the body effectively, both for high-volume needs and for high-frequency access needs.

Thessen: ... The most important thing is really still the brutal standardization -- network modularity, logical separation, utilizing those virtualization techniques that I talked about a few minutes ago, and very well-defined communications flows for those applications.

Additionally, you need those communication flows especially in these SaaS or cloud-computing, or convergence environments to truly secure those environments appropriately. Without understanding who is talking to whom, how applications communicate, and how applications get access to other IT services, such as directory services and so forth, it's really difficult to secure them appropriately.
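
As a rough illustration of what "understanding who is talking to whom" can look like in practice, the sketch below aggregates flow-style connection records into a simple talks-to map. The CSV layout and field names are assumptions made for the example; a real effort would start from whatever NetFlow, sFlow, or firewall logs the environment already produces.

```python
# Minimal sketch: build a "who talks to whom" map from flow-style
# records so communication paths can be reviewed before writing
# segmentation or security policy. The CSV layout (src,dst,dst_port,bytes)
# is an assumed example format, not a specific product's export.
import csv
from collections import defaultdict

def load_flows(path):
    """Yield (src, dst, port, bytes) tuples from a simple CSV export."""
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            yield row["src"], row["dst"], int(row["dst_port"]), int(row["bytes"])

def talks_to_map(flows):
    """Aggregate total bytes per (source, destination, port) conversation."""
    conversations = defaultdict(int)
    for src, dst, port, nbytes in flows:
        conversations[(src, dst, port)] += nbytes
    return conversations

if __name__ == "__main__":
    convs = talks_to_map(load_flows("flows.csv"))  # assumed input file
    # Heaviest conversations first; unexpected pairs are the ones to question.
    for (src, dst, port), nbytes in sorted(convs.items(), key=lambda kv: -kv[1])[:20]:
        print(f"{src:>15} -> {dst:>15} :{port:<5} {nbytes:,} bytes")
```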

... What we focus on is really developing a good strategy first. Then, we define the requirements that go along with business strategy, perform analysis work against the current situation and the future state requirements, and then develop the solutions specific for the client's particular situation, utilizing perhaps a mix of products and technologies.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Special Offer: Gain insight into best practices for transforming your data center by downloading three new data center transformation whitepapers from HP at www.hp.com/go/dctpodcastwhitepapers.

Survey says slow, kludgy business processes hamper competitiveness

Corporations, are your business processes slowing you down? If so, you are in good company. Seventy-two percent of organizations say their business processes take too long and need to be streamlined.

So says a new independent survey conducted by Vanson Bourne for Progress Software.

The survey had a single goal: to determine the tools and processes large companies have put in place to support operational responsiveness and the ability to make "real-time" decisions. Vanson Bourne surveyed 400 large companies in the United States and Western Europe to develop its findings.

The bottom line: An overwhelming majority of businesses still feel they have a ways to go before they are equipped to respond to market or customer changes quickly enough to compete well in a global marketplace.

“The quest for faster operational responsiveness is becoming more urgent now that external factors such as social networking have boosted speed of response,” says Dr. Giles Nelson, senior director of strategy at the Apama division of Progress Software. “If organizations can’t keep up with the pace of customer feedback, they will find themselves exposed to competitive threats.”

I recently reached a similar conclusion in a podcast discussion with IT analyst Howard Dresner, with an emphasis on business intelligence (BI) in the stew of real-time requirements. Other firms I've worked with, such as Active Endpoints and BP Logix, call the value "nimble," or the ability to quickly orchestrate and adapt processes.

[UPDATE: TIBCO today delivered its iProcess Spotfire product for real-time BI aligned to business process management.]

Sure is a lot of emphasis on real-time data, analysis and process reactivity nowadays! No process like the present, I always say. [Disclosure: TIBCO and Progress are sponsors of BriefingsDirect podcasts.]

On average, 22 percent of U.S. companies surveyed by Vanson Bourne admitted that, by the time they noticed it, they had missed the opportunity to react competitively to a change or trend affecting one of their processes. A lack of information seems to be fueling the problem. More than half of companies identified information gaps in decision-making as a cause.

The good news is that surveyed companies have solutions to the information gap in mind, namely access to real-time data. Ninety-four percent of companies cited the importance of real-time data – and the majority of those companies are making moves to gather it. Some 82 percent are planning to invest in real-time technology by mid-2010 in an effort to speed up internal processes, they said.

As Nelson at Apama sees it, bad news now travels very quickly – and companies need to make sure they’re not stuck in the slow lane when it comes to responding to customer issues.

“The overwhelming majority of people we spoke to recognize the importance of responding quickly to customers and to be much more responsive to changes in market conditions. Unfortunately, in most cases at present the process and information reporting infrastructure can’t match that vision,” Nelson says. “Business Event Processing is becoming the way of dealing with this decision-making lag.”

I'd add a bit more. What we're actually seeing is that corporations now see that they must be able to analyze and act in Internet time. Many of us webby and social-media types have known that for some time, but the urgency has now hit the mainstream bricks (not just the clicks).

Furthermore, the payoffs from becoming a real-time-oriented organization will go far beyond knowing what's being said about you on Twitter. As the economy has shown in the last year, those who can move fast and move well will survive and thrive. The others will find themselves in a downward spiral.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post.

Monday, October 5, 2009

HP roadmap dramatically reduces energy consumption across data centers

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

P
roducing meaningful, long-term energy savings in IT operations depends on a strategic planning and execution process.

The goal is to seek out long-term gains from prudent, short-term investments, whenever possible. It makes little sense to invest piecemeal in areas that offer poor returns, when a careful cost-benefit analysis for each specific enterprise can identify the true wellsprings of IT energy conservation.

The latest BriefingsDirect podcast discussion therefore targets significantly reducing energy consumption across data centers strategically. In it we examine four major areas that result in the most energy policy bang for the buck -- virtualization, application modernization, data-center infrastructure best practices, and properly planning and building out new data-center facilities.

By focusing on these major areas, but with a strict appreciation of the current and preceding IT patterns and specific requirements for each data center, real energy savings -- and productivity gains -- are in the offing.

To help learn more about significantly reducing energy consumption across data centers, we welcome two experts from HP: John Bennett, worldwide director, Data Center Transformation Solutions, and Ian Jagger, worldwide marketing manager for Data Center Services. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: We, as an industry, are full of advice around best practices for what people should be taking a look at. We provide these wonderful lists of things that they should pay attention to -- things like hot and cold aisles, running your data center hotter, and modernizing your infrastructure, consolidating it, virtualizing it, and things of that ilk.

The mistake that customers make is that they have this laundry list and, without any further insight into what will matter the most to them, they start implementing these things.

The real opportunity is to take a step back and assess the return from any one of these individual best practices. Which one should I do first and why? What's the technology case and what's the business case for them? That's an area that people seem to really struggle with.

... We know very well that modern infrastructure, modern servers, modern storage, and modern networking items are much more energy efficient than their predecessors from even two or three years ago.

... If we look at the total energy picture and the infrastructure itself -- in particular, the server and storage environment -- one of the fundamental objectives for virtualization is to dramatically increase the utilization of the assets you have.

With x86 servers, we see utilization rates typically in the 10 percent range. So, while there are a lot of interesting benefits that come from virtualization from an energy efficiency point of view, we're basically eliminating the need for a lot of server units by making much better use of a smaller number of units.

So, consolidation and modernization, which reduces the number of units you have, and then multiplying that with virtualization, can result in significant decreases in server and storage-unit counts, which goes a long way toward affecting energy consumption from an infrastructure point of view.

That can be augmented, by the way, by doing application modernization, so you can eliminate legacy systems and infrastructure and move some of those services to a shared infrastructure as well.

We're talking about collapsing infrastructure requirements by factors of 5, 6, or 10. You're going from 10 or 20 old servers to perhaps a couple of servers running much more efficiently. And, with modernization at play, you can actually increase that multiplication.
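
To put rough numbers on that consolidation arithmetic, here is a back-of-the-envelope sketch. The utilization, capacity, and wattage figures are illustrative assumptions, not HP measurements; the point is that raising utilization lets a couple of newer servers absorb the work of many lightly loaded older ones.

```python
# Back-of-the-envelope server consolidation estimate. All inputs are
# illustrative assumptions, not measured HP or customer figures.
import math

legacy_servers = 20          # older, dedicated x86 servers
legacy_utilization = 0.10    # ~10 percent average utilization, as cited above
legacy_watts = 400           # assumed average draw per legacy server

new_utilization = 0.60       # target utilization after virtualization
new_capacity_factor = 2.0    # assume one new server does ~2x the work
new_watts = 450              # assumed average draw per new server

# Work actually being done, expressed in "fully busy legacy server" units.
useful_work = legacy_servers * legacy_utilization

# New servers needed to carry that work at the target utilization.
new_servers = math.ceil(useful_work / (new_utilization * new_capacity_factor))

legacy_power_kw = legacy_servers * legacy_watts / 1000
new_power_kw = new_servers * new_watts / 1000

print(f"Legacy estate: {legacy_servers} servers, ~{legacy_power_kw:.1f} kW")
print(f"Consolidated:  {new_servers} servers, ~{new_power_kw:.1f} kW")
print(f"Server power reduced by {1 - new_power_kw / legacy_power_kw:.0%}")
```

With these assumed inputs, 20 lightly used servers collapse to two, in line with the factor-of-10 consolidation described above.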

These are very significant from a server point of view. On the storage side, you're eliminating the need for sparsely used dedicated storage and moving to a shared, or virtualized, storage environment, with the same kind of cost-saving ratios at play here. So, it's a profound impact in the infrastructure environment.

Jagger: Going back to the original point that John made, we have had the tendency in the past to look at cooling or energy efficiency coming from the technology side of the business and the industry. More recently, thankfully, we are tending to look at that in a more converged view between IT technology, the facility itself, and the interplay between the two.

... Each customer has a different situation from the next, depending on how the infrastructure is laid out, the age of the data center, and even the climatic location of the data center. All of these have enormous impact on the customer's individual situation.

... If we're looking, for example, at the situation where a customer needs a new data center, then it makes sense for that customer to look at all the cases put together -- application modernization, virtualization, and also data center design itself.

Here is where it all stands to converge from an energy perspective. Data centers are expensive things to build, without doubt. Everyone recognizes that and everybody looks at ways not to build a new data center. But, the point is that a data center is there to run applications that drive business value for the company itself.

What we don't do a good job of is understanding those applications in the application catalog and the relative importance of each in terms of priority and availability. What we tend to do is treat them all with the same level of availability. That is just inherent in terms of how the industry has grown up in the last 20-30 years or so. Availability is king. Well, energy has challenged that kingship if you like, and so it is open to question.

Now, you could look at designing a facility where you have within the facility specific PODs (groups of compute resources) that would be designed according to the application catalog's availability and priority requirements, tone down the cooling infrastructure that is responsible for those particular areas, and just retain specific PODs for those that do require the highest levels of availability.

Just by doing that, converging the facility design with application modernization, you take millions and millions of dollars out of data center construction costs, and of course out of the ongoing operating costs derived from burning energy to cool it at the end of the day.

... One of the smartest things you can actually do as a business, as an IT manager, is to actually go and talk to your utility company and ask them what rebates are available for energy savings. They typically will offer you ways of addressing how you can improve your energy efficiency within the data center.

That is a great starting point, where your energy becomes measurable. Taking action on reducing your energy not only cuts your operating cost, but actually allows you to get rebates from your energy company at the same time. It's a no-brainer.

Bennett: What we are advising customers to do is take a more complete view of the resources and assets that go into delivering business services to the company.

It's not just the applications and the portfolio. ... It's the data center facilities themselves and how they are optimized for this purpose -- both from a data center perspective and from the facility-as-a-building perspective.

By considering them comprehensively and working with the facilities team, as well as the IT teams, you can actually deliver a lot of incremental value -- and a lot of significant savings -- to the organization.

... For customers who are very explicitly concerned about energy and how to reduce their energy cost and energy consumption, we have an Energy Analysis Assessment service. It's a great way to get started to determine which of the best practices will have the highest impact on you personally, and to allow you to do the cherry-picking.

For customers who are looking at things a little more comprehensively, energy analysis and energy efficiency are two aspects of a data-center transformation process. We have a data center transformation workshop.

Jagger: The premise here is to understand possible savings or the possible efficiency available to you through forensic analysis and modeling. That has got to be the starting point, and then understanding the costs of building that efficiency.

Then, you need a plan that shows those costs and savings and the priorities in terms of structure and infrastructure, have that work in a converged way with IT, and of course the payback on the investment that's required to build it in the first place.
Gain more insights into data center transformation best practices by downloading free whitepapers at http://www.hp.com/go/dctpodcastwhitepapers.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Learn more. Sponsor: Hewlett-Packard.

Part 2 of 4: Web data services provide ease of data access and distribution from a variety of sources, destinations

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Kapow Technologies.

As enterprises seek to gain better insights into their markets, processes, and business development opportunities, they face a daunting challenge -- how to identify, gather, cleanse, and manage all of the relevant data and content being generated across the Web.

As the recession forces the need to identify and evaluate new revenue sources, businesses need to capture such web data services for business intelligence (BI) to work better and more fully. In Part 1 of our web data series we discussed how external data has grown in both volume and importance across the Internet, social networks, portals, and applications in recent years.

Enterprises need to know what's going on in their markets and what's being said about them across those markets. They need to share those web data service inferences quickly and easily across their internal users. The more relevant and useful content that enters into BI tools, the more powerful the BI outcomes -- especially as we look outside the enterprise for fast-shifting trends and business opportunities.

In this podcast, Part 2 of the series with Kapow Technologies, we identify how BI and web data services come together, and explore such additional subjects as text analytics and cloud computing. So, how do you get started, and how do you affordably manage web data services so that BI and business consumers gain the intelligence and insights they need?

To find out, we brought together Jim Kobielus, senior analyst at Forrester Research, and Stefan Andreasen, co-founder and chief technology officer at Kapow Technologies. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Kobielus: The more relevant content you bring into your analytic environment the better, in terms of having a single view or access in a unified fashion to all the information that might be relevant to any possible decision you might make. But, clearly, there are lots of caveats, "gotchas," and trade-offs there.

One of these is that it becomes very expensive to discover, to capture, and to do all the relevant transformation, cleansing, storage, and delivery of all of that content. It becomes very expensive, especially as you bring more unstructured information from your content management system (CMS) or various applications from desktops and from social networks.

... Filtering the fire hose of this content is where this topic of web data services for BI comes in. Web data services describes that end-to-end analytic information pipelining process. It's really a fire hose that you filter at various points, so that when end users turn on their tap they're not blown away by a massive stream. Rather, it's a stream of liquid intelligence that is palatable and consumable.

Andreasen: There is a fire hose of data out there. Some of that data is flowing easily, but some of it might only be dripping and some might be inaccessible.

Think about it this way. The relevant data for your BI applications is located in various places. One is in your internal business applications. Another is your software-as-a-service (SaaS) business application, like Salesforce, etc. Others are at your business partners, your retailers, or your suppliers. Another one is at government. The last one is on the World Wide Web in those tens of millions of applications and data sources.

Accessible via browser

Today, all of this data that I just described is more or less accessible in a web browser. Web data services allow you to access all these data sources, using the interface that the web browser is already using. It delivers that result in a real-time, relative, and relevant way into SQL databases, directly into BI tools, or even as service-enabled and encapsulated data. It delivers the benefit that IT can now better serve the analysts' need for new data, which is almost always the case.

What's even more important is the incremental daily improvement of existing reports. Analysts sit there, they find some new data source, and they say, "It would be really good if I could add this column of data to my report, maybe replace this data, or if I could get this amount of data in real-time rather than just once a week." So it's those kinds of improvements that web data services can also really help with.

Kobielus: At Forrester, we see traditional BI as a basic analytics environment, with ad-hoc query, OLAP, and the like. That's traditional BI -- it's the core of pretty much every enterprise's environment.

Advanced analytics -- building on that initial investment and getting to this notion of an incremental add-on environment -- is really where a lot of established BI users are going. Advanced analytics means building on those core reporting, querying, and those other features with such tools as data mining and text analytics, but also complex event processing (CEP) with a front-end interactive visualization layer that often enables mashups of their own views by the end users.

... We see a strong push in the industry toward smashing those silos and bringing them all together. A big driver of that trend is that users, the enterprises, are demanding unified access to market intelligence and customer intelligence that's bubbling up from this massive Web 2.0 infrastructure, social networks, blogs, Twitter and the like.

Andreasen: Traditionally, for BI, we've been trying to gather all the data into one unified, centralized repository and access the data from there. But, the world is getting more diverse and the data is spread across more and different silos. What companies realize today is that we need to get service-level access to the data where it resides, rather than trying to assemble it all.

...Web data services can encapsulate or wrap the data silos that were residing with their business partners into services -- SOAP services, REST services, etc. -- and thereby get automated access to the data directly into the BI tool.

... So, tomorrow's data stores for BI, and today's as well, is really a combination of accessing data in your central data repositories and then accessing them where they reside. ... Think about it. I'm an analyst and I work with the data. I feel I own the data. I type the data in. Then, when I need it in my report, I cannot get it there. It's like owning the house, but not having the key to the house. So, breaking down this barrier and giving them the key to the house, or actually giving IT a way to deliver the key to the house, is critical for the agility of BI going forward.

Tools are lacking

Today, the IT department often lacks tools to deliver those custom feeds that the line of business is asking for. But, with web data services, you can actually deliver these feeds. The data that the business is asking for is almost always data they already know, see, and work with in the business applications, with the business partners, etc. They work with the data. They see it in the browsers, but they cannot get the custom feeds. With the web data services product, IT can deliver those custom feeds in a very short time.
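
To illustrate the kind of custom feed being described here, the following is a minimal hand-rolled sketch, not Kapow's visual, no-code product: it pulls a browser-visible HTML table off a page and republishes it as CSV that a BI tool or staging database can ingest. The URL and table layout are hypothetical placeholders.

```python
# Minimal sketch of a hand-rolled "web data service": scrape a
# browser-visible HTML table and republish it as a CSV feed for a BI
# tool. The URL and table layout are hypothetical placeholders;
# Kapow's actual product does this visually, without code.
import csv
import sys

import requests                # third-party: pip install requests
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

SOURCE_URL = "https://example.com/partner-portal/price-list"  # placeholder

def fetch_rows(url):
    """Return the first HTML table on the page as a list of row lists."""
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    table = soup.find("table")
    rows = []
    for tr in table.find_all("tr"):
        cells = [cell.get_text(strip=True) for cell in tr.find_all(["th", "td"])]
        if cells:
            rows.append(cells)
    return rows

if __name__ == "__main__":
    writer = csv.writer(sys.stdout)
    for row in fetch_rows(SOURCE_URL):
        writer.writerow(row)  # redirect to a file or load into a staging table
```

Scheduled to run daily, the same rows could be loaded into a SQL staging table, which is essentially the custom feed the line of business is asking for.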

Kobielus: The user feels frustration, because they go on the Web and into Google and can see the whole universe of information that's out there. So, for a mashup vision to be reality, organizations have got to go the next step.

... It's good to have these pre-configured connections through extract, transform and load (ETL) and the like into their data warehouse from various sources. But, there should also be ideally feeds in from various data aggregators. There are many commercial data aggregators out there who can provide discovery of a much broader range of data types -- financial, regulatory, and what not.

Also, within this ideal environment there should be user-driven source discovery through search, through pub-sub, and a variety of means. If all these source-discovery capabilities are provided in a unified environment with common tooling and interfaces, and are all feeding information and allowing users to dynamically update the information sets available to them in real-time, then that's the nirvana.

Andreasen: This is where Kapow and web data services come in, as a disruptive new way of solving a problem of delivering the data -- the real-time relevant data that the analyst needs.

The way it works is that, when you work with the data in a browser, you see it visually, you click on it, and you navigate tables and so on. The way our product works is that it allows you to instruct our system how to interact with a web application, just the same way as the line of business user.

...The beauty with web data services is that it's really accessing the data through the application front end, using credentials and encryptions that are already in place and approved. You're using the existing security mechanism to access the data, rather than opening up new security holes, with all the risk that that includes.

... This means that you access and work with the data in the world in which the end users see the data. It's all with no coding. It's all visual, all point and click. Any IT person can, with our product, turn data that you see in a browser into a real feed, a custom feed, virtually in minutes or in a few hours for something that would typically take days, weeks, or months -- or may even be impossible.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Kapow Technologies.

Thursday, October 1, 2009

Cloud computing by industry: Novel ways to collaborate via extended business processes

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.

Welcome to a podcast discussion on how to make the most of cloud computing for innovative solving of industry-level problems. As enterprises seek to exploit cloud computing, business leaders are focused on new productivity benefits. Yet, the IT folks need to focus on the technology in order to propel those business solutions forward.

As enterprises confront cloud computing, they want to know what's going to enable new and potentially revolutionary business outcomes. How will business process innovation -- necessitated by the reset economy -- gain from using cloud-based services, models, and solutions?

Early examples of applying cloud to industry challenges, such as the recent GS1 Canada Food Recall Initiative, show that doing things in new ways can have huge payoffs.

We'll learn about the HP Cloud Product Recall Platform that provides the underlying infrastructure for the GS1 Canada food recall solution, and we will dig deeper into what cloud computing means for companies in the manufacturing and distribution industries and the "new era" of Moore's Law.

Here to help explain the benefits of cloud computing and vertical business transformation, we're joined by Mick Keyes, senior architect in the HP Chief Technology Office; Rebecca Lawson, director of Worldwide Cloud Marketing at HP; and Chris Coughlan, director of HP's Track and Trace Cloud Competency Center. The discussion is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Lawson: Everyone knows that "cloud" is a word that tends to get hugely overused. We try to think about what kinds of problems our customers are trying to solve, and what are some new technologies that are here now, or that are coming down the pike, to help them solve problems that currently can't be solved with traditional business processing approaches.

Rather than the cloud being about just reducing costs, by moving workloads to somebody else's virtual machine, we take a customer point of view -- in this case, manufacturing -- to say, "What are the problems that manufacturers have that can't be solved by traditional supply chain or business processing the way that we know it today, with all the integration that implies?"

As we move forward, we see that different vertical markets -- for example, manufacturing or pharmaceuticals -- will start to have ecosystems evolve around them. These ecosystems will be a place -- or a dynamic -- with technology-enabled cloud services that are accessible and sharable and that support collaboration and sharing across the different constituents in that vertical market.

We think that, just as social networks have helped us all connect on a personal level with friends from the past and such, vertical ecosystems will serve business interests across large bodies of companies, organizations, or constituents, so that they can start to share, collaborate, and solve different kinds of issues that are germane to that industry.

A great example of that is what we're doing with the manufacturing industry around our collaboration with GS1, where we are solving problems related to traceability and recall.

Keyes: If you look at supply chains, food is a good example. It's one of the more complicated ones, actually. You can have anywhere from 15 to 20 different entities involved in a supply chain.

In reality, you've got a farmer out there growing some food. When he harvests that food, he's got to move it to different manufacturers, processors, wholesalers, transportation, and to retail, before it finally gets to the actual consumer itself. There is a lot of data being gathered at each stage of that supply chain.

Coughlan: As a consumer, it gives you a lot more confidence that the health and safety issues are being dealt with, because, in some cases, this is a life and death situation. The sooner you solve the problem, the sooner everybody knows about it. You have a better opportunity of potentially saving lives.

So we really look at it from a positive view also, about how this is creating benefits from a business point of view.

As well as that, you're looking at brand protection and you're also looking at removing from the supply chain things that could have further knock-on effects as well.

Keyes: In the traditional way of looking at traceability in that supply chain, they would have the infamous -- as I would call it -- "one step up, one step down" exchange of data, which really meant that each entity in the supply chain exchanged information only with the next one in line.

That's fine, but it's costly. Also, it doesn't allow for good visibility into the total supply chain, which is what the end goal actually is.

What we are saying to industry at the moment -- and this is the thesis we are actually developing here -- is that HP, with a cloud platform, will provide the hub, where people can either send data or allow us to access data. What the cloud will do is aggregate different pieces of information to provide value to all elements of the supply chain and give greater visibility into the supply chain itself.

... We have SaaS now, not just to any individual entity in the supply chain, but anybody who subscribes to our hub. We can aggregate all the information, and we're able to give them back very valuable information on how their product is used further up the supply chain.
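Keyes is describing an aggregation architecture: each participant either pushes its own slice of data to a shared hub or lets the hub pull it, and the hub then answers trace and recall queries across the whole chain. The minimal sketch below illustrates that idea; the TraceabilityHub class, the event fields (GTIN, lot, stage), and the sample parties are illustrative assumptions, not HP's or GS1's actual data model.

```python
from collections import defaultdict

class TraceabilityHub:
    """Hypothetical cloud hub that aggregates supply chain events from all
    participants and answers trace/recall queries across the whole chain."""

    def __init__(self):
        self.events_by_lot = defaultdict(list)  # (gtin, lot) -> ordered events

    def publish(self, gtin, lot, stage, party, detail=""):
        """Any subscriber -- farmer, processor, wholesaler, retailer --
        sends its own 'one step' of data to the hub."""
        self.events_by_lot[(gtin, lot)].append(
            {"stage": stage, "party": party, "detail": detail}
        )

    def trace(self, gtin, lot):
        """Recall query: return every stage the lot passed through,
        giving end-to-end visibility instead of 'one step up, one step down'."""
        return self.events_by_lot.get((gtin, lot), [])

# Example: four independent parties report their own step for the same lot.
hub = TraceabilityHub()
hub.publish("00614141000012", "LOT42", "harvest",    "Green Acres Farm")
hub.publish("00614141000012", "LOT42", "processing", "FreshPack Inc.")
hub.publish("00614141000012", "LOT42", "wholesale",  "NorthStar Foods")
hub.publish("00614141000012", "LOT42", "retail",     "Corner Grocer")

# In a recall, one query shows everywhere the affected lot has been.
for event in hub.trace("00614141000012", "LOT42"):
    print(event["stage"], "-", event["party"])
```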

So, depending on what type of industry you're in, we're looking at this platform as being almost a repeatable type of offering, and you can start to lay out individual or specific industry services around this.

We're also looking at how you integrate this into the whole social-networking arena, because that's information and data out there. People are looking to consume information, or to get involved in information sharing to a certain degree. We see that as a component we can also perhaps do some business intelligence (BI) around, to be able to offer information to industry, consumers, and the regulatory bodies fairly quickly.

Coughlan: The point there is that cloud is enabling a convergence between enterprises. It's enabling enterprise collaboration, first of all, and then it's going one step further, where it's enabling the convergence of that enterprise collaboration with Web 2.0.

You can overlay a whole pile of things -- carbon footprints, dietary information, and ethical food. Not only is it going to be in the food area, as we said. It's going to be along every manufacturing supply chain -- pharmaceuticals, the motor industry, or whatever.

Lawson: The key to this is that this technology is not causing the manufacturers to do a lot of work. ... It's not a lot of effort on my part to participate in the benefits of being in that traceability and recall ecosystem, because I and all the other people along that supply chain are all contributing the relevant data that we already have. That's going to serve a greater whole, and we can all tap into that data as well.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. View a full transcript or download the transcript. Sponsor: Hewlett-Packard.

Free Offer: Get a complimentary copy of the new book Cloud Computing For Dummies courtesy of Hewlett-Packard at www.hp.com/go/cloudpodcastoffer.