Thursday, May 13, 2010

Just-in-Time Resourcing provides strategic and productive visibility into professional services staffing decisions

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Compuware.

For more information on resource utilization, read RTM's whitepaper "The ROI of Resource Utilization -- Measuring and Capturing the Real Business Value of Your People."

Learn more about Compuware Changepoint.

The latest BriefingsDirect enterprise technology update discussion focuses on how technology suppliers can get the most from resource utilization and management in the global services economy.

Increasingly, sellers of IT are finding it harder to win large software and hardware capital purchase contracts, which traditionally followed three- to seven-year obsolescence and refresh cycles. The shifts in technology and business models accelerated by the recession are forcing these vendors in particular to adopt more of a professional services revenue model.

Buyers of technology, on the other hand, are moving to IT shared services and software-as-a-service (SaaS) models to get off of the capital outlays roller coaster. They want smoother and more predictable operating and charging models, beginning with long-term professional services and outsourcing engagements.

Both the buyers and sellers of services therefore need to focus on the implementation and integration of solutions, placing a complex burden on the services delivery personnel themselves, as well as on those who manage the services providers.

We’re here to find out some new, best ways of managing and automating these intellectual resources that support the professional services lifecycle. We’ll see how recent research shows that more of a just-in-time (JIT) methodology is required to keep the skills in balance with myriad project requirements and obligations.

To learn more about resource utilization and management in the global services economy, we're joined by Lori Ellsworth, Vice President of Changepoint Solutions at Compuware, the sponsor of this podcast, and by Mark Sloan, Chief Operating Officer of RTM Consulting. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Ellsworth: The focus on professional services is moving from something that was nice to have to something that is necessary to have in order to be successful.

Software companies are a great example. Historically, companies in that sector may have done mostly product business and less service. Services are now necessary to deliver success, and the services business is a very healthy part of the software business, contributing significantly to the bottom line.

Now, organizations have to understand how to get a handle on the people they have working for them, how best to utilize them, and how to make sure that their employees, those assets, are challenged and happy, while still delivering services that provide value to customers.

There needs to be more discipline, more information, and a better process for decision-making and forward planning, so that the organization can scale and scale in a financially successful way.

So, the stakes are higher, in terms of the discipline and the approach that we need to take to manage that professional services part of the business.

Sloan: At RTM Consulting, one of our core areas of focus is in this area of resource management. How can you get the right person in the right place at the right time and drive up utilization, but at the same time, make sure that you're delivering value to your end customers and leaving them satisfied and coming back for more?

When a software company shows up with its professional services arm, the client is expecting that each and every one of the people who show up is an expert in the software, the technology, and the implementation process. The days of people learning on the job and coming up to speed are long gone.

The challenge today is for companies to get visibility into the type of work that’s coming down the pike, so that they can proactively train their internal resources and be prepared for that work, so that when they do show up, they are the experts.

We’ve actually taken the principles of JIT manufacturing and directed them to the professional services organization [via new service definitions of JIT].

Just as, 30 years ago, any manufacturing company had big inventories of supplies and finished products sitting in its warehouse, 10 or 15 years ago the big services organizations were able to keep excess resources on the bench, in the office, waiting for that next project to arrive.

What we’ve done is take those same principles -- forecasting what the future scenarios look like and what the demands look like -- and translate that back into how many resources you are going to need, the types of resources, and the skills those resources need to have.

You can, at that right moment, bring on a new employee, go to a third-party contractor to fulfill that demand, or give yourself enough advanced notice to cross-train your existing resources on new technologies, new products, so that they can work across your portfolio and not just focus on one particular area.
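To make the arithmetic concrete -- this is a hypothetical sketch, not RTM's actual JIT Resourcing methodology -- the core of that forecast-to-staffing translation can be expressed in a few lines of Python. The project pipeline, skill names, and staffing rules below are invented for illustration:

    # Hypothetical sketch: translate a project forecast into skill gaps
    # and staffing actions (hire, contract, or cross-train).
    from collections import Counter

    # Forecast demand: projected projects and the skills each will need.
    pipeline = [
        {"project": "ERP rollout", "skills": {"erp_config": 3, "integration": 2}},
        {"project": "CRM upgrade", "skills": {"crm": 2, "integration": 1}},
    ]

    # Current supply: available staff by primary skill.
    bench = Counter({"erp_config": 2, "integration": 2})

    demand = Counter()
    for proj in pipeline:
        demand.update(proj["skills"])

    for skill, needed in demand.items():
        gap = needed - bench[skill]          # Counter returns 0 for missing skills
        if gap == 1:
            action = f"cross-train one resource on {skill}"
        elif gap > 1:
            action = f"hire or contract {gap} x {skill}"
        else:
            continue
        print(f"{skill}: need {needed}, have {bench[skill]} -> {action}")

Run far enough in advance, the same gap report is what buys you the time to choose cross-training over hiring.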

Getting to the solution

Ellsworth: There are four critical success factors, but also the building-block approach. In other words, you need to start with the fundamentals. You need to understand your people and their skills and get that view of your business. Then, you can start to add levels of maturity, look at forecasting, look at different models for resource allocation, and bring in project management.

As organizations start to put the building blocks in place, and adopt the disciplines and build the processes that work in their business, [they can have trouble] scaling that.

You can make that work within a small team or across a couple of small teams, but ... you need visibility ... to scale that to your entire services organization, including management. [But] you can't scale and reinforce that discipline without automation.

The two really have to go together. One won’t be successful without the other in a large professional services organization. Automation brings the scale factor.

The ability to measure and monitor is something that Mark also highlights as a critical success factor. Again, you’ve got a large group of people with a lot of activity going on. There's lots of data, but you have to roll that up to the management level to make it valuable in helping drive decisions in the business.

... Our focus has been on driving that view as a professional services organization, but importantly driving that view inside the context of the broader company.

It starts with those building blocks around who your resources are, what their capabilities are, and where they're being utilized. It brings you to the next level of maturity in terms of being able to look at forecasts and do some demand and capacity planning.

And then it goes even further from a resource perspective to that professional development side. Let's look at the gaps in the next six to nine months. Where can we identify resources and put them on a development plan to fill those gaps?

We're managing the day-to-day business of a professional services organization and going beyond that to deal with project management, engagement management, and right through to billing -- both for professional services organizations and for technology companies that also have a strong product side to their business.

The Changepoint solution has been active and working with customers in their professional services organizations for many years, going back to the late 1990s. We also deliver a project portfolio management capability to allow them to manage products and manage delivery of those product applications.

Sloan: The paybacks can be, and are, significant. First and foremost, is really speed to revenue and cash flow. Lori mentioned that doing this in a large services organization is critical and an enabling technology is required to make that happen.

I’d argue the same for small professional services organizations. With the information that tools like Changepoint put at your fingertips, you can quickly identify people in your organization who have the right skills -- people who, off the top of your head, you might not think of -- and staff projects quickly with the appropriate resources, ultimately enabling you to get to that revenue.

Billable utilization

Secondly, you start to see a significant lift in overall billable utilization. This is for the professional services organization. Again, by getting better visibility into the skills that different resources have, you realize you have many more people in the organization who can do the work than you might think.

For more information on resource utilization, read RTM's whitepaper "The ROI of Resource Utilization -- Measuring and Capturing the Real Business Value of Your People."

Learn more about Compuware Changepoint.

Other research points to the fact that companies that do this development of staff and get projects started on time are significantly more likely to finish their projects on budget and on time, and to drive significantly positive customer satisfaction.

Companies that aren’t able to do this -- that take an extra five, 10, or 15 days to fill some of the slots on a project -- tend to go over budget, don’t get it done on time, and, as a result, have poor customer satisfaction. If you think about it, it's back to that mantra, "Do it right the first time." This process helps you do that.

Ellsworth: As you're adding discipline and increasing maturity, you get participation from the practitioners, if you can position the value to them in terms of increased opportunity or an ability to better manage their schedules and not be burnt out. They have access to different opportunities. It's very valuable and can help them actively participate in moving the business forward, rather than fight against it.

A broader pool of resources is there to help you respond to customers, which just increases the need to understand who those resources are and what they can bring to the table to support these services.

Customers of mine in Europe, for example, report that on a year-over-year basis they are able to reduce non-productive time -- and therefore the cost of that non-productive time -- by 16 percent.

Other customers will articulate the value of this entire solution in terms of revenue increase, the focus of getting control over their resources, who they have and how they can most effectively deploy them. Another customer of mine in Europe talks about a 30 percent increase in revenue, linked directly to implementing some of these practices in getting that control over their resources.
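For a sense of the arithmetic behind numbers like these, here's a rough, hypothetical back-of-the-envelope sketch in Python; the headcount, rate, and starting utilization are invented for illustration, not drawn from the customers cited:

    # Hypothetical: revenue impact of cutting non-productive time by 16 percent.
    consultants = 100
    capacity_hours = 2000                   # hours per consultant per year, assumed
    rate = 150.0                            # assumed blended rate, USD/hour

    def annual_revenue(utilization):
        return consultants * capacity_hours * utilization * rate

    util_before = 0.60                      # 60 percent billable, assumed
    non_productive = 1 - util_before
    util_after = 1 - non_productive * (1 - 0.16)   # non-productive time cut 16 percent

    gain = annual_revenue(util_after) - annual_revenue(util_before)
    print(f"Utilization: {util_before:.0%} -> {util_after:.1%}")
    print(f"Added annual revenue: ${gain:,.0f}")   # about $1.9M on these assumptions

Even a modest utilization lift compounds quickly across a large bench, which is why revenue-linked claims of this kind are plausible.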

Sloan: The same lessons apply to shared services organizations, such as large internal IT departments managing multiple projects per year to deploy technology.

They can leverage the technology that Changepoint offers to keep track of the people, where they are deployed, what skills they have, what new projects are coming in, and achieve a similar increase in productive utilization of those resources. But to your point, in terms of creative organizations, this would apply to any organization that is focused on moving people with particular skill sets to a unique project.

That includes engineering services organizations and creative agencies that are moving talent from one project to the next -- anyone who relies on specific skills and knowledge that aren’t easily interchangeable. This helps forecast where you can get the biggest bang for the buck with those people.

In terms of getting started, when we typically work with clients, we come in and do a quick assess and architect phase where we’ll take a look at how resource management is being done today, compare that to the best practices that we’ve defined for JIT Resourcing, and identify areas where you are strong and areas where there is an opportunity for change and improvement. When we architect a solution for clients, it’s a unique solution taking into account the various constraints and the environment of that client.

JIT Resourcing is a defined approach, but we recognize that there are unique aspects to every business, and we can tailor the solution to fit.

By deploying these processes now, you can start the continuous improvement that’s needed and be ready as more and more of your clients go to SaaS, where you have to deploy people at a moment’s notice.

You're going to get much better at predicting and forecasting what your future needs are, enabling you to align your resources and capabilities accordingly. You want to achieve the benefits we talked about -- speed to revenue, speed to cash-flow, and zero idle resources.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Compuware.

For more information on resource utilization, read RTM's whitepaper "The ROI of Resource Utilization -- Measuring and Capturing the Real Business Value of Your People."

Learn more about Compuware Changepoint.

Wednesday, May 12, 2010

SAP buys Sybase, gets back in the race

The torrent of major IT acquisitions notched another milestone today when German business applications powerhouse SAP announced plans to buy fast-growing database and mobility vendor Sybase of California for $5.8 billion.

The news comes as the IT vendor space is witnessing an historic consolidation, via both acquisitions and partnerships. From HP buying Palm, to IBM buying Cast Iron, to EMC partnering with Cisco, to Oracle absorbing Sun Microsystems, the rush is on to present a new all-in-one face to the enterprise IT buying community.

As I said in my earlier post today -- in analyzing product news from HP, IBM and TIBCO -- the receding recession has provided a catalyst for a much larger shift in how IT is done and delivered. These tier-one vendors know something big is up in IT, beyond business as usual, beyond a typical turnaround in the business cycle.

SAP and Sybase are very complementary, from the business, technology and market penetration perspectives. But the price of $65 in cash for each Sybase share by SAP -- a 44 percent premium to Sybase's average price over the past three months -- shows that this is no marriage of convenience.

It's more like a shotgun wedding, and the shotgun is being aimed by a rapidly changing IT environment that favors scale, comprehensive products and services, and global delivery capabilities. A big war chest and a yen for cloud computing don't hurt either.

SAP needed to get back in the Big Game to remain a top-tier IT vendor. Sybase fills major gaps in SAP's portfolio, and gives it an instant chance to play in the rapidly changing mobile market.

Sybase has not been ailing; it has been growing quite well, mostly from its core database and tools businesses. Sybase took a big departure a few years ago with a big swing into mobility infrastructure for enterprises. It has done well, but the stakes in the last year have grown higher as netbooks, smartphones, iPhones and iPads have made mobility the client-side growth market.

Sybase would not likely grow organically into more aspects of IT, despite its core strengths and large presence in Asia and on Wall Street. SAP gives Sybase the larger business applications and sheer global scale to enter the tier-one vendor space faster than it could alone.

But this is no slam-dunk. It's risky. SAP acquisitions have been spotty in terms of numbers, size and success. These companies are very different culturally and geographically. Sybase has a strong engineering streak, which is a good fit -- if the politics can be worked out.

The level of risk, like the price, indicates that there's a hint of desperation in the SAP-Sybase meld -- if not in terms of survival, at least in terms of grasping to deal with an IT landscape that is rapidly consolidating into a handful of mega vendors.

Now that the floodgates of M&A mania have been opened, one has to wonder what will be next for Red Hat, TIBCO, BMC, Progress Software, Novell, Citrix and the dwindling number of larger tier-two IT infrastructure vendors.

Major IT vendor offerings point to a new era of profound IT economic transformation

Gut-wrenching recessions have a way of changing things ... for people, families, and companies. They can also, perhaps like no other event, provoke change in large IT vendors like HP, IBM, TIBCO and Oracle.

Based on this week's HP announcements and last week's IBM Impact conference, two of the very largest, full-service, global IT vendors are betting -- now that the recession has, at the least, bottomed out -- that the extent of change now upon us is more than just another business cycle come full circle.

Far more, these vendors see that the recession has provided a catalyst for a much larger shift in how IT is done and delivered. It's no coincidence that the interest in cloud computing and innovative IT sourcing options, for example, peaked when the recession was at its deepest.

The idea garnering wide attention in the darkest days was not just to save money by downsizing, but also to start doing things very differently -- to truly innovate, to change the very economics of IT. But now that the worst is over, simply saving money via old IT methods, I'll wager, will prove a lot more expensive in real terms than rapidly investing in new ways of providing IT value as services.

That doesn't mean that some enterprise IT organizations won't try to go right back to business as usual. And some of the IT vendors, with their license auditors in tow, are counting on it.

It does mean that the enterprises that can actually change how they do and pay for IT in the post-recession economy may have an escalating advantage over those that do not.

Not the same old song and dance

HP this week announced the equivalent of a Swiss Army knife for IT transformation, with about as many blades and instruments as there are ways to attack the data center transformation Gordian knot. The HP services, software, and sourcing offerings are designed to guide enterprises -- from the starting points of their choosing -- through a seismic transition from cost containment to IT innovation. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Last week, IBM boldly scooped up Cast Iron Systems, a cloud-to-IT integration engine maker, and further polished its view that the way to a smarter planet is via better business processes and a deep understanding of vertical industries, automation and how IT (with professional services) can bring them together. My colleague Tony Baer at Ovum delves into IBM's recasting of the definition of business applications and acceptance of the partly cloudy future.

[UPDATE: IBM CEO Sam Palmisano outlines IBM's 2015 roadmap.]

TIBCO this week at its annual user conference delivered a dozen major announcements and stepped even more boldly into cloud models, too. TIBCO's "Enterprise 3.0" vision emphasizes the importance of real-time and massive scale processing, an integrated development-to-deployment to business process management capability, and now the option of building out an enterprise private cloud to public cloud synergy using partners like Amazon Web Services. TIBCO is also embedding BI capabilities deeply across the portfolio. [Disclosure: TIBCO is a past-sponsor of BriefingsDirect podcasts.]

Oracle, for its part, made good on its "software, hardware, complete" vision via a cameo (and somewhat buffoon-like) appearance by Chairman and CEO Larry Ellison in the debut of the movie Iron Man 2 last week. Perhaps we should expect a fist-sized "arc reactor" for database appliances in the near future? Yet Oracle is also drinking deeply from the cloud well of late, given some of its executives' recent speeches as it digests the Sun Microsystems acquisition.

The point is that these vendors know something big is up in IT, beyond business as usual. We're seeing bold moves by them all, from acquisitions to restructuring to Hollywood-delivered group-think and not-so-subliminal brand imagery.

HP tackles the IT funding conundrum

HP is looking to actually help enterprises fund these transformative times. HP's economic rationale for moving to innovation now goes beyond the need for swift and verifiable ROI in IT investments. Additionally, HP is banking on the high and painful costs of not being able to move well in dynamic markets, of incurring costs from inertia, rather than from investing for advancement.

Most urgently, IT cannot miss out in supporting businesses as they face rapid growth and savvy competitors across global markets, says HP.

More succinctly, HP's message from this week's announcements comes as a warning that going back to the old IT ways, of sliding back to the economics of expensive waste as a proxy for brittle peak reliability, risks missing the lessons of the recession.

HP is therefore taking a three-pronged approach to making adoption of innovation the new mantra of IT. The first approach finds ways to deliver self-funding projects. The second leverages modern architecture and methodologies so IT organizations can quickly and easily add new functionality, making change the constant. The third shows how to free up funds trapped in ongoing IT operations based on older IT economics.

As enterprises are faced with transformation from old to more modern IT, many are caught in an inertia of avoidance -- frozen by the complexity and scale of the task, according to new research supported by HP. What's needed is incremental change that pays for itself along the way, but which remains aligned with the strategic transformation and direction.

The HP focus on self-funding projects, therefore, includes offering qualified clients a complimentary, hands-on HP Applications Modernization Transformation Experience session that illustrates IT modernization and its benefits. The goal: By retiring legacy applications and eliminating complexity in technology environments, organizations are able to self-fund their modernization journeys.

Cost of lost opportunity

“The phrase ‘time is money’ rings true here, as 99 percent of organizations say that innovation gridlock cost them in lost time,” said Thomas E. Hogan, executive vice president of sales, marketing and strategy for HP Enterprise Business, in a release. “By breaking the innovation gridlock, organizations can regain time to market and capitalize on new opportunities.” More at www.hp.com/go/breakthegridlock2010.

According to research conducted on behalf of HP by Coleman Parkes Research:
  • Some 95 percent of business and technology executives said innovation gridlock resulted in lost opportunities for their organizations.

Together the promise of cloud, the constraints of the recession, and the quick-paced requirements of modern business agility have conspired to expose the weaknesses of plain old IT ... stack upon stack, brittle apps astride brittle apps, and rack by rack of under-utilized workloads alienated from their fit-for-purpose potential.

HP says the cost of doing nothing to transform IT is too great to ignore. IBM is transforming the very definition of business services and applications with plant-wide efficiencies in mind. TIBCO is refining software delivery that steps up to the cloud challenge. Oracle is enclosing its software in an optimized "iron" support infrastructure to improve performance to cost ratios dramatically.

All these vendors will still sell you the good old IT systems the good old ways. But they are also coming up with some big new tricks. Who will take them up on their hedge against a truly transformative IT future?

Monday, May 10, 2010

Open Group's Cloud Workgroup delivers new white paper on business ROI of cloud computing

This guest post comes courtesy of Mark Skilton of Capgemini Global Applications and The Open Group.

By Mark Skilton

The Open Group’s Cloud Work Group has published a white paper, “Building ROI from Cloud Computing," that’s getting quite a lot of positive attention about cloud-delivered business benefits.

The paper, of which I'm a contributing author, looks at various ways to measure ROI from cloud models, and includes a questionnaire as well as some useful metrics to show a long list of demonstrable business benefits from cloud adoption. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Many experts view cloud computing as a technological change brought about by the convergence of several new and existing technologies. Techies tend to like it for the following characteristics:
  • The performance is the same whether scaled for one, a hundred, or a thousand users, with consistent service-level characteristics.

  • It frees applications from being locked into devices or locations.

  • Users pay only for what they use, with no or minimal up-front investment costs.

  • The service is on-demand, able to scale up and down with near instant availability.

  • It enables access to applications and information from any access point.
But this is only half of the story. These technical characteristics can also be found in many non-disruptive IT solutions. What's also creating business buzz? The rate of change and magnitude of cost reduction and specific technical performance impact of cloud computing, that's what.

And these benefits aren’t just incremental -- they can give up to a 10-times cost-efficiency improvement.

The capacity-utilization curve

The famous graph used by Amazon Web Services illustrates the capacity versus utilization curve and has become an icon in cloud computing circles. The model illustrates the central idea around cloud-based services, enabled through an on-demand business provisioning model to meet actual usage.

This matters to business because avoiding the cost impact of over-provisioning and under-provisioning forms a core precept of cloud computing. This is in addition to the opportunity for cost, revenue, and margin advantages of business services enabled by rapid deployment of cloud services -- with low entry cost, and the potential to therefore quickly enter and exploit new markets.

Years from now, when cloud computing is seen in a historical context, the capacity versus utilization curve will be an iconic model that had the same effect as previous well-known business models.
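A toy model makes the economics of that curve concrete. In this hypothetical Python sketch (the units and numbers are invented), fixed capacity must be sized to a forecast peak -- paying for idle capacity in the troughs and losing business when demand overshoots -- while elastic capacity simply follows actual usage:

    # Toy model of the capacity-vs-utilization curve.
    demand = [40, 55, 90, 140, 70, 50, 200, 60]   # demand per period
    unit_cost = 1.0                               # cost per capacity-unit per period

    # Traditional model: a fixed block provisioned for the forecast peak.
    fixed_capacity = 150                          # forecast peak (actual peak is 200)
    fixed_cost = fixed_capacity * unit_cost * len(demand)
    unmet = sum(max(d - fixed_capacity, 0) for d in demand)

    # Cloud model: pay only for what each period actually uses.
    elastic_cost = sum(demand) * unit_cost

    print(f"Fixed: cost {fixed_cost:.0f}, unmet demand {unmet} units")   # 1200, 50
    print(f"Elastic: cost {elastic_cost:.0f}, unmet demand 0")           # 705, 0

The fixed model pays for capacity whether or not it is used, and still under-provisions when the forecast is wrong; the elastic model avoids both costs, which is exactly what the curve illustrates.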

Eight ways to cloud computing ROI

The current view of capacity and utilization is a technology provider viewpoint, and is essentially based on key performance indicators, rather than business benefit metrics.

IT capacity -- as measured by storage, CPU cycles, network bandwidth, or workload memory capacity -- forms an indicator of performance, while IT utilization -- as measured by up-time availability and volume of usage -- is an indicator of activity and usability.

But effective cost/performance ratios and levels of usage activity don’t necessarily imply proportional business benefits. They’re just indicators of business activity; in themselves, they deliver nothing more valuable than lower operating cost.

The Open Group’s new paper, however, uncovers eight business metrics that translate the indicators of the capacity-utilization curve to significant and tangible benefits to the business:
  1. The speed and rate of change of cost reduction and cost of adoption/de-adoption is faster in cloud models, creating additional cost transformation benefits.

  2. Optimal total cost of ownership, where you can select, design, configure and run infrastructure and applications best-suited for business needs. Traditionally these may be decoupled as IT projects hand off to production services -- but in cloud environments they can be joined up.

  3. Rapid provisioning scales up and down to follow business activity as it expands and grows, shrinking the provisioning time from weeks to hours.

  4. Increase margin and cost control by enabling revenue growth and cost-control opportunities to pursue new customers and markets for business growth and service improvement.

  5. Dynamic usage with elastic provisioning and service management targets real end-usage and business needs for functionality as the scope of users and services evolve.

  6. Risk and compliance improvement is possible by leveraging the cloud’s "green" capabilities through shared services.

  7. Enhanced capacity utilization helps users avoid over-provisioning and under-provisioning of IT in support of smarter business services.

  8. Access to business skills and capability improvement is made possible through cloud sourcing, on-demand solutions.
A full copy of the Cloud ROI paper is freely available on The Open Group’s website: http://www.opengroup.org/cloud/whitepapers/ccroi/index.htm.

Mark Skilton is currently global director responsible for applications strategy and service offer development for Capgemini Global Applications Outsourcing Services. He is also the co-chair of The Open Group Cloud Work Group, focused on helping companies to improve ROI with their cloud computing initiatives. Mark can be contacted at mark.skilton@capgemini.com.

Friday, May 7, 2010

Delivering data analytics through Workday SaaS ERP apps empowers business managers at actual decision points

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Workday.

See a demo on how Workday BI offers business users a new experience for accessing the key information to make smart decisions.

About Workday
This BriefingsDirect podcast features software-as-a-service (SaaS) upstart Workday, provider of enterprise solutions for human resources management, financial management, payroll, spend management, and benefits management.

Can software-as-a-service (SaaS) applications actually accelerate the use and power of business analytics?

We're going to help answer that by examining a human capital management (HCM) and enterprise resource planning (ERP) SaaS provider, Workday, and show how easily customizable views on data and analytics can have a big impact on how managers and knowledge workers operate.

Historically, the back office business applications that support companies have been distinct from the category of business intelligence (BI). Certainly, applications have had certain ways of extracting analytics, but the interfaces were often complex, unique, and infrequently used.

By using SaaS applications and rich Internet technologies that create different interface capabilities -- as well as a wellspring of integration and governance on the back-end of these business applications (built on a common architecture) -- more actionable data gets to those who can use it best. They get to use it on their terms, as our case today will show, for HCM or human resources managers in large enterprises.

The trick to making this work is to balance the needs that govern and control the data and analytics, but also opening up the insights to more users in a flexible, intuitive way. The ability to identify, gather, and manipulate data for business analysis on the terms of the end-user has huge benefits. As we enter what I like to call the data-driven decade, I think nearly all business decisions are going to need more data from now on.

To learn more about how the application and interfaces are the analytics, with apologies to Marshall McLuhan, please join me in welcoming Stan Swete, Vice President of Product Strategy and the CTO at Workday; Jim Kobielus, Senior Analyst for BI and Analytics at Forrester Research, and Seth Grimes, Principal Consultant at Alta Plana Corp., and a contributing editor at TechWeb's Intelligent Enterprise. The discussion is moderated by me, BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Swete: When I think of how BI is done, primarily in enterprises, I think of Excel spreadsheets. There are some good reasons for that, but there are also some disadvantages that that brings.

When I look at the emergence of separate BI tools, one driver was the fact that data comes from all kinds of disparate data sources, and it needs aggregation and special tooling to help overcome that problem.

Also, traditional enterprise applications have been written for what I would call the back-office user. While they do a very good job of securing access to data, they don’t do a very good job of painting a relevant picture for the operational side of the business.

A big driver for BI was taking the information that’s in the enterprise systems and putting a view on some dimensionality that managers or the operational side of the business could relate to. I don’t think apps have done that very well, and that’s where a lot of BI originated as well.

From a Workday perspective, we think that you're always going to need separate tools to be data aggregators, to get some intelligence out of data from disparate sources. But, when the analysis can be focused on the data in a single application, we think there is an opportunity for the people who build that application to build in more BI, so that separate tooling is not needed. That’s what we think we're doing at Workday.

Kobielus: Being able to pull data from wherever into your Excel spreadsheet and model it and visualize it is how most people have done decision support and modeling for a long time in the business world.

... I like what you said, that the interface is the analytics. That’s exactly true. Fundamentally, BI is all about delivering action and more intelligence to decision agents. The analytics are the payload, and they are accessed by the decision agents through an interface or interfaces. Really, the interfaces have to fit and really plug into every decision point.

... In the cloud, it has to be like a cloud data warehouse ecosystem, but it also has to be an interface. The interfaces between this cloud enterprise data warehouse (EDW) and all the back-end transactional systems have to be through cloud and service-oriented architecture (SOA) approaches as well.

What we are really talking about is a data virtualization layer for cloud analytics to enable the delivery of analytics pervasively throughout the organization.

Grimes: We're definitely in a data-driven decade, but there’s just so much data out there that maybe we should extend that metaphor of driving a bit.

The real destination here is business value, and what provides the roadmap to get from data to business value is the competencies, experiences, and the knowledge of business managers and users.

It’s the systems, the data warehouses, that Jim was talking about, but also hosted, as-a-service types of systems, which really focus on delivering the BI capabilities that people need. Those are the great vehicle for getting to that business value destination, using all of that data to drive you along in that direction.

Swete: The thing that frequently gets left out is a focus on the transactional apps themselves and the things they can do to support pervasive analytics.

For disparate data sources, you're going to need data warehouses. Any time you've got aggregation and separate reporting tools, you're going to need to build interfaces.

But, if you think back to how you introduced this topic, Dana -- how you introduced SaaS -- when you look at IT’s involvement, if interfaces need to get built to convey data, IT has to get involved to make sure that some level of security is maintained.

From Workday’s point of view, what you want to do is reduce the times when you have to move data just to do analysis. We think there is a role that you can play in applications where -- and this gets IT out of it -- if your application, the originator of transactional data, can also support a level of BI and business insight, IT does not have to become as involved, because they bought the app with trust in the security model that’s inherent to the application.

What we're trying to do is leverage the fact that we can be trusted to secure access to data. Then, we try to widen the access within the application itself, so that we don’t have to have separate data sources and interfaces.

This doesn’t cover all cases. You still need data aggregation. But, where the majority of the data is sourced in a transaction system, in our case HR, we think that we, the apps vendor, can be relied on to do more BI.

What we've been working on is constantly enhancing managers' ability to get access to their data. Up through 2009, that took the form of enhancing our report writer and delivering more options for reports: either the option to render reports in a small footprint -- we call it a Worklet -- and view them side by side as snippets of data, or the option to create more advanced reports.

We introduced a nice option last year to create what we call contextual reporting -- the ability to start with your data, looking at a worker, and then create a report about workers from there, with guidance as to all the Workday fields as they apply to the worker. That made it easier for a manager to not have to search, or even remember, parts of our data dictionary. They could just look at the data they knew.

This year, we're taking, we think, a major step forward in introducing what we are calling custom analytics. This is an ability to enhance our built-in report writer to allow managers or back-office personnel to directly create what become little analysis cubes. We call them matrix reports.

That’s a new report type in our report writer. Basically, you very quickly -- and importantly without coding or migrating data to a separate tool, but by pointing and clicking in our report writer -- get one of these matrix reports that allows slicing and dicing of the data and drilling down into the data in multiple dimensions. In fact, the tool automatically starts with every dimension of the data that we know about based on the source you gave us.
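Workday's report writer is point-and-click rather than code, but conceptually a matrix report is a pivot over the dimensions of the source data. Here is a rough analogue in Python with pandas, using invented sample data rather than Workday's actual data dictionary:

    # Conceptual analogue of a matrix report: pivot worker data over two
    # dimensions, then drill down into one cell's underlying rows.
    import pandas as pd

    workers = pd.DataFrame([
        {"region": "EMEA", "function": "Sales", "headcount": 40},
        {"region": "EMEA", "function": "R&D",   "headcount": 25},
        {"region": "AMER", "function": "Sales", "headcount": 55},
        {"region": "AMER", "function": "R&D",   "headcount": 70},
    ])

    # "Slice and dice": aggregate headcount by region x function.
    matrix = workers.pivot_table(index="region", columns="function",
                                 values="headcount", aggfunc="sum")
    print(matrix)

    # "Drill down": the detail rows behind a single cell of the matrix.
    print(workers[(workers.region == "AMER") & (workers.function == "R&D")])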

We're trying to make it simple to get this analysis into the hands of managers to analyze their data.

Self-service information

Kobielus: What you are saying there is very important. What you just mentioned, Stan, is one thing I left off in my previous discussion, which is self-service information and exploration through hierarchical and dimensional drill-down, and also mashups and collaborative sharing of your mashups.

It's where the entire BI space is going, both traditional, big specialized BI vendors, but also vendors like yourself, who are embedding this technology into back office apps, and have adopted a similar architecture. The users want all the power and they're being given the power to do all of that.

... My colleague, Boris Evelson, has surveyed IT decision makers over the last few years on their priorities for BI and analytics. More and more of the projects they're adopting and green-lighting involve self-service, pervasive BI -- specifically, self-service development and mashup-style environments, where there is more SaaS for quick provisioning.

What we're seeing now is that there is the beginnings of a tipping point here, where IT is more than happy to, as you have all indicated, outsource much of the BI that they have been managing themselves, because, in many ways, the running of a BI system is not a core competency for most companies, especially small and mid-market companies.

Grimes: Add in the web. The web is going to be a great mechanism for interconnecting all of the distributed systems that you might have and bringing in additional data that might be germane to your business problems, that isn’t held inside your firewall, and all that kind of stuff. The web is definitely a fact nowadays and it’s so reliable finally that you can run operational systems on top of it.

That’s where some of the stuff that Stan was talking about comes into play. Data movement between systems does create vulnerability. So, it's really great, when you can bundle or package multiple functional components on a single platform.

Swete: When we think about reporting at Workday, we have three things in mind. First, we're trying to make the development of access to data simple. That’s why we try to make it never involve coding. We don’t want it to be an IT project. Maybe it will be a more sophisticated user who creates the reports; even so, we want it to be simple to share those reports out.

The second word that’s top of my list is relevance. We want the customers to guide themselves to the relevant data that they want to analyze. We try to put that data at hand easily, so they can get access to it. Once they're analyzing the data, since we are a transaction system, we think we can do a better job of being able to take action off of what the insight was.

So, we always have what we call related actions as a part of all the reports that you can create, so you can get to either another report or to a task you might want to do based on something a report is showing you.

Then, the final thing, because BI is complex, we also want to be open. Open means that it still has to be easy to get data out of Workday and into the hands of other systems that can do data aggregation.

Kobielus: That’s interesting -- the related action and the capability. I see a lot of movement in that area by a lot of BI vendors to embed action links into analytics. I think the term has been coined before. I call it transanalytics. It's a combination of transaction systems and analytics systems. And really it's a closed loop. It must be.

It's actionable intelligence. So, duh, then shouldn't you put an action link in the intelligence to make it really truly actionable? It's inevitable that that’s going to be part of the core uptake for all such solutions everywhere.

... The analytics themselves though -- the analysis and the intelligence -- are a core competency they want to give the users: information workers, business analysts, subject matter experts. That's the real game, and they don't want to outsource those people or their intelligence and their insights. They want to give them the tools they need to get their jobs done.

What's happening is that more and more companies, more and more work cultures, are analytic savvy. So, there is a virtuous cycle, where you give users more self-service -- user friendly, and dare I say, fun -- BI capabilities or tools that they can use themselves. They get ever more analytics savvy. They get hungry for more analysis. They want more data. They want more ways to visualize and so forth. That virtuous cycle plays into everything that we are seeing in the BI space right now.

Cost analysis

Swete: Our vision is that, as we widen our footprint from an application standpoint, the payoff for what our end-users can do in terms of analysis increases dramatically. Right now, it's attaching cost to your HR operations data. In the future, we see augmenting HR to include more and more talent data. We're at work on that today, and we are very excited about bringing business results into that picture of overall performance.

And Workday has already built up more than just HCM. We offer financial management applications and have spend-management applications.

So a big part of how we're trying to develop our apps is to have very tight integration. In fact, we prefer not even to talk about integration; we want these particular applications to be pieces of a whole. From a BI perspective, we want them to be that as well. We believe that, as customers widen their footprint with us, the value of what they can get out of their analysis is only going to increase.

You look at your workforce. You look at what they have achieved through their project work. You look at how they have graded out on that from the classical HR performance point of view. But then you can take a hard look at what business results they have generated. We think that's a very interesting and holistic picture that our customers should be able to twist and turn with the tools we've been talking about today.

Grimes: There is a kind of truism in the analytics world that one plus one equals three. When you apply multiple methods, when you join multiple datasets, you often get out much more than the sum of what you can get with any pair of single methods or any pair of single datasets.

Some users are really going to get down and dirty with the data and with the analytical methods, and you want to support them, but you also want to deliver appropriate sophistication of analytics to other users.



If you can enable that kind of cross-business-function, cross-analytical-method, cross-dataset analysis, then your end-users are going to end up farther along in terms of optimizing the overall business picture and overall business performance, as well as the individual functional areas, than they were before. That's just a truism, and I have seen it play out in a variety of organizations and a variety of businesses.

Swete: The thing that always occurs to me as an advantage of SaaS is that SaaS is a change-delivery vehicle. If you look at the trend we've been talking about -- marrying up transactional systems with BI systems -- it's happening from both ends. The BI vendors are trying to get closer to the transactional systems, and the transactional systems are trying to offer more built-in intelligence. That trend has many, many more steps forward.

The one thing that’s different about SaaS is that, if you have got a community of customers and you have got this vision for delivering built-in BI, you are on a journey. We are not at an endpoint. And, you can be on that journey with SaaS and make the entire trip.

In an on-premise model, you might make that journey, but each stop along the way is going to be three years and not multiple steps during the year. And, you might never get all the way to the end if you are a customer today.

SaaS offers the opportunity to allow vendors to learn from their customers, continue to feed innovation into their customers, and continue to add value, whereas the on-premise model does not offer that.

See a demo on how Workday BI offers business users a new experience for accessing the key information to make smart decisions.

About Workday
This BriefingsDirect podcast features software-as-a-service (SaaS) upstart Workday, provider of enterprise solutions for human resources management, financial management, payroll, spend management, and benefits management.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: Workday.

Wednesday, May 5, 2010

rPath brings data center automation to Windows environments

rPath is taking a deeper step into the data center automation market with support now for Windows servers.

On Tuesday, the company moved beyond its Linux solutions to also support Microsoft Windows with a new solution based on the .NET framework. rPath is working to fill a void in the marketplace for tools that let IT admins deploy, configure, maintain, troubleshoot -- and automate -- .NET apps and runtimes. [Disclosure: rPath is a sponsor of BriefingsDirect podcasts.]

rPath’s market research shows a diversity (hairball?) of .NET production environments, with assemblies of scripts and manual processes stitched together by what the company describes as “harried operations engineers trying to cope with unstable, rapidly changing applications and exploding scale.” In other words, heterogeneity is no stranger to pure Windows environments.

Cutting operating expenses

With its decision to support Windows (Server 2003 R2 and above), rPath also pointed to Microsoft research that shows 47 percent of enterprise IT operating expenses are related to deployment management and incident management activities. The company also noted the increasing pressure to cut costs, while improving agility and responsiveness, as drivers for the adoption of automation solutions.

“Our success with Linux is driving demand for a solution that automates deployment and maintenance of Windows-based software systems,” said Jake Sorofman, chief marketing officer for rPath, in a release. “Today, the absence of automation solutions for Windows and the complexity of .NET applications deployments make this one of the thorniest challenges in enterprise IT.”

Automation and modeling for .NET environments

rPath will bring its automation technology to the Windows environment, complete with automated packaging and deployment of .NET and Windows apps, scalable updates, unified software automation and configuration, and release lifecycle management. This could come in pretty handy when taking these stacks to virtualized environments, be they Hyper-V, other hypervisors, or clouds like Amazon EC2.

In effect, the rPath automation approach helps bridge the often deep gap between design-time activities and run-time production management. The gap exists for both Linux and Windows environs, for sure. Any end-to-end automation is welcome, especially in virtualization settings.

What also intrigues me, and should interest Microsoft, is the ability to use the rPath solutions for migration and co-existence in dual production settings. A lot more enterprises are running both Windows and Linux than are running only one or the other. As an efficiency-minded CIO, I may want to be smarter in picking and choosing what to run where, apps-wise, and to automate more of the two platforms in some unison.

Can you say "common release automation?" Sure you can.

But that may be getting a bit ahead of the game. So let’s look at each area of the new rPath offerings a little more closely.

With automated packaging and deployment, rPath customers who use Windows environments can automatically discover and resolve dependencies. What’s more, policies will define how applications should be packaged. This yields a self-contained, end-to-end system that’s ready to deploy to any physical, virtual or cloud environment.

rPath also offers system modeling that IT admins can use to control the flow of change into deployed systems. The solution also allows IT admins to apply updates incrementally, to only the components that need to be changed. This approach eliminates unnecessary changes that can lead to downtime. As rPath explains it, deep version control also offers a solid foundation to reproduce, roll back, and troubleshoot apps.
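rPath hasn't published the internals being described here, but the general idea of version-controlled, incremental updates can be sketched generically: diff the deployed system model against the target version and touch only what changed. A hypothetical Python illustration:

    # Generic sketch: compute an incremental update from two versioned
    # system models, updating only components that actually changed.
    deployed = {"webapp": "2.1", "dotnet_runtime": "3.5", "logging": "1.0"}
    target   = {"webapp": "2.2", "dotnet_runtime": "3.5", "monitoring": "0.9"}

    to_update = {c: v for c, v in target.items() if deployed.get(c) != v}
    to_remove = [c for c in deployed if c not in target]

    print("update/add:", to_update)   # webapp 2.2, monitoring 0.9
    print("remove:", to_remove)       # logging

    # Because both states are versioned, rollback is just the inverse diff.
    rollback = {c: v for c, v in deployed.items() if target.get(c) != v}

Components that didn't change are never touched, which is the property that avoids the unnecessary changes -- and the downtime -- mentioned above.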

A unified solution with lifecycle management

Here’s how this unified system works: By storing common information model (CIM) data under the same version control umbrella as software, rPath’s solution streamlines the deployment and update of consistent systems. The solution does this by simultaneously deploying applications and their supporting configurations over the Windows Management Instrumentation (WMI) API. rPath bills the new solution as the first real configuration management system for Windows.
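As a generic illustration of what "over WMI" means -- not rPath's implementation -- here is how configuration state can be read from a Windows host in Python, using the third-party wmi package (which wraps pywin32 and runs only on Windows):

    # Read system configuration state over WMI (Windows only; pip install wmi).
    import wmi

    conn = wmi.WMI()   # connect to the local machine's WMI service

    # Inventory installed OS hotfixes -- the kind of versioned configuration
    # facts a management tool tracks alongside the software it deploys.
    for qfe in conn.Win32_QuickFixEngineering():
        print(qfe.HotFixID, qfe.InstalledOn)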

Finally, rPath’s release lifecycle management platform offers a shared repository of version-controlled system artifacts that drive consistent reuse across the release lifecycle, consistent handoffs across the release lifecycle to eliminate the risk of configuration drift, and automated dependency discovery and resolution to minimize the risk of deployment failures and outages. rPath claims this dramatically accelerates and improves the quality of release processes.

rPath support for automated deployment and configuration of applications built using the Microsoft .NET Framework will be available in the third quarter of 2010.

For a white paper on "Automating .NET Application Deployment and Configuration" go to: http://www.rpath.com/corp/images/stories/white_papers/rPath_WP_Windows.pdf
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, May 4, 2010

Just as the vendor-speak turns from SOA, the users are actually embracing it

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

There is a core disconnect between what gets analysts and journalists excited, and what gains traction with the customers who consume the technologies that keep our whole ecosystem in business.

OK, guilty as charged, we analysts get off on hearing about what’s new and what’s breaking the envelope, but that’s the last thing that enterprise customers want to hear.

Excluding reference customers (who have a separate set of motivations that often revolve around a vendor productizing something that would otherwise be custom developed), most want the tried and true, or at least innovative technology that has matured through the rough spots and is no longer version 1.0.

It’s a thought that crystallized as we bounced impressions of this year’s IBM Impact 2010 event off colleagues like Dorothy Alexander and Marcia Kaufman, who shared perceptions that, while this year’s headlines or trends seemed a bit anticlimactic, there was real evidence that customers were actually “doing” whatever it is that we associate with SOA.

[Note: See a roundup of Impact news.]

Forget about the architectural journeys that you’ve heard about with SOA; SOA is an enterprise architectural pattern that is a means to an end. It’s not a new argument; it was central to the "SOA is dead" debate that flared up with Anne Thomas Manes’ famous or infamous post of almost a year and a half ago, and the subsequent debates and hand-wringing that ensued.

IBM’s so-called SOA conference, Impact, doesn’t even include SOA in its name. But until now SOA was the implicit rationale for this WebSphere middleware stack conference to exist. Yet more and more the focus is about the stack that SOA enables; and more and more, about the composite business applications that IBM’s SOA stack enables.

IBM won’t call it the applications business, but when you put vertical industry frameworks, business rules, business process management, and analytics together, it’s not simply a plumbing stack, but a collection of software tools and vertical industry templates that become the new de facto applications that bolt atop and aside the core application portfolio that enterprises already have and are not likely to replace.

Something old, something new

In past years, this conference was used to introduce game changers, such as the acquisition of Webify that placed IBM Software firmly on the road to verticalizing its middleware.

This year the buzz was about something old becoming something new again. IBM’s acquisition of Cast Iron, as dissected well by colleagues Dana Gardner and James Governor, reflects the fact that after all these years of talking flattened architectures, especially using the ESB style, that enterprise integration (or application-to-application, or A2A) hubs never went out of style. There are still plenty of instances of packaged apps out there that need to be interfaced.

The problem is no different from a decade ago, when the first wave of EAI hubs emerged to productize systems integration of enterprise packages. While the EAI business model never scaled well in its time because of the need for too much customization, experience, the commoditization of templates, and the emergence of cheap appliances have since provided economic solutions to this model.

More importantly, the emergence of multi-tenanted SaaS applications, like Salesforce.com, Workday and many others, has imposed a relatively stable target data schema plus a need for integration of cloud and on-premises applications. Informatica has made a strong run with its partnership with Salesforce, but Informatica is part of a broader data integration platform that for some customers is overkill. By contrast, niche players like Cast Iron, which only do data translation, have begun to thrive with a Blue Chip customer list.
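At its simplest, the "data translation" such niche players sell reduces to mapping a stable SaaS schema onto an on-premises one. A hypothetical sketch in Python -- the field names are invented, not actual Cast Iron templates:

    # Hypothetical A2A translation: map a SaaS-style record onto an
    # on-premises schema via a declarative field mapping.
    saas_record = {"AccountName": " Acme Corp ", "BillingCity": "Austin",
                   "AnnualRevenue": "1200000"}

    # target field -> (source field, transform)
    mapping = {
        "cust_name":   ("AccountName",   str.strip),
        "city":        ("BillingCity",   str.strip),
        "revenue_usd": ("AnnualRevenue", float),
    }

    def translate(record, mapping):
        return {tgt: fn(record[src]) for tgt, (src, fn) in mapping.items()}

    print(translate(saas_record, mapping))
    # {'cust_name': 'Acme Corp', 'city': 'Austin', 'revenue_usd': 1200000.0}

Productizing a library of such mappings for popular SaaS endpoints, rather than hand-coding each integration, is what lets this model scale where classic EAI did not.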

Of course, Cast Iron is not IBM’s first appliance play. That distinction goes to DataPower, which originally made its name with specialized appliances that accelerated compute-intensive XML processing and SSL encryption. While we were thinking about potential synergy, such as applying some of DataPower’s XML acceleration technology to A2A workloads, IBM’s middleware head Craig Hayman responded that IBM sees Cast Iron’s technology as a separate use-case. But they did demonstrate that the software of Cast Iron could, and would, literally run on DataPower’s own iron.

Of course, you could say that Cast Iron overlaps the application connectors from IBM’s Crossworlds acquisition, but those connectors, which were really overlay applications (Crossworlds used to call them “collaborations”), have been repurposed by IBM as BPM technology for WebSphere Process Server.

Arguably, there is much technology from IBM’s Ascential acquisition focused purely on data transformation that also overlaps here. But Cast Iron’s value add to IBM is the way those integrations are packaged, and the fact that they have been developed especially for integrations to and from SaaS applications – no more and no less.

IBM has gained the right-sized tool for the job, and it has decided to walk a safe tightrope here; it doesn’t want to weigh Cast Iron’s simplicity (a key strength) down with added bells and whistles from the rest of its integration stack. But the integration doesn’t have to go in only one direction -- weighing down Cast Iron with richer but more complex functionality. IBM could go the opposite way and infuse some of this A2A transformation as services that could be transformed and accelerated by the traditional DataPower line.

This is similar to the issue IBM has faced with Lombardi, a deal it closed back in January. It has taken the obvious first step of “blue washing” the flagship Lombardi Teamworks BPM product, which is now rebranded IBM WebSphere Lombardi Edition and bundled with WebSphere Application Server 7 and DB2 Express under the covers.

The more pressing question is what to do with Lombardi’s elegantly straightforward Blueprint process definition tool and IBM WebSphere BlueWorks BPM, which is more of a collaboration and best-practices definition tool than a modeling tool (and still in beta). The good news is that IBM is trying the right thing in not cluttering Blueprint (now rebranded IBM BPM Blueprint), but the bad news is that there is still confusion, with IBM’s mixed messages of a consistent branding umbrella but uncertainty regarding product synergy or convergence.

Back to the main point however: while SOA was the original impetus for the Impact event, it is now receding to a more appropriate supporting role.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.
