Monday, January 12, 2009

Workday builds out SaaS bellwether for human capital management services and cost controls

Responding to the need for agile compensation and incentives management in a tough global economy, Workday has delivered new versions of its innovative Human Capital Management (HCM) software-as-a-service (SaaS) solutions.

The new services offer richer cost and compensation management features, more business services like payroll, and improved access to global process insights and analytics. These additions are designed to help swiftly guide employees through change and to improve business productivity and responsiveness.

Until now, talent management offerings have evolved as add-ons to legacy systems, creating new silos of information and an incomplete view of worker performance. Because Workday's on-demand business applications are built on a service-oriented architecture (SOA), more coordinated services can be brought to the full human capital management equation.

Furthermore, by allowing for integration across the data from these services -- with centralized control even across global regions and disparate workforces -- a new element of business intelligence (BI) for human resources management becomes possible. [Disclosure: Workday is a sponsor of BriefingsDirect podcasts.]

I've not been alone in viewing Workday as a poster child for where SaaS business applications are headed. Its Adobe-based user interface, deep use of SOA infrastructure approaches, and philosophy that managing people well is core to almost any business process place Workday out in front of many business and IT trends.

But the new offerings also point up a burgeoning value of cloud computing. Easier but controlled access to centralized data provides the ability to apply analytics and advanced queries to more human resources and process data. Better data in, better results out. This ties the management of people more closely to the management of dynamic business goals. And it helps cut the lag between wanting to instill business change and finding the path to informing and incentivizing employees with less waste and confusion.

And, of course, secure access to employee trends data provides a two-way street: Derive insights through analysis of larger data sets and BI, and also gain the ability to hasten and promote business processes through fuller and quicker execution and enforcement of incentives and compensation management.

Both of these values are essential in an economy rife with mergers and acquisitions, consolidation, workforce re-allocations, shifting customer requirements, new sales strategies and the need to be fleet in shifting incentives to align with dynamic market conditions.

Consider too that a SaaS approach to HCM improves access to data sets that can align and automate the interplay between customer relationship management (CRM) insights (regardless of hosting models) and HCM change management. Isn't there a key relationship between what goes on with customers and what then needs to go on with employees? There sure is in the sales department. Yet bringing intelligence, analysis and execution automation to these disparate functions has been manual, incomplete, difficult and murky.

That should soon change. Included in Update 6 from Workday, based in Pleasanton, Calif., are Pay for Performance and Worker Spend Management improvements.

Worker Spend Management means that spending activity is automatically tied to workers and can be linked to projects or activities via tags, called Worktags, so managers and business leaders have a complete view of total worker cost -- including both compensation and the resources used to get work done.
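To make the Worktag idea concrete, here is a minimal sketch of how tagged compensation and spend records could roll up into a total-worker-cost view. The record layout, tag names, and figures are my own illustration, not Workday's actual schema or API.

    from collections import defaultdict

    # Hypothetical records: compensation and spend items, each tagged with a
    # worker and Worktag-style labels (project, region). Illustration only.
    transactions = [
        {"worker": "j.smith", "kind": "compensation", "amount": 9500.00,
         "tags": {"project": "apollo", "region": "emea"}},
        {"worker": "j.smith", "kind": "expense", "amount": 1200.00,
         "tags": {"project": "apollo", "region": "emea"}},
        {"worker": "a.chen", "kind": "compensation", "amount": 8800.00,
         "tags": {"project": "apollo", "region": "amer"}},
        {"worker": "a.chen", "kind": "expense", "amount": 450.00,
         "tags": {"project": "hermes", "region": "amer"}},
    ]

    def total_cost_by(tag_key, records):
        """Roll up total worker cost -- pay plus spend -- per tag value."""
        totals = defaultdict(float)
        for rec in records:
            value = rec["tags"].get(tag_key)
            if value is not None:
                totals[value] += rec["amount"]
        return dict(totals)

    print(total_cost_by("project", transactions))
    # {'apollo': 19500.0, 'hermes': 450.0}

The point of the tag-based approach is that the same records answer many rollup questions -- by project, by region, by team -- without a separate analytic system or spreadsheet per question.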

Previously, tying spending activity and behavior to individual positions, people, workgroups, teams and business purpose has been impossible without expensive analytic solutions or manual spreadsheets.

Pay for Performance features tie performance reviews, team performance and company performance to compensation -- providing managers with recommended targets based on a broad range of configurable variables and business results. Decision and assessment support includes target versus actual reporting and actionable analytics, enabling organizations and managers to achieve actual performance-based rewards.
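As a back-of-the-envelope sketch of how such a recommended target might be derived, imagine blending individual, team, and company performance scores with configurable weights. The weights, cap, and formula below are hypothetical stand-ins of my own, not Workday's actual model.

    def recommended_increase(individual, team, company,
                             weights=(0.5, 0.3, 0.2), max_increase=0.08):
        """Blend performance scores (each 0.0 to 1.0) into a recommended
        merit increase, capped at a configurable maximum percentage."""
        w_ind, w_team, w_co = weights
        blended = w_ind * individual + w_team * team + w_co * company
        return round(blended * max_increase, 4)

    # A strong review on a solid team in an average year for the company:
    print(recommended_increase(individual=0.9, team=0.7, company=0.5))
    # 0.0608, i.e. a recommended raise of about 6.1 percent

Target-versus-actual reporting then reduces to comparing the recommended figure against what was actually awarded, per manager or per organization.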

Branching out into adding more business services to the HCM portfolio, Workday has also announced the general availability of Workday Payroll for the U.S. This offering delivers payroll processing coupled with the company's other solutions. And other payroll approaches -- internal or via outsourced payroll providers -- will continue to be supported and integrated with the offerings, said Workday.

The Workday system, however, leverages a global calculation engine and payroll framework, allowing Workday to centrally localize payroll for regions and countries without the redevelopment efforts associated with traditional on-premises systems.
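One common way to structure such an engine is a single, shared calculation flow parameterized by pluggable, per-country rule sets. This sketch, with invented rates, illustrates that general pattern; it is not Workday's actual design.

    # Illustrative localization registry; the rates are made up for the example.
    LOCALIZATIONS = {
        "US": {"tax_rate": 0.22, "social": 0.062},
        "UK": {"tax_rate": 0.20, "social": 0.12},
    }

    def run_payroll(country, gross):
        """Run one shared calculation flow, localized per country."""
        rules = LOCALIZATIONS[country]
        deductions = gross * (rules["tax_rate"] + rules["social"])
        return {"gross": gross,
                "deductions": round(deductions, 2),
                "net": round(gross - deductions, 2)}

    print(run_payroll("US", 5000))
    # {'gross': 5000, 'deductions': 1410.0, 'net': 3590.0}

Adding a new country then means registering a new rule set rather than redeveloping the engine -- which is the claimed advantage over per-locale rebuilds in traditional systems.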

Also included in Workday Update 6 is a significant expansion of Benefits Network, a set of pre-packaged integrations with popular benefits carriers. The Benefits Network includes connections to 49 providers, with plans for 21 more in the next month.

Workday is an on-demand financial management and human capital management solutions vendor. It was founded by David Duffield, best known as the co-founder and former chairman of PeopleSoft, which grew to be the world’s second-largest application software company before being acquired by Oracle in 2005. Workday acquired Cape Clear Software in early 2008.

Whether or not you're implementing HCM solutions, I'd keep an eye on Workday's progress. The company is advancing SaaS and cloud concepts in a pragmatic way for such large businesses as Flextronics and Chiquita Brands. I'm especially keen to see how the BI and analytics values help to undergird the SOA and cloud innovations that Workday has built into its systems from the very beginning.

It will, in the age of debates about SOA's relevance, be fascinating to see whether on-demand providers can bank on SOA's efficiencies and agility while leveraging cloud models to help customers gain better productivity, cut their internal delivery costs, and promote new abstractions of integration and BI.

Friday, January 9, 2009

Predicting vitality of 'SOA' completely misses the point -- legacy IT is dead

While the software market gnashes its teeth over how alive service-oriented architecture (SOA) is, a much more important opportunity -- perhaps unique in the history of IT -- is being overlooked.

There's never been a better time to kill off your legacy IT systems.

The next two years present the architects and strategists of enterprise IT an unprecedented and probably unrepeatable chance to re-factor the way they do business. Microsoft CEO Steve Ballmer is mulling over the implications of a "reset," rather than a recession, and that is the correct way to look at this period.

Here's why: It's long been an uncomfortable reality that the means of computing for the past 20 years have piled up inside of data centers, expensive and outdated but too complex and costly to replace. Nobody has wanted to rip and replace because of the transition pain and uncertainty. The old guard has been presented with a lot of good excuses for simply bearing the load of aging systems' costs while still piling on more new systems.

We now have the unique option of lowering the tolerance for the ongoing cost of the old, while finding far fewer excuses for putting off the pain of change. In other words, now is the time for rip and replace. But it's more than that: It's time for executing on IT transformation writ large, of moving beyond physical systems and into the hybrid pool of myriad services ... with the end goal of finding the right combination of systems and services for each and every IT problem set.

While we're at it, IT needs to be defined differently, too. We need to stop thinking of IT as an attached appendage of each and every specific and isolated enterprise. Yep, 2,000 fully operational and massive appendages for the Global 2000. All costly, hugely redundant, and unique largely in how complex and costly each is on its own.

Instead, IT should be seen as a set of problems to be solved by the best means, and common means should be sought for a great deal of the load. Rather than an IT appendage at each enterprise's actual locations, solutions should be brought to the IT problems by whatever means are best. It means catching up to reality. The reality is that the boundaries of IT are permeable, malleable and dynamic. Being caught in the old world of on-premises and monolithic systems for each application and data set is at odds with what is available efficiently as services -- internal, external and hybrid. Corporations have long sought these methods for procuring other business services, and IT is no different.

As Moore's Law and other modern IT productivity improvements have drastically cut the cost of newer IT solutions and technologies, they have made the costs of maintaining legacy systems all the higher. The personnel and maintenance arsenals these systems require simply to keep producing a static productivity benefit are, in a word, wasteful. We have been through a long period of spending a lot of money on integration capabilities to extend the value of aging systems, trying to conjure a multiplier effect in which one system integrated with one system equals three systems' worth of value. More often than not, the math does not return a high enough rate of return.

These effects have given enterprise IT the stigma of black box cost centers. The business strategists feel IT is an extortion racket. They fear more of the same, but now with less corporate revenues and therefore less IT budget to work with.

And that's where SOA's death comes in. If the ROI on the money spent to achieve SOA benefits is not overwhelming, or transparent, the logic goes, then the effort is moot, dead, not worth the pain. Actually, the opposite is true, and now more than ever.

The analogy of trying to change the wings of an airplane while keeping it flying is often used to describe the quandary of re-architecting your IT universe to an appreciable level while also meeting the SLAs and expanding the capacity and reliability of the older systems. In other words, the relentless pressure of keeping up with growth in the use of and demand on IT has handicapped the task of modernization. With tight budgets since 2001, many IT departments have been far too busy putting out the fires of meeting demand to be much involved with resetting the way in which IT is conducted.

One of the chief pain points in avoiding the rip-and-replace moves necessary to shift aggressively to modern IT services -- services for integration/interoperability, for infrastructure resources, for app dev, for data management, for software as a service (SaaS), for cloud, for hybrids, for business process modeling, for many aspects of the IT lifecycle -- is that IT has been too busy, too stretched. It's consequently perceived as too risky ... too hard to sell to the bean counters. Conventional logic holds that this only gets worse in a recession.

Increasing demand for IT performance has been a convenient excuse to stay the legacy course, keep investments in new technology modest -- after all, we don't have the time to absorb SOA properly. Let's just get more Band-Aids. This all, like a giant Ponzi scheme, works quite well when the economy and profits are growing. We now know those days are over for some considerable period of time.

And so here we may find the silver lining, for IT strategists at least, in the rapid and severe economic contraction now upon us. A confluence of variables should tip the scales to make IT transformation more practical and attainable than ever. But you may only have a year or two to capitalize on this opportunity.

I suggest that IT organizations look to a new breed of triage for their existing IT universe, and cull out and rip out as much as possible. Kill it. Seek the newer -- dare I say it -- vital SOA and SaaS/cloud alternatives. Examine how open source software and models make sense. Liberally deploy virtualization. Look at how virtual desktop infrastructure (VDI) makes sense for more workers. Examine how a netbook or mobile device can fill the needs of more users in more places in more ways.

And here's the key: Actually define the business processes you need to support first, then identify the resources that the users and customers need to act on these processes, and procure and integrate the IT services (however best available) to fulfill the model. Repeat. Reuse assets as appropriate. Govern the whole shebang centrally, with policy and automation as the goal. This is SOA. It is not dead.

At the same time, boot up the next generation data centers that can play in hybrid services models and at lean total costs -- and inject as much logic and data from the older systems as possible. Use the SaaS and cloud options available now to replace the older systems, and then decide the best medium-term means to produce or procure those services. Find a governance model that allows you to manage the services and resources regardless of where they reside or how they are procured. Rip out and kill the remaining legacy systems that cost you dearly and provide static or declining productivity. You can do this. Now is the time. Be brave.

Many companies will slowly go under, go bankrupt or plunge into a new ownership form that produces the ultimate reset for IT. It is the off button. Just turn the IT off and sell the hardware on eBay. Those firms -- or remaining valued elements of the old firms -- that emerge from the ashes of these drastic business restructurings will also get an abrupt IT reset. They may be able to begin anew with services as central, with SaaS as pervasive, and with cloud-based app dev the norm. But that's some tough sledding to get to more productive and agile IT.

Many more companies will see a period of reduced demand on their IT systems. If you lay off 15 percent of the workforce, there's bound to be more slack in the demand on applications (once you've properly deprovisioned those employees). If your revenues decline by 30 percent, there will be more slack in the demand on applications and data servers. If you merge with another company, there's a lot of IT redundancy to remove.

You no longer have the excuse of being too busy and too capacity-strained to entertain those ultimately productivity-rich systems resets and to embrace SOA. And you can attain the IT modernization benefits now at far lower capital costs, because you can bargain hard and successfully with the integrators, hardware vendors, software providers, and all the rest. You can find qualified employees to hire. You can seek out more IT services on a per-user, per-month subscription model. There's no need to pay for the IT behind the 15 percent of employees you laid off until you rehire them, and then you can watch your IT costs become far more commensurate with your actual needs. You won't need the huge capital outlays first, and the productivity later.

Those companies that make this transition now will be powerfully more agile, with lower total IT costs and the ability to swiftly exploit new SOA, SaaS and cloud innovations over the coming years. You need to both survive the recession and position yourself to dominate afterward, in the brave new world. You'll need the right IT mentality and models to do it.

So the actual costs of meaningful change in IT for the next few years will come at a historically low real cost, with very high rates of return after the transition. And the portion of IT spend devoted to capital outlays will decline. And you can bargain (perhaps even push out payments for six months or a year) on the professional services, integrators, outsourcers, and other transitional expenses. Other aspects of the global economy are facing a reset, as are governments, and IT should be a leader, not a laggard -- both as an example and as an enabler of the larger transitions.

Now is the time to rip and replace your thinking about IT, and so you'll want to replace your legacy systems and obsolete IT solution models with vital and efficient SOA processes and hybrid IT resources acquisition models.

Actually, now that I think of it, high-cost and lock-in legacy IT is what is really dead, finally. RIP.

Wednesday, January 7, 2009

Webinar: IT analysts delve into desktop as service/VDI cloud opportunities for enterprises and telcos

Read a full transcript of the webinar discussion. Listen and watch.

Many of us expect that delivery of the full PC desktop experience and applications as a service will grow in use and value. For many users inside of enterprises, at call centers, and for point-of-sale uses, requirements can be met with a low-cost thin client and desktop-as-a-service (DaaS) approach. The technology is largely here today.

But stark economics may end up driving the adoption. The value and cost savings from virtualization techniques escalate as they extend from servers to applications to the PC experience itself.

We're also seeing a lot of churn in the concept of the client device itself. The notion of a full-fledged tower PC for every desktop use scenario -- and the associated costs of maintenance, support, security, and upgrades -- is giving way to the right device for the use case. Why not the right software service mix for the right use case too?

For hosting organizations, telcos, cloud services providers and software as a service providers, the allure of providing a complete PC desktop and the required applications as a subscription is enticing. But this is a big subject, and many people are still just wrapping their heads around the implications.

Consequently, I recently participated in a webinar with virtual desktop infrastructure vendor Desktone that I think really gets to some core issues and insights on VDI and the models that support it. Joining me in the discussion were Jeff Fisher, senior director of strategic development at Desktone; Rachel Chalmers, research director of infrastructure management at The 451 Group, and Robin Bloor, analyst at Hurwitz & Associates.

Here are some excerpts:
Fisher: Clearly, everyone is talking about cloud computing. You can’t look anywhere within IT and not hear about it. It’s amazing to see it surpassing even the frenzy around virtualization. In fact, most of the conversations people are having today are around virtualization and how it can take place in the cloud. Everyone wants to focus on all the benefits, including anytime/anywhere access and subscription economics.

However, like any other major trend that unfolds in IT, there are a number of challenges with the cloud. When people talk about cloud computing with respect to the enterprise, in most cases they’re talking about virtualizing server workloads and moving those workloads into a service provider cloud.

Clearly, that shift introduces a number of challenges. Most notable is the challenge of data security. Because server workloads are very tightly coupled with their data tier, when you move the server or the server instance, you have to move the data. Most IT folks are not really comfortable with having their data reside in a service provider's or other external data center.

For that reason Desktone believes that it’s actually going to be virtual desktops, not servers, that are the better place to start and are going to be what jump starts this whole enterprise adoption of cloud computing.

The reason is pretty simple. Most fixed corporate desktop environments -- those are desktops that have a permanent home within your enterprise -- already probably have their application and user data abstracted away from the actual desktop. The data is not stored locally. It’s stored somewhere on the network, whether it’s security credentials within Active Directory (AD), home drives that store user data, or the back end of client-server applications. All the back-end systems run within your data center.

The other interesting thing is this notion of the service-provider cloud: it can actually traverse both the enterprise and the service-provider data centers.

So, depending on the use case, service providers can either keep the virtual infrastructure and the racks powering that virtual infrastructure in their data center or they can, in certain cases, put the physical infrastructure within the enterprise data center, what we call the customer premise equipment model. The most important thing is that it doesn’t break the model.

There is flexibility in the location of the actual hosting infrastructure. Yet, no matter where it resides, whether it’s in a service provider data center or an enterprise data center, the service provider still owns and operates it and the enterprise still pays for it as a subscription.

Chalmers: There are three sensible places to run a desktop virtual machine. One is on the physical client, which gives you a whole bunch of benefits around the ability to encrypt and lock down a laptop and manage it remotely. One is to run it on the server, which is the tried-and-tested VMware VDI or Citrix XenDesktop method. That’s appropriate for a lot of these cases, but when you run out of server capacity or storage in the server-hosted desktop virtualization model, a lot of companies would like elastic access to off-site resources.

This is particularly appropriate, for example, for retailers who see a big balloon in staffing -- short-term and temporary staffing around the holiday seasons, although possibly not this year -- or for companies that are doing things off-shore and want to provide developer desktops in a very flexible way, or in education, where schools get big summer classes, for example, and want to fire up a whole bunch of desktops for their students.

This kind of elastic provisioning is exactly what we see on the server virtualization side around cloud bursting. On the desktop side, you might want to do cloud bursting. You might even want to permanently host those desktops up in the cloud with a hosting provider and you want exactly the same things that you want from a server cloud deployment. You want a very, very clean interface between the cloud resources and the enterprise resources and you want a very, very granular charge back in billing.

And so, we see cloud-hosted desktop virtualization as a special case of server-hosted desktop virtualization.

Gardner: I think we’re entering a new era in how people conceive of compute resources. ... What’s happening now is that organizations are starting to re-evaluate the notion that a one-size-fits-all PC paradigm makes sense.

We have lots of different slices of different types of productivity workers. As Rachel mentioned, some come and go on a seasonal basis, some come and go on a project basis. We’re really looking at slicing and dicing productivity in a new way, and that forces the organization to re-evaluate the whole notion of application delivery.

When you start moving toward virtualization and you start re-thinking about infrastructure, you start re-thinking the relationship between hardware and software. You start re-thinking the relationship between tools and the deployment platform, as you elevate the virtualization and isolate applications away from the platform, and you start re-thinking about delivery.

If you take the step toward terminal services and delivering some applications across the wire from a server-based host, that continues to tip this a little bit toward, “Okay, if I could do it with a couple of apps, why not look at more? If I could do it with apps, why not with desktop? If I can do it with one desktop, why not with a mobile tier?”

If we look at the cost pressures that organizations are under, recognizing that it’s maintenance and support, and risk management and patch management that end up being the lion’s share of the cost of these systems, we’re really at a compelling point where the cost and availability of different alternatives have really sparked sort of a re-thinking.

So we’re really going through this period of transformation, and I think that virtualization has been a catalyst to VDI and that VDI is therefore a catalyst into cloud. If you can do it through your servers, somebody else can do it through theirs.

When we start really seeing total costs tip as a result, the delta between doing it yourself and then doing it through some of these newer approaches is just super-compelling. Now that we’re entering into an economic period, where we’re challenged with top-line and bottom-line growth, people are not going to take baby steps. They’re going to be looking for transformative, real game-changing types of steps.

This is particularly relevant if they’re commodity level types of applications and services. It could be communications and messaging, it could be certain accounting or back office functions. It just makes a lot of sense to start re-evaluating. What we haven’t seen, unfortunately, is some clear methodologies about how to make these decisions and boundaries inside of organizations with any sort of common framework or approach.

It’s still a one-off, company-by-company approach -- which workers should we keep on a full-fledged PC? Who should we put on a mobile Internet device, for example? Who could go into the cloud-based application-hosting type of scenario that you’ve been describing?

It’s still up in the air and I’m hoping that professional services and systems integrators over the next months and years will actually come up with some standard methodologies for going in and examining the cost-benefit analysis, what types of users and what types of functions and what types of applications it makes sense to put into these different ... environments.

Bloor: One of the things that is really important about what’s happening here with the virtualization of the desktop is just the very simple fact that desktop costs have never been well under control.

The interesting thing is that when the end users we’ve been talking to this year look at their user populations, they normally come to the conclusion that something like 70 or 80 percent of PC users are actually using the PC in a really simple way. The virtualization of those particular units is an awful lot easier to contemplate than for the sophisticated population of heavy workstation users and so on.

With the trend that’s actually in operation here, and especially with the cloud option where you no longer need to be concerned about whether your data center actually has the capacity to do that kind of thing, there’s an opportunity with a simple investment of time to make a real big difference in the way the desktop is managed.

... From the corporate point of view, if you’re somebody that’s running a thousand desktops or more, it’s a problem. It’s a problem in terms of an awful lot of things, but mostly it’s a support issue and a management issue. When you get an implementation that involves changing the desktop from a PC to a thin client, and you don’t put anything into the data center, the situation improves.

You’ve now got a situation where you don’t need cages in the data center running PC blades or running virtualized blades to actually provide the service. You don’t need to implement the networking stuff, the brokering capability, boost the networking in case it’s clashing with anything else, or re-engineer networks.

All you do is you go straight into the cloud and you have control of the cloud from the cloud. It’s not going to be completely pain free obviously, but it’s a fairly pain-free implementation.

It absolutely stunned me that a cloud offering became available earlier this year, because that means somebody would have had to be thinking about this two years ago in order to put together the technology that enables such an offering.

Gardner: This is a much more lucid and rational architecture. We’ve found ourselves, over the past 15 or 20 years, sort of the victim of a disjointed market rollout. We really didn’t anticipate the role of the Internet when client-server came about. Client-server came about quickly, just after local area networks (LANs) were established.

We really hadn’t even rationalized how a LAN should work properly before we were off and running with browsers and TCP/IP stacks. So, in a sense, we've been tripping over and bouncing around from one very rapid shift in technology to another. I think we’re finally starting to think back and say, “Okay, what’s the real rational, proper architectural approach to this?”

We recognize that it’s not just going to be a PC on every desktop. It’s going to be a broadband Internet connection in every coat pocket, regardless of where you are. That fundamentally changes things. We’re still catching up to that shift.

Bloor: From an architect’s point of view, if nobody had influenced you in any way and you were just asked to draw out a sense of a virtualization of services to end users, you would probably head in this direction. I have no doubt about it. I’ve been an architect in my time, and it’s just very appealing. It looks like what Desktone DaaS has here is resources under control, and we’ve never had that with a PC.
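One footnote on Rachel Chalmers' point about granular chargeback: per-desktop-hour metering is what makes burst-style desktop provisioning financially legible to the business. Here is a minimal sketch of the idea; the rate, department names, and hours are all invented for illustration.

    # Hypothetical usage log for one billing cycle: (department, desktop-hours).
    usage = [
        ("retail-seasonal", 1400),
        ("offshore-dev", 880),
        ("retail-seasonal", 350),
    ]

    RATE_PER_DESKTOP_HOUR = 0.35  # illustrative subscription rate, not a quote

    def chargeback(entries, rate):
        """Aggregate desktop-hours per department and price them."""
        hours_by_dept = {}
        for dept, hours in entries:
            hours_by_dept[dept] = hours_by_dept.get(dept, 0) + hours
        return {dept: round(hours * rate, 2)
                for dept, hours in hours_by_dept.items()}

    print(chargeback(usage, RATE_PER_DESKTOP_HOUR))
    # {'retail-seasonal': 612.5, 'offshore-dev': 308.0}

The same metering works whether the racks sit in the provider's data center or on the customer's premises, which is what keeps the subscription model intact in both cases.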
Read a full transcript of the webinar discussion. Listen and watch.

Monday, January 5, 2009

A technical look at how parallel processing brings vast new capabilities to large-scale BI and data analysis

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Read a full transcript of the discussion.

Internet-scale data collection, swarms of sensor outputs, and content clouds from the mobile device fabric -- as well as enterprises piling up ever more kinds of analytics metadata to analyze -- have stretched traditional data-management models to the breaking point.

Yet advances in parallel processing using multi-core chipsets have prompted new software approaches such as MapReduce that can handle these data chores at surprisingly low total cost. The technical response to oceans of data has been building for some time. But the time now seems ripe to bring lower-cost parallel computing advances into play against the economic imperatives of huge data-crunching requirements.
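For readers new to the model, here is a minimal, single-machine sketch of MapReduce-style word counting, with the map phase fanned out across worker processes. Real systems distribute both phases across many machines and handle shuffling and fault tolerance; the function names and data here are my own illustration.

    from collections import defaultdict
    from multiprocessing import Pool

    def map_phase(chunk):
        """Emit (word, 1) pairs for one chunk of input lines."""
        return [(word.lower(), 1) for line in chunk for word in line.split()]

    def reduce_phase(mapped_chunks):
        """Sum the counts emitted for each word across all chunks."""
        counts = defaultdict(int)
        for pairs in mapped_chunks:
            for word, n in pairs:
                counts[word] += n
        return dict(counts)

    if __name__ == "__main__":
        lines = ["the data keeps growing", "the chips stand still",
                 "more cores more data"]
        chunks = [lines[:2], lines[2:]]        # partition the input
        with Pool(2) as pool:                  # run the map phase in parallel
            mapped = pool.map(map_phase, chunks)
        print(reduce_phase(mapped))            # {'the': 2, 'data': 2, ...}

The appeal is that the programmer writes only the two small functions; the framework decides how many cores or machines to spread them across.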

And so just what are the technical underpinnings that support the new demands being placed on, and by, extreme data sets? What economies of scale can we anticipate? How will these advances spur the movement of data to Internet cloud models?

BriefingsDirect's Dana Gardner put these and other questions to a panel of new data architecture experts, to plumb how parallelism, modern data infrastructure, and MapReduce technologies come together. He spoke with Joe Hellerstein, professor of computer science at UC Berkeley; Robin Bloor, analyst at Hurwitz & Associates, and Luke Lonergan, CTO and co-founder at Greenplum.

Here are some excerpts:
Data growth has been following and exceeding Moore's Law over time. What we've been seeing is that the data sets people are gathering and storing have been doubling even faster than every 18 months. ... We're going to see all kinds of large organizations gathering data from all sorts of automated sources.

... What's changed in the last few years is that clock speeds on processors have stopped doubling every 18 months. ... Instead, what they are doing is putting more processing cores on every chip. You can expect the number of processors on your chip to double every 18 months, but they're not going to get any faster.

So data is growing faster, and we have chips basically standing still, but you're getting more of them. If you want to take advantage of that data, you're going to have to program in parallel to make use of all those processors on the chips. That's the confluence that's happening.

There are very many people storing and analyzing more data. We're very encouraged that most of our customers are finding new uses for data that are earning them more money. Consequently, the driver to analyze more and more data continues to grow. As our customers get more successful, this use of data is becoming really important.

It's easy to parallelize the data. You break it up into little chunks and you throw it out to different machines. What can we do cleverly in computing with that kind of a framework? There are a lot of ideas for how to move forward ... where you are taking this massively parallel data-flow approach.

One thing that's kind of invisible is that there is a lot of data out there that's not being analyzed fast enough to be analyzed effectively. That's something that I think parallelism is going to address. ... The only reason not to gather that data is when you run out of affordable processing and storage. Anybody with the budget will have as much data as they can budget for and will try to monetize that. It's going to be pervasive.

The core problem we've solved is the ability for our engine to redistribute the data and the computation on the fly, as these queries and analyses are being performed. ... The combination of the software-switch interconnect, which Greenplum built into the Greenplum product, and the underlying use of commodity parallel computers, is brought together in this database system that makes it possible to do SQL query and languages like MapReduce with automatic parallelism.

Businesses have invested a tremendous amount of their time over the last 15 to 25 years in SQL, and some of the more traditional kinds of business analysis that pay off very well are ensconced in that programming model. So, packaging a system that can do transactional, mixed workloads with large amounts of concurrency, with applications that use the SQL paradigm, is very important.

Packaging this together as software plus hardware, making that available as a reference architecture for customers, has been very important and has been very successful in our accounts at New York Stock Exchange, Fox, MySpace, and many others.

The combination of SQL and MapReduce in a unified way in programming environments ... is a very pragmatic [step] that can help with people's ability to get their hands on data in an organization. ... You want to have the same access to all your data via either an SQL interface or a MapReduce programming interface. ... You ought to be able to access those with whatever language suits you, mix and match.

Some things are easier to do in MapReduce, and some things are easier to do in SQL, even when you know both. Good programmers have a lot of tools in their tool belt. They like to be able to use whatever tool is appropriate for the task. Having both of these things interleaved is really quite helpful.

[The solution] is about users being able to gain access to all that power. What really turned the corner for general data analysis using SQL is the ability for a user not to have to worry about what kind of table structure they have. They can have lots of small tables joining to lots of big tables, and big tables joining to each other.

What the developer needs is an engine that doesn't care how the data is distributed, per se, just being able to use all of that parallelism on the problems of interest. ... The physical model of how the database is distributed in a shared nothing architecture in a Greenplum system is not visible to the developer.

There are a couple of questions about how an individual organization's data will end up in the cloud. Inevitably it will, but in the short-term, people like to keep their data close, particularly database data that's traditionally been in the warehouses, very carefully managed. ... It's going to be some time until we really see everybody's data warehouses up in the cloud. ... How long will it be until you really get big volumes of data in the cloud[?] The answer is that certainly new applications will be up there. We may start to see old data getting uploaded in the cloud as well.

We'll start to see big data sets up there that don't necessarily belong to anyone, and they are going to be big. In that environment, you can imagine big data analytics will have to run in the cloud, because that's where the data will be. One of the fun things about the cloud that's really exciting is the elasticity of the resources. You don't buy yourself a data center full of machines, but you rent as many machines as you need for a task.

If you have a task that's going to look at a lot of data, you would rent a lot of machines for a few hours, and then you would shrink your pool. What this is going to allow people to do is that even small organizations may, for a short period of time, look at an enormous amount of data, which perhaps doesn't originate in their own data production environment, but is something that they want to utilize for their purposes.

Disk densities show no signs of slowing down. So, data is going to be essentially no cost. The data-gathering infrastructure is also going to be mechanized. We're going through what I call the industrial revolution of data production. We're just going to build machines to generate data, because we think we can get value out of that data, and we can store it essentially for free.

The compute cost of multi-core with parallelism is going to continue Moore's Law. It's just going to continue it in a parallel programming environment. If we can get all those cores looking at all that data, it won't cost much to do that, and the cost of that will continue to shrink by half.

The only real barrier to the process is to make those systems easy to program and manageable. Cloud helps somewhat with manageability, and programming environments like SQL and MapReduce are well-suited to parallelism. We're going to just see an enormous use of data analysis over time. It's just going to grow, because it gets cheaper and cheaper and bigger and bigger.
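To illustrate the panel's mix-and-match point, here is the same toy aggregation expressed two ways: declaratively as SQL, and procedurally as map and reduce functions. The table, records, and function names are my own toy example, not Greenplum's actual API.

    # 1) Declarative: total clicks per user, as any SQL engine would express it.
    SQL = "SELECT user_id, SUM(clicks) FROM events GROUP BY user_id;"

    # 2) Procedural: the same question as map and reduce steps over raw records.
    events = [("u1", 3), ("u2", 1), ("u1", 4), ("u3", 2)]

    def mapper(record):
        user_id, clicks = record
        return (user_id, clicks)            # emit a key/value pair

    def reducer(pairs):
        totals = {}
        for key, value in pairs:
            totals[key] = totals.get(key, 0) + value
        return totals

    print(reducer(map(mapper, events)))     # {'u1': 7, 'u2': 1, 'u3': 2}

A GROUP BY is trivial in SQL; a gnarly text-parsing or graph step is often easier as map and reduce code. Running both against the same data, in the same engine, is the pragmatic step the panel describes.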
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Monday, December 29, 2008

BriefingsDirect analysts make 2009 predictions for enterprise IT, SOA, cloud and business intelligence

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 35, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Dec. 19, 2008, our guests make their top five predictions for IT in 2009. We're going to look at what trends may have changed in 2008, but with an emphasis on the impacts for IT users, and buyers and sellers in the coming year.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Joe McKendrick, independent analyst and prolific blogger; Dave Linthicum, founder of Linthicum Group; Mike Meehan, senior analyst at Current Analysis, and JP Morgenthal, senior analyst at Burton Group. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts ...
Gardner's Top Five Predictions for 2009:

1) Shadow IT. Spending on shadow IT activities will actually grow, and the money devoted to shadow IT will come from outside traditional IT budgets, from a variety of different sources, maybe even petty cash. We'll see a bit of growth in these rogue activities. Moving into these areas for business development purposes is going to be an overwhelming temptation. We will see a flattening, and in many cases a reduction, in officially sanctioned IT activities.

2) Cut Costs. Inside of traditional IT we're going to find a lot of new ways to quickly cut costs. This is going to be a drill for organizations to not spend money or spend less money. Virtualization will be a big part of that. Hypervisors will perhaps go commodity, and the value-add in the virtualized environment is going to be at the stacks -- virtualized stacks or containers at the applications level. There will be a blurring between which WOA activities happen inside IT and outside. We're going to see a lot more dumping of Unix and mainframes. We are going to sunset a lot of applications that aren't essential and save on the underlying costs of supporting them.

3) High-Scale Business Intelligence (BI). Extreme BI will require a move up scale to larger sets of data, larger sets of content, and more mingling or joining of disparate types of data and content in order to draw inferences about what the customers are willing to do and pay across both B2B and B2C activities. We'll start to see an increased use of multi-core and parallelism to support these BI activities.

4) No Stomach for Upgrades. Upgrades will suffer. We're not going to see a lot of swapping out of one system for another, unless there's a very compelling return-on-investment (ROI) scenario with verifiable short-term metrics. This is going to hurt companies like SAP and Microsoft, and Oracle and IBM to a lesser extent, given their diversification. I think Windows 7 is in trouble. People are not going to just run to Windows 7. They're going to continue to stay with XP. This makes the timing around the Vista debacle all the more injurious to Microsoft. This provides an opening for Linux and non-Microsoft virtualization. It also means Microsoft needs to move to its cloud offerings all the more quickly, which could actually spell earnings troubles for the company.

5) Social Data-CRM Mashups. The role of social media and networks will continue to grow and be impactful for enterprises, as marketers and salespeople begin to look to these networks for metadata and inferences about what customers are willing to buy, particularly under tight economic conditions. There's going to be a need to tie traditional customer relationship management (CRM) and sales applications, with some sort of process overlay, into the metadata that's available from these Web-based cloud environments, where users have shared so much inference and data about themselves. I look for some mashups between social data and the sales and business development applications and data.

Kobielus's Top Five Predictions for 2009:

1) Obama. The new administration will most likely appoint a national chief technology officer or a national tech policy coordinator. Obama is going to choose a heavy hitter who has huge credibility and stature in the IT space. It's going to be someone who's going to focus on SOA at a national level, in terms of how we, as a country, can take advantage of reuse, agility, transformation, optimization, and all the other benefits that come from SOA properly implemented across different agencies.

2) Cloud Computing. Clouds are going to become less of a work in progress, in terms of public clouds and private clouds, and more of a mature reality, in terms of how enterprises acquire functionality, applications and platforms. Clouds will stratify, which means that vendors like Google, Microsoft, and Amazon, with their cloud offerings, will build full stacks, strata, in their cloud services that include all the appropriate layers, application components, integration services, and platforms. So the industry will converge on more of a reference model for cloud in 2009.

3) Recession. We are in a deep funk, and it might get a lot worse before it gets better. That's clearly hammering all IT budgets everywhere. They're going to put a freeze on projects. They're going to delay or cancel upgrades. Users are going to dip into petty cash and go around IT to get what they need. They're going to go to cloud offerings.

4) Governance, Risk and Compliance (GRC). Government is cracking down. If it has to bail out the financial-services industry, bail out the auto industry, and bail out other industries, the government is not going to do it with no strings attached. Compliance, regulations, reporting requirements, the whole apparatus of GRC will be brought to bear on the industries that the government is saving and bailing out.

5) Social Networking. Social networking will pervade everything in terms of applications and services. We'll see more BI become social networking, in the sense of mashup as a style of BI application, reporting, dashboards, and development. Mashups for user self-service BI development will come to the fore. It will be a huge theme in the BI space in 2009 and beyond.

Baer's Top Five Predictions for 2009:

1) Cost Savings. It's going to put a lot more emphasis on using the resources and infrastructure that you already have. It's going to damp down entering into new long-term contracts for anything. You'll actually see a little less emphasis on outsourcing, because that does imply a long-term contract. I don't think anyone is really doing any meaningful projecting beyond Q1.

2) Low Cost or No Cost IT. It's going to be a lot of low cost, no cost. There will be a lot more use of open source, a lot more. This is definitely the year that the cloud and Software as a Service (SaaS) come into their own.

3) Managed Clouds. I think it's going to be managed clouds. Essentially, to take advantage of raw clouds, like Amazon EC2, you have to put in more of your own management infrastructure. I don't see the use of what I would call "clouds in the wild." I see more managed clouds from that standpoint.

4) IT Service Management. For IT organizations, it's going to dictate more attention to IT service management to show that we're not just keeping systems going and keeping the lights on, but more along the lines of, "Here are the services that we're delivering to the business," as they try to justify the systems. On the back-end, it will be "Use more of what you have," and huge renewed investments in BI.

5) GRC. It's going to take a while for this to unfold -- you just don't regulate overnight -- but there will be much greater attention to GRC.

Shimmin's Top Five Predictions for 2009:

1) Collaborative Social Networks. Vendors will tackle enterprise-plus-consumer social networks, a blended view of the two. Enterprise-focused vendors are going to do more than simply sync info from public sites like Facebook. They're going to take that information and build bridges into and out from the enterprise to those social networks, and drive information from them. It's going to become a two-way street.

2) Cloud Software. I see the vendors within the collaboration space looking beyond the small and medium business (SMB) market and more toward larger enterprises that want to squeeze more out of their existing IT infrastructure or cut costs. Folks like IBM and Microsoft have already shown us that they can hit the long tail with offerings like Bluehouse and Microsoft Online Services (MOS) for collaboration. But you're going to see vendors like Cisco and Oracle take up this challenge with more of a focus on managed hosting services that look like SaaS but are really managed.

3) Enterprise Oligarchy Models. Enterprises are going to move away from a steep hierarchy -- or the word might be "oligarchy" -- as an internal organizational model. To become more efficient and agile, companies are going to want to self-organize, creating internal ecosystems where organizations are built around employee experience, associations, interests, and energy levels -- what they want to focus on. This allows companies to more efficiently harness their users. People are going to be tasked with setting up their own BI queries and mashing up their own applications.

4) Blended Internal and External Communities. In terms of communities, both internal and external, I'm seeing the silos between them break down. Gone are the days of consumer-facing social networking and enterprise-facing social networking existing as independent entities. Thanks to user profile standards like OpenID and the expansion of APIs, community providers and third-party aggregation and integration tool vendors are going to allow applications and users to flow between what were heretofore closed communities.

5) Virtual Worlds Gain Foothold. I think we're going to see that change how virtual networks can be utilized inside the enterprise. I'm looking for virtual worlds to gain a foothold in the enterprise -- not just for marketing and sales, but also to support B2B and B2C communities, where effective communication between your supply channel members is really paramount. We'll see virtual worlds actually make an impact in 2009 by allowing these global, loosely coupled entities to communicate more effectively.

McKendrick's Top Five Predictions for 2009:

1) It's the Economy. Recession planning is so 2008, because SOA, which I focus on as well as IT, is a long-term process. You need to look three years down the road. The economy is going to turn around. I see it turning around at some point in 2009.

2) IT Can't Cut Too Much More. IT has already been tight. IT has been tight since the dot-bomb era of 2001-2002. There probably is not going to be a huge diminishment in IT departments, because budgets have been lean, things have already been tight, companies have already been running very efficiently, and IT departments have been overworked as it is.

3) Enterprise 2.0. The recession and downturn isn't going to be like it's been in the past. People are more empowered with social networking tools, as employees and as people looking for jobs. They're looking to start new businesses. We have a lot of tools available to us now that we didn't have back in 2000. People don't have to be victims of an economic downturn, as they have been in the past. We have the capability to network across the globe. We have the capability to start new businesses.

4) Cloud Economics. I just heard about another company that spent about $200 for its first two months of IT. They don't have to go out and buy servers. They don't have to go out and buy disk arrays, and worry about the maintenance, hiring people, and know how to maintain those things. We are going to see folks -- maybe IT people, or people who work for vendors and have been laid off -- have the ability to start their own business at a very low cost of entry.

5) Low-Cost Methods to Reach Markets. With the social-networking and cloud-computing phenomena, companies have these tools to employ low-cost methods to reach their markets and to interact with their customers. A marketing campaign doesn't have to cost $200,000 to reach your customers. You can use the social network, the Web 2.0 tools, to interact and collaborate and find out what's going on in your markets at a very relatively low cost.

Linthicum's Top Five Predictions for 2009:

1) Cloud Computing Matures. The interest in cloud computing, which I have been focusing on in my career for at least the last eight years, is finally going to come into its own. What we're going to see in 2009 is a lot of startups, specifically some cloud-computing startups. You're going to see even more around what I call "cloud mediation." Those are guys like RightScale and a few other folks in the space who sit between you and the major cloud providers. They basically mediate issues around data semantics, performance management, load balancing, and those sorts of things.

2) Open Cloud Services. A big hole in the cloud computing movement so far is that most of the solutions out there, even the database solutions, are proprietary. They use different APIs, different interfaces, and different sets of standards. It's going to be a play for a lot of companies to get in there and provide more reliable infrastructure in and between these various guys out there.

3) Some Cloud Social Connections. The links to social networking will be there. They're not going to be quite as pervasive as everybody thinks. Social networking is going to have its place, but once we figure it out, it will be, "Okay, yeah." It's going to have its value, but we're just going to move on as far as this revolution goes. I don't think that's going to happen in 2009. People are going to use it as a marketing opportunity, just like they used email, Web sites and those sorts of things, and now blogging opportunities, but eventually it's just going to fall into place.

4) Rogue Clouds and PaaS. There will be a huge explosion in the rogue cloud movement, and also the platform-as-a-service (PaaS) space. The architects and CIOs out there are going to be scrambling around trying to figure out how to place governance around that. Everybody is going to be building applications, typically using free platforms like Google App Engine. They're going to start launching these things into production, and there is going to be no rhyme or reason around how they fit into the existing infrastructure.

5) SOA Gets Cloudy. There's going to be a larger focus on inter-domain SOA technology. The focus will still be on the short-term tactical and the ability to provide quick value in the SOA space to justify it, so you can get additional funding. As we start building these things, people are going to look at the departments that are implementing their SOA projects and try to figure out how to bind these things at an enterprise level. I call this the micro domain versus the macro domain. On the downside, the jig will be up for poor SOA technology vendors out there. Guys who haven't been able to get acquired or haven't been able to hit that inflection point are eventually just going to have the plug pulled. And 2009 is going to be when it happens. They're just going to run out of steam. SOA predates the creation of the buzzword, and it's going to outlive the word "SOA," too. It's going to morph into different things, and the cloud computing movement is going to get into it and take it in different directions. The whole SOA movement is going to be more defined by the cloud.

Meehan's Top Five Predictions for 2009:

1) Take My Hardware, Please. Back in 2001, when that recession hit, all of a sudden you could buy wonderful amounts of IT gear on eBay for next to nothing. I remember talking to one guy who was smiling like a Cheshire Cat, because he had replaced $45,000 worth of Unix with $500 worth of Linux. I think you are going to see a lot of that. Expect a glut of servers and storage gear and network gear, and you are going to be able to get it cheap and affordable. That's going to hit the storage and network and server companies.

2) Tough License Negotiations. CIOs are ... going to be asked to cut budget, and there is only so much flesh you can cut before you have to deal with that maintenance license. I think every company in the world is aware of the fact that it pays more in licenses than it wants to. They have always theoretically wanted to lower those costs. The pressure now is going to be too great for them not to consider options. This is going to be great for open-source companies, which are going to be able to come in and say, "All right, you don't have to pay me a rolling license; here is my support cost. See how much it's going to lower your licensing bill." It is going to be bad for Microsoft, because, again, to a degree they are becoming commoditized across their portfolio, and that's going to hit them right in the breadbasket. This should hit some enterprise resource planning (ERP) vendors too. Anybody who can sell SaaS in the ERP market is going to be doing better. I think you are going to see some erosion on the SAP and Oracle side, as far as enterprise apps go.

3) Easier Integration. "Make my life easier or go away." That basically means, users are going to need productivity and ease-of-use integration. You're going to see those in requests for proposals (RFPs). If they're not stated explicitly, they will be there implicitly. Don't come in and tell me how much work I'm going to have to do to make all of this come together. Come in and tell me how this is going to make my life easier on day one. The companies that can deliver that will be the ones making the sales. The ones who are telling you that you're going to need to do eight months of work to get this up and running are going to be pushed to the back burner.

4) Smooth SOA. What you're going to see in a lot of the SOA projects out there in particular is, "All right. Make it easy for me to assemble an application. Make it easy for me to reuse my assets. Make it easy for me to modify my existing applications. Make it easy for me to integrate different applications and even information between different divisions of my company." You almost want it to be governable on the fly. What you really want is that you don't have to dedicate too much time and resources to undertake these functions. Users aren't going to have that much time or that many resources. So, how quickly can I do things now, as opposed to how thoroughly can I do things? You're going to want to be thorough to an extent, but really it's going to be speed to market and speed to end of project that's going to be a determinant in there.

5) Telecom Realignment. The U.S. government is going to start treating telecom like it's our national road system, and you are going to see some serious investment in that area. That's going to become one of the key points in the economic stimulus package that you're going to see. I also think you are going to see European telcos begin to encroach, either through acquisition or just through offering services into the U.S. market. ... The last one: HP buys Sun. Somebody is going to get bought this year, somebody fairly big. I'm saying HP is buying Sun.

Morgenthal's Top Five Predictions for 2009:

1) Business Process Focus. We're going to see a greater focus on the business process. Not business process management (BPM) per se, although initially people will target that. I think SOA is dead, and I believe companies have no stomach for IT initiatives that cannot immediately be attributed to a value. They're going to do some small-scale business process re-engineering, and they're going to get tremendous value from it. They're going to see that simplification is the way to go. Why are we doing all these complex things -- this hooking to that, hooking to this, hooking to that -- when I can just go into this one box and get everything done there? The age of disposable computing is here.

2) Social Networking Backlash. Everyone is getting into it, having a little fun. Certain ones of us are on the leading edge. We're already getting bombarded and tired. We're already fried and overloaded from these social networks. The new people think it's a great new toy. Give it a couple of years and you are going to see a tremendous backlash. You're going to see a rise of firms that will get paid to get people off the grid.

3) Era of Anti-IT. The pain from the economy is going to impact the open-systems market. We're seeing the rise of what I call the "anti-IT." You read about people reaching into petty cash, doing things on the cheap, finding other ways to get things done. The one that's going to be the biggest impact is that people are treating open source like free software. That will destroy the open-source market for sure. It's the death knell. I remind every one of my customers of that ... open source is not free software. You're either contributing dollars to the team that's doing it, or you are contributing your time and effort. It's not free software. You just don't take it and use it. That will be the death knell for open source for sure.

4) Millennial Workforce Shifts. The millennial workforce is starting. This is going to change everything, and it's starting to already. These people have an attitude that I haven't seen in a workforce since marketing people came out in the dot-com era. They definitely feel like, "I want my toys. I want to be able to use my phone at work. I want to use my computer at work. I want to be able to access my sites at work." I see companies dealing with this issue in a unique way. Their first inclination isn't to push back with the old adage and the old way of talking about it, saying, "Hey, it's our way or the highway. We've got the money." It's "Okay, what do you want?" This is going to really change things. How? That's yet to be seen, but it clearly means the introduction of a much more mobile workforce, with more telecommuters.

5) Digital Rights Management Changes. There's a big change coming in Digital Rights Management (DRM) and patent and copyright. It's being led by this initiative out of Harvard involving the Recording Industry Association of America (RIAA). RIAA may have just started a war that matters for everybody in the industry who has any copyright or patent infringement suit. A Harvard law class, I believe, represented by a Harvard law professor [Charles Nesson], is backing it. They're arguing that it's unconstitutional. So this case could be a landmark for DRM, copyright infringement, and patent infringement. It would have a tremendous impact on the potential for a startup economy. Landmark cases like this will do a lot to further the opportunities of these firms to go out there and build something without worrying, "Am I going to get taken out by Microsoft? Am I going to get taken out by Apple? I can't afford that." It's really interesting what could happen, given that cases like this are now falling on the side of the small guy, and not on the side of big companies.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.