Monday, January 5, 2009

A technical look at how parallel processing brings vast new capabilities to large-scale BI and data analysis

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Read a full transcript of the discussion.

Internet-scale data collection, swarms of sensor outputs, and content clouds from the mobile device fabric -- as well as enterprises piling up ever more kinds of analytics metadata to analyze -- have stretched traditional data-management models to the breaking point.

Yet advances in parallel processing using multi-core chipsets have prompted new software approaches such as MapReduce that can handle these data chores at surprisingly low total cost. The technical response to oceans of data is something that has been building for some time. But the time now seems ripe to bring the technical solutions of lower-cost parallel computing advances into play with the economic imperatives of huge data crunching requirements.

And so just what are the technical underpinnings that support the new demands being placed on, and by, extreme data sets? What economies of scale can we anticipate? How will these advances spur the movement of data to Internet cloud models?

BriefingsDirect's Dana Gardner put these and other questions to a panel of new data architecture experts to plumb how parallelism, modern data infrastructure, and MapReduce technologies come together. He spoke with Joe Hellerstein, professor of computer science at UC Berkeley; Robin Bloor, analyst at Hurwitz & Associates, and Luke Lonergan, CTO and co-founder at Greenplum.

Here are some excerpts:
Data growth has been following and exceeding Moore's Law over time. What we've been seeing is that the data sets that people are gathering and storing over time have been doubling at a rate even faster than every 18 months. ... We're going to see all kinds of large organizations gathering data from all sorts of automated sources.

... What's changed in the last few years is that clock speeds on processors have stopped doubling every 18 months. ... Instead, what they are doing is putting more processing cores on every chip. You can expect the number of processors on your chip to double every 18 months, but they're not going to get any faster.

So data is growing faster, and we have chips basically standing still, but you're getting more of them. If you want to take advantage of that data, you're going to have to program in parallel to make use of all those processors on the chips. That's the confluence that's happening.

Many more people are storing and analyzing more data. We're very encouraged that most of our customers are finding new uses for data that are earning them more money. Consequently, the driver to analyze more and more data continues to grow. As our customers get more successful, this use of data is becoming really important.

It's easy to parallelize the data. You break it up into little chunks and you throw it out to different machines. What can we do cleverly in computing with that kind of a framework? There are a lot of ideas for how to move forward ... where you are taking this massively parallel data-flow approach.
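To make that data-flow idea concrete, here is a minimal sketch in Python of the pattern being described: split the input into chunks, run a map function over each chunk on a separate worker, and reduce the partial results into one answer. The word-count task and the four-worker pool are illustrative assumptions, not any vendor's API.

```python
# Minimal sketch of the parallel data-flow pattern discussed above.
# Hypothetical example: counting word frequencies across many documents.
from collections import Counter
from multiprocessing import Pool

def map_chunk(docs):
    """Map step: each worker counts words in its own chunk of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(doc.lower().split())
    return counts

def reduce_counts(partials):
    """Reduce step: merge the per-chunk counts into one result."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    documents = ["big data on parallel machines",
                 "parallel machines crunch big data",
                 "data data everywhere"]
    # Break the data into little chunks and throw them out to different workers.
    chunks = [documents[i::4] for i in range(4)]
    with Pool(processes=4) as pool:
        partial_counts = pool.map(map_chunk, chunks)
    print(reduce_counts(partial_counts).most_common(3))
```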

One thing that's kind of invisible is that there is a lot of data out there that's not being analyzed fast enough to be analyzed effectively. That's something that I think parallelism is going to address. ... The only reason not to gather that data is when you run out of affordable processing and storage. Anybody with the budget will have as much data as they can budget for and will try to monetize that. It's going to be pervasive.

The core problem we've solved is the ability for our engine to redistribute the data and the computation on the fly, as these queries and analyses are being performed. ... The combination of the software-switch interconnect, which Greenplum built into the Greenplum product, and the underlying use of commodity parallel computers, is brought together in this database system that makes it possible to use SQL queries and languages like MapReduce with automatic parallelism.

Businesses have invested a tremendous amount of their time over the last 15 to 25 years in SQL, and some of the more traditional kinds of business analysis that pay off very well are ensconced in that programming model. So, packaging a system that can do transactional, mixed workloads with large amounts of concurrency, with applications that use the SQL paradigm, is very important.

Packaging this together as software plus hardware, making that available as a reference architecture for customers, has been very important and has been very successful in our accounts at New York Stock Exchange, Fox, MySpace, and many others.

The combination of SQL and MapReduce in a unified way in programming environments ... is a very pragmatic [step] that can help with people's ability to get their hands on data in an organization. ... You want to have the same access to all your data via either an SQL interface or a MapReduce programming interface. ... You ought to be able to access those with whatever language suits you, mix and match.

Some things are easier to do in MapReduce, and some things are easier to do in SQL, even when you know both. Good programmers have a lot of tools in their tool belt. They like to be able to use whatever tool is appropriate for the task. Having both of these things interleaved is really quite helpful.
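As a small, hypothetical illustration of that mix-and-match point, the sketch below expresses the same question -- page views per user -- once as a declarative SQL statement a parallel engine could plan on its own, and once as procedural map and reduce functions over raw log lines. The table name, log format, and field positions are invented for the example; this is not Greenplum's specific SQL-plus-MapReduce interface.

```python
from collections import defaultdict

# 1) Declarative style: hand the engine a SQL statement and let it
#    parallelize the scan, group-by, and aggregation for you.
#    (The table name "weblog" is a made-up example.)
SQL_VERSION = """
    SELECT user_id, COUNT(*) AS page_views
    FROM weblog
    GROUP BY user_id
"""

# 2) Procedural style: map over raw log lines, then reduce per key.
#    Handy when the parsing or per-record logic is awkward in SQL.
def map_line(line):
    """Emit (user_id, 1) for each well-formed, tab-separated log line."""
    fields = line.split("\t")
    if len(fields) >= 2:
        yield fields[0], 1

def reduce_by_key(pairs):
    """Sum the counts emitted by the map step, grouped by user_id."""
    totals = defaultdict(int)
    for user_id, count in pairs:
        totals[user_id] += count
    return dict(totals)

if __name__ == "__main__":
    log_lines = ["alice\t/home", "bob\t/pricing", "alice\t/docs"]
    pairs = (pair for line in log_lines for pair in map_line(line))
    print(reduce_by_key(pairs))   # {'alice': 2, 'bob': 1}
```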

[The solution] is about users being able to gain access to all that power. What really turned the corner for general data analysis using SQL is the ability for a user not to have to worry about what kind of table structure they have. They can have lots of small tables joining to lots of big tables, and big tables joining to each other.

What the developer needs is an engine that doesn't care how the data is distributed, per se, just being able to use all of that parallelism on the problems of interest. ... The physical model of how the database is distributed in a shared nothing architecture in a Greenplum system is not visible to the developer.
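One way to picture that invisibility is a toy model of hash distribution: every row is routed to a node by hashing a distribution key, and queries never mention the node at all. The four-node cluster and the customer_id key below are assumptions made for illustration, not a description of Greenplum's internals.

```python
# Toy model of hash distribution in a shared-nothing cluster.
# Assumption: 4 nodes and "customer_id" as the distribution key.
import hashlib

NUM_NODES = 4

def node_for(key):
    """Pick a node deterministically from the distribution key."""
    digest = hashlib.md5(str(key).encode()).hexdigest()
    return int(digest, 16) % NUM_NODES

def distribute(rows, key_field):
    """Spread rows across nodes; the developer never sees this mapping."""
    nodes = [[] for _ in range(NUM_NODES)]
    for row in rows:
        nodes[node_for(row[key_field])].append(row)
    return nodes

if __name__ == "__main__":
    orders = [{"customer_id": cid, "amount": 10 * cid} for cid in range(8)]
    for node_id, shard in enumerate(distribute(orders, "customer_id")):
        print(f"node {node_id}: {len(shard)} rows")
    # Rows with the same customer_id always land on the same node, so a
    # join or group-by on customer_id needs no data movement at query time.
```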

There are a couple of questions about how an individual organization's data will end up in the cloud. Inevitably it will, but in the short-term, people like to keep their data close, particularly database data that's traditionally been in the warehouses, very carefully managed. ... It's going to be some time until we really see everybody's data warehouses up in the cloud. ... How long will it be until you really get big volumes of data in the cloud[?] The answer is that certainly new applications will be up there. We may start to see old data getting uploaded in the cloud as well.

We'll start to see big data sets up there that don't necessarily belong to anyone, and they are going to be big. In that environment, you can imagine big data analytics will have to run in the cloud, because that's where the data will be. One of the fun things about the cloud that's really exciting is the elasticity of the resources. You don't buy yourself a data center full of machines, but you rent as many machines as you need for a task.

If you have a task that's going to look at a lot of data, you would rent a lot of machines for a few hours, and then you would shrink your pool. This is going to allow even small organizations, for a short period of time, to look at an enormous amount of data, which perhaps doesn't originate in their own data production environment, but is something they want to utilize for their purposes.

Disk densities show no signs of slowing down. So, data is going to be essentially no cost. The data-gathering infrastructure is also going to be mechanized. We're going through what I call the industrial revolution of data production. We're just going to build machines to generate data, because we think we can get value out of that data, and we can store it essentially for free.

The compute cost of multi-core with parallelism is going to continue Moore's Law. It's just going to continue it in a parallel programming environment. If we can get all those cores looking at all that data, it won't cost much to do that, and the cost of that will continue to shrink by half.

The only real barrier to the process is to make those systems easy to program and manageable. Cloud helps somewhat with manageability, and programming environments like SQL and MapReduce are well-suited to parallelism. We're going to just see an enormous use of data analysis over time. It's just going to grow, because it gets cheaper and cheaper and bigger and bigger.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Monday, December 29, 2008

BriefingsDirect analysts make 2009 predictions for enterprise IT, SOA, cloud and business intelligence

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Analyst Insights Edition, Vol. 35, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events with a panel of IT analysts.

In this episode, recorded Dec. 19, 2008, our guests make their top five predictions for IT in 2009. We're going to look at what trends may have changed in 2008, but with an emphasis on the impacts for IT users, buyers, and sellers in the coming year.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis; Joe McKendrick, independent analyst and prolific blogger; Dave Linthicum, founder of Linthicum Group; Mike Meehan, senior analyst at Current Analysis, and JP Morgenthal, senior analyst at Burton Group. Our discussion is hosted and moderated by me, BriefingsDirect's Dana Gardner.

Here are some excerpts ...
Gardner's Top Five Predictions for 2009:

1) Shadow IT. Spending on shadow IT activities will actually grow, and the money devoted to them will come from outside traditional IT budgets, from a variety of different sources, maybe even petty cash, and we'll see a bit of growth in these rogue activities. Moving into these areas for business development purposes is going to be an overwhelming temptation. We will see a flattening, and in many cases a reduction, in officially sanctioned IT activities.

2) Cut Costs. Inside of traditional IT we're going to find a lot of new ways to quickly cut costs. This is going to be a drill for organizations to not spend money or spend less money. Virtualization will be a big part of that. Hypervisors will perhaps go commodity, and the value-add in the virtualized environment is going to be at the stacks -- virtualized stacks or containers at the applications level. There will be a blurring between which WOA activities happen inside IT and outside. We're going to see a lot more dumping of Unix and mainframes. We are going to sunset a lot of applications that aren't essential and save on the underlying costs of supporting them.

3) High-Scale Business Intelligence (BI). Extreme BI will require a move up scale to larger sets of data, larger sets of content, and more mingling or joining of disparate types of data and content in order to draw inferences about what the customers are willing to do and pay across both B2B and B2C activities. We'll start to see an increased use of multi-core and parallelism to support these BI activities.

4) No Stomach for Upgrades. Upgrades will suffer. We're not going to see a lot of swapping out of one system for another, unless there's a very compelling return-on-investment (ROI) scenario with verifiable short-term metrics. This is going to hurt companies like SAP and Microsoft, and Oracle and IBM to a lesser extent, given their diversification. I think Windows 7 is in trouble. People are not going to just run to Windows 7. They're going to continue to stay with XP. This makes the timing around the Vista debacle all the more injurious to Microsoft. This provides an opening for Linux and non-Microsoft virtualization. It also means Microsoft needs to move to its cloud offerings all the more quickly, which then could actually spell earnings troubles for the company.

5) Social Data-CRM Mashups. The role of social media and networks will continue to grow and be impactful for enterprises, as marketers and salespeople begin to look to these networks for the metadata and inferences about what customers are willing to buy, particularly under tight economic conditions. There's going to be a need to tie traditional customer relationship management (CRM) and sales applications with some sort of a process overlay into the metadata that's available from these Web-based cloud environments, where users have shared so much inference and data about themselves. I look for some mashups between social data and the sales and business development applications and data.

Kobielus's Top Five Predictions for 2009:

1) Obama. The new administration will most likely appoint a national chief technology officer or a national tech policy coordinator. Obama is going to choose a heavy hitter who has huge credibility and stature in the IT space. It's going to be someone who's going to focus on SOA at a national level, in terms of how we, as a country, can take advantage of reuse, agility, transformation, optimization, and all the other benefits that come from SOA properly implemented across different agencies.

2) Cloud Computing. Clouds are going to become less of a work in progress, in terms of public clouds and private clouds, and become more of a mature reality, in terms of how enterprises acquire functionality, how they acquire applications and platforms. Clouds will stratify, which means that the vendors, like Google, Microsoft, and Amazon and others with their cloud offerings, will build full stacks, strata, in their cloud services that include all the appropriate layers, application components, integration services, and platforms. So, the industry will converge on more of a reference model for cloud in 2009.

3) Recession. We are in a deep funk, and it might get a lot worse before it gets better. That's clearly hammering all IT budgets everywhere. They're going to put a freeze on projects. They're going to delay or cancel upgrades. Users are going to dip into petty cash and go around IT to get what they need. They're going to go to cloud offerings.

4) Governance, Risk and Compliance (GRC). Government is cracking down. If it has to bail out the financial-services industry, bail out the auto industry, and bail out other industries, the government is not going to do it with no strings attached. Compliance, regulations, reporting requirements, the whole apparatus of GRC will be brought to bear on the industries that the government is saving and bailing out.

5) Social Networking. Social networking will pervade everything in terms of applications and services. We'll see more BI become social networking, in the sense of mashup as a style of BI application, reporting, dashboards, and development. Mashups for user self-service BI development will come to the fore. It will be a huge theme in the BI space in 2009 and beyond.

Baer's Top Five Predictions for 2009:

1) Cost Savings. It's going to put a lot more emphasis on using the resources and infrastructure that you already have. It's going to damp down entering into new long-term contracts for anything. You'll actually see a little less emphasis on outsourcing, because that does imply a long-term contract. I don't think anyone is really doing any meaningful projecting beyond Q1.

2) Low Cost or No Cost IT. It's going to be a lot of low cost, no cost. There will be a lot more use of open source, a lot more. This is definitely the year that the cloud and Software as a Service (SaaS) come into their own.

3) Managed Clouds. I think it's going to be managed clouds. Essentially, to take advantage of raw clouds, like Amazon EC2, you have to put in more of your own management infrastructure. I don't see the use of what I would call "clouds in the wild." I see more managed clouds from that standpoint.

4) IT Service Management. For IT organizations, it's going to dictate more attention to IT service management to show that we're not just keeping systems going and keeping the lights on, but more along the lines of, "Here are the services that we're delivering to the business," as they try to justify the systems. On the back-end, it will be "Use more of what you have," and huge renewed investments in BI.

5) GRC. It's going to take a while for this to unfold -- you just don't regulate overnight -- but there will be much greater attention to GRC.

Shimmin's Top Five Predictions for 2009:

1) Collaborative Social Networks. Vendors will tackle enterprise-plus-consumer based social networks, a blended view of those. Enterprise-focused vendors are going to do more than simply sync info from public sites like Facebook. They're going to take that information and build into or out from the enterprise into those social networks and drive information from those. It's going to become a two-way street.

2) Cloud Software. I see the vendors within the collaboration space moving beyond the small and medium business (SMB) market and looking more toward the larger enterprises that are looking to squeeze more out of their existing IT infrastructure or cut costs. Folks like IBM and Microsoft have already shown us that they can hit the long tail with stuff like Bluehouse and Microsoft Online Services (MOS) for collaboration. But, you're going to see vendors like Cisco and Oracle take up this challenge with more of a focus on managed hosting services that look more like SaaS, but they are really managed.

3) Enterprise Oligarchy Models. Enterprises are going to move away from a steep hierarchy, or the word might be "oligarchy," of an organizational model internally. To become not just more efficient but also more agile, companies are going to want to self-organize to create these internal ecosystems where organizations are built around employee experience, associations, interests, and energy levels -- what they want to focus on. This allows companies to more efficiently harness the users. People are going to be tasked with setting up their own BI queries and mashing up their own applications.

4) Blended Internal and External Communities. In terms of communities, both internally and externally, I am seeing silos break down between those. Gone are the days of consumer-facing social networking and enterprise-facing social networking existing as independent entities. Thanks to user profile standards like OpenID and the expansion of APIs, community providers and third-party aggregation and integration tool vendors are going to allow applications and users to flow between what were heretofore closed communities.

5) Virtual Worlds Gain Foothold. I think we're going to see that change how virtual networks can be utilized inside the enterprise. I'm looking for virtual worlds to gain a foothold in the enterprise. It's not just for marketing and sales, but also to support B2B and B2C communities, where effective communication between your supply channel members is really paramount. We'll see virtual worlds actually make an impact in terms of allowing these global, loosely coupled entities to communicate more effectively in 2009.

McKendrick's Top Five Predictions for 2009:

1) It's the Economy. Recession planning is so 2008, because SOA, which I focus on as well as IT, is a long-term process. You need to look three years down the road. The economy is going to turn around. I see it turning around at some point in 2009.

2) IT Can't Cut Too Much More. IT has already been tight. IT has been tight since the dot-bomb era of 2001-2002. There probably is not going to be a huge diminishment in IT departments, because the budgets have been lean, things have already been tight, companies have already been running very efficiently, and IT departments have been overworked as it is.

3) Enterprise 2.0. The recession and downturn isn't going to be like it's been in the past. People are more empowered with social networking tools, as employees and as people looking for jobs. They're looking to start new businesses. We have a lot of tools available to us now that we didn't have back in 2000. People don't have to be victims of an economic downturn, as they have been in the past. We have the capability to network across the globe. We have the capability to start new businesses.

4) Cloud Economics. I just heard about another company that spent about $200 for its first two months of IT. They don't have to go out and buy servers. They don't have to go out and buy disk arrays, and worry about the maintenance, hiring people, and know how to maintain those things. We are going to see folks -- maybe IT people, or people who work for vendors and have been laid off -- have the ability to start their own business at a very low cost of entry.

5) Low-Cost Methods to Reach Markets. With the social-networking and cloud-computing phenomena, companies have these tools to employ low-cost methods to reach their markets and to interact with their customers. A marketing campaign doesn't have to cost $200,000 to reach your customers. You can use the social network, the Web 2.0 tools, to interact and collaborate and find out what's going on in your markets at a very relatively low cost.

Linthicum's Top Five Predictions for 2009:

1) Cloud Computing Matures. The interest in cloud computing, which I have been focusing on in my career, at least for the last eight years, is finally going to come into its own. What we're going to see in 2009 is a lot of startups, specifically some cloud-computing startups. You're going to see even more around what I call "cloud mediation." That is guys like RightScale, and a few other folks in the space that sit between you and the major cloud providers. They basically mediate issues around data semantics, performance management, load balancing, and those sorts of things.

2) Open Cloud Services. A big hole in the cloud computing movement so far is that most of the solutions out there, even the database solutions, are proprietary. They use different APIs, different interfaces, and different sets of standards. It's going to be a play for a lot of companies to get in there and provide more reliable infrastructure in and between these various guys out there.

3) Some Cloud Social Connections. The links to social networking will be there. They're not going to be quite as pervasive as everybody thinks. Social networking is going to have its place, but once we figure it out, it will be, "Okay, yeah." It's going to have its value, but we're just going to move on as far as this revolution goes. I don't think that's going to happen in 2009. People are going to use it as a marketing opportunity, just like they used email, Web sites and those sorts of things, and now blogging opportunities, but eventually it's just going to fall into place.

4) Rogue Clouds and PaaS. There will be a huge explosion in the rogue cloud movement, and also the platform-as-a-service (PaaS) space. The architects and CIOs out there are going to be scrambling around trying to figure out how to place governance around that. Everybody is going to be building applications, typically using free platforms like Google App Engine. They're going to start launching these things into production, and there is going to be no rhyme or reason around how they fit into the existing infrastructure.

5) SOA Gets Cloudy. There's going to be a larger focus on inter-domain SOA technology. The focus will still be on the short-term tactical and the ability to provide quick value in the SOA space to justify it, so you can get additional funding. As we start building these things, people are going to look at the departments that are implementing their SOA projects and try to figure out how to bind these things at an enterprise level. I call this the micro domain versus the macro domain. On the downside, the jig will be up for poor SOA technology vendors out there. Guys who haven't been able to get acquired or haven't been able to hit that inflection point ... are eventually just going to have the plug pulled. And, 2009 is going to be when it's going to happen. They're just going to run out of steam. SOA predates the creation of the buzzword, and it's going to outlast it. It's going to morph into different things, and the cloud computing movement is going to get into it and define it in different directions. The whole SOA movement is going to be more defined by the cloud.

Meehan's Top Five Predictions for 2009:

1) Take My Hardware, Please. Back in 2001, when that recession hit, all of a sudden you could buy wonderful amounts of IT gear on eBay for next to nothing. I remember talking to one guy who was smiling like a Cheshire Cat, because he had replaced $45,000 worth of Unix with $500 worth of Linux. I think you are going to see a lot of that. Expect a glut of servers and storage gear and network gear, and you are going to be able to get it cheap and affordable. That's going to hit the storage and network and server companies.

2) Tough License Negotiations. CIOs are ... going to be asked to cut budget, and there is only so much flesh you can cut out before you have to deal with that maintenance license. I think every company in the world is aware of the fact that they pay more in licenses than they want to. They have always theoretically wanted to lower those costs. The pressure now is going to be too great for them to not consider options. This is going to be great for open source companies, which are going to be able to come in and say, "All right, you don't have to pay me a rolling license; here is my support cost; see how much it's going to lower your license costs." It is going to be bad for Microsoft, because again, to a degree they are becoming commoditized across their portfolio, and that's going to hit them right in the breadbasket. This should hit some enterprise resource planning (ERP) vendors too. Anybody who can sell SaaS in the ERP market is going to be doing better. I think you are going to see some erosion on the SAP and Oracle side, as far as enterprise apps go.

3) Easier Integration. "Make my life easier or go away." That basically means, users are going to need productivity and ease-of-use integration. You're going to see those in requests for proposals (RFPs). If they're not stated explicitly, they will be there implicitly. Don't come in and tell me how much work I'm going to have to do to make all of this come together. Come in and tell me how this is going to make my life easier on day one. The companies that can deliver that will be the ones making the sales. The ones who are telling you that you're going to need to do eight months of work to get this up and running are going to be pushed to the back burner.

4) Smooth SOA. What you're going to see in a lot of the SOA projects out there in particular is, "All right. Make it easy for me to assemble an application. Make it easy for me to reuse my assets. Make it easy for me to modify my existing applications. Make it easy for me to integrate different applications and even information between different divisions of my company." You almost want it to be governable on the fly. What you really want is that you don't have to dedicate too much time and resources to undertake these functions. Users aren't going to have that much time or that many resources. So, how quickly can I do things now, as opposed to how thoroughly can I do things? You're going to want to be thorough to an extent, but really it's going to be speed to market and speed to end of project that's going to be a determinant in there.

5) Telecom Realignment. The U.S. government is going to start treating telecom like it's our national road system, and you are going to see some serious investment in that area. That's going to become one of the key points in the economic stimulus package that you're going to see. I also think you are going to see European telcos begin to encroach, either through acquisition or just through offering services into the U.S. market. ... The last one, HP buys Sun. Somebody is going to get bought this year, somebody fairly big. I'm saying HP is buying Sun.

Morgenthal's Top Five Predictions for 2009:

1) Business Process Focus. We're going to see a greater focus on the business process. Not business process management (BPM) per se, although initially people will target that. I think SOA is dead, and I believe companies have no stomach for IT initiatives that cannot immediately be attributed to a value. They're going to do some small-scale business process re-engineering, and they're going to get tremendous value from it. They're going to see that simplification is the way to go. Why are we doing all these complex things -- this hooking to that, hooking to this, hooking to that? I can just go into this one box and get everything done there. The age of disposable computing is here.

2) Social Networking Backlash. Everyone is getting into it, having a little fun. Certain ones of us are on the leading edge. We're already getting bombarded and tired. We're already fried and overloaded from these social networks. The new people think it's a great new toy. Give it a couple of years and you are going to see a tremendous backlash. You're going to see a rise of firms that will get paid to get people off the grid.

3) Era of Anti-IT. The pain from the economy is going to impact the open-systems market. We're seeing the rise of what I call the "anti-IT." You read about people reaching into petty cash, doing things on the cheap, finding other ways to get things done. The one that's going to have the biggest impact is that people are treating open source like free software. That will destroy the open-source market for sure. It's the death knell. I remind every one of my customers of that ... open source is not free software. You're either contributing dollars to the team that's doing it, or you are contributing your time and effort. It's not free software. You just don't take it and use it. That will be the death knell for open source for sure.

4) Millennial Workforce Shifts. The millennial workforce is starting. This is going to change everything, and it's starting to already. These people have an attitude that I haven't seen in a workforce since marketing people came out in the dot-com era. They definitely feel like, "I want my toys. I want to be able to use my phone at work. I want to use my computer at work. I want to be able to access my sites at work." I see companies dealing with this issue in a unique way. Their first inclination isn't to push back with the old adage and the old way of talking about it, saying, "Hey, it's our way or the highway. We've got the money." It's "Okay, what do you want?" This is going to really change things. How? It's yet to be seen, but clearly it means the introduction of a much more mobile workforce and more telecommuters.

5) Digital Rights Management Changes. There's a big change coming in Digital Rights Management (DRM) and patent and copyright. It's being led by this initiative out of Harvard with the Recording Industry Association of America (RIAA). RIAA may have just started a war for everybody in the industry who has any copyright or any patent infringement suit. A Harvard law class, I believe, represented by a Harvard law professor [Charles Nesson], is backing it. They're arguing that it is unconstitutional. So this case could be a landmark for DRM, copyright infringement, and patent infringement. It would have a tremendous impact going into the potential for a startup economy. Landmark cases like this will do a lot to further the opportunities of these firms to go out there and build something without worrying, "Am I going to get taken out by Microsoft? Am I going to get taken out by Apple? I can't afford that." It's really interesting what could happen, given that cases like this are now falling on the side of the small guy, and not on the side of big companies.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Tuesday, December 16, 2008

MapReduce-scale analytics change BI game as enterprises need to mine ever-expanding data sets

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Read a full transcript of the discussion.

Internet-scale data sets and Web-scale analytics have placed a different set of requirements on software infrastructure and data processing techniques. More types of companies and organizations are seeking new inferences and insights across a variety of massive data sets -- some into the petabyte scale.

How can all this data be shifted and analyzed quickly, and how can we deliver the results to an inclusive class of business-focused users? Following the lead of such Web-scale innovators as Google, and through the leveraging of powerful performance characteristics of parallel computing on top of industry-standard hardware, such companies as Greenplum are now focusing on how MapReduce approaches are changing business intelligence (BI) and the data-management game.

BI has become a killer application over the past few years, and we're now extending that beyond enterprise-class computing into cloud-class computing. The amount of data and content -- and the need for innovative analytics from across the Internet -- is still growing rapidly, even though we have harsh economic times.

To provide an in-depth look at how parallelism, modern data infrastructure, and MapReduce technologies come together in the new age, BriefingsDirect's Dana Gardner recently spoke with Tim O’Reilly, CEO and founder of O’Reilly Media and blogger; Jim Kobielus, senior analyst at Forrester Research, and Scott Yara, president and co-founder at Greenplum.

Here are some excerpts:
Kobielus: A number of things are happening ... and the trend continues to grow. In terms of the data sets, it’s becoming ever more massive for analytics. It’s equivalent to Moore’s Law, in the sense that every several years, the size of the average data warehouse or data mart grows by an order of magnitude.

Why are data warehouses bulking up so rapidly? One key thing is that organizations, especially in tough times when they're trying to cut costs, continue to consolidate a lot of disparate data sets into fewer data centers, onto fewer servers, and into fewer data warehouses that become ever-more important for their BI and advanced analytics.

What we're seeing is that more data warehouses are becoming enterprise data warehouses and are becoming multi-domain and multi-subject. You used to have tactical data marts, one for your customer data, one for your product data, one for your finance data, and so forth. Now, the enterprise data warehouse is becoming the be all and end all -- one hub for all of those sets.

Also, the data warehouse is becoming more than a data warehouse. It's becoming a full-fledged content warehouse, not just structured relational data, but unstructured and semi-structured data -- from XML, from your enterprise content management system, from the Web, from various formats.

O'Reilly: In the first age of computing, business models were dominated by hardware. In the second age, they were dominated by software. What started to happen in the 1990s ... open source started to create new business models around data, and, in particular, around network applications that built huge data sets through user participation. That’s the essence of what I call Web 2.0.

Look at Google. It's a BI company, based on massive data sets, where, first of all, they are spidering all the activity off of the Web, and that's one layer. Then, they do this detailed analysis of the link structure of that Web, and that's another layer. Then, they start saying, "Well, what else can we find?" They start looking at clickstream data. They start looking at browsing history, and where people go afterward. Think of all the data. Then, they deliver service against that.

That’s the essence of Web 2.0, building a massive data set, doing real-time analytics against it, and then figuring out what services you can deliver. What’s happening today is that movement is transferring from the consumer Web into business.

... When we think about where this is going, we first have to understand that everybody is connected all the time via applications, and this is accelerating, for example, via mobile. The need for real-time analytics against massive data sets is universal. ... This is a real frontier of competitive advantage. You look at the way that new technologies are being explored by startups. So many of the advantages are in data.

Yara: We're now entering this new cycle, where companies are going to be defined by their ability to capture and make use of the data and the user contributions that are coming from their customers and community. That is really about being able to make parallel computing a reality.

... If you look at running applications on a much cheaper and much more efficient set of commodity systems and consolidating applications through virtualization, that would be a really compelling thing, and we've seen a multi-billion dollar industry born of that.

... We're talking about using parallel computing techniques, open-source software, and commodity hardware. It’s literally a 10- to 100-fold improvement in price performance. When the cost of data analysis comes down 10 to 100 times, that’s when new things become possible.

... Business is now driven by Web 2.0, by the success of Google, and by their own use and actions of the Web realizing how important data is to their own businesses. That’s become a very big driver, because it turns out that parallel computing, combined with commodity hardware, is a very disruptive platform for doing large-scale data analysis. ... Google has become a thought leader in how to do this, and there are a lot of companies creating technologies and models that are emblematic of that.

Kobielus: ... Power users are the ones who are going to do the bulk of the BI and analytics application development in this new paradigm. This will mean that for the traditional high priesthood of data modelers and developers and data mining specialists, more and more of this development will be offloaded from them, so they can do more sophisticated statistical analysis. ... The front office is the actual end user.

O'Reilly: ... The breakthroughs are coming from the ability of people to discern meaning in data. That meaning sometimes is very difficult to extract, but the more data you have, the better you can be at it. ... Getting more tools for handling larger and more complex data sets, and in particular, being able to mix data sets, is critical. ... That fits with this idea of crossing data sets being one of the new competencies that people are going to have to get better at.

Kobielus: Traditionally, data warehouses existed to provide you with perfect hindsight on the customer -- historical data, massive historical data, hopefully on the customer, and that 360 degree view of everything about the customer and everything they have ever done in the past, back to the dawn of recorded time.

Now, it’s coming down to managing that customer relationship and evolving and growing with that relationship. You have to have not so much a past or historical view, but a future view on that customer. You need to know that customer and where they are going better than they know themselves. ... That’s where the killer app of the online recommendation engine becomes critical.

Feed all [possible data and content] into a recommendation engine, which is a predictive-analytics model running inside the data warehouse. That can optimize that customer's interaction at every touch point. Let's say they're dealing with a call-center person live. The call-center person knows exactly how the world looks to that customer right now and has a really good sense for what that customer might need now or might need in three months, six months, or a year, in terms of new services or products, because other customers like them are doing similar things.
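A recommendation engine of that sort can be sketched, in miniature, as item co-occurrence over past purchases: score the things that similar customers own but this customer doesn't. The customer and product names below are invented, and the scoring is deliberately simplistic -- a sketch of the idea, not any vendor's production predictive model.

```python
from collections import Counter

# Toy co-occurrence recommender: "other customers like them are doing
# similar things." The purchase history below is invented for illustration.
PURCHASES = {
    "cust_a": {"dsl", "static_ip"},
    "cust_b": {"dsl", "static_ip", "hosted_email"},
    "cust_c": {"dsl", "hosted_email"},
}

def recommend(customer, purchases, top_n=2):
    """Score items owned by similar customers but not yet by this one."""
    owned = purchases[customer]
    scores = Counter()
    for other, items in purchases.items():
        if other == customer:
            continue
        overlap = len(owned & items)          # similarity = shared items
        for item in items - owned:
            scores[item] += overlap
    return [item for item, _ in scores.most_common(top_n)]

if __name__ == "__main__":
    print(recommend("cust_a", PURCHASES))     # ['hosted_email']
```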

Yara: ... You're going to see lots of cases where for traditional businesses that are selling services and products to other businesses, the aggregation of data is going to be interesting and relevant. At the same time, you have companies where even the internal analysis of their data is something they haven’t been able to do before.

... These companies actually have access to amazing amounts of information about the customers and businesses. They are saying, "Why can't we, at the point of interaction -- like eBay, Amazon, or some of these recommendation engines -- start to take some of this aggregate information and turn it into improving businesses in the way that the Web companies have done so successfully?" That's going to be true for B2C businesses, as well as for B2B companies.

We're just at the beginning of that. That’s fundamentally what’s so exciting about Greenplum and where we're headed.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Greenplum.

Monday, December 15, 2008

IT systems analytics become more crucial as cloud and SaaS adoption raises complexity bar

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Read a full transcript of the discussion.

Software-as-a-service (SaaS) and cloud computing are changing the nature of IT systems' performance requirements and heightening end users' expectations of online applications and services.

Increasingly, an extended level of visibility, management, and performance will apply to those serving up applications as services, regardless of their hosting origins or models. The more the apps and services fulfill a need, the more the users will expect even better results and performance.

In other words, the more these organizations succeed, the more they need to scale, leverage virtualization and cloud infrastructure methods, embark on service-oriented architecture (SOA), and then keep all the trains running fast and on time. Using the latest tools and analytics -- the equivalent of business intelligence (BI) for IT -- on the systems and across the gathering complexity becomes essential.

To learn more about how systems log tools and analysis are aiding providers of cloud and SaaS, I recently spoke with fellow blogger Phil Wainewright, an independent analyst and director at Procullux Ventures, and SaaS blogger at ZDNet and ebizQ, as well as with Jian Zhen, senior director of product management at LogLogic.

Here are some excerpts:
One thing that's happening is that the SaaS infrastructure is getting more complicated, because more choice is emerging. In the past people might have gone to one or two SaaS vendors in very isolated environments or isolated use cases. What we're now finding is that people are aggregating different SaaS services. ... We're actually looking at different layers of not just SaaS, but also platform as a service (PaaS), which are customizable applications, rather than the more packaged applications that we saw in the first generation of SaaS. We're seeing more utility and cloud platforms and a whole range of options in between.

That means people are really using different resources and having to keep tabs on all those different resources. Where in the past all of an IT organization's resources were under its own control, IT now has to operate in this more open environment, where trust and visibility as to what's going on are major factors.

If you're going to take advantage of SaaS properly, then you need to move to more of a SOA internally. That makes it easier to start to aggregate or integrate these different mashups, these different services. At the end of the day, the end users aren't going to be bothered whether the application is delivered from the enhanced data center or from a third-party provider outside the firewall, as long as it works and gives them the business results they're looking for.

You have to worry not only about who is accessing the information within your company firewall, but now you have all this data that's sitting outside of the firewall in another environment. That could be a PaaS, as Phil said, or it could be a SaaS application that's sitting out there. How do you control that access? How do you monitor that access? That's one of the key issues that IT has to worry about.

Obviously, there are data governance issues and activity monitoring issues. Now, from a performance and operational perspective, you have to ask: Are my systems performing? Are these applications, platforms, or utilities that I am renting performing to my spec? How do I ensure that the service providers can give me the SLAs that I need?

... What SaaS providers have been learning is that they need to get better at giving more information to their customers about what is going wrong when the service is not up or the service is not performing as expected. The SaaS industry is still learning about that. So, there is that element on that side.

On the IT side, the IT people have spent too much time worrying about reasons why they didn't want to deal with SaaS or cloud providers. They've been dealing with issues like what if it does go down, or how can I trust the security? Yes, it does go down sometimes, but it's up 99.7 percent of the time or 99.9 percent of the time, which is better than most organizations can afford to do with their own services.

Let's shift the emphasis from, "It's broken, so I won't use it," to a more mature attitude, which says, "It will be up most of the time, but when it does break, how do I make sure that I remain accountable, as the IT manager, the IT director, or the CIO? How do I remain accountable for those services to my organization, and how do I make sure that I can pinpoint the cause of the problem and get it rectified as quickly as possible?"

One of the great quotes that we recently got from a customer is, "You can outsource responsibility, but not accountability." So, it fits right into what Phil was saying about being accountable and about your own environment.

The requirement to comply with government regulations and industry mandates really doesn't change all that much, just because of SaaS or because a company is going into the cloud. What it means is that the end users are still responsible for complying with Sarbanes-Oxley (SOX), payment card industry (PCI) standards, the Health Insurance Portability and Accountability Act (HIPAA), and other regulations. It also means that these customers will expect the same type of reports that they get out of their own systems.

BI for IT, or IT intelligence, as I have used the term before, is really about getting more information out of the IT infrastructure; whether it's internal IT infrastructure or external IT infrastructure, such as the cloud.

Traditionally, administrators have always used logs as one of the tools to help them analyze and understand the infrastructure, both from a security and operational perspective. For example, one of the recent reports from Price Waterhouse, I believe, says that the number one method for identifying security incidents and operational problems is through logs.

We can provide them that information, both from an internal and external perspective. We work with a lot of service providers, as you know, companies like SAVVIS, VeriSign, Verizon Business Services, to provide the tools for them to analyze service provider infrastructures as well.

A lot of that information can be gathered into a central location, correlated, and presented as business intelligence or business activity monitoring for the IT infrastructure.

Increasingly, it comes back to IT accountability. If your service provider does go down, and if the logs show that the performance was degrading gradually over a period of time, then you should have known that. You should have been doing the analysis over time, so that you were ahead of that curve and were able to challenge the provider before the system went down.
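Staying ahead of that curve can be as simple as comparing a recent window of response times pulled from log records against a longer baseline. The sketch below flags gradual degradation before an outage; the thresholds, window sizes, and sample latencies are illustrative assumptions, not LogLogic product behavior.

```python
# Toy degradation check over response times extracted from log records.
# Assumption: each log record yields a latency in milliseconds, in time order.
def mean(values):
    return sum(values) / len(values)

def degradation_alert(latencies_ms, baseline_len=20, recent_len=5, factor=1.5):
    """Alert when the recent average is well above the longer baseline."""
    if len(latencies_ms) < baseline_len + recent_len:
        return False
    baseline = mean(latencies_ms[-(baseline_len + recent_len):-recent_len])
    recent = mean(latencies_ms[-recent_len:])
    return recent > factor * baseline

if __name__ == "__main__":
    # A service drifting from roughly 100 ms toward 220 ms over time.
    samples = [100] * 20 + [150, 180, 200, 210, 220]
    if degradation_alert(samples):
        print("warning: response times trending up -- challenge the provider")
```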

If it's a good provider, which comes back to the question you asked, then the provider should be on top of that before the customer finds out. Increasingly, we'll see the quality of reporting that providers are doing to customers go up dramatically. The best providers will understand that the more visibility and transparency they provide the customers about the quality of service they are delivering, the more confidence and trust their customers will have in that service.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. More related podcasts. Sponsor: LogLogic.

Sunday, December 14, 2008

BriefingsDirect analysts handicap large IT vendors on how cloud trend impacts them

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 34, a periodic discussion and dissection of software, services, service-oriented architecture (SOA) and compute cloud-related news and events, with a panel of IT analysts and guests.

In this episode, recorded Nov. 21, our experts focus on the impact that cloud computing will have on the large, established IT vendors. We really are only beginning to understand how the IT services delivery, data management, and economic models of cloud computing will impact the market. If this shift is as large and inevitable as many of us think, the impact on the current IT business landscape will also be large. Some will do well, and some will not. All, I expect, will need to adapt, and the shifts are certainly exacerbated by the deepening global recession.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Brad Shimmin, principal analyst at Current Analysis, and Joe McKendrick, independent analyst and prolific blogger on ZDNet and ebizQ. Our discussion is produced and moderated by me, Dana Gardner.

Here are some excerpts:
Baer: In terms of who is best positioned for all this, I think it's a little too early to tell, because most of the large vendors are only just starting to put their feet in the water. Obviously, IBM, HP, and Microsoft are making moves. SAP has actually had a couple of stumbles on the way there. Oracle has sort of a sitting-on-the-fence strategy.

If we are going to talk about who has consistently positioned themselves as being the poster child, it has been Marc Benioff over at Salesforce.com, where they have evolved from a customer relationship management (CRM) application that you access on demand to expand towards Platform as a Service (PaaS).

Gardner: Who can get kicked in the teeth by this thing?

Baer: Well, Microsoft clearly could get kicked in the teeth, and that's obviously why they've come out with their resource strategy and with their various live-office strategies. Microsoft clearly has the most to lose, because they've been very identified with the rich client.

Gardner: Yet Microsoft has an opportunity to shoot for the moon. They have all the essential pieces. They have a very difficult transformation to make in terms of their business. They have a lot of cash in the bank, and we're in a transformational period.

I think Microsoft has an opportunity to make an offer that developers can't resist -- and probably no one else is in a position to do it -- which is to say, "We will have at least one of the top three clouds. We're going to give you the tools and give you simplicity that Joe the plumber can develop, and we're going to make sure that you have a huge audience of both consumers and businesses that we're going to line up for you."

McKendrick: They've already made a lot of moves in this direction: Software plus Services, the Live offerings. They're already positioning a lot of their product line. They work with Amazon and have offerings through the Amazon service as well. Microsoft gets into everything. Wherever you look, in the enterprise or in computing, they have some kind of offering there. Sometimes, the things don't take off for a while. They sit and bide their time, and eventually it takes hold.

... Thinking about Microsoft plus Yahoo, it makes really good sense for them both to be a real powerhouse together in cloud computing. Earlier, I stressed that the providers who dominate the cloud world will be those that focus on extreme scalability, scale-out, shared-nothing, massively parallel processing -- being able to sift and analyze petabyte upon petabyte of data from across the Web 2.0 world, especially clickstream information, and so forth.

Gardner: On the other hand, for those not shooting for the whole package, is this going to democratize IT?

Shimmin: When you look at the strategies that vendors like IBM, Sun, and Cisco have, in terms of how they're rolling out anything that's in the cloud -- whether it's PaaS, infrastructure as a service, or SaaS -- they all seem to be doing two things.

One is that they are taking some point solutions that they are going direct with, like IBM with Bluehouse, for example. Secondly, they are going after an independent software vendor (ISV) market. They want to empower folks like amazon.com, Panorama, Pervasive, Peer1, Mosso, Akamai, Boomi, and all those guys. They're really looking to empower them to go out and deliver services.

What these companies are doing is allowing this broader feel, allowing this channel of service providers to exist, using their software and their services, and, in some cases, their actual data-center resources.

Gardner: I think you're saying that the organization that can provide the best ecology of partners and provide the best environment to thrive for many other players will do best, whereas, in the past, it seemed that, as an IT vendor, having the most installed base and the most lock-in offered the path to who did best.

Shimmin: Exactly.

Kobielus: I think you hit the nail on the head, Dana, when you pointed out that success in the emerging cloud arena depends on having a very broad and deep ecology of partners. I see the partner ecosystem as the new platform for cloud computing, being able to put together a group of partners that provide various differentiated features and services within an overall cloud-computing environment.

Then, the hub partner, as it were, provides some core, enabling infrastructure that binds them all together. Core infrastructures such as, for example, a core analytic environment or distributed data-warehousing environment that manages all of the structured, unstructured, and semi-structured data, manages all of the very compute-intensive analytical workloads, CPUs, and other resources that many or all of the partner solutions can tap into -- a basic utility computing environment.

Shimmin: When you look at a company like Microsoft, they seem to be slow to market, and then, once they enter the market, they go really, really fast. They seem to be going really, really fast at the moment with two things, because they have both. They have the infrastructure and they also have the apps. They're going to have both paths.

They have the Azure platform, which is truly a PaaS offering that you use to build your own applications. So it's a layer above the Amazon EC2 infrastructure as a service.

Then they have the full-on SaaS-type products with Microsoft Online Services, which has in it almost the entirety of their collaboration software. So, they have actually sort of leapfrogged IBM Bluehouse a little bit with that.

The point is that these vendors are really looking at their portfolios and seeing which ones fit either of those two models. They're not committing to one or the other, Dana. They're really trying to tackle both ends [the infrastructure and the apps].

McKendrick: Just about every small ISV coming on the market now is offering a SaaS model. This is the way to go for emerging, smaller software-development companies.

For the larger, well-established ISVs, it's now another delivery mechanism, another channel to reach their customer base. There are a lot of efficiencies. When you're working with a cloud model, you don't have to worry about making sure all your customers receive the latest upgrade, or deal with problems customers may be having with conflicting software. It's all done once. You do the upgrade once, test it, ensure the quality, deliver it, and it's all done in one location. That makes their job a lot easier.

Kobielus: What the whole trend toward SOA started was the gradual dissolution or deconstruction of the underlying platforms, as you mentioned -- OSs, development environments, and the declarative programming languages. This is all buggy-whip territory now in terms of what large and small software vendors are developing to. Pretty much everyone is now developing to a virtualized SOA, cloud environment.

Most of the large and small vendors that I talk to ... are really looking at more of a flex-sourcing approach to delivering solutions to market. ... Most of the vendors that I talk to now have three broad go-to-market approaches for flexible delivery of applications or solutions: the everything-as-a-service approach, the appliance approach, and the packaged, licensed-software approach.

If you look at cloud computing as a Venn diagram, with many smaller bubbles within it, one of the biggest bubbles is this notion of flexible packaging and sourcing of solution functionality.

The "Chinese Wall" between internal hosting and external hosting is dissolving, as more and more organizations say, "You know what? We want to do data warehousing. We'll license software from vendor X. We might also use their hosted offerings for these particular data marts. We also might go with an appliance from them, for either our data warehousing hub, a particular operational data store, or another deployment where the appliance form factor makes the most sense."

Baer: The vision of SOA is that it runs both ways. You publish services and you consume services. ... Through SOA, perhaps companies can look at increasing capacity, or tapping into capacity as needed, in a grid-like fashion, either with each other or with a provider out there such as Amazon or IBM.

Shimmin: When you look at companies like IBM and Microsoft -- Microsoft with their Software plus Services, and IBM with their Foundation Start Appliance, coupled with their Bluehouse software as a service, coupled with their on-premises collaboration software -- you're talking about a solution that spans those three delivery mechanisms.

The pressures I'm talking about, the ones making that so for the enterprise buyer, are that you don't want a full SaaS deployment, and you don't want a full appliance deployment. When you consider issues like ownership of data, privacy of that data, SLAs, and even transaction volumes, there are facets of your enterprise application that are best suited to running in an appliance, in your data center, or in the cloud.

So, these vendors we are talking about here clearly recognize that need, and are trying to re-architect their software so it can run across those three channels in different ways.

McKendrick: The underlying architecture that a lot of vendors are moving toward to enable that degree of flexible deployment across different form factors -- hosted service, appliance, and packaged, licensed software -- is the notion of shared-nothing, massively parallel processing for extreme scale-out capabilities, and extreme scale-up as well.

In a federated model, you have different clusters that can be internal, external, or in combinations, specialized to particular roles within the application environment. Some might be optimized for data warehousing, some might be optimized for business-process management and workflow, and others might be optimized for upfront delivery, Web 2.0, REST, and all that. But having shared-nothing, massively parallel processing, with a federated middleware fabric in an SOA context, is where everybody is moving their platform and strategy.
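To make that shared-nothing pattern concrete, here is a minimal, hypothetical sketch in Python: rows are hash-partitioned across independent nodes, each node aggregates only its own slice, and a coordinator merges the partial results. The node count, function names, and sample data are invented for illustration and do not represent any vendor's actual platform.

    # Minimal shared-nothing sketch: partition, aggregate locally, merge.
    from collections import defaultdict

    NUM_NODES = 4  # pretend each entry is a separate machine with its own storage

    def partition(rows, key):
        """Assign each row to a node by hashing its partition key (shared nothing)."""
        nodes = [[] for _ in range(NUM_NODES)]
        for row in rows:
            nodes[hash(row[key]) % NUM_NODES].append(row)
        return nodes

    def local_aggregate(rows, key, value):
        """Each node sums only its own slice -- this is the parallel step."""
        totals = defaultdict(float)
        for row in rows:
            totals[row[key]] += row[value]
        return totals

    def federated_query(rows, key="customer", value="spend"):
        """Coordinator: scatter the data, aggregate per node, then merge partials."""
        partials = [local_aggregate(node_rows, key, value)
                    for node_rows in partition(rows, key)]
        merged = defaultdict(float)
        for partial in partials:
            for k, v in partial.items():
                merged[k] += v
        return dict(merged)

    if __name__ == "__main__":
        clickstream = [
            {"customer": "a", "spend": 10.0},
            {"customer": "b", "spend": 4.0},
            {"customer": "a", "spend": 2.5},
        ]
        print(federated_query(clickstream))  # per-customer totals, e.g. {'a': 12.5, 'b': 4.0}

The design point is that no node ever reads another node's data; scaling out is a matter of adding nodes and re-hashing the partition key, which is what lets such clusters grow to petabyte-scale workloads.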

Gardner: How about Amazon? That would be in my thinking a pretty good candidate for prom queen right now. Perhaps there will be some polygamy at the prom, because Amazon could team up potentially with say an Oracle and a Salesforce. Can you imagine such a pairing?

Kobielus: Yeah, because Oracle, a couple of months ago, announced that you can now take your existing Oracle database licenses and move them to the Amazon EC2 cloud and the Amazon storage service. So, to a degree, that partnership possibly foreshadows a larger relationship between those two companies going forward.

I think it's really an interesting pairing, Oracle plus Amazon. Once again, I always have to hit the analytics thing on the head, because I think database analytics, or cloud-scalable analytics, is going to be a key differentiator for most application vendors.

Shimmin: With regards to Microsoft's channel, as you and Jim were saying, Microsoft is definitely going to be the queen bee and they are definitely going to make it beneficial to this channel to work with them in their cloud initiatives. At the same time, it's also Microsoft's greatest risk.

When you look at their PaaS with Azure, that makes sense for the channel, because how the channel differentiates is by the services they provide their customers directly, and that comes from developing code. But, when you talk about Microsoft's online services, Office Live, and those things, they are in the very precarious position of undercutting the value that their channel partners provide.

They're literally saying, "Hey, why do you need a channel partner for the SMB market? Just come right to us and give us your credit card, which you can do for a certain number of dollars a month, and you're up and running."

Gardner: Right, so perhaps Microsoft has the golden opportunity, but the transition is perilous, and the execution has to be perfect. Just as we had back in the "anti" days, when all of the Unix vendors got together and created what they called the "anti-Microsoft coalition," all these other cloud providers, ISVs, developers, and PaaS people are going to get together and try to provide more of a marketplace, in order to, if not staunch Microsoft, at least create that democratic approach to cloud.

Shimmin: I can't believe I'm saying this -- Microsoft has really done something spectacular here, because it all comes back to the developer. What the developer does drives what software you run on the server, in many cases. What Microsoft has done with the Software plus Services initiative is that, right now, today, using the .NET 3.5 framework on Windows Server 2008, you can write code that can be dropped into the cloud or onto the desktop automatically. You can just write a rule that says, "If I reach a certain service-level agreement (SLA), just kick this piece of code to the cloud."
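As a rough illustration of that kind of rule -- a hypothetical sketch in Python, not Microsoft's actual Software plus Services API -- the dispatcher below keeps executing locally until observed latency breaches the SLA, then routes subsequent requests to a cloud endpoint. The threshold, function names, and simulated workloads are all invented for illustration.

    # Toy SLA-based placement rule: run locally until latency breaks the SLA,
    # then send later requests to a cloud endpoint instead.
    import time

    SLA_SECONDS = 0.25  # illustrative latency target

    def run_locally(request, simulated_seconds):
        """Stand-in for on-premises execution of a piece of code."""
        time.sleep(simulated_seconds)  # simulate local work
        return f"local:{request}"

    def run_in_cloud(request):
        """Stand-in for the same code deployed to a cloud platform."""
        return f"cloud:{request}"

    def dispatch(request, simulated_seconds, recent_latencies):
        """Route to the cloud once observed local latency has breached the SLA."""
        if recent_latencies and max(recent_latencies) > SLA_SECONDS:
            return run_in_cloud(request)
        start = time.perf_counter()
        result = run_locally(request, simulated_seconds)
        recent_latencies.append(time.perf_counter() - start)
        return result

    if __name__ == "__main__":
        history = []
        for i, load in enumerate([0.1, 0.4, 0.1]):  # the second request breaches the SLA
            print(dispatch(f"req-{i}", load, history))
        # req-0 and req-1 run locally; req-2 is kicked to the cloud

The design choice worth noting is that the placement decision lives in the rule, not in the application code itself.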

Gardner: So Microsoft and not the business becomes the arbiter.

Shimmin: Exactly.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Charter Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Wednesday, December 10, 2008

More than cost savings alone, cloud computing will transform business, say HP and Capgemini

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Read related white paper. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

Many enterprises and service providers are now grappling with how cloud models and economics will impact them. The specter of a challenging business climate may well hasten the need to seek IT resources that are supported through greater cloud computing approaches -- to save money, as well as to better reach global audiences and gain Web-scale efficiencies.

The goal is to take advantage of what cloud models offer, but to do so with low risk and in alignment with enterprise IT dictates and requirements around management, security, governance, and visibility. There are a host of innovations around the various cloud models that are now just emerging and that we're only beginning to discover. These amount to being able to do business in new ways by using cloud models to accomplish things that simply could not be done before.

To better understand the value and opportunity unfolding around cloud computing, I recently interviewed Andy Mulholland, global chief technology officer at Capgemini; Tim Hall, director of service-oriented architecture (SOA) products at HP Software and Solutions; and Russ Daniels, vice president and CTO of cloud services strategy at HP.

A new white paper, "Capgemini: The Cloud and SOA: Creating an Architecture for Today and for the Future," on some of these very same issues has been published by Capgemini. It is available free (registration required) via download here.

Furthermore, Capgemini's Mulholland will be delivering several presentations on these challenges and opportunities at the HP Software Universe conference in Vienna this week, at a time when Amazon's Web Services are quickly gaining traction in Europe.

Here are some excerpts from the podcast:
When we talk about the cloud ... it's a new model for constructing software. It's a new design pattern, and it allows you to solve problems that really have been out of reach. You can take on business needs that, if you tried to address them in the context of traditional IT design and delivery models, would tend to fail or under-deliver.

The cloud allows you to go after those problems, to open new markets for the business, to allow it to reach out to customers that it hasn't been able to get to, to improve its differentiation in the market, and to contribute to the real goals of the business itself. That's what we think is exciting about the cloud.

There is this premise that [cloud computing] can help me look at how I manage and reduce my cost. Perhaps more importantly, we should say it the other way around. It enables me to address how I deal with a more variable business pattern and pay for what I need when I need it.

Many of the things a business does today are relatively fixed. ... But what we have is a growing desire and a growing need to find new things in the front office, about how we run our business more effectively, how we get into markets more effectively, and how we trade better. These tend to be small, fast-moving projects. They make a very big difference, and we simply don't want the same time scale in provisioning for them.

Increasingly, probably over the next couple of years, people don't want to spend capital on them. They'll want to pay for them operationally. They represent a new market, a new technique, a new set of standards, and a new set of technologies. All of that comes together in where cloud is going to go and make the difference to businesses.

... You start to recognize there are already a number of very well known brands that sell through the Internet and combine their services. ... The challenge in this is how it moves from being something that a handful of Web-based businesses are using. How do more businesses learn how to exploit that market and take their share of commercial revenue from that market?

When we think about the cloud, we don't think it's just a matter of how infrastructure is packaged; it's really a combination of factors, starting with the impact of service-oriented architecture (SOA) beginning to break apart applications. We think more about using services to separate the data from the applications, so that you can get at the data without having to go through complex application integrations.

There's another piece around taking advantage of Web 2.0 innovations, which includes both how you can create rich user experiences in the context of browsers in these remote execution models, but also significantly it's the social dimension. How can you take advantage of the innovation that's occurred in the consumer space by understanding the importance of bringing people together?

In many companies, they're trying to exploit these things, but they are doing it with a complete lack of structure. By bringing in a cloud model successfully, you're actually introducing some structure to support the very activities that people are increasingly experimenting with in their businesses today.

... If you have been doing new stuff, and you are building new stuff inside the organization, you really ought to have started doing that around SOA. If you're using services correctly internally, then of course, you can cross the firewall and start to use services outside, and blend them together.

Folks are looking at this as an integration technology, instead of a complete transformation of how they deliver service orientation or business services more comprehensively and more flexibly to address some of the unique challenges that the business is facing. ... SOA adoption, as a transformational agenda, is a microcosm of some concepts that apply very specifically to cloud and preparing people for cloud adoption.

What we find with our customers is that many workloads are important to the business, but they are not mission critical. In many of these workloads, good-enough delivery is good enough. ... Distinguishing between those types of workloads, identifying those where good-enough delivery is appropriate, and moving those into virtualized and automated delivery models, positions you to take advantage of external infrastructure capabilities as appropriate.

The key challenge for any IT organization is to understand what the business really needs, where the business value is, and how technology can help deliver that. This question of business-IT alignment is always at the heart of the problem, and it will certainly be true in terms of how the business chooses to go after cloud-based opportunities.

We think the cloud is great for connecting. It's great for connecting business to business. It's great for connecting business to its customers. ... Where is connecting important to your business? That's ultimately a business question, not a technology question. The focus should be on having people who can map from what the business needs to understanding how to exploit this new expressiveness that the cloud brings to solve the most pressing challenges, or to exploit the most exciting opportunities that the business faces.

... [It comes down to] the difference between interactions, which is a lot of this new market, and transactions, which was the old IT market. When you look at any IT system, it's fundamentally about getting a safe transaction to record what you have done. But, if you think about someone trying to decide what they're going to buy from you, like buying an airline ticket, deciding which flight and how much money they're going to pay and which extras they're going to have, it's a lot of interactions.

... We don't think the cloud is great for “transactionality,” for deep, technical reasons. ... The place where the cloud is great is where you're not focused on supporting transactions, but interactions, where you are connecting. It's being able to take state from participants in an extended supply chain and propagate that information up through data feeds, up into a cloud service.

For example, that information might be related to the carbon footprint related to material flowing through an extended supply chain. Each of the participants in an extended supply chain can simply publish a data stream that captures the carbon footprint of the materials that they will be producing. Now, you can run analytics in the cloud, using search-like algorithms, to answer questions about the carbon footprint for some end products. You don't have to do the detailed process integrations. You don't have to provide detailed transactional integrations across the supply chain system to support it.
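As a rough illustration of that feed-based approach, here is a minimal, hypothetical sketch in Python. Each participant publishes a simple per-material emissions feed, and a cloud-side job combines those feeds with a bill of materials to answer the per-product carbon question, with no transactional integration between the participants. The feed format, field names, and numbers are invented for illustration.

    # Toy carbon-footprint aggregation over published supplier feeds.
    from collections import defaultdict

    # Imagine each list below was pulled from a supplier's published data feed.
    supplier_feeds = {
        "supplier_a": [{"material": "steel",   "kg_co2_per_unit": 1.8}],
        "supplier_b": [{"material": "plastic", "kg_co2_per_unit": 0.6}],
    }

    # Bill of materials: which materials, and how many units, go into each product.
    bill_of_materials = {
        "widget": {"steel": 2, "plastic": 5},
    }

    def footprint_per_product(feeds, bom):
        """Roll published per-material emissions up into per-product totals."""
        per_material = {}
        for feed in feeds.values():
            for entry in feed:
                per_material[entry["material"]] = entry["kg_co2_per_unit"]
        totals = defaultdict(float)
        for product, materials in bom.items():
            for material, units in materials.items():
                totals[product] += units * per_material.get(material, 0.0)
        return dict(totals)

    if __name__ == "__main__":
        print(footprint_per_product(supplier_feeds, bill_of_materials))
        # {'widget': 6.6}

The same publish-then-aggregate pattern is what would make the product-traceability case mentioned next tractable without deep process integration.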

It's exactly that new expressiveness that allows us to go after problems that we really couldn't have done affordably in the past. Because we couldn't do them that way, we ended up doing things manually and in emergencies. If you think about product traceability, it's the same problem, very difficult to deal with from a technology integration perspective in the traditional ways. As a result, when there's a problem, we have people pawing through information spreadsheets manually and providing the answers too late to be helpful.

... The cloud allows you to deliver the business results that matter. In other words, it really has to be thought of in the context of IT as technology for the business, and the key business challenges that we see our customers facing today are these: How do they develop new markets? How do they take advantage of the capabilities they have and deliver them to new customers? How can they understand better what their customers need, and how can they fit in and connect with them?

The cloud provides great capabilities for that. We think that it's still early, and you can see the promise in things like the recommendation engines that you find at online shopping sites. You're searching for something, and, based on your buying history, your demographics, your search behaviors, and then comparing that to the behaviors of others, the site can provide you with suggestions about other things that might be of interest to you as well. The technology helps identify your intentions and then offers suggestions to help you find things better suited for your needs than what you could have expressed or identified yourself.

That's a wonderful opportunity, and to be able to expand that approach into more and more of the ways that a business connects has huge implications.

Relatively speaking, [cloud computing] is unstoppable. The question is whether you'll crash into it or migrate into it. Why is it unstoppable? Because we're watching a business shift; people have to find ways to compete better in the market. Much of that is around: "How do I add smart services? How do I make products more available? How do I communicate directly and intimately with people, so they know what they want to buy from me?" All of those things are already developing in many businesses today, and people are building solutions to do that, sometimes gracefully, and sometimes not at all gracefully.

In other words, just as we had with the PC, where we basically were driven into it, some companies got there in a very ungraceful way and had to figure out afterward how to sort out the mess. Others did have a strategy, and emerged in a very graceful way. I think we're in the same situation. Users wanting social software have taken us there to run and do things better. We've been taken there by businesses needing to get into new markets.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Read the related white paper. Sponsor: Hewlett-Packard.

Linthicum podcast: Cloud computing plus recession equals IT transformation

I had the pleasure of joining long-time IT thought leader and IT executive Dave Linthicum this week for a podcast on cloud computing.

We get into a discussion on the alignment of the dour economic climate with the newer services architectures for mixing and matching IT assets and resources with more flexibility. We also look into some recent news events relating to cloud and the impact and analysis on these issues.

Dave has created a new consulting emphasis on cloud computing, and is finding strong interest from enterprises in the subject and its effects. He also joins me regularly on the BriefingsDirect Analyst Insights podcast series.

If you're interested in understanding how cloud computing is affecting your company and IT department -- especially as a change agent during transformative times -- you may enjoy and benefit from the podcast. You can also subscribe to the ongoing Linthicum cloud podcast or find it on iTunes.

Tuesday, December 9, 2008

Remote support offers enterprises avenue to cut operational costs while improving IT systems reliability

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The trend around the use of remote support software for monitoring, remediation, and IT maintenance automation is gaining steam in the global enterprise IT market. I certainly expect that, as companies become even more cost-conscious, they will seek to further reduce their total cost of IT operations in any way possible. Remote support best practices and effective use will therefore become even more prominent.

The goal of remote support software and services is to free up on-premises IT personnel to focus on what they do best, and to offload routine chores to organizations that can leverage the Internet to do IT support remotely at high efficiency and lower cost. Remote support has already become very popular among PC owners, and now the value is extending, as a cloud-computing service, to general server support and data-center maintenance worldwide.

To better understand the options for better remote monitoring, resolving, and automating of the ongoing performance support of IT systems, I recently interviewed Dionne Morgan, worldwide marketing manager in HP Technology Services, and Claudia Ulrich, communications manager in Delivery Engineering at HP.

Here are some excerpts:
At many companies, IT managers understand that ongoing administration and maintenance of their existing infrastructure consumes most of the IT budget. ... Far too much time has been spent by IT staff on managing, monitoring, and troubleshooting their IT infrastructure. Obviously, this can be very expensive in both time and money. Too often, there's increased risk and unplanned downtime, which lead to an inability to meet business objectives and achieve business outcomes.

We're also finding that system complexity is adding to the problem. ... When a problem occurs in the infrastructure, finding the source and the nature of the problem -- and then coming up with the resolution -- can also be a daunting task.

It could be anything from actual hardware failure and trying to detect exactly where within the system the failure has occurred, to a need for additional memory or additional hard drive space. Those are some of the typical problems that our customers are facing, and those are the problems where you can automate the process of identifying the nature of that problem and coming up with the solution.

[Enterprises are] moving from traditional phone-in [and help desk] support and on-site delivery to automated event reporting. This is also called a "phone home" capability. It adds to the customer's manageability solution the ability to monitor the complete enterprise environment by automatically submitting incidents to the remote-support provider, which in turn raises the level of service, improves availability, and reduces service cost for the customer.

One reason this helps with managing personnel is that it's constantly monitoring the environment, 24/7. Even at the end of the day, when the staff goes home, the system is still monitoring, and it helps to filter the events that are coming through, so that the IT organization can prioritize which of those events they need to take action on. It's actually removing some of the mundane tasks of troubleshooting and prioritizing the events or incidents.
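A minimal, hypothetical sketch in Python of that filtering-and-prioritizing step: events stream in around the clock, routine noise below a severity threshold is dropped, and what remains is ranked so staff see the most urgent incidents first. The event fields, severity scale, and threshold are invented for illustration and are not HP's actual tooling.

    # Toy event triage: filter out low-severity noise, rank the rest.
    from dataclasses import dataclass

    @dataclass
    class Event:
        source: str       # e.g. "server-42", "switch-7"
        severity: int     # 1 = informational ... 5 = critical
        message: str

    def triage(events, min_severity=3):
        """Drop low-severity noise and return the rest, most severe first."""
        actionable = [e for e in events if e.severity >= min_severity]
        return sorted(actionable, key=lambda e: e.severity, reverse=True)

    if __name__ == "__main__":
        overnight = [
            Event("server-42", 2, "disk usage at 70 percent"),
            Event("server-42", 5, "disk failure predicted"),
            Event("switch-7", 3, "port flapping"),
        ]
        for event in triage(overnight):
            print(event.severity, event.source, event.message)

In a real remote-support flow, the actionable events would also be submitted automatically to the support provider rather than just printed, which is the "phone home" step described above.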

We're looking at the complete, heterogeneous IT environment. This includes servers, storage, and network equipment, not only from HP, but also from selected other vendors -- IBM and Dell servers, as well as Brocade and Cisco switches. ... This also includes industry trends toward virtualization, blades, and cloud computing, as they evolve.

I believe that down the road we'll see an expansion of the products that are covered by remote support. We'll begin to look at the total environment, in addition to the infrastructure. We'll also see organizations looking at how to automate processes, how to help with monitoring and troubleshooting applications.

[For now], remote support is a critical piece of establishing the next-generation data center. HP has defined six enablers to build this next-generation data center, and HP Remote Service Pack (RSP) can definitely contribute to these enablers. ... We're really looking at this one foundation to enable consolidation and modernization of data centers, and also to be able to transition between the two, using a common management system.

If you think about the Information Technology Infrastructure Library (ITIL) and the fact that we have a lifecycle that includes strategy, design, transition, operation, and continual service improvement, this is going to help automate many of the support processes that you need on an ongoing operational basis, including incident management. These tools can also assist with help-desk management and asset management.

What we have found with customers is that, when they are using these remote support tools, they're actually able to reduce the amount of time they spend in troubleshooting by 20 percent, and they're also able to increase the accuracy of the diagnosis to over 99 percent. So, with these remote support tools, if they're monitoring the heterogeneous environment, that will actually speed up the process of troubleshooting and isolating the problem.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.