Thursday, April 29, 2010

HP's 48Upper moves IT beyond the 'anti-social' motif into a more community-oriented, socialized flow future

I'm not saying that IT departments have a PR problem, but when was the last time you saw a button saying, "Hug an IT person today"?

What I am saying is that IT is largely misunderstood outside the walls of the IT environment. And the crusty silos inside of IT can make their own cultural connections tenuous, too.

The fact is that IT over the decades has been put into the unenviable role of having to say "No" more often than "Yes." At least that's the perception.

IT forms an expensive reality check as businesses seek to reinvent themselves, and sometimes even to adapt quickly in their own markets. Saying "No" isn't fun, but it's often the truth. This is because computers are tangible, complex and logical, and businesses are, well ... dynamic, human-oriented, emotion-driven, creative and crowd-driven. Computers take a long time to set up properly and are best managed centrally, while consensus-oriented businesses change their minds twice a quarter (for better or worse).

Yes, the IT guys inhabit the happy-go-lucky no-man's land across the gaping culture chasm between bits-and-bytes reality and the bubbly new business models, market inflection points, and charisma-driven leadership visionaries' next big thing.

Worse, when asked to explain why "Yes" has to mean "No" to keep the IT systems from crapping out or security holes from opening, the business side of the enterprise usually gets a technical answer from the IT guys (and gals). It's like they are all speaking different languages, coming from different planets, with different cultural references. A recipe for ... well, something short of blissful harmony. Right?

Yet, at the same time, today's visionary business workers and managers keep hearing "Yes" from the Web, from the likes of Google, Amazon, Microsoft and the SaaS application providers. Traditional IT department restraint does not stack up well against these free or low-cost Web-based wonders. The comparison might be unfair, but it's being made ... a lot.

Most disruptively, the social networks like Facebook, LinkedIn and Twitter are saying a lot more than just "Yes" to users -- they're saying, "Let's relate in whole new ways, kids!" The preferred medium of interaction has moved rapidly away from a world of email and static business application interfaces to "rivers" and "walls" of free-flowing information and group-speak insights. Actual work is indeed somehow getting done through friend conversations and chatty affinity groups linked by interests, concerns, proximity and even dynamic business processes.

So nowadays, IT has more than an image problem. It has a socialization problem that's not going away any time soon. So why shouldn't IT get social too in order to remain relevant and useful?

HP Software has taken notice, and is building out a new, as-yet-unreleased social media approach to how IT does business. It may very well allow IT to say "Yes" more often. More importantly, socially collaborative IT can relate to itself and its constituents in effective and savvy new ways.

HP's goal is to foster far better collaboration and knowledge sharing among and between IT practitioners, as well as make the shared services movement align with the social software phenomenon in as many directions as possible. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Called 48Upper (apparently named after an HP skunk works location in Cupertino, CA), the new IT-focused collaboration and socialized interfaces approach is being readied for release at some point in mid-2010. There's already a web site stub at www.48upper.com and a YouTube video there that portrays a new cultural identity for IT.

I was intrigued by a recent introductory chat with HP's Matt Schvimmer, Senior Director, Marketing and Business Development at 48Upper. He explained that IT people don't fit the white-lab-coat stereotype, and that there's a huge opportunity to manage IT better using the tools now common among social networks and SaaS processes. His blog has more.

Matt was kind enough to share an early (dare I say, exclusive) look at an in-development (i.e., alpha) screen shot of 48Upper. It does meld worlds, for sure.


IT clearly needs to bridge its internal silos -- such as between development and operations, networks and servers, architects and developers. And, as stated, IT can go a long way to better communicate with the business users and leaders. So why shouldn't a Facebook-like set of applications and services accomplish both at once?

HP is not alone in seeing the value of mashups between social media methods and processes with business functions and governance. Salesforce.com has brought Chatter to the ERP suite (and beyond). Social business consultancies are springing up. Google Wave is making some waves of its own. Twitter and Facebook are finding their values extended deeply into the business world, whether sanctioned by IT or not.

What jumps out at me from 48Upper is how well social media interfaces and methods align with modern IT architectures and automation advances, such as IT shared services, SOA, cloud computing, and webby app development. A SOA is a great back-end for a social media front-end, so to speak.

An ESB is a great fit for a fast-paced, events-driven, policy-directed fabric of processes that is fast yet controlled. In a sense, SOA makes the scale and manageability of socialized business processes possible. The SOA can drive the applications services as well as the interactions as social gatherings. Is it any wonder HP sees an opportunity here?
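To make the idea concrete, here is a minimal, purely illustrative sketch of a policy-directed event bus; none of these names come from HP's products, and a real ESB adds routing, transformation, and reliable delivery on top of this skeleton.

```python
# Minimal sketch of a policy-directed event bus: social-style activity
# events flow through a broker that applies governance rules before
# delivering them to subscribed services. All names are illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)  # topic -> list of handlers
        self.policies = []                    # callables: event -> bool

    def add_policy(self, policy):
        self.policies.append(policy)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Governance: every policy must approve the event before delivery
        if all(policy(event) for policy in self.policies):
            for handler in self.subscribers[topic]:
                handler(event)
            return True
        return False  # blocked by policy

bus = EventBus()
bus.add_policy(lambda e: e.get("author") is not None)  # no anonymous posts
wall = []
bus.subscribe("it-wall", wall.append)

bus.publish("it-wall", {"author": "ops", "text": "Patching tonight"})
bus.publish("it-wall", {"text": "anonymous rant"})  # rejected by policy
```

The point of the sketch is the separation of concerns the post describes: the social interaction (publish/subscribe) and the governance (policies) are independent layers, so control does not have to come at the expense of free-flowing conversation.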

By applying governance to social media activities, the best of the new sharing and IT's requirements around access and security control can co-exist. And, as all of this SOA-managed social activity churns along, a ton of data and inference information is generated, allowing information management and business intelligence tools to be brought into the mix.

That sets up virtuous cycles of adoption refined by data-driven analytics that help shape the next fluid iteration of the business processes (modeled and managed, of course). It allows the best of people-level sharing and innovation to be empowered by IT, and by the IT workers.

So perhaps it's time for IT to find a new way of saying, "Yes." Or at least have a vibrant conversation about it.

Wednesday, April 28, 2010

VMforce: Cloud mates with Java marriage of necessity for VMware and Salesforce.com

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Go to any vendor conference and it gets hard to avoid what has become “The Obligatory Cloud Presentation” or “Slide.” Weighing hype vs. reality is beyond the scope of this piece, but potential benefits like the elasticity of the cloud have made the idea too difficult to dismiss, even if most large enterprises remain wary of trusting the brunt of their mission-critical systems to some external host, SAS 70 certification or otherwise.

So it’s not surprising that cloud has become a strategic objective for VMware and SpringSource -- both before and after the acquisition that brought them together. VMware was busy forming its vCloud strategy to stay a step ahead of rivals that seek to commoditize VMware’s core virtualization hypervisor business, while SpringSource acquired CloudFoundry to take its expanding Java stack to the cloud (even as such options were becoming available for .NET and for emerging web languages and frameworks like Ruby on Rails).

Following last summer’s VMware SpringSource acquisition, the obvious path would have placed SpringSource as the application development stack that would elevate vCloud from raw infrastructure as a service (IaaS) to a full development platform. That remains the goal, but it’s hardly the shortest path to VMware’s strategic goals.

At this point, VMware still is getting its arms around the assets that are now under its umbrella with SpringSource. As we speculated last summer, we should see some of the features of the Spring framework itself, such as dependency injection (which abstracts dependencies so developers don’t have to worry about writing all the necessary configuration files), applied to managing virtualization. But that’s for another time, another day.
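Dependency injection itself is easy to sketch outside of Spring. The following minimal Python example (class names are hypothetical; Spring wires this declaratively with XML or annotations rather than by hand) shows the core idea: the object receives its dependency instead of constructing it.

```python
# A minimal sketch of constructor-based dependency injection.
# The service never constructs its own data store; whoever assembles
# the application (a container, a test harness) decides what to inject.

class InMemoryStore:
    """A swappable dependency; could be a database client in production."""
    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data.get(key)

class InventoryService:
    def __init__(self, store):
        # The dependency is injected, not hard-coded, so this class needs
        # no configuration plumbing and no knowledge of the store type.
        self.store = store

    def lookup(self, key):
        return self.store.get(key)

# "Container" wiring happens in exactly one place:
service = InventoryService(InMemoryStore({"vm-42": "running"}))
print(service.lookup("vm-42"))  # running
```

Swapping the store for a test double or a different backend requires changing only the wiring line, which is exactly the kind of abstraction the post speculates could be applied to managing virtualization.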

VMware’s more pressing need is to make vSphere the de facto standard for managing virtualization, and vCloud the de facto standard for cloud virtualization. (Actually, if you think about it, it is virtualization squared: OS instances virtualized from hardware, and hardware virtualized from infrastructure.)

In turn, Salesforce.com wants to become the de facto cloud alternative to Google, Microsoft, IBM, and, when they get serious, Oracle and SAP. The dilemma is that Salesforce up until now has built its own walled garden. That was fine as long as the business was confined to CRM and third-party AppExchange providers who piggybacked on Salesforce’s own multi-tenant infrastructure using its proprietary Force.com environment with its “Java-like” Apex stored-procedures language.

But at the end of the day, Apex is not going to evolve into anything more than a Salesforce.com niche development platform, and Force.com is not about to challenge Microsoft .NET, or Java for that matter.

The challenge is that Salesforce, having made the modern incarnation of remote hosted computing palatable to the enterprise mainstream, now finds itself in a larger fishbowl, outgunned in sheer scale by Amazon and Google and, outside its own garden, by the on-premises Java mainstream. Salesforce Chairman and CEO Marc Benioff conceded as much at the VMforce launch this week, characterizing Java as “the No. 1 developer language in the enterprise.”

So VMforce is the marriage of two suitors that each needed their own leapfrogs: VMware transitions into a ready-made cloud-based Java stack with existing brand recognition, and Salesforce.com steps up to the wider Java enterprise mainstream opportunity.

Apps written using the Spring Java stack will gain access to Force.com's community and services such as search, identity and security, workflow, reporting and analytics, a web services integration API, and mobile deployment. But it also means dilution of some features that make the Force.com platform what it is; the biggest departure is from the Apex stored-procedures architecture that runs directly inside the Salesforce.com relational database.

Salesforce pragmatically trades scalability of a unitary architecture for scalability through a virtualized one.

It really means that Salesforce morphs into a different creature, and now must decide whom it means to compete with, because it’s not just Oracle business applications anymore.

Our bet is that it splits the difference with Amazon, as other SaaS providers like IBM that don’t want to get weighed down by sunk costs have already done. If Salesforce wants to become the enterprise Java platform-as-a-service (PaaS) leader, it will have to ramp up capacity, and matching Amazon or Google in a capital-investment race is a nearly hopeless proposition.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Monday, April 26, 2010

HP rolls out application modernization tools on heels of Forrester survey showing need for better app lifecycle management

Application lifecycle productivity is proving an escalating challenge in today’s enterprise. Bloated app portfolios and obsolete technologies can stifle business agility and productivity, according to a new Forrester Research IT trends survey.

A full 80 percent of IT decision makers queried cited obsolete and overly complex technology platforms as making "significant" or "critical impact" on application delivery productivity. Another 76 percent cited the negative impact of "cumbersome software development lifecycle processes," while 73 percent said it was "difficult to change legacy applications," said Forrester's consulting division. The study is available. [Disclosure: HP is a sponsor of BriefingsDirect podcasts].

A sharp focus on overcoming the challenges associated with improving applications quality and productivity is leading to a growing demand for applications modernization. Specifically, agility, cost reduction and innovation are driving modernization efforts, the Forrester survey concludes.

Fifty-one percent of Forrester’s respondents are currently modernizing software development lifecycle tools, including software testing processes. But are enterprises truly realizing the benefits of application modernization efforts?

On Monday, HP rolled out a set of application quality tools that focus on increasing business agility and reducing time to market to help more companies answer "yes" to that question. The new solutions are part of the HP Application Lifecycle Management portfolio, a key component of HP’s Application Transformation solutions to help enterprises manage shifting business demands.

New challenges, new tools

HP Service Test Management (STM) 10.5 and the enhanced HP Functional Testing 10.0 work to advance application modernization efforts in two ways. First, the tools make it easier for enterprises to focus on hindrances to application quality. Second, the tools improve the all-important line of sight between development and quality assurance teams.

“To maintain a competitive edge in today’s dynamic IT environment, it is critical for business applications to rapidly support changes without compromising quality or performance,” says Jonathan Rende, vice president and general manager of HP’s Business Technology Optimization Applications, Software and Solutions division.

HP STM 10.5 works to mitigate risk and improve business agility by setting the stage for more collaboration between development and quality assurance teams. Built on HP Quality Center, it is being used by enterprises to increase testing efficiency and the overall throughput of application components and shared services.

Meanwhile, HP Functional Testing 10.0 ensures application quality to address changing business demands. It even offers a new Web 2.0 Feature Pack and Extensibility Accelerator that supports Web 2.0 apps and lets IT teams test any rich Internet application technology.

“It is critical for us, particularly in the financial industry, to react rapidly to development changes early on in the testing life cycles,” says Mat Gookin, test automation lead at SunTrust Banks. “We look to ... flexible technology that keeps application quality performance high and operations cost low so we can focus on preventing risks and providing value to our end users.”

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, April 23, 2010

Freed from data center requirements, cloud computing gives start-ups the fast-track to innovate, compete

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.

By Mike Kavis

Cloud computing is grabbing a lot of headlines these days. As we saw with SOA in the past, there is a lot of confusion about what cloud computing is, a lot of resistance to change, and a lot of vendors repackaging their products and calling them cloud-enabled.

While many analysts, vendors, journalists, and big companies argue back and forth about semantics, economic models, and viability of cloud computing, start-ups are innovating and deploying in the cloud at warp speed for a fraction of the cost.

This raises the question, “Can large organizations keep up with the pace of change and innovation that we are seeing from start-ups?”

Innovate or die

Unlike large, well-established companies, start-ups don’t have the time or money to debate the merits of cloud computing. In fact, a start-up will have a hard time getting funded if it chooses to build data centers, unless building data centers is its core competency.

Start-ups are looking for two things: speed to market and a minimal burn rate. Cloud computing provides both. Speed to market comes from eliminating long procurement cycles for hardware and software, outsourcing various management and security functions to cloud service providers, and automatically scaling resources up and down as needed.

The low burn rate comes from avoiding the costs of physical data centers (cooling, rent, labor, etc.), paying only for the resources you use, and freeing up staff to work on core business functions.
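A back-of-the-envelope sketch of that burn-rate arithmetic, with entirely hypothetical prices, looks like this:

```python
# Illustrative-only comparison of monthly burn: owning capacity sized
# for peak load vs. paying per hour only for what is actually used.
HOURS_PER_MONTH = 730  # ~24 * 365 / 12

def owned_cost(peak_servers, monthly_cost_per_server):
    # On-premise: you pay for peak capacity around the clock, used or not.
    return peak_servers * monthly_cost_per_server

def cloud_cost(avg_servers_in_use, hourly_rate):
    # Pay-per-use: you pay only for instance-hours actually consumed.
    return avg_servers_in_use * hourly_rate * HOURS_PER_MONTH

# Hypothetical start-up: peaks at 20 servers but averages 4 in use.
on_premise = owned_cost(20, 900)   # 18000 dollars per month
on_demand = cloud_cost(4, 0.50)    # 1460 dollars per month
print(on_premise, round(on_demand))
```

The gap comes entirely from the difference between peak and average utilization; when the two are close, the pay-per-use advantage shrinks, which is one reason the economics get argued about.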

I happen to be the CTO of a start-up, and without cloud computing we would not even be in business. We are a retail technology company that aggregates digital coupons from numerous content providers and automatically redeems these coupons in real time at the point of sale when customers shop.

To provide this service, we need highly scalable, reliable, and secure infrastructure in multiple locations across the nation and eventually across the globe. The capital required to build these data centers ourselves and hire the staff to manage them is at least 10 times what we are spending to build our 100 percent cloud-based platform. There are a handful of large companies that own the paper-coupon industry.

You would think that they would easily be the leaders in the digital coupon industry. These highly successful companies are so bogged down in legacy systems and have so much invested in on-premise data centers that they just cannot move fast enough and build the new digital solutions cheap enough to compete with a handful of start-ups that are racing to sign up all the retailers for this service.

Oh, the irony of it all! The bigger companies have a ton of talent, well-established data centers and best practices, and lots of capital. Yet the cash-strapped start-ups are able to innovate faster and cheaper, and to produce legacy-free solutions designed specifically to address a new opportunity driven by increased mobile usage and a surge in the redemption rates of both web and mobile coupons due to economic pressures.

My story is just one use case where we see start-ups grabbing accounts that used to be a honey pot for larger organizations. Take a look at the innovation coming out of the medical, education, home health services, and social networking areas, to name a few, and you will see many smaller, newer companies providing superior products and services at lower cost (or free) and getting them to market quicker.

While bigger companies are trying to change their cultures to be more agile, to do “more with less” -- and to better align business and IT -- good start-ups just focus on delivery as a means of survival.

Legacy systems and company culture as anchors

Start-ups get to start with a blank sheet of paper and design solutions to specifically take advantage of cloud computing whether they leverage SaaS, PaaS, or IaaS services or a combination of all three. For large companies, the shift to the cloud is a much tougher undertaking.

First, someone has to sell the concept of cloud computing to senior management to secure funding for a cloud-based initiative. Second, most companies have years of legacy systems to deal with. Most, if not all, of these systems were never designed to be deployed outside of an on-premise data center, or to integrate with systems that are.

Often the risk/reward of re-engineering existing systems to take advantage of the cloud is not economically feasible and has limited value for end users. If it ain't broke, don't fix it!

Smarter companies will start new products and services in the cloud. This approach makes more sense, but there are still issues like internal resistance to change, skill gaps, outdated processes and best practices, and a host of organizational challenges that can get in the way. As we witnessed with SOA, organizational change management is a critical element in successfully implementing any disruptive technology.

Resistance to change and communication silos can and will kill these types of initiatives. Start-ups don’t have these issues, or at least they shouldn’t. Start-ups define their culture from inception. The culture for most start-ups is entrepreneurial by nature. The focus is on speed, low cost, results.

Large companies also have tons of assets depreciating on the books and armies of people trained to manage stuff on-site. Many of these companies want the benefits of the cloud without giving up the control they are used to having. This often leads them down an ill-advised path: building private clouds within their own data centers.

To make matters worse, some even use the same technology partners that supply their on-premise servers, without properly evaluating the thought-leading vendors in this space. When you see people arguing about the economics of the cloud, this is why. The cloud is economically feasible when you do not procure and manage the infrastructure on-site.

With private clouds, you give up many of the benefits of cloud computing in return for control. Hybrid clouds offer the best of both worlds, but even hybrids add a layer of complexity and management overhead that may drive costs higher than desired.

We see that start-ups are leveraging the public cloud for almost everything. There are a few exceptions where due to customer demands, certain data are kept at the customer site or in a hosted or private cloud, but that is the exception not the norm.

The ZapThink take

Start-ups will continue to innovate and to leverage cloud computing as a competitive advantage, while large, well-established companies test the waters with non-mission-critical solutions first. Large companies will not be able to deliver at the speed of start-ups, due to legacy systems and organizational issues, thus conceding certain business opportunities to start-ups.

Our advice is that larger companies create a separate cloud team that is not bound by the constraints of the existing organization, and let it operate as a start-up. Larger companies should also consider funding external start-ups that are working on products and services that fit into their portfolios.

Finally, large companies should have their mergers-and-acquisitions departments actively looking at promising start-ups for strategic partnerships, acquisitions, or even buy-to-kill strategies. This allows larger companies to focus on their core business while shifting the risk of failed cloud executions to the start-ups.

If you’re a Licensed ZapThink Architect and you’d like to contribute a guest ZapFlash, please email info@zapthink.com.

This guest post comes courtesy of Mike Kavis, CTO of M-Dot Network, Vice President and Director of Social Technologies for the Center for the Advancement of the Enterprise Architecture Profession (CAEAP), and a licensed ZapThink architect.


Wednesday, April 21, 2010

With Jigsaw buy, Salesforce.com shows that lead generation is the new advertising

Salesforce.com's buy of Jigsaw is the latest, most indicative market mover in the transition to a lead generation economy.

Twitter's forays into a sponsored tweets business model announced last week at Chirp is another. Yahoo selling its soul to Microsoft for Bing is another. And just about everything that Google does is but another. And everything that Facebook does? Ditto. Apple loves the idea, one download at a time. Amazon? One purchase at a time.

These players are poised to grease the skids leading to a lead generation economy, one that makes conventional and current online advertising no more relevant than rabbit ear antennas for the top of your black and white television.

Only a year into the data-driven decade, the ways in which user, buyer and social interactions are being brought to bear on B2C and B2B commerce are piling up as never before. The model makes especially good sense for B2B, where decisions are more often data- and information-driven, not emotionally charged as the advertising-juiced B2C domain so often is. And more and more B2B purchases start and end with an online search.

Adding a powerful ingredient to the mix, Jigsaw has huge data sets and the ability to cleanse and verify who's who on the web. As buyers, sellers, social types and knowledge seekers, people the world over are conducting more and more of their everyday lives and business roles online.

All this leaves trails, crumbs, identities, scraps and gems about who we are, what we do and what we may want -- as individuals, families, businesses, and employees. It's a Noah-scale flood of data. And if you take a mere scrap of what you know about someone online from that flood and run it through Jigsaw, it will tell you yet more about the person, or verify that what you already have is correct and current.

Incidentally, Jigsaw does this with data that is updated very rapidly, often daily or faster. This is not one of those big CD-delivered data sets that are obsolete before they leave the hard drive. The whole arena of business intelligence is the gorilla in the room ... it provides even more and better data and helps decide what value to bring to whom, and when.

To flesh out the "who" part, Jigsaw, like a lot of others in the field, is building an up-to-date meta-directory of who's who and what's what online. From Marc Benioff's choice, we should assume that Jigsaw fit the right mix of being cloud-based, current, comprehensive and B2B-oriented.

Oh, and don't give me the "I'm a victim" crap about how your identity is being pilfered or your privacy invaded by this data collection and cleansing. The data is being contributed by you, and everybody else, all the time (e.g., Facebook) ... freely and openly -- just by being online. It's the quid pro quo of the web.

You want the benefits of the Internet, you give up some data about yourself along the way. It's life today. If you want privacy, stay off the Internet. Businesses and enterprise buyers, incidentally, actually want to be known and to know about others. Such data is undoubtedly the lingua franca of modern business. Ask Google how its keyword sales are going.

And so why would Salesforce.com pay $142 million for Jigsaw's data cache and data services?

Because now any business that uses Salesforce's CRM, SFA and SaaS/PaaS ecosystem can know a lot more about who's who inside the business processes they produce and participate in. That's right ... process. The economy needs to bind services and processes together just as much as buyers and sellers of goods. The common denominator is the users, and their identity data.

So think of Jigsaw as bringing cloud-based ETL to all of the web interactions that feed the leads entering your sales and customer-resource databases. I'm proud and happy to have been successfully experimenting with knowledge-driven content on-ramps to search and social media myself for five years. Strong, knowledge-based content precisely attracts and informs users; that begets their participation, which begets the data that gets cleansed, which nurtures more information sharing, which begets the CRM process that leads to a sale cherished by both parties.

Incidentally, if Salesforce.com now straps a marketing automation service (or ecosystem) onto what it has -- cleansing the data all along the way via Jigsaw -- you get a glimpse of the lead generation future. Google could do this any time ... it has all the parts necessary. Indeed, with the Jigsaw buy, Salesforce.com and Google are on a collision course more than ever.

Which brings us to advertising -- the Neanderthal of the ecommerce evolutionary tree. Ads online or off -- search or banner -- are big, dumb, blunt, hairy instruments for joining up buyers and sellers based on ignorance about each other. You want to reach young men with money and a yen for beer and pickup trucks? Spend millions on Super Bowl ads. Very efficient.

Trouble is that I also have to watch these ads about beer and trucks, neither of which I need any information on right now. Give me some data I can use in my life and business, please.

I think the future of advertising is dwindling into the role of a cheap sidewalk hawker, funneling a few unsure souls into a sideshow. Maybe advertising will simply become one of many ways that buyers and sellers enter into a more efficient, data-driven lead generation process ... just like the one that Salesforce.com is building, buying and partnering its way to ASAP.

The money now spent on advertising will move aggressively to the lead generation portion of the equation, where the ROI is precise and understood by all. Most advertising is bought via the credit-default-swap method: tails, I win (the media company); heads, you lose (the advertiser).

The lead generation economy does away with the murky nature of advertising's true value and return. In a lead generation process, you spend X to get Y. All the variables are measured and adjustable -- and it scales up as well as down.
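That "spend X to get Y" arithmetic can be written down directly; the figures below are hypothetical, purely for illustration:

```python
# Illustrative lead-generation math: every variable in the funnel is
# measured, so cost per lead and return scale predictably up or down.

def cost_per_lead(spend, leads):
    return spend / leads

def campaign_roi(spend, leads, close_rate, revenue_per_sale):
    revenue = leads * close_rate * revenue_per_sale
    return (revenue - spend) / spend

# Hypothetical campaign: $10,000 buys 500 qualified leads,
# with a 4% close rate and a $2,000 average deal.
print(cost_per_lead(10_000, 500))              # 20.0 dollars per lead
print(campaign_roi(10_000, 500, 0.04, 2_000))  # 3.0, i.e. a 300% return
```

Contrast this with a brand ad buy, where the close rate and revenue attribution are largely unknowable; that measurability gap is the whole argument of this post.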

Using readily proffered attention and affinity data, users can get a closer fit to what they actually want in terms of information and opportunity. Sellers can fine-tune the information and offers they direct into the buying process. Over time, this can be a proficient fit right down to a one-to-one relationship, from buying Boeing 787s to a stick of chewing gum.

It's clearly the future: B2B and B2C commerce driven by data-empowered inferences between what buyers need and what sellers have. Only the price needs to be negotiated. Perhaps Salesforce.com will broker that too?

So who will occupy the uber-hub position, the meta-directory and meta-facilitator of the lead generation economy future? Media companies want it, technology companies want it, search and social media companies want it. And they all should; it's a trillion-dollar business opportunity.

Salesforce.com is clearly in the game. May the best data win.

Thursday, April 15, 2010

Information management takes aim at need for improved business insights from complex data sources

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Get a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010

Today's sponsored podcast discussion delves into how to better harness the power of information to drive and improve business insights.

We’ll examine how the tough economy has accelerated the progression toward more data-driven business decisions. To enable speedy, proactive business analysis, information management (IM) has arisen as an essential ingredient in making the business intelligence (BI) behind those decisions pay off.

Yet IM itself can become unwieldy, as well as difficult to automate and scale. So managing IM has become an area for careful investment. Where, then, should those investments be made for the highest analytic business return? How do companies better compete through the strategic and effective use of their information?

We’ll look at some use case scenarios with executives from HP to learn how effective IM improves customer outcomes, while also identifying where costs can be cut through efficiency and better business decisions.

To get to the root of IM best practices and value, please join me in welcoming our guests, Brooks Esser, Worldwide Marketing Lead for Information Management Solutions at HP; John Santaferraro, Director of Marketing and Industry Communications for BI Solutions at HP; and Vickie Farrell, Manager of Market Strategy for BI Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Santaferraro: The customers that we work with tend to have very complex businesses, and because of that, very complex information requirements. It used to be that they looked primarily at their structured data as a source of insight into the business. More recently, the concern has moved well beyond business intelligence to look at a combination of unstructured data, text data, IM. There’s just a whole lot of different sources of information.

The idea that they can have some practices across the enterprise that would help them better manage information and produce real value and real outcomes for the business is extremely relevant.

If you look at the information worker or the person who has to make decisions on the front line, if you look at those kinds of people, the truth is that most of them need more than just data and analysis. In a lot of cases, they will need a document, a contract. They need all of those different kinds of data to give them different views to be able to make the right decision.

... I’d like to think of it as actually enterprise IM. It’s looking across the entire business and being able to see across the business. It’s information, all types of information as we identify structured, unstructured documents, scanned documents, video assets, media assets.

... By effectively using the information they have and further leveraging the investments that they’ve already made, there is going to be significant cost savings for the business. A lot of it comes out of just having the right insight to be able to reduce costs overall. There are even efficiencies to be had in the processing of information. It can cost a lot of money to capture data, to store it, and cleanse it.

Then it’s about the management, the effective management of all of those information assets to be able to produce real business outcomes and real value for the business. ... Obviously, the companies that figure out how to streamline the handling and the management of their information are going to have major cost reductions overall.

The way to compete

Esser: This is really becoming the way that leading edge companies compete. I’ve seen a lot of research that suggests that CEOs are becoming increasingly interested in leveraging data more effectively in their decision-making processes.

It used to be fairly simple. You would simply identify your best customers, market like heck to them, and try to maximize the revenue derived from your best customers.

Now, what we’re seeing is emphasis on getting the data right and applying analytics to an entire customer base, trying to maximize revenue from a broader customer base.

We’re going to talk about a few cases today where entities got the data right, they now serve their customers better, reduced cost at the same time, and increased their profitability.

... We think of IM as having four pillars. The first is the infrastructure, obviously -- the storage, the data warehousing, information integration that kind of ties the infrastructure together.

The second piece, which is very important, is governance. That includes things like data protection, master data management, compliance, and e-discovery.

The third is information processes. We start talking about paper-based information, digitizing documents, and getting them into the mix. Those first three pillars taken together really form the basis of an IM environment. They’re really the pieces that allow you to get the data right.

The fourth pillar, of course, is the analytics, the insight that business leaders can get from the analytics about the information. The two, obviously, go hand in hand. A rugged information infrastructure without solid analytics isn't any better than a poor infrastructure with solid analytics. Getting both pieces of that right is very, very important.

... [And, again,] governance processes are the key to everything I talked about earlier -- the pillars of a solid IM environment. Governance [is] about protecting data, quality, compliance, and the whole idea of master data management -- limiting access and making sure that the right people have access to input data and that the data is of high quality.

Farrell: We recently surveyed a number of data warehouse and BI users. We found that 81 percent of them either have a formal data governance process in place or they expect to invest in one in the next 12 months.

... What we’ve seen in the last couple of years is serious attention on investing in that data structure -- getting the data right, as we put it. It's establishing a high level of data quality, a level of trust in the data for users, so that they are able to make use of those tools and really glean from that data the insight and information that they need to better manage their business.

... A couple of years ago, I remember, a lot of pundits were talking about BI becoming pervasive, because tools had gotten more affordable and easier to use. Therefore, anybody with a smartphone, PDA, or laptop computer was going to be able to do heavy-duty analysis.

Of course, that hasn't happened. It's more than the tools themselves that limits the wide use of BI. One of the biggest issues is the integration of the data, the quality of the data, and having a data foundation in an environment where the users can really trust it and use it to do the kind of analysis that they need to do.

... The more effectively you bring together the IT people and the business people and get them aligned, the better the acceptance is going to be. You certainly can mandate use of the system, but that’s really not a best practice. That’s not what you want to do.

By making the information easily accessible and relevant to the business users and showing them that they can trust that data, it’s going to be a more effective system, because they are going to be more likely to use it and not just be forced to use it.

Esser: Organizations all over the world are struggling with an expansion of information. In some companies, you’re seeing data doubling one year over the next. It’s creating problems for the storage environment. Managers are looking at processes like de-duplication to try to reduce the quantity of information.
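De-duplication, which Esser mentions as one response to doubling data volumes, works by storing each unique piece of content only once. Here is a minimal, hypothetical sketch of the content-hashing idea -- the function and sample data are invented for illustration, not drawn from any HP product:

```python
import hashlib

def deduplicate(blocks: list[bytes]) -> tuple[dict[str, bytes], list[str]]:
    """Store each unique block once, keyed by its SHA-256 digest,
    and represent the original stream as a list of digest references."""
    store: dict[str, bytes] = {}
    refs: list[str] = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)  # keep only the first copy seen
        refs.append(digest)
    return store, refs

# Five logical blocks, but only three distinct contents to store
blocks = [b"quarterly report", b"logo", b"quarterly report", b"logo", b"memo"]
store, refs = deduplicate(blocks)
```

The original stream is still fully recoverable by following the references, which is why de-duplication can shrink storage without losing information.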

Then, you’re getting pressure from business leaders for timely and accurate information to make decisions with. So, the challenge for a CIO is that you’ve got to balance the cost of IT, the cost of governance and risk issues involved in information, while at the same time, providing real insight to your business unit customer. It’s a tough job.

Key examples

Farrell: Well, one key example comes to mind. It’s an insurance company that we have worked with for several years. It’s a regional health insurance company faced with competition from national companies. They decided that they needed to make better use of their data to provide better services for their members, the patients as well as the providers, and also to create a more streamlined environment for themselves.

And so, to bring the IT and business users together, they developed an enterprise data warehouse that would be a common resource for all of the data. They ensured that it was accurate and they had a certain level of data quality.

They had outsourced some of the health management systems to other companies. Diabetes was outsourced to one company. Heart disease was outsourced to another company. It was expensive. By bringing it in house, they were able to save the money, but they were also able to do a better job, because they could integrate the data from one patient, and have one view of that patient.

That improved the aggregate wellness score overall for all of their patients. It enabled them to share data with the care providers, because they were confident in the quality of that data. It also saved them some administrative cost, and they recouped the investment in the first year.

... Another thing that we're doing is working with several health organizations in states in the US. We did one project several years ago and we are now in the midst of another one. The idea here is to integrate data from many different sources. This is health data from clinics, schools, hospitals, and so on throughout the state.

Doing this gives you the opportunity to bring together and integrate in a meaningful way data from all these different sources. Once that's been done, that data can serve not only these systems, but also some of the more real-time systems that we see coming down the line, like emergency surveillance systems that would detect terrorist threats, bioterrorism threats, and pandemics.

It's important to understand and be able to get this data integrated in a meaningful way, because more real-time applications and more mission-critical applications are coming and there is not going to be the time to do the manual integration.

Santaferraro: We find that a lot of our customers have very disconnected sets of intelligence and information. So, we look at how we can bring that whole world of information together for them and provide a connected intelligence approach. We are actually a complete provider of enterprise-class, industry-specific IM solutions.

... Probably the hottest topic that I have heard from customers in the last year or so has been around the development of the BI competency center. Again if you go to our BI site, you will find some additional information there about the concept of a BICC.

And the other trend that I am seeing is that a lot of companies want to move beyond just the BI space with that kind of governance. They want to create an enterprise information competency center, expanding beyond BI to include all of IM.

We have expertise around several business domains like customer relationship management, risk, and supply chain. We go to market with specific solutions for 13 different industries. As a complete solution provider, we provide everything from infrastructure to financing.

Obviously, HP has all of the infrastructure that a customer needs. We can package their IM solution in a single finance package that hits either CAPEX or OPEX. We've got software offerings. We've got our consulting business that comes in and helps them figure out how to do everything from the strategy that we talked about upfront and planning to the actual implementation.

We can help them break into new areas where we have practices around things like master data management or content management or e-discovery.

Esser: We have a couple of ways to get started. We can start with a business value assessment service. This is a service that sets people up with a business case and tracks ROI, once they decide on a project. But, the interesting piece of that is they can choose to focus on data integration, master data management, what have you.

You look at the particular element of IM and build a project around that. This assessment service allows people to identify the element in their IM environment, their current environment, that will give them the best ROI. Or, we can offer them a master planning service, which generates a really comprehensive IM plan -- everything from data protection and information quality to advanced analytics.

Obviously, you can get details on those services and our complete portfolio for that matter at www.hp.com/go/bi and www.hp.com/go/im, as well as at www.hp.com/go/neoview. There is some specific information about the Neoview Advantage enterprise data warehouse platform there.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: HP.

Access a free white paper on how Business Intelligence enables enterprises to better manage data and information assets:
Top 10 trends in Business Intelligence for 2010


Wednesday, April 14, 2010

Fog clears on proper precautions for putting more enterprise data safely in clouds

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast homes in on managing risks and rewards in the proper placement of enterprise data in cloud computing environments.

Headlines tell us that Internet-based threats are becoming increasingly malicious, damaging, and sophisticated. These reports come just as more companies are adopting cloud practices and placing mission-critical data into cloud hosts, both public and private. Cloud skeptics frequently point to security risks as a reason for cautiously using cloud services. It’s the security around sensitive data that seems to concern many folks inside of enterprises.

There are also regulations and compliance issues that can vary from location to location, country to country, and industry to industry. Yet cloud advocates point to the benefits of systemic security as an outcome of cloud architectures and methods. Security strategies built on cloud computing solutions, they argue, should be a priority -- and should prompt even more enterprise data to be stored, shared, and analyzed in a cloud under strong governance and policy-driven controls.

So, where’s the reality amid the mixed perceptions and vision around cloud-based data? More importantly, what should those evaluating cloud services know about data and security solutions that will help to make their applications and data less vulnerable in general?

We've assembled a panel of HP experts to delve into the dos and don'ts of cloud computing and corporate data. Please welcome Christian Verstraete, Chief Technology Officer for Manufacturing and Distribution Industries Worldwide at HP, and Archie Reed, HP's Chief Technologist for Cloud Security, the author of several publications, including The Definitive Guide to Identity Management, who is at work on a new book, The Concise Guide to Cloud Computing. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Reed: If you look at the history that we’re dealing with here, companies have been doing those sorts of things with outsourcing models or sharing with partners or indeed community type environments for some time. The big difference with this thing we call cloud computing, is that the vendors advancing the space have not developed comprehensive service level agreements (SLAs), terms of service, and those sorts of things, or are riding on very thin security guarantees.

Therefore, when we start to think about all the attributes of cloud computing -- elasticity, speed of provisioning, and those sorts of things -- the way in which a lot of companies that are offering cloud services get those capabilities, at least today, are by minimizing or doing away with security and protection mechanisms, as well as some of the other guarantees of service levels. That’s not to dismiss their capabilities, their up-time, or anything like that, but the guarantees are not there.

So that arguably is a big difference that I see here. The point that I generally make around the concerns is that companies should not just declare cloud, cloud services, or cloud computing secure or insecure.

It’s all about context and risk analysis. By that, I mean that you need to have a clear understanding of what you’re getting for what price and the risks associated with that and then create a vision about what you want and need from the cloud services. Then, you can put in the security implications of what it is that you’re looking at.

Verstraete: People need to look at the cloud with their eyes wide open. I'm sorry for the stupid wordplay, but the cloud is very foggy, in the sense that there are a lot of unknowns, when you start and when you subscribe to a cloud service. Archie talked about the very limited SLAs, the very limited pieces of information that you receive on the one hand.

On the other hand, when you go for service, there is often a whole supply chain of companies that are actually going to join forces to deliver you that service, and there's no visibility of what actually happens in there.

Considering the risk

I’m not saying that people shouldn't go to the cloud. I actually believe that the cloud is something that is very useful for companies to do things that they have not done in the past -- and I’ll give a couple of examples in a minute. But they should really assess what type of data they actually want to put in the cloud, how risky it would be if that data got public in one way, form, or shape, and assess what the implications are.

As companies are required to work more closely with the rest of their ecosystem, cloud services offer an easy way to do that. It's a concept that is reasonably well-known under the label of community cloud, and one that is actually starting to pop up.

A lot of companies are interested in doing that sort of thing and are interested in putting data in the cloud to achieve that and address some of the new needs that they have due to the fact that they become leaner in their operations, they become more global, and they're required to work much more closely with their suppliers, their distribution partners, and everybody else.

It’s really understanding, on one hand, what you get into and assessing what makes sense and what doesn’t make sense, what’s really critical for you and what is less critical.

Reed: At the RSA Conference in San Francisco, we spoke about what we called the seven deadly sins of cloud. ... One of the threats was data loss or leakage. Within that, you have examples such as insufficient authentication and authorization, but also lack of encryption or inconsistent use of encryption, operational failures, and data center liability. All these things point to how to protect the data.

One of the key things we put forward as part of the Cloud Security Alliance (CSA) announcement that HP was active in was to try and draw out key areas that people need to focus on as they consider the cloud and try and deliver on the promises of what cloud brings to the market.

Although cloud introduces new capabilities and new options for getting services, commonly referred to as infrastructure or platform or software, the security posture of a company does not need to necessarily change significantly -- and I'll say this very carefully -- from what it should be. A lot of companies do not have a good security posture.

When we talk to folks about how to manage their approach to cloud or security in general, we have a very simple philosophy. We put out a high-level strategy called HP Secure Advantage, and it has three tenets. The first is to protect the data. We go a lot into data classification, data protection mechanisms, the privacy management, and those sorts of things.

The second tenet is to defend the resources which is generally about infrastructure security. In some cases, you have to worry about it less when you go into the cloud per se, because you're not responsible for all the infrastructure, but you do have to understand what infrastructure is in play to feed your risk analysis.

The third tenet, validating compliance, covers the traditional governance, risk, and compliance management aspects. You need to understand what regulations, guidance, and policies you have from external resources, government, and industry, as well as your own internal approaches -- and then be able to prove that you did the right thing.

Verstraete: Going to the cloud is actually a very good moment for companies to really sit down and think about what is absolutely critical for my enterprise and what are things that, if they leak out, if they get known, it's not too bad. It's not great in any case, but it's not too bad. And, data classification is a very interesting exercise that enterprises should do, if they really want to go to the cloud, and particularly to the public clouds.

I've seen too many companies jumping in without that step and being burnt in one way, form, or shape. It's sitting down and thinking that through: "What are my key assets? What are the things that I never want to let go that are absolutely critical? On the other hand, what are the things that I quite frankly don't care too much about?" It's building that understanding that is actually critical.

... Today, because of the term "cloud," most of the cloud providers are getting away with providing very little information, setting up SLAs that frankly don't mean a lot. It's quite interesting to read a number of the SLAs from the major either infrastructure-as-a-service (IaaS) or PaaS providers.

Fundamentally, they take no responsibility, or very little responsibility, and they don't tell you what they do to secure the environment in which they ask you to operate. The reason they give is, "Well, if I tell you, hackers can know, and that's going to make it easier for them to hack the environment and defeat our security."

There is a point there, but that makes it difficult for people who really want to have source code, as in your example. That's relevant and important for them, because you have source code that’s not too bad and source code that's very critical. To put that source code in the cloud, if you don't know what's actually being done, is probably worse than being able to make an assessment and have a very clear risk assessment. Then, you know what the level of risk is that you take. Today, you don't know in many situations.

Reed: Also consider that there are things like community clouds out there. I'll give the example of the US Department of Defense back in 2008. HP worked with the Defense Information Systems Agency (DISA) to deploy cloud computing infrastructure. And, we created RACE, which is the Rapid Access Computing Environment, to set things up really quickly.

Within that, they share those resources to a community of users in a secure manner and they store all sorts of things in that. And, not to point fingers or anything, but the comment is, "Our cloud is better than Google's."

So, there are secure clouds out there. It's just that when we think about things like the visceral reaction that the cloud is insecure, it's not necessarily correct. It's insecure for certain instances, and we've got to be specific about those instances.

In the case of DISA, they have a highly secured cloud, and that's where we expect things to go and evolve: into a set of cloud offerings that are stratified by the level of security they provide and the level of cost, right down to SLAs and guarantees. We're already seeing that in these examples.

Beating the competition

While we’ve alluded to, and actually discussed, specific examples of security concerns and data issues, the fact is, if you get this right, you have the opportunity to accelerate your business, because you can basically break ahead of the competition.

Now, if you're in a community cloud, standards may help you, or approaches that everyone agrees on may help the overall industry. But, you also get faster access to all that stuff. You also get capacity that you can share with the rest of the community. If you're thinking about cloud in general, in isolation, and by that I mean that you, as an individual organization, are going out and looking for those cloud resources, then you're going to get the ability to expand well beyond what your internal IT department can provide.

There are lots of things we could close on, of course, but I think that the IT department of today, as far as cloud goes, not only has the opportunity to deliver and better manage the services it provides for the organization, but also has a responsibility to do this right -- to understand the security implications and represent them appropriately to the company, so that it can deliver that accelerated capability.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Private cloud models: Moving beyond static grid computing addiction

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.

By Randy Clark


People don't talk much about grid computing these days, but most application teams that require high performance from their infrastructure are actually addicted to grid computing -- whether they know it or not.

Gone are the days of requiring a massive new SMP box to get to the next level of performance. But in today’s world of tight budgets and diverse application needs, the linear scalability inherent in grid technologies becomes meaningless when there are no more blades being added.

This constraint has led grid managers and solution providers to search for new ways to squeeze more capacity from their existing infrastructures, within tight capital expenditure budgets. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

The problem is that grid infrastructures are typically static, with limited-to-no flexibility in changing the application stack parameters – such as OS, middleware, and libraries – and so resource capacity is fixed. By making grids dynamic, however, IT teams can provide a more flexible, agile infrastructure, with lower administration costs and improved service levels.

So how do you make a static grid dynamic? Can it be done in an easy-to-implement and pragmatic, gradual way, with limited impact on the application teams?

By introducing private cloud management capabilities, armed with standard host repurposing tools, any type of grid deployment can go from static to dynamic.

For example, many firms have deployed multiple grids to serve the various needs of application teams, often using grid infrastructure software from multiple vendors. Implementing a private cloud enables consolidation of all the grid infrastructures to support all the apps through a shared pool approach.

The pool then dynamically allocates resources via each grid workload manager. This provides a phased approach to creating additional capacity through improved utilization, by sharing infrastructure without impacting the application or cluster environments.

The beginning of queue sprawl

Take another example. What if the grid teams have already consolidated using a single workload manager? This approach often results in “queue sprawl,” since resource pools are reserved exclusively for each application’s queues.

But by adding standard tools, such as virtual machines (VMs) and dual-boot, resources can be repurposed on demand for high priority applications. In this case, the private cloud platform determines which application stack image should be running at any given time. This results in dynamic application stacks across the available infrastructure, such that any suitable physical machine in the cluster can be repurposed on demand for additional capacity.
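The repurposing logic described above can be sketched in a few lines. This is a hypothetical illustration of the greedy idea -- idle hosts get booted into the stack image of whichever queue has the deepest backlog -- and the queue names, image names, and numbers are invented for illustration, not drawn from Platform Computing's actual software:

```python
from dataclasses import dataclass

@dataclass
class Queue:
    name: str
    image: str    # application stack image jobs in this queue require
    backlog: int  # jobs currently waiting for a host

def repurposing_plan(idle_hosts: list[str], queues: list[Queue]) -> dict[str, str]:
    """Assign each idle host the image of the deepest-backlog queue,
    draining that backlog as hosts are committed."""
    plan: dict[str, str] = {}
    for host in idle_hosts:
        busiest = max(queues, key=lambda q: q.backlog)
        if busiest.backlog == 0:
            break  # nothing left waiting; leave remaining hosts untouched
        plan[host] = busiest.image
        busiest.backlog -= 1
    return plan

queues = [Queue("risk", "rhel-risk-v2", 3), Queue("pricing", "rhel-pricing-v1", 2)]
plan = repurposing_plan(["h1", "h2", "h3"], queues)
```

A real private cloud manager layers policies, priorities, and reprovisioning costs on top of this, but the core decision -- match spare capacity to the hungriest queue's stack image -- is the same.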

Once an existing grid infrastructure is made dynamic and all available capacity is put to use, grid managers can still consider other non-capital spending sources to increase performance even further.

The first step is to scavenge internal underutilized resources that are not owned by the grid team. These can range from employee desktop PCs to VDI farms, disaster recovery infrastructure, and low-priority servers. From these, grid workloads can be launched within a VM on the "scavenged" machines, and then immediately stopped when the owning application or user resumes.

The second major step, once these higher levels of infrastructure productivity are reached, is to direct IT operating budget to external services such as Amazon EC2 and S3. A private cloud solution can centrally manage the integration with, and metering of, public cloud use (so-called hybrid models), providing additional capacity for “bursty” workloads or full application environments. And since access to the public cloud is controlled and managed by the grid team, application groups get a seamless service experience -- with higher performance for their total workloads.
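The burst policy itself can be sketched simply. This hypothetical Python fragment fills internal capacity up to a utilization threshold and routes the overflow to metered public-cloud capacity; the threshold, job sizes, and per-core metering are illustrative assumptions, not any vendor's actual policy:

```python
def place_workloads(jobs: list[int], internal_capacity: int,
                    burst_threshold: float = 0.8):
    """Fill internal hosts first; once utilization passes the threshold,
    route the remaining ('bursty') jobs to metered public-cloud capacity.
    Each job is sized in cores."""
    internal: list[int] = []
    public: list[int] = []
    used = 0
    limit = internal_capacity * burst_threshold
    for cores in jobs:
        if used + cores <= limit:
            internal.append(cores)
            used += cores
        else:
            public.append(cores)  # metered by the private-cloud manager
    metered_cores = sum(public)   # basis for the external-services bill
    return internal, public, metered_cores

# Four 4-core jobs against 10 internal cores: the last two burst out
internal, public, metered = place_workloads([4, 4, 4, 4], internal_capacity=10)
```

Keeping the burst decision inside the grid team's management layer is what makes the public-cloud spend both controlled and chargeable back to the application groups.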

While many grid professionals already consider their grid environments cloud-like, the advent of mature cloud computing models can help make grid environments more completely dynamic, providing new avenues for agility, service improvement and cost control.

And by squeezing more from your infrastructure before spending operating budget on external services, you can protect your investment while satisfying users’ insatiable appetite for more performance from the grid.

This guest post comes courtesy of Randy Clark, chief marketing officer at Platform Computing.
