Friday, October 1, 2010

Leo Apotheker needs to target HP's forgotten businesses

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Ever since its humble beginnings in the Palo Alto garage, HP has always been kind of a geeky company – in spite of Carly Fiorina’s superficial attempts to prod HP toward a vision thing during her aborted tenure. Yet HP keeps talking about getting back to that spiritual garage.

Software has long been the forgotten business of HP. Although – surprisingly – the software business was resuscitated under Mark Hurd’s reign (revenues have more than doubled as of a few years ago), software remains almost a rounding error in HP’s overall revenue pie.

Yes, Hurd gave the software business modest support. Mercury Interactive was acquired under his watch, giving the business a degree of critical mass when combined with the legacy OpenView business.

But during Hurd’s era, there were much bigger fish to fry beyond all the internal cost cutting for which Wall Street cheered, but insiders jeered. Converged Infrastructure has been the mantra, reminding one and all that HP was still very much a hardware company. The message remains loud and clear with HP’s recent 3PAR acquisition at a heavily inflated $2.3 billion, a deal concluded in spite of the interim leadership vacuum.

The dilemma that HP faces is that, yes, it is the world’s largest hardware company (they call it technology), but the bulk of that is from personal systems. Ink, anybody?

Needs to compete

The converged infrastructure strategy was a play at the CTO’s office. Yet HP is a large enough company that it needs to compete in the leagues of IBM and Oracle, and for that it needs to get meetings with the CEO. Ergo the rumors of feelers made to IBM Software’s Steve Mills, the successful offer to Leo Apotheker, and the agreement with Ray Lane to serve as non-executive chairman.

Our initial reaction was one of disappointment; others have felt similarly. But Dennis Howlett feels that Apotheker is the right choice “to set a calm tone” that there won’t be a massive and debilitating reorg in the short term.

Under Apotheker’s watch, SAP stagnated, hit by the stillborn Business ByDesign and the hike in maintenance fees that, for the moment, made Oracle look warmer and fuzzier. Of course, you can’t blame all of SAP’s issues on Apotheker; the company was in a natural lull cycle as it was seeking a new direction in a mature ERP market.

The problem with SAP is that, defensive acquisition of Business Objects notwithstanding, the company has always been limited by a “not invented here” syndrome that has tended to blind the company to obvious opportunities – such as inexplicably letting strategic partner IDS Scheer slip away to Software AG. Apotheker’s shortcoming was not providing the strong leadership needed to jolt SAP out of its inertia.

So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

Instead, Apotheker’s – and Ray Lane’s for that matter – value proposition is that they know the side of the enterprise business applications market that HP doesn’t. That’s the key to this transition.

The next question becomes acquisitions. HP has a lot on its plate already. It took at least 18 months for HP to digest the $14 billion acquisition of EDS, which provided critical mass in IT services and data center outsourcing. It is still digesting nearly $7 billion of subsequent acquisitions of 3Com, 3PAR, and Palm to make its converged infrastructure strategy real.

HP might be able to get backing to make new acquisitions, but the dilemma is that Converged Infrastructure is a stretch in the opposite direction from business software. So it’s not just a question of whether HP can digest another acquisition; it’s an issue of whether HP can strategically focus in two different directions that ultimately might come together, but not for a while.

So let’s speculate about software acquisitions.

SAP, the most logical candidate, is, in a narrow sense, relatively “affordable” given that its stock is roughly 10 to 15 percent off its 2007 high. But SAP would obviously be the most challenging given the scale; it would be difficult enough for HP to digest SAP under normal circumstances, but with all the converged infrastructure work on its plate, it’s back to the question of how you can be in two places at once. Infor is a smaller company, but as it is also a polyglot of many smaller enterprise software firms, it would present HP with additional integration headaches that it doesn’t need.

Little choice

HP may have little choice but to make a play for SAP if IBM or Microsoft were unexpectedly to bid. Otherwise, its best bet is to revive the existing HP-SAP partnership, which would give both companies time to acclimate. But in a rapidly consolidating technology market, who has the luxury of time these days?

Salesforce.com would be a logical target, as it would reinforce HP Enterprise Services’ (formerly EDS) outsourcing and BPO business. It would be far easier for HP to get its arms around this business. The drawback is that Salesforce.com would not be very extensible as an application set, as it uses a proprietary stored-procedures database architecture. That would make it difficult to integrate with other prospective ERP SaaS acquisitions, which would otherwise be the next logical step to growing the business software footprint.

Can HP afford to converge itself in another direction? Can it afford not to?

Informatica is often brought up -- if HP is to salvage its Neoview and Knightsbridge BI business, it would need a data integration engine to help bolster it. Better yet, buy Teradata, which is one of the biggest resellers of Informatica PowerCenter; that would give HP a far more credible presence in the analytics space. Then it would have to ward off Oracle, which has an even more pressing need for Informatica to fill out the data integration piece of its Fusion middleware stack. But with Teradata, there would at least be a real anchor for the Informatica business.

HP has to decide what kind of company it needs to be, as Tom Kucharvy summarized well a few weeks back. Can HP afford to converge itself in another direction? Can it afford not to? Leo Apotheker has a heck of a listening tour ahead of him.

This guest blog post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Financial services firms look to cloud, grid, and cluster to allay fears over data explosion, says survey

Look for a sharp uptick in cloud computing from financial services firms over the next two years, along with similar increases in cluster and grid technologies. This increased interest comes from a concern over the current data explosion and the firms' lack of scalable environments, insufficient capacity to run complex analytics, and contention for computing resources.

These findings come from a recent survey conducted by Wall Street & Technology in conjunction with Platform Computing, SAS, and the TABB Group. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

Completed in July, the survey found noteworthy differences in the challenges being faced by both buy- and sell-side firms, with sell-side institutions more likely to report a lack of a scalable environment, insufficient capacity to run complex analytics, and contention for computing resources as significant challenges.

According to the survey, data proliferation and the need to better manage it are at the root of many of the challenges being faced by financial institutions of all sizes. Two-thirds (66 percent) of buy-side firms and more than half (56 percent) of sell-side firms are grappling with siloed data sources. The silo problem is being exacerbated by organizational constraints, including policies prohibiting data sharing and access, network bandwidth issues and input/output (I/O) bottlenecks.

Too much data

Ever-increasing data growth is also cause for concern, with firms reporting that they are dealing with too much market data. Sixty-six percent of respondents didn't think their analytics infrastructures would be able to keep pace with demand over time.

Both buy- and sell-side firms plan to increase their focus on liquidity and counterparty risk in the next 12 months. Counterparty risk management was ranked as the highest priority for the sell side (45 percent) with liquidity risk following at 43 percent. Liquidity risk and counterparty risk scored high for the buy side with 36 percent and 33 percent, respectively.

Data proliferation and the need to better manage it are at the root of many of the challenges being faced by financial institutions of all sizes.



The financial institutions plan to turn to a combination of technologies including cloud computing and grid technologies. Within the next two years, 51 percent of all respondents are considering or likely to invest in cluster technology, 53 percent are considering or likely to buy grid technology, and 57 percent are considering or likely to purchase cloud technology.

The report, “The State of Business Analytics in Financial Services: Examining Current Preparedness for Future Demands,” is available for download at http://www.grid-analytics.wallstreetandtech.com. (Registration required.) Wall Street & Technology, in conjunction with the survey sponsors, will host a webinar to discuss in-depth key findings of the survey on October 7 at 12 pm ET/9 am PT. For more information, visit: http://tinyurl.com/2ulcesm.

You may also be interested in:

Tuesday, September 28, 2010

Automated governance: Cloud computing's lynchpin for success or failure

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a transcript or download a copy. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.

Management and governance are the arbiters of success or failure when we look across a cloud services ecosystem and the full lifecycle of those applications. That's why governance is so important in the budding era of cloud computing.

As cloud-delivered services become the coin of the productivity realm, how those services are managed as they are developed, deployed, and used -- across a services lifecycle -- increasingly determines their true value.

And yet governance is still too often fractured, poorly extended across the development-and-deployment continuum, and often not able to satisfy the new complexity inherent in cloud models.

One key bellwether for future service environments and for defining the role and requirements for automated cloud governance is in applications development, which due to the popularity of platform as a service (PaaS) is already largely a services ecosystem.

Here to help us explain why baked-in visibility across services creation and deployment is essential, please join Jeff Papows, President and CEO of WebLayers and the author of Glitch: The Hidden Impact of Faulty Software, and John McDonald, CEO of CloudOne Corp. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
McDonald: Cloud, from a technology perspective, is more about some very sophisticated tools that are used to virtualize the workloads and the data and move them live from one bank of servers to another, and from one whole data center to another, without the user really being aware of it. But, fundamentally, cloud computing is about getting access to a data center that’s my data center on-demand.

Fundamentally, the easiest way to remember it is that cloud is to hardware as software as a service (SaaS) is to software. Basically, for CloudOne, we're providing IBM Rational Development tools both through cloud computing and SaaS.

... There's a myth that development is something that we ought to be tooling up for, like providing power to a building or water service. In reality, that’s not how it works at all.

The money that you save by doing that is the reason you can open any trade magazine and the first seven pages are all going to be about cloud.



There are people who come and go with different roles throughout the development process. The front-end business analysts play a big role in gathering requirements. Then, quite often, architects take over and design the application software or whatever we are building from those requirements. Then, the people doing the coding, developers, take over. That rolls into testing and that rolls into deployment. And, as this lifecycle moves through, these roles wax and wane.

But the traditional model of getting development tools doesn’t really work that way at all. You usually buy all of the tools that you will ever need up front, usually with a large purchase, put them on servers, and let them sit there until the people who are going to use them log in and use them. But, while they are sitting there, taking up space and your capital expense budget, and not being used, that’s waste.

The cloud model allows you to spin up and spin down the appropriate amount of software and hardware to support the realities of the software development lifecycle. The money that you save by doing that is the reason you can open any trade magazine and the first seven pages are all going to be about cloud.
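
To make that elasticity point concrete, here is a minimal sketch in Python contrasting the buy-for-the-peak model with provisioning to the current lifecycle phase. The phase names and seat counts are invented for illustration; they are not CloudOne or IBM Rational figures.

```python
# Illustrative sketch: fixed up-front tool purchase vs. demand-driven provisioning.
# Phase names and seat counts are hypothetical, not CloudOne or Rational figures.

LIFECYCLE_DEMAND = {        # active tool seats needed in each phase
    "requirements": 10,
    "architecture": 15,
    "development": 60,
    "testing": 40,
    "deployment": 8,
}

FIXED_CAPACITY = max(LIFECYCLE_DEMAND.values())   # traditional model: buy for the peak


def seats_to_provision(phase: str, headroom: float = 0.2) -> int:
    """Cloud model: provision only what the current phase needs, plus some headroom."""
    return int(LIFECYCLE_DEMAND[phase] * (1 + headroom))


if __name__ == "__main__":
    for phase, demand in LIFECYCLE_DEMAND.items():
        print(f"{phase:13s} elastic={seats_to_provision(phase):3d} seats, "
              f"idle capacity under the fixed model={FIXED_CAPACITY - demand:3d} seats")
```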

It's allowing customers of CloudOne and IBM Rational to use that money in new, creative, interesting ways to provide tools they couldn't afford before, to start pilots of different, more sophisticated technologies that they wouldn't have been able to gather the resources to do before. So, it's not only a cost-savings statement, it's also ease of use, ease of start-up, and an ability to get more for your dollar from the development process. That's a pretty cool thing all the way around.

Papows: A lot of what’s going on in cloud computing is not a particularly new thing. It's what we used to think of as hosting or outsourcing. What’s happening now is that the world is becoming more mobile, while 20 percent of our IT capacity is focused on new application development.

We have to get more creative and more distributed about the talent that contributes to those critical application development projects. ... Design-time governance is the next logical thing in that continuum, so that all of the inherent risk mitigation associated with governance in IT contexts can be applied to application development in a hybrid model that’s both geographically and organizationally distributed.

When you try to add some linear structure and predictability to those hybrid models, the constant that can provide some order and some efficiency is not purely technology-based. It's not just the virtualization, the added virtual machine capacity, or even the middleware to include companies like WebLayers or tools like Rational. It's the process that goes along with it. One of the really important things about design-time governance is the review process.

Governance is a big part of the technology toolset that institutionalizes that review process and adds that order to what otherwise can quickly become a bit chaotic.

McDonald: The challenge with tools in the old days was that they were largely created during a time when all the people on the development project were sitting on the same floor with each other, in a bunch of cubes and offices.

The cloud allows us to create a dedicated new data center that sits on the Internet and is accessible to all, wherever they are, and in whatever time zone they are working, and whatever relationship they have to my company.



The challenges of development have caused companies to look at outsourcing and off-shoring, or, more simply, at things like the merger of my bank and your bank. Then we have groups of developers in two different cities, or we bought a packaged application, and the best skill to help us integrate it is actually from a third-party partner in a completely different city or country. Those tools have shown their weaknesses, even in just getting your hands on them.

How do I punch a hole through the firewall to give you a way to check in your code problems? The cloud allows us to create a dedicated new data center that sits on the Internet and is accessible to all, wherever they are, and in whatever time zone they are working, and whatever relationship they have to my company.

That frees things up to be collaborative across company boundaries. But with that freedom comes a great challenge in unifying a process across all of those different people, and getting a collaborative engine to work across all those people.

It’s almost a requirement to keep the wheels on the bus and to have some degree of ability to manage the process in compliance with regulations, and to manage the information about how decisions were made in such distributed ways, so that they are traceable and reviewable. It’s really not possible to achieve such a distributed development environment without that governance guidance.

Papows: We're dealing with some challenges for the first time that require out-of-the-box thinking. I talk about this in "Glitch." We have reached a point where there are a trillion connected devices on the Internet as of February of this year. There are a billion embedded transistors for every human being on the planet.

We have reached a point where there are a trillion connected devices on the Internet as of February of this year. There are a billion embedded transistors for every human being on the planet.



You’ve read about or heard about or experienced first hand the disasters that can happen in production environments, where you have some market-facing application, where service is lost, where there is even brand damage or economic consequences.

... Everybody intellectually buys into governance, but nobody individually wants to be governed. Unless you automate it, unless you provide the right stack of tools and codify the best practices and libraries that can be reusable, it simply won’t happen. People are people, and without the automation to make it natural, unnatural things get applied some percentage of the time, and governance can’t work that way.
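
One way to picture what codifying best practices can look like is policy-as-code: rules expressed as reusable, executable checks that run against development artifacts automatically. The sketch below is a generic, hypothetical illustration in Python; the rule names and artifact fields are invented and do not represent WebLayers' actual rule format.

```python
# Minimal policy-as-code sketch: governance rules codified as reusable checks that
# run automatically against development artifacts. Rule names and artifact fields
# are hypothetical; this is not WebLayers' actual rule format.

from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Artifact:
    name: str
    kind: str                              # e.g. "wsdl", "schema", "code"
    metadata: dict = field(default_factory=dict)


Rule = Callable[[Artifact], List[str]]     # a rule returns a list of violations


def require_owner(artifact: Artifact) -> List[str]:
    return [] if artifact.metadata.get("owner") else [f"{artifact.name}: no owner recorded"]


def require_version(artifact: Artifact) -> List[str]:
    return [] if artifact.metadata.get("version") else [f"{artifact.name}: missing version tag"]


GOVERNANCE_LIBRARY: List[Rule] = [require_owner, require_version]


def automated_review(artifacts: List[Artifact]) -> List[str]:
    """Every artifact is checked against every codified rule, every time."""
    violations: List[str] = []
    for artifact in artifacts:
        for rule in GOVERNANCE_LIBRARY:
            violations.extend(rule(artifact))
    return violations


if __name__ == "__main__":
    items = [Artifact("billing-service", "wsdl", {"owner": "payments-team"}),
             Artifact("customer-schema", "schema", {"version": "1.2"})]
    for problem in automated_review(items):
        print("VIOLATION:", problem)
```

The point is simply that once a practice is captured as an executable rule, the review happens whether or not anyone remembers to ask for it.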

McDonald: Developers view themselves quite often as artists. They may not articulate it that way, but they often see themselves as artists and their palette is code.

As such, they immediately rankle at any notion that, as artists, they should be governed. Yet, as we’ve already established, that guidance for them around the processes, methods, regulations, and so on is absolutely critical for success, really in any size organization, but beyond the pale in a distributed development environment. So, how do you deal with that issue?

Well, you embed it into their entire environment from the very first stage. In most companies, this is trying to decide what projects we should undertake, which in a lot of companies is mainly an over-glorified email argument.

Governance must be process-friendly

Governance has to be embedded at every step of that way, gently nudging, and sometimes shuttling all these players back into the right line, when it comes to ensuring that the result of their effort is compliant with whatever it is that I needed to be compliant to.

In short, you’ve got to make it a part of, and embedded into, every stage of the development process, so that it largely disappears and becomes such a natural extension of the tools that no one along the way realizes they are being governed.

WebLayers was the very first partner that we reached out to say, "Can you go down this journey with us together, as we begin developing these workbenches, these integrated toolsets, and delivering them through the cloud on-demand?" We already know and see that embedding governance in every layer is something we have to be able to do out of the gate.

The team at WebLayers was phenomenal in responding to that request, and we were able to take several base instances of various Rational tools, embed WebLayers technology into them, and, based on how the cloud works, archive those and put them up in our library to be pulled down off the shelf, cloned, and instantiated for the various customers coming into our pipeline who want to experience this technology and what we are doing.

Better safe than sorry

... The avoidance of things going badly is unfortunately very difficult to measure. That is something that everyone who attempts to do a cloud-delivered development environment and does the right thing by embedding in it the right governance guidance should know coming out of the gate. The best thing that’s going to happen is you are not going to have a catastrophe.

That said, one of the neat things about having a common workbench, and having the kinds of reporting and metrics that it can measure, meaning IBM Jazz along with the WebLayers technology, is that I can get a very detailed view of what’s going on in my software factory at every turn of the crank, and where things are coming off the rails a little bit.

Papows: There's an age-old expression that you're so close to the forest you can't see the trees. Well, I think in the IT business we’re sometimes so deeply embedded in the bark we can't see anything.

We've been developing, expanding, deploying, and reinventing on a massive scale so rapidly for the last 30 years that we've reached a breaking point where, as I said earlier, between the complexity curves, the lack of elasticity in human capital, and the explosion in the number of mobile computing devices and their propensity for accessing all of this back-end infrastructure and applications, something fundamentally has to change. It's a problem on a scale that can't be overcome by simply throwing more bodies at it.

Creative solutions

Secondly, in the current economy, very few CIOs have elastic budgets. We have to do as an industry what we've done from the very beginning, which is to automate, innovate, and find creative solutions to combat the convergence of all of those digital elements to what would otherwise be a perfect storm.

There is simply no barrier for anyone to give this a try.



So SaaS, cloud computing, automated governance, forms of artificial intelligence, Rational tooling, consistent workbench methodologies -- all of these things are the instruments of getting ourselves out of the corner that we have otherwise painted ourselves into.

I don't want to seem like an alarmist or try to paint too big a storm cloud on the horizon, but this is simply not something that's going to happen or be resolved in a business-as-usual fashion.

That, in fact, is where companies like CloudOne are able to expand and leapfrog the productivity equation for companies in certain segments of the market. That's where automation, whether it's Rational, WebLayers, or another piece of technology, has got to be part of the recipe of getting off this limb before we saw it off behind us.

McDonald: If you have any inclination at all to see what it is that Jeff and I are telling you, give it a whirl, because it's very simple.

That's one of the coolest things of all about this whole model, in my mind. There is simply no barrier for anyone to give this a try. In the old model, if you wanted to give the technology a try, you had better start with your calculator. And you had better get the names and addresses of your board of directors, because you're going there eventually to get the capital approval and so on to even get a pilot project started, in many cases, with some of these very sophisticated tools.

This is just not the case anymore. With the CloudOne environment you can sign on this afternoon with a web-based form to get an instance of, let's say, Team Concert set up for you with WebLayers technology embedded in it, in about 20 minutes from when you push "submit," and it's absolutely free for the first model. From there, you grow only as you need them, user-by-user. It's really quite simple to give this concept a try and it's really very easy.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a transcript or download a copy. Get a copy of Glitch: The Hidden Impact of Faulty Software. Learn more about governance risks. Sponsor: WebLayers.

You may also be interested in:

Friday, September 24, 2010

Demise of enterprise IT departments: A pending crisis point

This guest post comes courtesy of Ronald Schmelzer, senior analyst at Zapthink.

By Ronald Schmelzer

In ZapThink’s deep conversations with CIOs and other IT decision makers, we find that there’s broad agreement on the multitude of forces conspiring to change every aspect of the way the enterprise does IT.

Yet at the same time, everybody’s in denial that these changes will happen to them. For us as outsiders, it certainly looks like many enterprise IT decision-makers acknowledge that the world is changing -- but deny that they are part of that same world.

Of course, such executives simply have their head in the sand. If change is to occur, it will happen to the vast majority of enterprises, not just the minority.

This realization drives the Crisis Points of the ZapThink 2020 vision. However, ZapThink is not advocating that organizations should adopt any of the crisis points. Rather we are observing that these crises are coming, whether or not companies are ready for them.

In particular, we believe that companies will reach a crisis point as they seek to outsource IT. However, we aren’t advocating that companies outsource all their IT efforts. Rather, we are observing that the siren call of offloading IT assets in the form of cloud computing and outsourcing is a significant trend that is leading to a crisis point.

And without a strong rudder, many companies will indeed be dashed on the rocks. This ZapFlash blog post provides greater detail on this particular crisis point: The pending demise of the enterprise IT department, or what we’ve called in previous ZapFlashes the Collapse of Enterprise IT.

Outsourcing and cloud computing:
Different parts of the same story


Part of the reason for the visceral response to our Crisis Points ZapFlash is that there’s inherent fear when talking about outsourcing IT functions. Part of the fear comes from the fact that many people confuse outsourcing with offshoring.

Outsourcing is the purchasing of a service from an outside vendor to replace the performance of the task within the organization’s internal operations. Offshoring, on the other hand, is the movement of labor from a region of high cost (such as the United States) to one of comparatively lower cost (such as India).

People fear the latter because it means subcontracting existing work to other people, thus displacing jobs at home. However, the former has been going on for hundreds of years. Indeed, many companies exist solely because they are completing tasks that their customers would rather not undertake themselves.

Almost six years ago, we talked about how service oriented architecture (SOA) and outsourcing go hand in hand, for the simple reason that SOA requires organizations to think about their resources, processes, and capabilities in ways that are loosely coupled from the specifics of their implementation, location, and consumption. Indeed, the more companies implement SOA, the more they can outsource processes that are not strategic or competitive for the organization.

But it’s a mistake to assume the collapse of the enterprise IT department is due entirely to outsourcing the functions of IT to third parties.



Furthermore, the more companies outsource their functions, the more they are motivated to implement SOA to facilitate the consumption of the outsourced capabilities. Therefore, it should be no surprise that the combination of SOA and a challenging economic environment has motivated many companies to see outsourcing as a legitimate strategy for their IT organizations, regardless of whether they move to offshoring.

But it’s a mistake to assume the collapse of the enterprise IT department is due entirely to outsourcing the functions of IT to third parties. Outsourcing is a part of the story, but so is cloud computing. In much the same way that third-party firms can offload parts of IT in the outsourcing model, cloud computing offers the ability to offload other aspects of the IT department. Cloud computing provides both technological and economic benefits for distributing and offloading resources, functions, processes, and even data onto location-independent infrastructures.

While many enterprises are currently pursuing a private model for cloud computing, there are far too many economic benefits of the public model to ignore. Most likely, we will see hybrid cloud approaches, where organizations keep certain mission-critical features behind the firewall on the corporate premises while they shift the rest to more agile, less costly third-party locations. The net result of this shift is continued erosion of the scope of responsibility for internal IT organizations.
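
As a toy illustration of that placement decision, here is a minimal Python sketch that routes hypothetical workloads to private or public infrastructure based on simple criticality and compliance flags; the workload names and rules are invented for the example.

```python
# Toy illustration of a hybrid-cloud placement rule: mission-critical or regulated
# workloads stay on premises; everything else is a public-cloud candidate.
# Workload names and flags are invented for the example.

WORKLOADS = [
    {"name": "trade-settlement", "mission_critical": True,  "regulated": True},
    {"name": "marketing-site",   "mission_critical": False, "regulated": False},
    {"name": "hr-records",       "mission_critical": False, "regulated": True},
    {"name": "batch-reporting",  "mission_critical": False, "regulated": False},
]


def placement(workload: dict) -> str:
    if workload["mission_critical"] or workload["regulated"]:
        return "private cloud (on premises)"
    return "public cloud candidate"


if __name__ == "__main__":
    for w in WORKLOADS:
        print(f'{w["name"]:16s} -> {placement(w)}')
```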

The holistic perspective of the five supertrends

This crisis point -- the demise of enterprise IT -- emerges from the fact that companies will rush into this vision of outsourced IT without first thinking through the dramatic impact that this transition will have throughout their organization.

For such organizations, the value of our ZapThink 2020 vision is that it pulls together multiple trends and delineates the interrelationships among them. One of the most closely related trends to the demise of the IT organization is the increased formality and dependence on governance, as organizations pull together the business side of governance (GRC, or governance, risk, and compliance), with the technology side of governance (IT governance, and to an increasing extent, SOA governance). Over time, CIOs become CGOs (Chief Governance Officers), as their focus shifts away from technology.

As the enterprise owns fewer and fewer of the organization’s IT assets, the role and responsibility of enterprise IT practitioners will be less about the mechanics of getting systems to work, integrating them with each other, and operating them, and more about the management of the one resource that remains constant: information. After all, IT is information technology, not computer or systems technology.

If you can successfully tackle these questions with a coherent, holistic strategy, then you have defused the risk inherent to movement to outsourcing and/or cloud computing.



With this perspective, it’s essential to view the shift to outsourcing and cloud computing holistically with all the other changes happening in the enterprise IT environment.

For example, the move to democratization of technology means that non-IT practitioners will be utilizing and creating IT capabilities and assets without the control of the IT organization. How will IT practitioners manage the sole enterprise IT asset (information) given that they cannot manage the systems in which that asset flows? As organizations realize the global cubicle vision of IT, how will enterprise IT practitioners and architects enable distributed information without losing GRC visibility?

As systems become increasingly interconnected with deep interoperability despite their increasing distributed nature, how can enterprise IT practitioners make sure the systems as a whole continue to provide value and avoid chaotic disruptions despite the fact that the organization doesn’t own or operate them? As organizations move to more iterative, agile forms of complex systems engineering where new capabilities emerge from compositions of existing ones, how will movements to cloud computing and outsourcing help or hurt those efforts?

If you can successfully tackle these questions with a coherent, holistic strategy, then you have defused the risk inherent to movement to outsourcing and/or cloud computing. On the other hand, if you rush into cloud computing and outsourcing strategies without thinking through all the issues we’ve discussed in this ZapFlash, you’ll be sunk before you know it.

The ZapThink take

Just like the Sirens calling to Odysseus in Homer’s Odyssey, the call of outsourcing and cloud computing will lead many enterprise IT ships to wreck on the rocks unless they can lash themselves to the mast of a holistic perspective of where the industry as a whole is heading. More importantly, the broad shifts in the industry that ZapThink’s 2020 vision of enterprise IT illuminates compel companies to think more broadly about their constant enterprise IT asset: information.

If it no longer matters where your IT is physically located and whether or not you actually own or operate the IT systems you depend on, then what IT department do you really need and what are they really doing? The answer: less hands-on technology and more governance, a sea change that represents the demise of the enterprise IT organization. Whether or not this transition develops into a full-blown crisis is entirely up to you.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at Zapthink.


SPECIAL PARTNER OFFER


SOA and EA Training, Certification,

and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.
You may also be interested in:

Thursday, September 23, 2010

Sonoa becomes Apigee, offers new and rebranded API management and analysis product lines

Sonoa Systems, a provider of application programming interface (API) solutions, has changed its name this week to Apigee.

While Sonoa originally offered a free API tools and management platform, Apigee now offers three product lines for enterprises, developers, and API providers of all sizes. The company now serves more than 7,000 developers and some 140 enterprises with API management services. [Disclosure: Sonoa Systems is a past sponsor of BriefingsDirect podcasts.]

“By unifying the company under one brand and launching our premium line, we can better serve the full spectrum of companies and developers using APIs to power their apps, mobile and multichannel strategies and business partnerships,” said Chet Kapoor, CEO, Apigee.

Traffic on the platform has been brisk. Apigee technology currently processes 2,500 GB of data per month and 25,000 messages per second, says the firm.

As I heard more about the role of APIs, and about managing and defining that traffic and those use patterns -- both incoming and outgoing -- I was reminded of the Big Data analysis value so many companies are building out.

What if you were able to analyze real-time data alongside real-time API activities? This may not be for everyone, but many mobile, e-commerce and service providers -- and a boat load of web-focused start-ups -- could develop some super insights.

Joining the analysis from APIs, systems logs, and data could be a killer business intelligence benefit. It might also spur new revenue by selling that analysis if you happen to find yourself at the juncture of APIs and data and either business or consumer behavior. Viva la real time analytics at scale!
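
As a back-of-the-envelope sketch of the kind of join being described, here is a minimal Python example that correlates hypothetical API-traffic records with business events by customer. The field names and records are invented; a real implementation would pull from an API analytics export and a system of record.

```python
# Toy sketch: join API-traffic records with business events to surface behavioral
# signals. All field names and records are invented for illustration.

from collections import defaultdict

api_calls = [   # e.g. exported from an API management platform's analytics
    {"customer": "acme",   "endpoint": "/checkout", "latency_ms": 120},
    {"customer": "acme",   "endpoint": "/checkout", "latency_ms": 480},
    {"customer": "globex", "endpoint": "/search",   "latency_ms": 90},
]

orders = [      # e.g. pulled from the commerce system of record
    {"customer": "acme",   "amount": 250.0},
    {"customer": "globex", "amount": 75.0},
]


def avg_latency_by_customer(calls):
    totals = defaultdict(lambda: [0, 0])          # customer -> [latency sum, call count]
    for call in calls:
        totals[call["customer"]][0] += call["latency_ms"]
        totals[call["customer"]][1] += 1
    return {cust: total / count for cust, (total, count) in totals.items()}


def revenue_by_customer(rows):
    revenue = defaultdict(float)
    for row in rows:
        revenue[row["customer"]] += row["amount"]
    return dict(revenue)


if __name__ == "__main__":
    latency = avg_latency_by_customer(api_calls)
    revenue = revenue_by_customer(orders)
    for customer in latency:
        print(f"{customer}: avg API latency {latency[customer]:.0f} ms, "
              f"revenue ${revenue.get(customer, 0):.2f}")
```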

Among the new and rebranded Apigee products:
  • Apigee Premium: Announced on Wednesday, Apigee Premium provides advanced features on top of the Apigee Free platform, including unlimited API traffic, advanced rate limiting and analytics, and developer key provisioning. Visit https://app.apigee.com/sign_up to sign up for the preview.

  • Apigee Free: A free tools platform launched last year for developers and providers to learn, test, and debug APIs, get analytics on API performance and usage, and apply basic rate-limits to protect their services.

  • Apigee Enterprise: An industrial-grade API platform for enterprises using APIs to fuel their mobile, multichannel, application and cloud strategies. Previously Sonoa Systems’ core product ServiceNet, Apigee Enterprise provides API visibility, control, management and security.
You may also be interested in:

Wednesday, September 22, 2010

Data center transformation requires more than new systems: there's also secure data removal, recycling, and server disposal

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

An often-overlooked aspect of data center transformation (DCT) is what to do with the older assets as newer systems come online. Much of the retiring IT equipment can hold sensitive data, may be a source of significant economic return, or at least needs to be recycled according to various regulations.

Improperly disposing of data and other IT assets can cause embarrassing security breaches, increase costs, and pose the risk of regulatory penalties. Indeed, many IT organizations are largely unaware of the hazards and risks of selling older systems into auction sites, secondary markets or via untested suppliers.

Compliance and recycling issues, as well as data security concerns and proper software disposition, should therefore be top of mind early in the DCT process, not as an after-thought.

In a recent podcast discussion, I tapped two HP executives on how to best manage productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of transitional systems during a managed upgrade process. I spoke with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services.

Here are some excerpts:
Helen Tang: Today there are the new things coming about that everybody is really excited about, such as virtualization, and private cloud. ... This time around, enterprises don’t want to repeat past mistakes, in terms of buying just piles of stuff that are disconnected. Instead, they want a bigger strategy that is able to modernize their assets and tie into a strategic growth enablement asset for the entire business.

Yet throughout the entire DCT process, there's a lot to think about when you look at existing hardware and software assets that are probably aged, and won’t really meet today’s demands for supporting modern applications.

How to dispose of those assets? Most people don’t really think about it nor understand all of the risks involved. ... Even experienced IT professionals, who have been in the business for maybe 10, 20 years, don’t quite have the skills and understanding to grasp all of this.

We're starting to see sort of this IT hybrid role called the IT controller, that typically reports to the CIO, but also dot-lines into the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology, as well as the financial aspects.

Jim O'Grady: We see that a lot of companies try to manage this themselves, and they don’t have the internal expertise to do it. Often, it’s done in a very disconnected way in the company. Because it’s disconnected and done in many different ways, it leads to more risks than people think.

You are putting your company’s brand at stake, through improper environmental recycling compliance, or exposing your clients, customers, or patients’ data to a security breach. This is definitely one of those areas you don’t want to read about in a newspaper to figure out what went wrong.

One of the most common problem areas is that our clients are caught unaware of the complexity of data security and of the e-waste legislation requirements that are out there, and especially of the pace at which they change.

We suggest that they have a well thought-out plan for destroying or clearing data prior to the asset decommissioning and/or prior to the asset leaving the physical premise of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on site, as well as do it off site.
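
As a purely conceptual illustration of the clear-it-before-it-leaves-the-premises step, here is a minimal Python sketch that overwrites a file with random data before deleting it. Real decommissioning relies on certified tooling and standards such as NIST SP 800-88, and SSDs and enterprise arrays need device-level sanitization, so treat this only as a sketch of the idea.

```python
# Conceptual sketch only: overwrite a file with random data before deleting it.
# Real decommissioning uses certified tools and standards (e.g. NIST SP 800-88);
# SSDs and enterprise arrays require device-level sanitization, not file overwrites.

import os


def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))    # replace the contents with random bytes
            f.flush()
            os.fsync(f.fileno())         # push this pass down to stable storage
    os.remove(path)


if __name__ == "__main__":
    with open("retired-asset-export.tmp", "wb") as f:
        f.write(b"sensitive customer data " * 100)
    overwrite_and_delete("retired-asset-export.tmp")
    print("file overwritten and removed")
```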

Have a well-established plan and budget up-front, one that’s sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.

Reams of regulations

E-waste legislation resides at the state, local, national, and regional levels, and they all differ. There's some conflict, but some are in line with each other. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to deal with a highest standard and pick someone that knows and has experience in meeting these legislative requirements.

Legislation resides at the state, local, national, and regional levels, and they all differ.



There are tremendous amounts of global complexities that customers are trying to overcome, especially when they try to do data center consolidation and transformation, throughout their enterprise across different geographies and country borders.

You're talking about a variety of regulatory practices and directives, especially in the EU, that are emerging and restrict how you move used and non-working product across borders. There are a variety of different data-security practices and environmental waste laws that you need to be aware of.

Partner beware

A lot of our clients choose to outsource this work to a partner. But they need to keep in mind that they are sharing risk with whomever they partner with. So they have to be very cautious and be extremely picky about who they select as a partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. ... If you don’t kick the tires with your partner, and it turns out the partner consists of a man, a dog, and a pickup truck, you just may have a hard time defending yourself as to why you selected that partner.

Also, develop a very strong vendor audit qualification and ongoing inspection process. Visit that vendor prior to the selection and know where your waste stream is going to end up. Whatever they do with the waste stream, it’s your waste stream. You are a part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what that vendor does with it.

You need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Total asset management

Enterprises should well consider how they retire and recover value for their entire end-of-use IT equipment, whether it's a PDA or supercomputer, HP or non-HP product. Most data center transformations and consolidations typically end with a lot of excess or end-of-use product.

We can help educate customers on the hidden risks of dispositioning that end-of-use equipment into the secondary market. This is a strength of HP Financial Services (HPFS).

Typically, what we find with companies trying to recover value for product is that they give it to their facilities guys or the local business units. These guys love to put it on eBay and try to advertise for the best price. But, that’s not always the best way to recover the best value for your data center equipment.

Your best bet is to work with a disposition provider that has a very, very strong re-marketing reach into the global markets, and especially a strong demonstrative recovery process.



We're now seeing it migrate into the procurement arm. These guys typically put it out for bid and select the highest bid from a lot of the open market brokers. A better strategy to recover value, but not the best.

Your best bet is to work with a disposition provider that has a very, very strong re-marketing reach into the global markets, and especially a strong demonstrative recovery process.

From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand their asset management strategy, and help them to personalize the financial asset ownership model that makes sense for them.

For example, if you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, there’s some level of refurbishment done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

We have skilled product traders within our product families who know how to hold product, and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and all of the recovery rates for the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product in small lot sizes to maximize that recovery value for the client.

Legacy support as core competency

We're seeing a big uptake in the need to support legacy product, especially in DCT. We're able to provide highly customized, pre-owned, authentic legacy HP product solutions, sometimes going back 20 years or more. The need for temporary equipment to scale out legacy data center hardware capacity that’s legacy-locked is an increasing one that we see from our clients.

Clients also need to ensure their product is legally licensed and they do not encounter intellectual property right infringements. Lastly, they want to trust that the vendor has the right technical skills to deal with the legacy configuration and compatibility issues.

Our short-term rental program covers new or legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test some software application on compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and how to put in place a good data-security plan.

We help them understand whether data security should be done on-site versus off-site, or is it worth the cost to do it on-site and off-site. We also help them understand the complexities of data wiping enterprise product, versus just the plain PC.

The one thing we help customers understand, and it’s the real hidden complexity, is how to set up an effective reverse logistics strategy.



Most of the local vendors and providers out there are skilled in wiping data for PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand, and it’s the real hidden complexity, is how to set up an effective reverse logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Tang: We reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we actually bring together your financial side, your operations people, and your CIOs -- all the key stakeholders in the same room -- and walk through these common issues that you may or may not have thought about to begin with. You can walk out of that room with consensus, with a shared vision, as well as a roadmap that’s customized for your success.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Tuesday, September 21, 2010

IBM acquires Netezza as big data market continues to consolidate around appliances, middle market, new architecture

IBM is snapping up yet another business analytics player. After purchasing OpenPages last week, Big Blue is now laying down $1.7 billion in an all-cash deal to acquire Netezza.

Netezza provides high-performance analytics in a data warehousing appliance that claims to handle complex analytic queries 10 to 100 times faster than traditional systems. Netezza appliances put analytics into the hands of business users in sales, marketing, product development, human resources and other departments that need actionable insights to drive decision-making.

With its latest business analytics acquisition, Steve Mills, senior vice president and group executive of IBM Software and Systems, says the company is bringing analytics to the masses.

“We continue to evolve our capabilities for systems integration, bringing together optimized hardware and software, in response to increasing demand for technology that delivers true business value,” Mills says. “Netezza is a perfect example of this approach.”

Big Blue’s long haul

Netezza fits in with IBM’s maturing business analytics strategy. Big Blue has long put an emphasis on data analysis and business intelligence (BI) as key drivers of IT infrastructure needs. The company has demonstrated a clear understanding that data analysis and BI can also be easily applied to business issues.

IBM’s relational database, DB2, also fits into the big picture. Over the years, IBM has built a strong family of database-driven products around DB2. Essentially, IBM has successfully worked to tie the data equation together with the needs of enterprises and the strength of their IT departments.

We continue to evolve our capabilities for systems integration, bringing together optimized hardware and software, in response to increasing demand for technology that delivers true business value.



While DB2 reaches into the past and supports the data needs of legacy and distributed systems and applications, new architectures around in-memory and optimized platforms for persistence-driven tasks are in vogue. While Netezza's strengths are in analytics, this architecture has other uses, ones we'll be seeing more of.

Fast-forward to the Netezza acquisition. The $1.7 billion grab shows that IBM is well aware that big data sets don’t lend themselves to traditional architectures for crunching data. IBM, along with its competitors, has been developing or acquiring new architectures that focus more on in-memory solutions.

Rather than moving the entire database or large caches around on disk or tape, then, new architectures have emerged where the data and logic reside closer together -- and the data is accessed from high-performing persistence.

For example, with Netezza appliances, NYSE Euronext has slashed the time it takes to load and extract massive amounts of historical data so it can run analytic queries more securely and efficiently, reducing run times from hours to seconds. Virgin Media, a UK provider of TV, broadband, phone and mobile services with millions of subscribers, uses Netezza across its product marketing, revenue assurance and credit services departments to proactively plan, forecast, and respond to the effect of pricing and tariff changes, enabling it to quickly respond with competitive offerings.
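
To give a flavor of the workload being described, here is a minimal sketch of a set-based analytic query issued from Python. The sqlite3 module stands in for a real warehouse driver, and the table, columns, and rows are invented rather than anything from NYSE Euronext's or Netezza's actual environment.

```python
# Sketch of the kind of set-based analytic query a warehouse appliance is built to
# accelerate. Table, columns, rows, and the sqlite3 stand-in driver are illustrative
# only; they do not reflect NYSE Euronext's or Netezza's actual environment.

import sqlite3   # stand-in for a real warehouse driver (ODBC/JDBC in practice)

ANALYTIC_QUERY = """
SELECT symbol,
       trade_date,
       AVG(price)  AS avg_price,
       SUM(volume) AS total_volume
FROM   trade_history
WHERE  trade_date BETWEEN '2010-01-01' AND '2010-06-30'
GROUP  BY symbol, trade_date
ORDER  BY total_volume DESC
LIMIT  10
"""


def run_report(conn):
    """Push the aggregation down to the database instead of pulling raw rows out."""
    return conn.execute(ANALYTIC_QUERY).fetchall()


if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE trade_history "
                 "(symbol TEXT, trade_date TEXT, price REAL, volume INTEGER)")
    conn.executemany("INSERT INTO trade_history VALUES (?, ?, ?, ?)",
                     [("HPQ", "2010-03-01", 42.1, 1000),
                      ("IBM", "2010-03-01", 128.5, 2500)])
    for row in run_report(conn):
        print(row)
```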

Business analytics consolidation

With the Netezza acquisition, the business analytics market is seeing consolidation as major players begin preparing to tap into a growing big data opportunity. Much the same as the BI market saw consolidation a few years ago -- IBM acquired Cognos, Oracle bought Hyperion, and SAP snapped up Business Objects -- vendors are now seeing big data analytics as an area that should be embedded into the total infrastructure of solutions. That requires a different architecture.

The competition is heating up. EMC purchased Greenplum, an enabler of big data clouds and self-service analytics, in July. Both companies are planning to sell the hardware and software together in appliances. The vendors tune and optimize the hardware and software to offer the benefits of big data crunching, taking advantage of in-memory architecture and high-performance hardware.

Expect to see more consolidation, although there aren’t too many players left in the Netezza space. Acquisition candidates include data management and analysis software company Aster Data Systems and Teradata with its enterprise analytics technologies, among others. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Meanwhile, Oracle this week at OpenWorld is pushing against the market with its new Exadata product. The battle is on. My take is that these purchases are for more than the engines that drive analytics -- they are for the engines that drive SaaS, cloud, mobile, web and what we might call the more modern workloads ... data-intensive, high-scaling, fast-changing and services-oriented.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Monday, September 20, 2010

Morphlabs eases way for companies to build private cloud infrastructures, partners with Zend

Morphlabs, a provider of enterprise cloud architecture platforms, has simplified the process of building and managing an internal cloud for enterprise environments -- enabling companies to create their own private cloud infrastructure.

The Manhattan Beach, Calif. company today announced a significant upgrade to its flagship product, mCloud Controller. The enhanced version introduces Enterprise Cloud Architecture (ECA), a new approach that provides enterprises with immediate access to the building blocks and binding components of a fault tolerant, elastic, and highly automated platform.

Morphlabs also announced a partnership with Zend Technologies Ltd., whose Zend Server will be shipped as part of the mCloud Enterprise, said Winston Damarillo, CEO at Morphlabs.

mCloud Controller is a comprehensive cloud computing platform, delivered as an appliance or virtual appliance, that also provides open mCloud APIs (you can manage the ECA cloud from an iPad, for example). To support the leading platforms, mCloud Controller will have built-in, ECA-compliant support for Java, Ruby on Rails, and PHP.
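
Morphlabs has not published the mCloud API details here, so purely as an illustration of what driving a cloud controller through an open REST API can look like, here is a hypothetical Python sketch. The endpoint, fields, and token are invented and do not document the actual mCloud interface.

```python
# Hypothetical sketch of provisioning an instance through a cloud controller's REST
# API. The base URL, paths, fields, and auth header are invented; they do not
# document the actual mCloud API.

import json
import urllib.request

BASE_URL = "https://mcloud.example.com/api/v1"   # placeholder endpoint
API_TOKEN = "REPLACE_ME"                         # placeholder credential


def provision_instance(name: str, flavor: str, image: str) -> dict:
    payload = json.dumps({"name": name, "flavor": flavor, "image": image}).encode()
    request = urllib.request.Request(
        f"{BASE_URL}/instances",
        data=payload,
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {API_TOKEN}"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:   # needs a live endpoint to run
        return json.load(response)


if __name__ == "__main__":
    instance = provision_instance("php-app-01", "small", "zend-server-php")
    print("provisioned:", instance)
```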

Fittingly for enterprise private clouds, the Morph offering also provides direct integration with mainstream middleware via standards-based connectors. It also supports a range of hypervisors, from KVM to Xen and VMware, and allows other cluster managers to be used as well.

Look for Morphlabs to seek to sell to both service providers and enterprises for the benefits of compatible hybrids. Of course, we're hearing the same from Citrix, VMware, Novell, HP, etc. It's a horse race out there for a de facto hybrid cloud standard, all right.

Productivity gains

“PHP has been broadly adopted for the productivity gains it brings to Web application development, and because it can provide the massive scalability that e-commerce, social networking and media sites require,” said Matt Elson, vice president of business development at Zend. “Integrating Zend Server into Morphlabs’ mCloud Controller enables IT organizations to leverage the elasticity of cloud computing and automate the process of deploying highly reliable PHP applications in the cloud.”

Key features of the mCloud Controller with ECA include:
  • Uniform environments from development to production to help users simplify system configuration. Applications can grow as needed, while maintaining a standardized infrastructure for ease of growth and replacement.

  • Simplified system administration with automated monitoring and self-healing out of the box to avoid complicated system tuning. mCloud Controller also comes with graphical tools for viewing system-wide performance.

  • Self-service resource provisioning, which frees the IT department from numerous application provisioning requests. Without any system administration skills, authorized users can start and stop compute instances and provision applications as needed. Billing is also included within the system.

  • Streamlined application management automates the process of deploying, monitoring and backing-up applications. Users do not have to deal with configuration files and server settings.
The mCloud Controller v2.5 is available now in the United States, Japan and South East Asia. For more information contact Morphlabs at info@mor.ph.

You may also be interested in: