Monday, November 10, 2008

Solving IT energy conservation issues requires holistic approach to management and planning, say HP experts

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The critical and global problem of energy management for IT operations and data centers has emerged as both a cost and capacity issue. The goal is to find innovative means to conserve electricity use so that existing data centers don't need to be expanded or replaced -- at huge cost.

Closely matching tight energy supply with the lowest possible IT energy demand means considering the entire IT landscape -- an enterprise-by-enterprise examination of the "many sins" of energy mismanagement. Wasted energy use, it turns out, has its origins all across IT and business practices.

To learn more about how enterprises should begin an energy-conservation mission, I recently spoke with Ian Jagger, Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group, and Andrew Fisher, manager of technology strategy in the Industry Standard Services group at HP.

Here are some excerpts:
Data centers typically were not designed for the computing loads that are available to us today ... (and so) enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring enough power to be able to supply the future capacity needs coming from their IT infrastructure.

Typically the cost of energy is now approaching 10 percent of IT budgets and that's significant. It now becomes a common problem for both of these departments (IT and Facilities) to address. If they don't address it themselves then I am sure a CEO or a CFO will help them along that path.

Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.
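To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. The workload figure is arbitrary, and the 4.25x factor simply restates "325 percent more efficient," so treat these as illustrative assumptions rather than vendor benchmarks:

    # Rough sketch of the performance-per-watt math behind a server refresh.
    # All numbers are illustrative assumptions, not published figures.
    old_perf_per_watt = 1.0                        # normalized single-core baseline
    new_perf_per_watt = old_perf_per_watt * 4.25   # "325 percent more efficient"

    workload = 100.0                               # arbitrary units of work
    old_power = workload / old_perf_per_watt       # power to run it on old servers
    new_power = workload / new_perf_per_watt       # power on refreshed servers

    print(f"Power for the same workload: {old_power:.0f} -> {new_power:.1f} units "
          f"({1 - new_power / old_power:.0%} reduction)")
    # Power for the same workload: 100 -> 23.5 units (76% reduction)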

Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs. One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money.

You need to take a complete, end-to-end approach that involves everything from analysis of your operational processes and behavioral issues -- how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things -- to optimizing the efficiency of the power delivery and making sure that you are getting the best performance per watt out of your IT equipment itself.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not a great way to save energy. ... If you look at virtualizing the environment, then the facility design or the cooling design for that environment would be different than if you weren't in a virtualized environment. Suddenly you are designing around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria.

You’re using four to eight times the wattage in comparison. That, in turn, requires stricter floor management. ... But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
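A minimal sketch of that multiplying effect, using power usage effectiveness (PUE) -- total facility power divided by IT power -- as the overhead factor. The 2.0 value below is an assumed, illustrative figure, not one cited in the discussion:

    # Every watt of IT load eliminated also eliminates the power-delivery and
    # cooling overhead that supported it. PUE captures that overhead factor.
    pue = 2.0                  # assumed: 1 W of overhead per 1 W of IT load
    it_watts_saved = 1000      # e.g., watts eliminated through virtualization

    facility_watts_saved = it_watts_saved * pue
    print(f"Removing {it_watts_saved} W of IT load saves "
          f"{facility_watts_saved:.0f} W at the facility level.")
    # Removing 1000 W of IT load saves 2000 W at the facility level.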

This is a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.
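One way to see why the opportunities are so numerous: the stages in that chain multiply, so modest per-stage losses compound into a large end-to-end loss. A toy calculation with assumed, illustrative stage efficiencies:

    # End-to-end power delivery efficiency is the product of the stages,
    # so a few percent lost at each step compounds quickly.
    stages = {
        "utility feed to UPS": 0.98,
        "UPS conversion": 0.90,
        "power distribution": 0.95,
        "server power supply": 0.85,
    }

    overall = 1.0
    for name, efficiency in stages.items():
        overall *= efficiency

    print(f"Overall delivery efficiency: {overall:.1%}")
    # Overall delivery efficiency: 71.2%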

We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work, making our customers' IT assets more efficient.

To complicate it even further, there are a lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve.

The principal problem is that [energy measurements] tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically, we can get beyond that, look at annualized values of energy usage, and then take measurements from that point.

So, there is rarely a single silver bullet to solve this complex problem. ... The approach that we at HP are now taking is to move toward a new model, which we call the Hybrid Tiered Strategy, with respect to the data center. In other words, it's a modular design, and you mix tiers according to need.

One thing that was announced recently is relevant to what Ian was just talking about: the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

... There’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.

... There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Thursday, November 6, 2008

ITIL requires better log management and analytics to gain IT operational efficiency, accountability

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Read complete transcript of the discussion.

Implementing best practices from the Information Technology Infrastructure Library (ITIL) has become increasingly popular in IT departments. As managers improve IT operations with an eye to process efficiency, however, they need to gain operational accountability through visibility and analytics into how systems and networks are behaving.

Innovative use of systems log management and analytics -- in the context of entire IT infrastructures -- produces an audit and performance data trail that helps both implement and refine models such as ITIL. Compliance is also a growing requirement, one that can be addressed through verification tools such as systems monitoring and analytics in the context of ITIL best practices.

To learn more about how systems log tools and analysis are aiding organizations as they adopt ITIL, I recently spoke with Sean McClean, principal at consultancy KatalystNow, and Sudha Iyer, director of product management at LogLogic.

Here are some excerpts:
IT, as a business, a practice, or an industry, is relatively new. The ITIL framework has always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT. ... We are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business.

Because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks of what we all agree upon is the best way to handle things. ... When organizations look at ITIL, they often assume that it's something you can simply purchase and plug in. It doesn't quite work that way.

ITIL is general guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are sets of policies that go with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are actually being enforced or violated, or what the gap is? How do they constantly improve upon their security posture? That's where it's important to collect activity data in your enterprise on what's going on.

Our log-management platform ... allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on -- their storage capacity and planning for the future, how many more firewalls are required, or the usage pattern of a particular server in the organization.
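To make that concrete, here is a minimal, hypothetical sketch of the pattern being described: normalized log records from different sources, scanned against a simple policy rule. The field names and the rule are illustrative assumptions, not LogLogic's actual schema or API:

    # Hypothetical sketch: scan aggregated, normalized log records for
    # violations of a stated policy. Fields and the rule are made up.
    from datetime import datetime

    records = [
        {"time": datetime(2008, 11, 5, 2, 14), "source": "firewall-01",
         "user": "jsmith", "action": "ssh-login", "allowed": True},
        {"time": datetime(2008, 11, 5, 2, 15), "source": "firewall-01",
         "user": "guest", "action": "ssh-login", "allowed": True},
    ]

    def violates_policy(record):
        """Example rule: no successful 'guest' logins through the firewall."""
        return (record["user"] == "guest"
                and record["action"] == "ssh-login"
                and record["allowed"])

    for r in records:
        if violates_policy(r):
            print(f"{r['time']:%Y-%m-%d %H:%M} {r['source']}: "
                  f"policy violation by {r['user']}")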

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in. ... Our log management solution allows [enterprises] to create better control and visibility into what actually is going on in their network and their systems. From many angles, whether it's a security professional or an auditor, they're all looking at whether you know what's going on.

You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, "Okay, here is so much disk space that is unutilized," or, "It's the end of the quarter; we need to bring in more virtualization of these servers to get our accounting to close on time."

[As] the industry matures, I think we will see ... people looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?”

There has been a little bit of that and certainly we have ITIL certification processes in all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Tuesday, November 4, 2008

Genuitec, Eclipse aim for developer kit to smooth rendering of RIAs on mobile devices

The explosion in mobile Web use, due partly to the prevalence of the iPhone and other smart-phone devices -- and a desire to make developers less grumpy -- have led Genuitec to propose a new open-source project at the Eclipse Foundation for an extensible mobile Web developer kit for creating and testing new mobile rich Internet applications (RIAs).

Coming as a sub-project under the Device Software Development Platform (DSDP), the FireFly DevKit project is still in the proposal phase, and the original committers are all from Genuitec, Flower Mound, Tex. [Disclosure: Both Genuitec and the Eclipse Foundation are sponsors of BriefingsDirect podcasts.]

Included in the developer kit will be a previewer and a debugger, a Web rendering kit, a device service access framework, a deployment framework, and educational resources.

The two tool frameworks will enable mobile Web developers to visualize and debug mobile Web applications from within an Eclipse-based integrated development environment (IDE). Beyond this, the FireFly project will develop next-generation technologies and frameworks to support the creation of mobile Web applications that look and behave like native applications and are able to interact with device services such as GPS, accelerometers, and personal data.

The issue of developer grumpiness was raised in the project proposal:
When programming, most developers dislike switching between unintegrated tools and environments. Frequent change of focus interrupts their flow of concentration, reduces their efficiency and makes them generally grumpier :). For mobile web application development, web designers and programmers need to quickly and seamlessly perform incremental development and testing directly within an IDE environment rather than switching from an IDE to a device testing environment and back again.
One goal of the Web rendering toolkit is to make Web applications take on the look and feel of the host mobile device. For example, an application could run in the Safari browser on an iPhone, yet appear similar to a native iPhone app.

Initially, example implementations of the project frameworks will be provided for the iPhone. As resources become available, examples for the G1-Android platform will also be developed. The project will actively recruit and accept contributions for other mobile platforms such as Symbian, Windows Mobile and others.

The current timeframe of the project calls for it to piggyback an incubation release on top of the Eclipse 3.5 platform release. The entire project proposal is available on the Eclipse site.

Friday, October 31, 2008

BriefingsDirect Analysts take Microsoft's pulse: Will the software giant peak in next few years?

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 32, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events, with a panel of IT analysts and guests.

In this episode, recorded Oct. 10, 2008, our experts examine the state of Microsoft at the onset of the annual Professional Developers Conference. Two narratives emerge from our roundtable discussion. The first is that Microsoft is behind on many new IT trends and is tied to past business models. The opposing view is that Microsoft will ride pedestrian app dev, business intelligence, data services, Xbox, unified communications, virtualization, and cloud computing to become bigger and more pervasive than ever.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Dave Linthicum, independent SOA consultant at Linthicum Group; Brad Shimmin, principal analyst at Current Analysis; Mike Meehan, a senior analyst at Current Analysis; and Joe McKendrick, independent analyst and prolific blogger. Our discussion is hosted and moderated by yours truly, Dana Gardner.

Here are some excerpts:
Kobielus: There's some validity to the viewpoint that Microsoft's growth potential has been capped on the business side -- in packaged applications and in software- and application-development tools -- in the sense that the entire product niche of the service-oriented architecture (SOA) universe is rapidly maturing.

The vendors in this space -- the SOA vendors, the business-intelligence (BI) vendors, the master data management (MDM) vendors -- are going to realize revenue growth and profitability. Those who survive this economic downturn and thrive in the next uptick will be those who very much focus on providing verticalized and customized applications on a consulting or professional-services basis.

In that regard, Microsoft is a bit behind the eight ball. They don't really have the strength on the consulting, professional services, and verticalization side that an SAP, an Oracle, or an IBM can bring to the table.

Microsoft, if they want to continue to grow in the whole platform and application space and in the whole SOA universe, needs to put a greater focus on consulting services.

McKendrick: Microsoft has its own economy. No matter what happens to the economy at large, Microsoft has its own economy going, and just seems to get through all this.

What's driven Microsoft from day one, and continues to do so, is that Microsoft is the software company for Joe the Plumber. That's their constituency, not necessarily Joe the Developer. They cater to Joe the Developer and Joe the CIO, and Joe the Analyst certainly likes to check in on what they are doing. It's this whole idea of disruptive technology. They have always targeted the under-served and un-served parts of the marketplace and moved up from there.

... The base of Microsoft, these companies that are using Microsoft technology, don’t necessarily get virtualization or cloud computing. They just want a solution installed on their premises and want it to work.

Linthicum: I think they are behind the eight ball. A lot of the strategy I've seen coming out of Microsoft over the last few years, especially as it relates to cloud computing, SOA, and virtualization, has been inherently flawed. They get into very proprietary things very quickly. It really comes down to how they are going to sell an additional million desktop operating systems.

Ultimately, they just don't get where this whole area is going. ... We're heading into an area where they may not be as influential as they think they should be. Not only may they be behind the eight ball, but lots of other organizations that are better at cloud computing, virtualization, and things like that, and have a good track record there, are going to end up owning a lot of the space.

Microsoft isn’t going to go away, but I think they’re going to find that their market has changed around them. The desktop isn't as significant as it once was. People aren’t going to want to upgrade Office every year. They’re not going to want to upgrade their desktop operating systems every year. Apple Macs are making big inroads into their market space, and it’s going to be a very tough fight for them. I think they’re going to be a lot smaller company in five years than they are today.

Meehan: Dave is absolutely right in that the one area that Microsoft never really conquered that it needed to conquer, given its strength in the desktop, is the handheld. If they are not going to be there with the handheld long-term, that’s a major growth area that they are going to miss out on. That’s where a lot of the business is going to shift to. ... On the SOA side, as I said before, Microsoft is just trying to be as service-oriented as they can for users who are trying to be not SOA-driven, but "As Service-Oriented As Possible" (ASOAP).

In fact, make that an acronym, ASOAP. There are going to be a number of users who are not going to go fully into SOA, because [building] an enterprise architecture is too hard to do, too hard to maintain. They're never going to quite figure that out. They are just going to try to be tactical and ASOAP. Microsoft will try to service them and hold that part of their business.

What’s the next big thing they’re going to do? Joe referred to Microsoft having come up with that in previous downturns. I don’t see where they have got that right yet, and so I think that leads to them being smaller long-term.

Shimmin: [Microsoft is going to have an opportunity to change this perception] simply because they don't have to. I think back to a number of points that have been made here: to be successful, Microsoft doesn't need to convince the world. It just needs to convince the people that attend the PDC. They have such an expansive and well-established channel, with all the little plumber-developers running around building software with their code, that just as 40 is the new 30, Microsoft is really kind of the new Apple, in a way.

They don't need to be Oracle to succeed; they really need to have control over their environment and provide the best sort of tooling, management, deployment, and execution software that they can for those people who have signed on to the Microsoft bandwagon and are taking the ride with them. ... (Microsoft) is kind of capped out in many ways relative to the consumer market. But, gosh, they have shown with things like SharePoint, for example, that Microsoft is able to virally infest an organization with their software without having to even lift a finger.

They’ll continue to do that, because they have this Visual Basic mentality. I hate to say it, but they have the mentality of “Let’s make it as simple as possible” for the people that are doing ASOAP, as Mike said, that don’t need to go all the way, but really just need to get the job done. I think they’ll be successful at that.

Kobielus: I think Microsoft will be larger, and they will be larger for the simple reason that they do own the desktop, but the desktop is becoming less relevant. But now, what’s new is that they do own the browser, in terms of predominant market share or installed base. They do own the spreadsheet. They do own the portal. As Brad indicated, SharePoint is everywhere.

One of the issues that many of our customers at Forrester have hit on -- CIO, CTO, that level -- is that SharePoint is everywhere. How do they manage SharePoint? It's a fait accompli, and enterprises have to somehow deal with it. It’s the de-facto standard portal for a large swath of the corporate world. Microsoft, to a great degree, owns the mid-market database with SQL Server.

So owning so many important components of the SOA stack, in terms of predominant market share, means that Microsoft has great clout to go in any number of directions. One direction in which they're clearly going in a very forceful way that brings all this together is BI and online analytical processing (OLAP). The announcements they made a few weeks ago at the BI conference show where Microsoft is clearly heading. They very much want to become and remain a predominant BI vendor in the long run.

Gardner: ... On the total cost perspective, I think what I am hearing from you is that if you go all Microsoft all the time, there are going to be efficiencies, productivity, and cost savings. Is that the mantra? Is that the vision?

Shimmin: That's exactly right, Dana. That's what they're banking on, and that's why I think they are the next Apple, in a way, because they are downtrodden compared to some of the other big guns we're talking about -- Oracle, SAP, and IBM -- inside the middleware space. But that doesn't matter, because they have a loyal following which, if you have ever attended these shows of theirs, you'd see is just as rabid as Mac fans in many ways.

Microsoft is going to do their best job to make their customers' lives as easy as possible, so that they remain loyal subjects. That's a key to success. That's how you succeed in keeping your customers.

Linthicum: Ultimately, people are looking for open solutions that are a lot more scalable than the stuff that Microsoft has to offer. To the point that was just made, there are a bunch of huge Microsoft fans that will buy anything the company sells; that's the way those shops are. But the number of companies doing that right now is shrinking.

People are looking for open, scalable, enterprise-ready solutions. They understand that Microsoft is going to own the desktop, at least for the time being, and will keep them there. But, as far as back-office things and some of what Microsoft has put up as huge enterprise-class solutions, people are going to opt for other things right now.

It's just a buying pattern. It may be a perception issue or a technological issue. I think it’s a matter of openness or their insistence that everything be proprietary and come back to them. I heard the previous comment that looking at all Microsoft all the time will provide the best bang for the buck. I think people are very suspicious of that.

Gardner: We've heard quite a bit on this cloud operating system from Red Hat, Citrix, and VMware, and IBM and HP have talked it up a little bit. No one's really come out with a lot of detail, but clearly this seems to be of interest to some of the major vendors. What is the nature of this operating system for the cloud, and does it have the same winner-take-all advantage for a vendor that the operating system on the desktop and departmental server had?

Linthicum: I think it does in virtualization. Once one vendor gets that right -- people understand it, there are good standards around it, good use cases, and a good business case -- that particular vendor is going to own that space.

I'm not sure it's going to be Microsoft. They're very good at building operating systems, but judging from my Vista crashes, which happen once a day, they are not that good.

Also, there are lots of guys out there who understand the virtualization space and the patterns for use there. The technology they’re going to use, the enabling standards, are going to be very different than what you are going to use on a desktop or even a small enterprise departmental kind of problem domain. Ultimately, a large player is going to step into this game and get a large share of this marketplace pretty quickly, because the cost and ease of moving to that particular vendor is very low.

... These virtualization operating systems that are enterprise bound or even in a gray area with the cloud are going to come from somebody else besides Microsoft.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Thursday, October 30, 2008

Microsoft's cloud push lacks economic expediency, sidesteps catalysts for ecology-wide adoption

Few topics among economists and business leaders engender the same enthusiasm as productivity. Doing more for less seems the universal balm for individuals, businesses and markets. If we all have growing productivity, well, then everything practically takes care of itself. You'll find few dissenters.

Ask 50 economists, however, how much IT specifically has contributed to the productivity surge since the 1970s, and you'll get 50 different answers. They know IT has been good for work efficiency, sure, but just how so and in what dollops? No clue.

Is it better, cheaper software? Is it Moore's Law of escalating power and value from microprocessors? Is it the connected hive of the local area network, or the social fabric of the Internet? The behavioral shifts to "always on" data access, or the knowledge sharing of ad hoc, geography-free collaboration sessions -- are they behind the productivity boom of the past (apparently closing) bull economic cycle?

Yes, of course, to all. But how and to what degree these complex causes and effects form the formula for overall productivity is as elusive as predicting the weather from butterfly wing beats. Somewhere in the fog of IT's storm into our consciousness, work habits, and business strategies lies the answer. The algorithm of IT's productivity influence over individual people and their societal actions has yet to be written. Too many ghosts. Too many machines.

Nonetheless, productivity -- or rather the expectation of whole new dimensions of productivity -- is what has been behind the hype and strengthening embrace of cloud computing concepts over the past two years. In a sense, cloud computing is the unifying theory of IT productivity, which has been indisputably powerful, if complexly disjointed, over the past 25 years.

Cloud computing takes many of the essential elements of IT-spurred productivity and serves them up in a whole greater than the sum of the parts. Improved utilization, higher efficiency, better packaging, tuned and refined software, exploitation of network effects, viral effects from social networking, less energy per server or packet delivered -- these are just a few of the foundations of cloud computing. What's different is that these variables are working much more harmoniously, with common planning and strategic architectural forethought.

A typical enterprise data center landscape is more a window into the past of IT than the future. The chilled realms of raised floors demonstrate how inefficient design becomes when it springs from unanticipated but compelling paradigm shifts in computing. The barely backwards-compatible data center of today eats up an ever larger chunk of money while delivering less actual improvement in productivity.

Innovation is held hostage by the need to keep the transaction processing monitor properly integrated to the middleware so the n-tier architecture can join data from the right hand to the left hand during a sales call using a mobile supercomputer generating pretty pictures that completely dim after 145 minutes.

We are all well aware of the price of rapid technological change as the progeny of helter-skelter IT adaptation and advancement over the past decades. It all probably could not have happened any differently, but it also does not need to continue like this.

Cloud computing entices and seduces because it is, after all, quite different. IT has matured, and the requirements of the workload are appreciated sufficiently to architect data centers holistically and effectively. Cloud computing unifies the most up-to-date architectural concepts around data center resources of, for, and by productivity. Total cost considerations and the careful association of all of the parts and elements -- working in concert -- these are the high-level requirements of an IT cloud. You can build it right for the workload and allow it to dynamically adjust and accept new workloads. It's more than just the next big thing. It's more than able to drag along all the old stuff, too.

When the entire support infrastructure is designed properly, with all the technical and productivity requirements aligned, then IT is transformed. Leverage standards, employ best practices, depend on efficiencies of scale -- and more than incremental change occurs. It does a lot more, more flexibly, for a lot less. Cloud offers a whole new level of productivity, directly attributable to advanced IT. Services can be assembled based on specific work needs, independent of the underlying platforms. Less waste, more haste, all around.

Why then is Microsoft tepid in its march to cloud? Why is it "software plus services," not just services? Why would such productivity improvements that clouds afford -- at a time when economic conditions demand rapid transformational advances -- be dribbled out as Microsoft has done this week at its Professional Developers Conference in Los Angeles? What exactly is Microsoft waiting for?

Most of us observers expected Microsoft to move to the cloud now, based on the success of Amazon Web Services and Google. But the apparent pace is to offer developers training wheels, little more. The pricing -- sort of important when the whole goal is about economics and productivity -- remains missing.

How can architects, CFOs, developers, ISVs, channel partners -- in essence, the entire Windows global ecology of participants -- move one step forward without knowing the pricing, in terms of form, direction, and dollars and cents?

My cynical side says that Microsoft wants to accomplish two things with its Azure initiatives. One, to suck the oxygen out of the cloud market (get it, Azure ... no oxygen) and slow the pace of innovation and investment around clouds and cloud ecologies. And two, to make sure the "software plus services" transition comes more slowly than market demand might otherwise enjoy. Why swap out software (with a 60 percent margin) for services (with a 15 percent margin) any faster than you must?

The answer, of course, is productivity. For many years I have not been sure whether Microsoft advances its users' productivity at any pace other than one set by, well ... Microsoft. The cloud approach may be different from IT as usual over the past 20 years, but so far Microsoft's approach to productivity a la cloud seems about the same.

How might this all be different? How might the new, more productive chapter in IT -- of swift yet appropriate adoption of cloud supported IT resources -- get going faster?

Microsoft could spur adoption and advancement across the cloud ecology by providing stunningly compelling pricing for ISVs and enterprise developers to build and deploy their applications using Azure and platform as a service now. Microsoft would help the makers of applications succeed quickly by making the services easily available to the huge markets that Microsoft is arbiter of -- both business and consumer.

I can even accept it if Microsoft is choosy and favors its own tools, APIs, platforms, data formats, communications protocols, and existing partners. Make a Windows-only cloud, fine, but make it.

Apple, with its online store for developers of iPhone applications and services (which favors the Mac world in a big way), has shown just how powerful this approach can be. Microsoft could become the best friend of every Visual Studio, PHP, and Eclipse developer and business by helping create the best applications for the least total cost, all as a service.

Microsoft could decide and declare what applications it will and won't provide as Azure services itself, allowing a huge market for others to build what's left and sell the services on a per-use basis to users (perhaps driving them to consume other Microsoft services). Deals could be made on application portability, but optimally the market should pick cloud winners based on value and reach. May the best cloud for both developers and users win -- it would be a huge win.

Redmond could help those applications that provide good value find an audience quickly. Maybe Microsoft could sell or share metadata about users' preferences and requirements, so the applications are even more likely to succeed. That would include making pathways to the vast Web consumer markets, via MSN and all its Web services, available to those that build on the Azure platform. Maybe Yahoo finds its way into the mix. Microsoft could offer advertising-subsidized and pay-per-use models, or combinations of the two, for media and entertainment companies, for example, to load their stuff onto the Azure cloud, or build their own Azure clouds. It might compete effectively against Google as a result.

To me, these only scratch the surface of the vast and rich ecology of partners and customers that would emerge from an accessible and productivity-priced Microsoft cloud. Done right, myriad specialized value-added business services and consumer services would spring up, at multiple abstractions, on top of the essential base services that the Microsoft cloud and APIs provide. It would be a very good business for Microsoft, but an even better business growth opportunity for all of the other players. The pie would grow, and productivity could soar. Users would get better apps and services at low and predictable cost.

There could be a vast and rich community that thrives in the Microsoft cloud's ether. Or there could be a dark Microsoft cloud of, for and by Microsoft applications and services. Redmond could shoot for the moon again, but likely the other clouds will get in the way. Why risk the ecology play for trying to have it all Microsoft's way? That's why time is critical.

Microsoft, at least based on the tepid pace of the Azure roadmap as laid out this week, is more interested in hedging bets and protecting profits than in spurring on productivity and providing economic catalysts to rich new potential ecologies of online, services-driven businesses. Any continued delay to cloud is the giveaway of Microsoft's true intentions.

If Microsoft's own business interests prevent it from realizing the full potential of cloud computing, or make it try to damp down the cloud market generally, then Microsoft is a drag on the economy at a time when that is the last thing that's needed. And yet Microsoft could do cloud better than any other company on Earth.

Microsoft needs to decide whether it really wants to be in the software or the services business. Trying to have it both ways for an indeterminate amount of precious time, in effect delaying the advancement of serious productivity, seems a terrible waste and a terrible way to treat its community.

The not-trivial risk for Microsoft is that in five years it won't be leading in either the software or the services business any more.

Monday, October 27, 2008

Identity governance evolves to must-do item on personnel management and IT security checklist

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: SailPoint Technologies.

Read complete transcript of the discussion.

Security and risk concerns around personnel, applications, and IT systems access have never been more urgent. Properly managing identity information and access rules for users of enterprise applications and systems has evolved quickly from supporting hiring and firing to also gaining insight into ongoing improper use and abuse patterns.

Shoddy provisioning of users on and off of IT systems and services won't pass muster with internally mandated best security practices, not to mention becoming a major liability in a new era of mandated regulation and oversight. Guessing which users have access to which discrete services from inside or outside of the company won't fly in a virtualized and cloud-augmented world.

The alternative to guesswork moves IT operators and security officers closer to the level of identity governance, a step above simply granting access and privileges. It means becoming proactive in managing roles and access across multiple dimensions in a business, across an identity governance lifecycle, for as long as users can interact with a company and its assets.

To learn more about what identity governance means and how big the stakes are for enterprises and highly regulated vertical industries, I recently spoke with Mark McClain, the CEO and founder of SailPoint Technologies, along with Jackie Gilbert, vice president of marketing and also a founder at SailPoint.

Here are some excerpts:
If you look back over most of this decade, back to the turn of the century -- it's still funny to say that phrase -- you see a series of issues with breaches. There have been issues with fraud or potential fraud, everything from Enron to things that happened with other companies where there were questionable practices, and then various clear issues of fraud or criminal activity.

And all of that together has brought about a new focus on privacy, financial oversight, and good governance, which is, in many cases, all related to the management of risk. ... There is a lot of churn in the financial markets and in the companies that make up those markets, where people are potentially moving inside of companies, changing jobs, lots of potential lay-offs happening.

That's when these issues of good governance, good controls over who has access to which critical information become very, very acute. ... Now, you have executives, boards, and business managers, who are being asked to be accountable and to gauge the risk and the effectiveness of controls around identity. ... Some industries have 30-percent churn, with people coming in and out of the organization. All that makes this an extremely difficult problem, just getting proper visibility.

Those people are being asked to use tools to approve, certify, and determine whether the access privileges and accounts that users hold are correct, and do not place businesses at risk. So, if you think about it, it has actually forced the marriage of business and IT around this issue of identity governance.

We now have the auditors, both internal and external, and/or the compliance people who want to have a say, or a seat at the table, to talk about how well we are managing these kinds of access privileges and what risks are involved, when they are not managed well. ... One of the dirty, dark secrets today is that governance and compliance have become harder, and auditors have been forcing more frequent and periodic review of the access information. Quarterly or annually, these managers and applications owners need to re-certify who has access to what.

Another dirty secret in the industry right now is that managers and applications owners must sign-off on these reports, but they don't understand them, because those reports are generated out of the IT systems and they are incomprehensible to the business people.

You certainly have the business people paying attention now because you have senior management who are highly motivated to avoid being the next headline. They don't want their company showing up out there with Cox Communications, the IRS, Wachovia, and any number of companies like Dupont, which have hit the headlines in the last two or three years with some sort of significant breach related to access.

There is a little bit of a hot potato going on now, where IT and security groups are saying, "Hey, I am not going to sign up and own this problem entirely, because I don't have the business context to know exactly what does or doesn't represent risk. You business people have to define that for us."

It really does start with answering the fundamental question that most companies wrestle with, which is "Who has access to what?" One of my customers has joked about the fact that on the day you start with the company, you have access to nothing, and on the day you leave, you have access to everything. Quite often, the only person who actually knows all of the access privileges I may have after 15 years at a company is me.

There have been multiple groups I have moved through, multiple help desks, and IT organizations that have been part of granting me access over the years. So, it's quite probable that, literally, only I understand all of the privileges I have as an employee -- and that's a problem.

[Our solution] is pretty analogous to business intelligence (BI) and even data warehousing or data mining, if you will. Our approach is to take a very lightweight, read-only access to the data. We pull entitlement data and account data from applications and servers throughout the enterprise and we aggregate that into what is basically an entitlement warehouse.

We physically create a common data view of users and their entitlements. What that gives you is not only the visibility in one, single place, but it gives you the business context to better understand it. And it allows us to do some automation of controls and policy enforcement, and some risk assessment. It's amazing the value you can derive, once you get the data all in one place and normalized, so that you can apply all kinds of rules and logic to it.
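As a rough illustration of that warehouse idea -- with a made-up data model, not SailPoint's actual schema -- merging per-application entitlement exports into one normalized, per-user view might look like this:

    # Hypothetical sketch: aggregate per-application entitlement exports
    # into a single "entitlement warehouse" keyed by user.
    from collections import defaultdict

    app_exports = {
        "ERP": [("alice", "vendor.create"), ("bob", "vendor.pay")],
        "Payroll": [("alice", "vendor.pay"), ("carol", "report.view")],
    }

    warehouse = defaultdict(set)
    for app, rows in app_exports.items():
        for user, entitlement in rows:
            warehouse[user].add((app, entitlement))

    # One normalized view: everything each user can do, across all systems.
    for user, grants in sorted(warehouse.items()):
        print(user, sorted(grants))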

... It's all in one place. They’re not getting a single spreadsheet per application. They’re getting it all centralized per employee or per application, however they want to see it.

We can also scan that data, looking for policy violations. A good example of that would be what we call "toxic combinations," such as “you can't have an employee who both has the ability to set up a vendor and pay a vendor.” Those are two different access privileges that together indicate a high potential for fraud. So by combining all the entitlement data into one single database, you can much more easily scan for and detect potential policy violations and also the potential for risk to the business.
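Once entitlements sit in a normalized view like the one sketched above, detecting a toxic combination reduces to a set-containment check. Again, this is a hedged illustration with hypothetical entitlement names, not the vendor's implementation:

    # Hypothetical sketch: flag users whose combined entitlements form a
    # separation-of-duties violation, e.g. creating AND paying a vendor.
    warehouse = {
        "alice": {("ERP", "vendor.create"), ("Payroll", "vendor.pay")},
        "carol": {("Payroll", "report.view")},
    }

    TOXIC_PAIRS = [({"vendor.create"}, {"vendor.pay"})]

    def toxic_users(warehouse):
        flagged = []
        for user, grants in warehouse.items():
            names = {entitlement for _app, entitlement in grants}
            for left, right in TOXIC_PAIRS:
                if left <= names and right <= names:
                    flagged.append((user, sorted(left | right)))
        return flagged

    for user, combo in toxic_users(warehouse):
        print(f"{user}: toxic combination {combo}")
    # alice: toxic combination ['vendor.create', 'vendor.pay']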
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: SailPoint Technologies.

Friday, October 24, 2008

Web 2.0 acts as accelerant in pending corporate Darwinian shake-out, says Palladium Group CEO

The impact of Web 2.0 technologies on the current global economic downturn will hasten the demise of closed and siloed corporate cultures, while providing a reality-based balm for those companies that seek transformation and adaptation by tapping the wisdom of their communities.

These were some of the high-level observations from a chat I had this morning with Dr. David Friend, chairman, president and CEO of Palladium Group, a prominent performance management and consulting firm in Lincoln, Mass.

The swift and severe economic recession predicted by economists and presaged by plunging stock valuations worldwide points to a difficult period of months and possibly years, said Friend. And that means that the decisions that companies and government agencies make during the tumultuous period will carry more impact, perhaps making the difference between organizations becoming adaptive survivors or calcified road kill.

Companies in today's increasingly "hot, flat and crowded world," said Friend, need to make better decisions. Information alone does not do the job. A systemic and ingrained corporate capacity around decision making is needed. Former lions of Wall Street, Lehman Brothers and Bear Stearns, had oceans of data at their disposal, for example, but still made decisions that imperiled the venerable firms, such as allowing 40-times leveraged bets on risky investment instruments.

Palladium Group, formerly the Balanced Scorecard Collaborative, works with companies to help develop strategy, performance management, and business intelligence (BI) methods that help implement "decision cycles" that foster better business outcomes. "It's about building decision-making engines," said Friend.

Today, more than ever, companies need to make decisions fast, based on accurate and comprehensive information. Those analytics need to reach the people who can act and communicate effectively. Yet even in good times, companies tend to take complexity and attack it via compartmental and isolated sub-problem-solving.

The result is that "IT people think about IT, budget people think about budgets, and strategy people think about strategy," said Friend. The decisions, therefore, are faulty or missing because the fuller implications and inputs are not assimilated. Companies, in effect, lack a figurative right brain function, or lose it over time.

New tools, collectively grouped under Web 2.0, have emerged in just the past few years and could have a dramatic bearing on how companies react to the pressures of the economic contraction. Severe downturns act in a Darwinian fashion, weeding out the ill-adapted and weak, and rewarding the fit and agile. Web 2.0 will aid the agile and open, while placing those who don't use tools such as blogs, wikis, podcasts, and social networks at a disadvantage, said Friend.

Friend has a blog for his employees, he said, but it is not easily found via Web search. I find this a bit anachronistic given his stated fealty for the medium.

Nonetheless, BI and Web 2.0 can act in cohesion, said Friend, by allowing better communication about empirical findings and effects, and also injecting more valuable insights from more people with better access to changing environments. Friend cited the use of wikis by the U.S. intelligence community as a way of dramatically improving collaboration and cooperation among rival or siloed agencies.

Those companies that, by culture and tradition, act as dictatorships -- where decisions emanate from the top and countervailing information is kept from reaching decision makers -- will not benefit from the wellspring of information available from employees, customers, partners, and even competitors. Those who avoid or undermine the benefits of Web 2.0 tools will lack the information that leads to better decisions, and will miss the ongoing refinement of strategies by those witnessing their impacts.

The power is in the "democratization of information," said Friend, and then of placing that information in the context of proper decision-making. The flow of information and its exploitation for the business's benefit will hasten the decision cycles, for better or worse, said Friend.

I fully agree with this assessment, and I also believe that the economic challenges will accelerate the transformation of both IT and business. IT needs to, as usual, reduce total costs while improving business outcomes. BI and Web 2.0 are essential ingredients for this challenge. But the business leaders must also see the value in these tools and expand their usefulness by integrating them into their decision cycles.

Doing this means resisting the urge to slash IT budgets generally, and instead funding the tools that provide agility and better decisions -- investing in the means to increase revenue and improve the competitiveness of companies. The payback from BI and Web 2.0 collaboration is well understood. The decision on their use is what needs to be made -- not avoided or overlooked.

Tuesday, October 21, 2008

Looking forward to webinar on cloud-based desktops as a service story

Virtual desktop infrastructure (VDI) is gaining interest for a lot of technical, risk and cost reasons. But there's more than one way to skin the VDI cat.

I'll be taking part in a webinar presentation later this week, at noon ET on Thursday, Oct. 23, on the cloud version of VDI. Desktone is hosting the webinar, and I'm participating, but not getting paid. Here is more information, and how to join in.

Desktone impressed me over a year ago when I was first introduced to them. The value comes from creating the means for telcos and service providers to deliver virtual desktops, what Desktone calls desktops as a service (DaaS), en masse. The cost savings can be huge, and it gets those who don't need to be in the IT business (just to support a few applications) out of the soup-to-nuts IT game.

For home offices, small businesses, and small enterprises -- not to mention departments of certain types of workers inside large enterprises -- the VDI route via a cloud or network host makes a lot of sense.

It also aligns quite well with the platform as a service (PaaS) trend, in that those using a VDI service can order up access to packaged applications, or hire developers to customize or mashup their own data views or process and workflow efficiencies. All of it then gets delivered as a service. Good VDI is desktops and applications -- you need both.

This may be the last best bet for network service and communications providers as they seek a recurring and growth-oriented business service portfolio. As a small business owner, I'd be happy to acquire QuickBooks as a service, for example, or even move more of my apps, data, and PC functions to the cloud.

If I had more employees, I would certainly look at VDI from a package of business and network services as the answer to keeping them current on the apps I want them to use. And I would seek out a GUI tools set to create custom apps for my specific business needs.

So if you're curious about how VDI from the cloud and app dev works, join me and fellow industry analysts Rachael Chalmers from The 451 Group and Robin Bloor from Hurwitz & Associates for the Desktone webinar this Thursday. Should be fun.