Friday, November 14, 2008

Interview: rPath’s Billy Marshall on how enterprises can virtualize applications as a precursor to cloud computing

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Read complete transcript of the discussion.

Many enterprises are weighing how to bring more applications into a virtual development and deployment environment to save on operating costs and to take advantage of service-oriented architecture (SOA) and cloud computing models.

Finding proven deployment methods and governance for managing virtualized applications across their lifecycle is an essential ingredient in making SOA and cloud-computing approaches as productive as possible while avoiding risk and complexity. The goal is to avoid having to rewrite code in order for applications to work across multiple clouds -- public, private, or hybrid.

The cloud forces the older notion of "write-once, run anywhere" into a new level of "deploy correctly so you can exploit the benefits of cloud choices and save a lot of money."

To learn more about how enterprises should begin moving to application-level virtualization that serves as an onramp to cloud benefits, I recently spoke with Billy Marshall, founder and chief strategy officer of rPath.

Here are some excerpts:
We're once again facing a similar situation now where enterprises are taking a very tough look at their data center expenditures and expansions that they're planning for the data center. ... The [economic downturn] is going to have folks looking very hard at large-scale outlays of capital for data centers.

I believe that will be a catalyst for folks to consider a variable-cost approach to using infrastructure as a service, perhaps platform as a service (PaaS). All these things roll up under the notion of cloud.

Virtualization provides isolation for applications running in their own logical server, their own virtual server. ... Virtualization gives you -- from a business perspective -- an opportunity to decouple the definition of the application from the system that it runs on. ... Then, at run-time, you can decide where you have capacity that best meets the needs of the profile of an application.

I can begin sourcing infrastructure a little more dynamically, based upon the load that I see. Maybe I can spend less on the capital associated with my own data center, because with my application defined as this independent unit, separate from the physical infrastructure, I'll be able to buy infrastructure on demand from Amazon, Rackspace, GoGrid -- these folks who are now offering up these virtualized clouds of servers.

That's the architecture we're evolving toward. ... For legacy applications, there's not going to be much opportunity. [But] they may actually consider this for new applications that would get some level of benefit by being close to other services.

[If] I can define my application as a working unit, I may be able to choose between Amazon or my internal architecture that perhaps has a VMware basis, or a Rackspace, GoGrid, or BlueLock offering.

Another big consideration for these enterprises now is: do I have workloads that I'm comfortable running on Linux right now, and so can I take a step forward and bind Linux to the workload in order to take it wherever I want it to go?

rPath brings a capability around defining applications as virtual machines (VMs), going through a process whereby you release those VMs to run on whichever cloud you choose, whether a hypervisor-virtualized cloud of machines, such as what's provided by Amazon, or what you can build internally using Citrix XenSource or something like VMware's virtual infrastructure.

It then provides an infrastructure for managing those VMs through their lifecycle, for things such as updates, backup, and configuration of certain services on the machines, in a way that's optimized to run a virtualized cloud of systems. We specialize in optimizing applications to run as VMs on a cloud or virtualized infrastructure.

With our technology, we enforce a set of policies that we learned as best practices during our days at Red Hat constructing an operating system. We've got some 50 to 60 policies that get enforced at build time, when you are building the VM. They're things like don't allow any dangling symlinks, and close the dependency loop around all of the binary packages that get included. There could be other, more corporate-specific policies that need to be included, and you would write those policies into the build system in order to build these VMs.
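
To make the idea of build-time policy enforcement concrete, here is a minimal Python sketch of the kinds of checks Marshall describes, namely dangling symlinks and unresolved package dependencies. The function names, paths, and package metadata are illustrative assumptions, not rPath's actual rBuilder implementation.

import os

def find_dangling_symlinks(image_root):
    # Walk the unpacked VM image tree and collect symlinks whose targets are missing.
    dangling = []
    for dirpath, dirnames, filenames in os.walk(image_root):
        for name in dirnames + filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and not os.path.exists(path):
                dangling.append(path)
    return dangling

def find_unresolved_deps(packages):
    # packages: dict mapping package name -> set of required package names.
    installed = set(packages)
    return {pkg: reqs - installed for pkg, reqs in packages.items() if reqs - installed}

def enforce_policies(image_root, packages):
    # Fail the image build if any policy is violated, mirroring a build-time gate.
    problems = ["dangling symlink: %s" % p for p in find_dangling_symlinks(image_root)]
    for pkg, missing in find_unresolved_deps(packages).items():
        problems.append("%s requires missing packages: %s" % (pkg, sorted(missing)))
    if problems:
        raise SystemExit("policy violations:\n" + "\n".join(problems))

# Example use during an automated image build (hypothetical paths and packages):
# enforce_policies("/tmp/appliance-root",
#     {"myapp": {"python", "openssl"}, "python": set(), "openssl": set()})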

It's very similar to the way you put policies into your application lifecycle management (ALM) build system when you were building the application binary. You would enforce policy at build time to build the binary. We're simply suggesting that you extend that discipline of ALM to include policies associated with building VMs. There's a real opportunity here to close the gap between applications and operations by taking much of what has typically been done in installing an application and taking it through Dev, QA, and Test, and making that part of an automated build system for creating VMs.

People are still thinking about the operating system as something that they bind to the infrastructure. In the new case, they're binding the operating system to the hypervisor and then installing the application on top of it. If the hypervisor is now this bottom layer, and if it provides all the management utilities associated with managing the physical infrastructure, you now get an opportunity to rethink the operating system as something that you bind to the application.

When you bind an operating system to an application, you're able to eliminate anything that is not relevant to that application. Typically, we see the surface area shrink to about 10 percent of what is typically deployed as a standard operating system. So, the first thing is to package the application in a way that is optimized to run in a VM. We offer a product called rBuilder that enables just that functionality.
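
One way to picture that shrinkage is to compute the transitive closure of packages an application actually needs and compare it against a full distribution. The sketch below is a hypothetical Python illustration; the package graph and the distro size are invented, and this is not how rBuilder itself works.

FULL_DISTRO = 2500   # assumed package count for a general-purpose OS image

DEPENDS_ON = {       # hypothetical package dependency graph for illustration
    "myapp": ["python", "openssl"],
    "python": ["libc", "zlib"],
    "openssl": ["libc"],
    "zlib": ["libc"],
    "libc": [],
}

def closure(pkg, graph, seen=None):
    # Collect the package plus everything it transitively requires.
    seen = set() if seen is None else seen
    if pkg not in seen:
        seen.add(pkg)
        for dep in graph.get(pkg, []):
            closure(dep, graph, seen)
    return seen

needed = closure("myapp", DEPENDS_ON)
print("Packages needed:", sorted(needed))
print("Share of a full distro: %.1f%%" % (100.0 * len(needed) / FULL_DISTRO))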

If you prove to yourself that you can do this, that you can run [applications] in both places (cloud and on-premises), you've architected correctly. ... That puts you in a position where eventually you could run that application on your local cloud or virtualized environment and then, for those lumpy demand periods -- when you need that exterior scale and capacity -- you might just look to that cloud provider to support that application [at scale].
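
The "run it in both places" test lends itself to a simple illustration. The Python sketch below, a hypothetical example rather than anything rPath ships, keeps the application definition in one structure and treats the deployment target as a late, run-time choice; the image formats, target names, and launch stubs are assumptions made for illustration.

def launch_on_internal_cluster(name, image_format, cpus, memory_mb):
    # Placeholder for an internal, hypervisor-based deployment step.
    print("Internal cloud: starting %s (%s, %d vCPU, %d MB)" % (name, image_format, cpus, memory_mb))

def launch_on_public_cloud(name, image_format, cpus, memory_mb):
    # Placeholder for a public-cloud deployment step (an Amazon-style image launch, say).
    print("Public cloud: starting %s (%s, %d vCPU, %d MB)" % (name, image_format, cpus, memory_mb))

APP_IMAGE = {
    "name": "order-service",
    "formats": {"internal": "vmdk", "public": "ami"},
    "cpus": 2,
    "memory_mb": 2048,
}

def deploy(app, target):
    # The application definition stays the same; only the target changes.
    fmt = app["formats"][target]
    if target == "internal":
        return launch_on_internal_cluster(app["name"], fmt, app["cpus"], app["memory_mb"])
    if target == "public":
        return launch_on_public_cloud(app["name"], fmt, app["cpus"], app["memory_mb"])
    raise ValueError("unknown target: %s" % target)

deploy(APP_IMAGE, "internal")   # steady-state load stays in-house
deploy(APP_IMAGE, "public")     # lumpy demand bursts to a rented cloud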

There's a trap here. If you become dependent on something associated with a particular infrastructure set or a particular hypervisor, you preclude any use in the future of things that don't have that hypervisor involved. ... The real opportunity here is to separate the application-virtualization approach from the actual virtualization technology to avoid the lock-in, the lack of choice.

If you do it right, and if you think about application virtualization as an approach that frees your application from the infrastructure, there is a ton of benefit in terms of dynamic business capability that is going to be available to your organization.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: rPath.

Wednesday, November 12, 2008

IDC research shows enterprise SOA adoption deepens based on certain critical practices

Listen to the podcast. Download the podcast. Access the webinar. Learn more. Sponsor: Hewlett-Packard.

Download the IDC report "A Study in Critical Success Factors for SOA." Read complete transcript of the discussion.

Fresh research from IDC on service-oriented architecture (SOA) adoption patterns shows what users of SOA identify as essential success factors. The perceptions are critical as more companies cross from experimentation to more holistic SOA use and its required governance, management, and lifecycle functions.

A recent webinar captures the IDC findings and shows how Hewlett-Packard (HP) is working to help companies adopt SOA successfully. That webinar is now captured as a podcast, transcript and blog.

Join me as I moderate a SOA market adoption trends presentation by Sandy Rogers, program director for SOA, Web services, and integration research at IDC. Sandy is followed by a presentation on SOA lifecycle approaches by Kelly Emo, SOA product marketing manager for HP Software.

Here are some excerpts:
Sandy Rogers: Organizations are looking for much more consistency across enterprise activities and views, and are really finding a lot of competitive differentiation in being able to manage their processes more effectively. That requires the ability to span different types of systems and to respond -- whether in a reactive mode or a proactive mode -- to opportunities.

What we’re finding is that, as we go to this generation, SOA, in and of itself, is spawning the ability to address new types of models, such as event-based processing, model-based processing, cloud computing, and appliances. We’re really, as a foundation, looking to make a strategic move.

The issue is not necessarily deciding if they should go toward SOA. What we're finding is that for most organizations this is the way that they are going to move, and the question is just navigating how to best do that for the best value and for better success.

According to the same poll ... what is most interesting are the top challenges in implementing SOA. All of our past studies reinforced that skills, availability of skills, and training in SOA continue to be the number one challenge. What's really noticeable now is that setting up an SOA governance structure has become the second most-indicated challenge.

We found in other studies that a lot of organizations did not have strong governance. SOA almost forces these companies to do what they should have been doing all along around incorporating the right procedures around governance, and making that a non-intrusive approach.

... What this is telling us is that we have reached another stage of maturity, and that, in order to move forward, organizations will need to think about SOA as an overall program, and how it impacts both technology and people dimensions within the organization. ... We are indeed moving from project- and application-level SOA to more of a system and enterprise scale.

We [also] wanted to look at how SOA's success is actually defined, ... and what factors and practices have the most impact in the organizations that are successful. ... While technologies are key enablers, most of the study participants focused on organizational and program dynamics as being key contributors to success. Through technology, they are able to influence the impact of the activities that they are introducing into the overall SOA program.

The pervasiveness of SOA adoption in the enterprise was a key determinant of how ... they were being successful. ... If you’re able to handle trust, you’re able to influence organizational change management effectiveness. If you’re able to address business alignment, then you’ll have much more success in understanding the impact on architecture and vice versa.

Domains of SOA success

When we gathered all of this information ... we created a framework of the varying components and elements that impacted success. Then, we aggregated these into seven key domains. ... The seven domains are: Business Alignment, Organizational Change Management, Communication, Trust, Scale and Sustainability, Architecture, and Governance. [See full transcript or listen to the podcast for more detail on each domain.]

We found that enforcing policies, not putting off governance until later on, was very important, [as well as] putting more efforts into business modeling, which many of these organizations are doing now. They said that they wished they had done a little bit more when thinking about the services that were created, focusing on preparing the architecture for much more process and innovation.

Kelly Emo: You heard from IDC the seven critical SOA success factors that came from this in-depth analysis of customers. The point that I want to reiterate here that was so powerful in this discussion is the idea that the seven domains are linked. By putting energy and effort in any one of them, you are setting yourself up for more success across the board.

What we are going to do now is drill down into that domain of governance. ... We’ll talk a little bit about the value of using an automated SOA governance platform, to help automate those manual activities and get you there faster.

... We see many of our customers now crossing the enterprise scalability divide with their SOA, looking to incorporate SOA into their mainstream IT organizations, and they’re seeing the benefits of that initial investment in governance help them make that leap.

SOA governance is all about helping IT get to the expected business benefits of their SOA. You can think of SOA governance, in essence, as IT's navigation system to get to the end goal of SOA. What it's going to help IT do, as they look to scale SOA out, is to more broadly foster trust across those distributed domains. It's going to help become a catalyst for communication and collaboration, and it's going to help jump-start that non-expert staff.

The thing that's key about governance is that it helps integrate those silos of IT. It helps integrate the folks who are responsible for designing services with those who actually have to develop the back-end implementations and with those who are doing the testing of performance and functionality. Ultimately, it integrates them with the organizations that are responsible for both deploying the services and the policies and integration logic that will support accessing those services.

By keeping a perspective on lifecycle governance, your organization can be primed and ready to handle SOA as it scales, as more and more services go into production, and more and more services are deemed ready for consumption and reuse in new composite applications. ... The key is to keep a service lifecycle governance perspective in mind as you go about your governance program, and automation is key. ... Automating policy compliance can bring a huge payoff.
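
As an editorial aside, here is one way to picture what automating policy compliance can mean in practice: a script that checks each service's registry entry against lifecycle policies before it is promoted. The policy rules, field names, and sample records below are hypothetical Python illustrations and are not drawn from HP's governance products.

POLICIES = [
    ("owner assigned", lambda s: bool(s.get("owner"))),
    ("SLA documented", lambda s: "sla" in s),
    ("secure endpoint", lambda s: s.get("endpoint", "").startswith("https://")),
    ("passed functional test", lambda s: s.get("qa_status") == "passed"),
]

services = [
    {"name": "getCustomer", "owner": "crm-team", "sla": "99.9", "endpoint": "https://esb/getCustomer", "qa_status": "passed"},
    {"name": "submitOrder", "owner": "", "endpoint": "http://esb/submitOrder", "qa_status": "pending"},
]

def compliance_report(services, policies):
    # Return, per service, the list of policies it violates; an empty list means promotable.
    return {s["name"]: [name for name, check in policies if not check(s)] for s in services}

for svc, violations in compliance_report(services, POLICIES).items():
    status = "ready for production" if not violations else "blocked: " + ", ".join(violations)
    print("%s: %s" % (svc, status))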

What we are finding more and more now is that organizations are actually investing in a role known as service manager, someone who oversees not only the delivery of a service over time, but also those who are consuming it. I see this as a best practice that can be supported by SOA governance, which helps empower them by giving them a foundation to set up policies and have visibility in terms of how the service is meeting its objective and who is consuming the service.

You can actually get a dialog going between your enterprise architecture and planning teams, your development teams, and your testing teams, in terms of the expectations and requirements right upfront, as the concept of the service is being ferreted out.

So why invest in SOA governance now ... [when] we're under a lot of economic pressure, budgets are tight, and there are fewer resources to do the same work? This sounds counter-intuitive, absolutely, but this is the right time to make that investment in SOA governance, because the benefits are going to pay off significantly.
Download the IDC report "A Study in Critical Success Factors for SOA." Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Access the Webinar. Learn more. Sponsor: Hewlett-Packard.

Tuesday, November 11, 2008

Looking forward to webinar on applications modernization trends and techniques with Nexaweb

Application modernization as a precursor and accelerant to IT transformation is the topic of a webinar I'm on this Thursday at 1 p.m. ET.

The topic is a no-brainer. Old apps that waste money need to come out to the web services and RIA model and join the grand mashup.

Application modernization is one of those IT initiatives that packs the one-two wallop of cutting costs while improving agility and business outcomes. That combination of doing more for less makes so much sense these days, and it may be the new number one requirement for any IT budget.

Services and logic locked up in mainframes, COBOL, n-tier Java, and other 3-4GL client-server implementations can find a new life as rich Internet services on virtualized or standard hardware and platforms. The process recovers past investments, closes down wasteful operations spending, and extends value into the platforms that operate at peak efficiency and lower costs. Hard to argue.

Remember the wave of ROI studies back in 2003? Well, now you need ROI plus provable business improvements of the qualitative variety. Application modernization fits the bill because application sprawl wastes server utilization, leaves apps and data in silos that resist services orientation, and prevents the sun-setting of older, expensive platforms -- plus you can do all kinds of innovative things with the services you couldn't do before.

Oh, and getting these services into a SOA and on virtualized platforms opens the door to more exploitation of cloud and SaaS models, as they make more sense.

I'll be discussing the rationale for application modernization, how to target which apps and platforms, what processes need to be in place, and how to scale app modernization projects appropriately. Joining me on the webinar will be David McFarlane, COO at Nexaweb. [Disclosure: Nexaweb is a sponsor of BriefingsDirect podcasts.]

McFarlane, no doubt, will be explaining how the Nexaweb Reference Framework is engineered to reduce the time, costs, and architectural decisions associated with modernizing business applications and bringing them to the Web.

I like the idea of app modernization for mainframe and COBOL code, but Nexaweb goes further in terms of the webification trend: Sybase PowerBuilder, Microsoft Visual Basic, Oracle Forms and other 3GL/4GL-based applications are what it has in mind, with as much as 67 percent in total cost savings in early customer implementations, says Nexaweb.

Sign up to listen in and watch the slides go by. Q&A to follow. Should be fun.

Monday, November 10, 2008

Solving IT energy conservation issues requires holistic approach to management and planning, say HP experts

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The critical and global problem of energy management for IT operations and data centers has emerged as both a cost and capacity issue. The goal is to find innovative means to conserve electricity use so that existing data centers don't need to be expanded or replaced -- at huge cost.

In order to promote a needed close matching of tight energy supply with the lowest IT energy demand possible, the entire IT landscape needs to be considered. That means an enterprise-by-enterprise examination of the "many sins" of energy mismanagement. Wasted energy use, it turns out, has its origins all across IT and business practices.

To learn more about how enterprises should begin an energy-conservation mission, I recently spoke with Ian Jagger, Worldwide Data Center Services marketing manager in Hewlett-Packard's (HP) Technology Solutions Group, and Andrew Fisher, manager of technology strategy in the Industry Standard Services group at HP.

Here are some excerpts:
Data centers typically were not designed for the computing loads that are available to us today ... (and so) enterprise customers are having to consider strategically what they need to do with respect to their facilities and their capability to bring enough power to be able to supply the future capacity needs coming from their IT infrastructure.

Typically, the cost of energy is now approaching 10 percent of IT budgets, and that's significant. It now becomes a common problem for both of these departments (IT and Facilities) to address. If they don't address it themselves, then I am sure a CEO or a CFO will help them along that path.

Just the latest generation server technology is something like 325 percent more energy efficient in terms of performance-per-watt than older equipment. So simply upgrading your single-core servers to the latest quad-core servers can lead to incredible improvements in energy efficiency, especially when combined with other technologies like virtualization.

Probably most importantly, you need to make sure that your cooling system is tuned and optimized to your real needs. One of the biggest issues out there is that the industry, by and large, drastically overcools data centers. That reduces their cooling capacity and ends up wasting an incredible amount of money.

You need a complete, end-to-end solution that involves everything from analysis of your operational processes and behavioral issues, how you are configuring your data center, whether you have hot-aisle or cold-aisle configurations, these sorts of things, to trying to optimize the performance or the efficiency of the power delivery, and making sure that you are getting the best performance per watt out of your IT equipment itself.

The best way of saving energy is, of course, to turn the computers off in the first place. Underutilized computing is not the greatest way to save energy. ... If you look at virtualizing the environment, then the facility design or the cooling design for that environment would be different than if you weren't in a virtualized environment. Suddenly, you are designing something around 15-35 kilowatts per cabinet, as opposed to 10 kilowatts per cabinet. That requires completely different design criteria.

You’re using four to eight times the wattage in comparison. That, in turn, requires stricter floor management. ... But having gotten that improved design around our floor management, you are then able to look at what improvements can be made from the IT infrastructure side as well.

If you are able to reduce the number of watts that you need for your IT equipment by buying more energy efficient equipment or by using virtualization and other technologies, then that has a multiplying effect on total energy. You no longer have to deliver power for that wattage that you have eliminated and you don't have to cool the heat that is no longer generated.
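
A quick back-of-the-envelope calculation shows the multiplying effect described here. The overhead factor and electricity price in this short Python sketch are assumptions chosen only to illustrate the arithmetic, not figures from HP.

it_watts_saved = 10000        # assumed IT load removed, e.g., through consolidation and virtualization
overhead_factor = 2.0         # assumed: each IT watt needs roughly 2 watts at the meter (power delivery plus cooling)
price_per_kwh = 0.10          # assumed electricity price, in dollars

facility_watts_saved = it_watts_saved * overhead_factor
kwh_per_year = facility_watts_saved * 24 * 365 / 1000.0
print("Facility load avoided: %.1f kW" % (facility_watts_saved / 1000.0))
print("Annual energy avoided: %.0f kWh (~$%.0f)" % (kwh_per_year, kwh_per_year * price_per_kwh))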

This is a complex system. When you look at the total process of delivering the energy from where it comes in from the utility feed, distributing it throughout the data center with UPS capability or backup power capability, through the actual IT equipment itself, and then finally with the cooling on the back end to remove the heat from the data center, there are a thousand points of opportunity to improve the overall efficiency.

We are really talking about the Adaptive Infrastructure in action here. Everything that we are doing across our product delivery, software, and services is really an embodiment of the Adaptive Infrastructure at work in terms of increasing the efficiency of our customers' IT assets and making them more efficient.

To complicate it even further, there are lot of organizational or behavioral issues that Ian alluded to as well. Different organizations have different priorities in terms of what they are trying to achieve.

The principal problem is that they tend to be snapshots in time and not necessarily a great view of what's actually going on in the data center. But, typically we can get beyond that and look over annualized values of energy usage and then take measurements from that point.

So, there is rarely a single silver bullet to solve this complex problem. ... The approach that we at HP are now taking is to move toward a new model, which we called the Hybrid Tiered Strategy, with respect to the data center. In other words, it’s a modular design, and you mix tiers according to need.

One thing that was just announced is relevant to what Ian was just talking about. We announced recently the HP Performance-Optimized Data Center (POD), which is our container strategy for small data centers that can be deployed incrementally.

This is another choice that's available for customers. Some of the folks who are looking at it first are the big scale-out infrastructure Web-service companies and so forth. The idea here is you take one of these 40-foot shipping containers that you see on container ships all over the place and you retrofit it into a mini data center.

... There’s an incredible opportunity to reclaim that reserve capacity, put it to good use, and continue to deploy new servers into your data center, without having to break ground on a new data center.

... There are new capabilities that are going to be coming online in the near future that allow greater control over the power consumption within the data center, so that precious capacity that's so expensive at the data center level can be more accurately allocated and used more effectively.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Thursday, November 6, 2008

ITIL requires better log management and analytics to gain IT operational efficiency, accountability

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Read complete transcript of the discussion.

Implementing best practices from the Information Technology Infrastructure Library (ITIL) has become increasingly popular in IT departments. As managers improve IT operations with an eye to process efficiency, however, they need to gain operational accountability through visibility and analytics into how systems and networks are behaving.

Innovative use of systems log management and analytics -- in the context of entire IT infrastructures -- produces an audit and performance data trail that both helps implement and refine such models as ITIL. Compliance is also a growing requirement, one that can be addressed through verification tools such as systems monitoring and analytics in the context of ITIL best practices.

To learn more about how systems log tools and analysis are aiding organizations as they adopt ITIL, I recently spoke with Sean McClean, principal at consultancy KatalystNow, and Sudha Iyer, director of product management at LogLogic.

Here are some excerpts:
IT, as a business, a practice, or an industry, is relatively new. The ITIL framework has always been focused on how we can create a common thread or a common language, so that all businesses can follow and do certain things consistently with regard to IT. ... We are looking to do even more with tying the IT structure into the business, the function of getting the business done, and how IT can better support that, so that IT becomes a part of the business.

Because the business of IT supporting a business is relatively new, we are still trying to grow and mature those frameworks around what we all agree is the best way to handle things. ... When people look at ITIL, they assume that it's something you can simply purchase and plug into the organization. It doesn't quite work that way.

ITIL is, generally, guidance -- best practices -- for service delivery, incident management, or what have you. Then, there are these sets of policies that go with these guidelines. What organizations can do is set up their data retention policy, firewall access policy, or any other policy.

But, how do they really know whether these policies are being actually enforced and/or violated, or what is the gap? How do they constantly improve upon their security posture? That's where it's important to collect activity in your enterprise on what's going on.

Our log-management platform ... allows organizations to collect information from a wide variety of sources, assimilate it, and analyze it. An auditor or an information security professional can look deep down into what's actually going on, on their storage capacity or planning for the future, on how many more firewalls are required, or what's the usage pattern in the organization of a particular server.

All these different metrics feed back into what ITIL is trying to help IT organizations do. Actually, the bottom line is how do you do more with less, and that's where log management fits in. ... Our log management solution allows [enterprises] to create better control and visibility into what actually is going on in their network and their systems. From many angles, whether it's a security professional or an auditor, they're all looking at whether you know what's going on.
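
To ground the idea, here is a small, hypothetical Python sketch of turning collected log events into policy evidence, for instance flagging sources that repeatedly trip a firewall deny rule. The event fields, threshold, and sample data are invented for illustration and do not reflect LogLogic's product behavior.

from collections import Counter

DENY_THRESHOLD = 5   # assumed policy: flag any source with this many denies in the window

events = [
    {"source": "10.0.0.7", "action": "deny", "dst_port": 22},
    {"source": "10.0.0.7", "action": "deny", "dst_port": 22},
    {"source": "10.0.0.9", "action": "allow", "dst_port": 80},
    # ... in practice, events stream in from firewalls, servers, and applications
]

def flag_repeat_denies(events, threshold):
    # Count deny events per source and return the ones that breach the policy threshold.
    denies = Counter(e["source"] for e in events if e["action"] == "deny")
    return {src: count for src, count in denies.items() if count >= threshold}

print(flag_repeat_denies(events, DENY_THRESHOLD))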

You want to figure out how much of your current investment is being utilized. If there is a lot of unspent capacity, that's where understanding what's going on helps in assessing, “Okay, here is so much disk space that is unutilized." Or, "it's the end of the quarter, we need to bring in more virtualization of these servers to get our accounting to close on time."

[As] the industry matures, I think we will see ... people looking and talking more about, “How do I quantify maturity as an individual within ITIL? How much do you know with regard to ITIL? And, how do I quantify a business with regard to adhering to that framework?”

There has been a little bit of that and certainly we have ITIL certification processes in all of those, but I think we are going to see more drive to understand that and to formalize that in upcoming years.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Tuesday, November 4, 2008

Genuitec, Eclipse aim for developer kit to smooth rendering of RIAs on mobile devices

The explosion in mobile Web use, due partly to the prevalence of the iPhone and other smart-phone devices -- and a desire to make developers less grumpy -- has led Genuitec to propose a new open-source project at the Eclipse Foundation for an extensible mobile Web developer kit for creating and testing new mobile rich Internet applications (RIAs).

Coming as a sub-project under the Device Software Development Platform (DSDP), the FireFly DevKit project is still in the proposal phase, and the original committers are all from Genuitec, Flower Mound, Tex. [Disclosure: Both Genuitec and the Eclipse Foundation are sponsors of BriefingsDirect podcasts.]

Included in the developer kit will be a previewer and a debugger, a Web rendering kit, a device service access framework, a deployment framework, and educational resources.

The two tool frameworks will enable mobile web developers to visualize and debug mobile web applications from within an Eclipse-based integrated development environment (IDE). Beyond this, the FireFly project will develop next-generation technologies and frameworks to support the creation of mobile web applications that look and behave similarly to native applications and are able to interact with device services such as GPS, accelerometers, and personal data.

The issue of developer grumpiness was raised in the project proposal:
When programming, most developers dislike switching between unintegrated tools and environments. Frequent change of focus interrupts their flow of concentration, reduces their efficiency and makes them generally grumpier :). For mobile web application development, web designers and programmers need to quickly and seamlessly perform incremental development and testing directly within an IDE environment rather than switching from an IDE to a device testing environment and back again.
One goal of the Web rendering toolkit is to make Web applications take on the look and feel of the host mobile device. Possibly, an application could run in the Safari browser on an iPhone, but appear similar to a native iPhone app.

Initially, example implementations of the project frameworks will be provided for the iPhone. As resources become available, examples for the G1-Android platform will also be developed. The project will actively recruit and accept contributions for other mobile platforms such as Symbian, Windows Mobile and others.

The current timeframe of the project calls for it to piggyback an incubation release on top of the Eclipse 3.5 platform release. The entire project proposal is available on the Eclipse site.

Friday, October 31, 2008

BriefingsDirect Analysts take Microsoft's pulse: Will the software giant peak in next few years?

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 32, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events, with a panel of IT analysts and guests.

In this episode, recorded Oct. 10, 2008, our experts examine the state of Microsoft at the onset of the annual Professional Developers Conference. Two narratives emerge from our roundtable discussion. One is that Microsoft is behind on many new IT trends and is tied to past business models. The opposing view is that Microsoft will ride pedestrian app dev, business intelligence, data services, Xbox, unified communications, virtualization, and cloud computing to become bigger and more pervasive than ever.

Please join noted IT industry analysts and experts Jim Kobielus, senior analyst at Forrester Research; Tony Baer, senior analyst at Ovum; Dave Linthicum, independent SOA consultant at Linthicum Group; Brad Shimmin, principal analyst at Current Analysis; Mike Meehan, senior analyst at Current Analysis; and Joe McKendrick, independent analyst and prolific blogger. Our discussion is hosted and moderated by yours truly, Dana Gardner.

Here are some excerpts:
Kobielus: There’s some validity to the viewpoint that Microsoft's growth potential has capped on the business side, when you consider packaged applications, and software- and application-development tools, in the sense that the entire product niche of the service-oriented architecture (SOA) universe is rapidly maturing.

The vendors in this space -- the SOA vendors, the business-intelligence (BI) vendors, the master data management (MDM) vendors -- are going to realize revenue growth and profitability. Those who survive this economic downturn and thrive in the next uptick will be those who very much focus on providing verticalized and customized applications on a consulting or professional services basis.

In that regard, Microsoft is a bit behind the eight ball. They don't really have the strength on the consulting, professional services, and verticalization side that an SAP, an Oracle, or an IBM can bring to the table.

Microsoft, if they want to continue to grow in the whole platform and application space and in the whole SOA universe, needs to put a greater focus on consulting services.

McKendrick: Microsoft has its own economy. No matter what happens to the economy at large, Microsoft has its own economy going, and just seems to get through all this.

What's driven Microsoft from day one, and continues to do so, is that Microsoft is the software company for Joe the Plumber. That's their constituency, not necessarily Joe the Developer. They cater to Joe the Developer, Joe the CIO, and Joe the Analyst certainly likes to check in on what they are doing. It's this whole idea of disruptive technology. They have always targeted the under-served and un-served parts of the marketplace and moved up from there.

... The base of Microsoft, these companies that are using Microsoft technology, don’t necessarily get virtualization or cloud computing. They just want a solution installed on their premises and want it to work.

Linthicum: I think they are behind the eight ball. A lot of the strategy I’ve seen coming out of Microsoft over the last few years, especially as it relates to cloud computing, SOA, and virtualization, has been inherently flawed. They get into very proprietary things very quickly. It really comes down to how are they going to sell an additional million desktop operating systems.

Ultimately, they just don’t get where this whole area is going. ... We’re heading into an area where they may not be as influential as they think they should be. They may be not only behind the eight ball, but lots of other organizations that are better at doing cloud computing, virtualization, and things like that, and have a good track record there, are going to end up owning a lot of the space.

Microsoft isn’t going to go away, but I think they’re going to find that their market has changed around them. The desktop isn't as significant as it once was. People aren’t going to want to upgrade Office every year. They’re not going to want to upgrade their desktop operating systems every year. Apple Macs are making big inroads into their market space, and it’s going to be a very tough fight for them. I think they’re going to be a lot smaller company in five years than they are today.

Meehan: Dave is absolutely right in that the one area that Microsoft never really conquered that it needed to conquer, given its strength in the desktop, is the handheld. If they are not going to be there with the handheld long-term, that’s a major growth area that they are going to miss out on. That’s where a lot of the business is going to shift to. ... On the SOA side, as I said before, Microsoft is just trying to be as service-oriented as they can for users who are trying to be not SOA-driven, but "As Service-Oriented As Possible" (ASOAP).

In fact, make that an acronym, ASOAP. There are going to be a number of users who are not going to go fully into SOA, because that requires an enterprise architecture. It's too hard to do, too hard to maintain. They're never going to quite figure that out. They are just going to try to be tactical and ASOAP. Microsoft will try to service them and hold that part of their business.

What’s the next big thing they’re going to do? Joe referred to Microsoft having come up with that in previous downturns. I don’t see where they have got that right yet, and so I think that leads to them being smaller long-term.

Shimmin: [Microsoft is going to have an opportunity to change this perception] simply because they don't have to. I think back to a number of points that have been made here, that to be successful Microsoft doesn't need to convince the world. It just needs to convince the people that attend the PDC. They have such an expansive and well-established channel, with all the little plumber-developers running around building software with their code, that, just as 40 is the new 30, Microsoft is really kind of the new Apple, in a way.

They don't need to be Oracle to succeed; they really need to have control over their environment and provide the best sort of tooling, management, deployment, and execution software that they can for those people who have signed on to the Microsoft bandwagon and are taking the ride with them. ... (Microsoft) is kind of capped out in many ways relative to the consumer market. But, gosh, they have shown that with things like SharePoint, for example, Microsoft is able to virally infest an organization successfully with their software without having to even lift a finger.

They’ll continue to do that, because they have this Visual Basic mentality. I hate to say it, but they have the mentality of “Let’s make it as simple as possible” for the people that are doing ASOAP, as Mike said, that don’t need to go all the way, but really just need to get the job done. I think they’ll be successful at that.

Kobielus: I think Microsoft will be larger, and they will be larger for the simple reason that they do own the desktop, but the desktop is becoming less relevant. But now, what’s new is that they do own the browser, in terms of predominant market share or installed base. They do own the spreadsheet. They do own the portal. As Brad indicated, SharePoint is everywhere.

One of the issues that many of our customers at Forrester have hit on -- CIO, CTO, that level -- is that SharePoint is everywhere. How do they manage SharePoint? It's a fait accompli, and enterprises have to somehow deal with it. It’s the de-facto standard portal for a large swath of the corporate world. Microsoft, to a great degree, owns the mid-market database with SQL Server.

So owning so many important components of the SOA stack, in terms of predominant market share, means that Microsoft has great clout to go in any number of directions. One direction in which they're clearly going in a very forceful way that brings all this together is in BI and online analytical processing (OLAP). The announcements they made a few weeks ago at the BI conference show where Microsoft clearly is heading. They very much want to become and remain a predominant BI vendor in the long run.

Gardner: ... On the total cost perspective, I think what I am hearing from you is that if you go all Microsoft all the time, there are going to be efficiencies, productivity, and cost savings. Is that the mantra? Is that the vision?

Shimmin: That's exactly right, Dana. That's what they're banking on, and that's why I think they are the next Apple, in a way, because they are downtrodden, compared to some of the other big guns we're talking about with Oracle, SAP, and IBM inside the middleware space. But that doesn't matter, because they have a loyal following; if you guys have ever attended these shows of theirs, you'd see that they are just as rabid as Mac fans in many ways.

Microsoft is going to do their best job to make their customers lives as easy as possible, so that they remain loyal subjects. That’s a key to success. That’s how you succeed in keeping your customers.

Linthicum: Ultimately, people are looking for open solutions that are a lot more scalable than this stuff that Microsoft has to offer. The point that was just made, there are a bunch of huge Microsoft fans that will buy anything that they sell, that’s the way the shops are. But the number of companies that are doing that right now are shrinking.

People are looking for open, scalable, enterprise-ready solutions. They understand that Microsoft is going to own the desktop, at least for the time being, and they are going to keep them there. But, as far as their back-office things and some of the things that Microsoft has put up as these huge enterprise-class solutions, people are going to opt for other things right now.

It's just a buying pattern. It may be a perception issue or a technological issue. I think it’s a matter of openness or their insistence that everything be proprietary and come back to them. I heard the previous comment that looking at all Microsoft all the time will provide the best bang for the buck. I think people are very suspicious of that.

Gardner: We’ve heard quite a bit on this cloud operating system from Red Hat, Citrix, VMware, IBM, and HP talked it up a little bit. No one’s really come out with a lot of detail, but clearly this seems to be of interest to some of the major vendors. What is the nature of this operating system for the cloud, and does it have the same winner-take-all advantage for a vendor that the operating system on the desktop and departmental server had?

Linthicum: I think it does in virtualization. Once one vendor gets that right, people understand it, there are good standards around it, there are good use cases around it, and there’s a good business case around it, that particular vendor is going to own that space.

I'm not sure it's going to be Microsoft. They're very good about building operating systems, but, judging from my Vista crashes that are happening once a day, they are not that good.

Also, there are lots of guys out there who understand the virtualization space and the patterns for use there. The technology they’re going to use, the enabling standards, are going to be very different than what you are going to use on a desktop or even a small enterprise departmental kind of problem domain. Ultimately, a large player is going to step into this game and get a large share of this marketplace pretty quickly, because the cost and ease of moving to that particular vendor is very low.

... These virtualization operating systems that are enterprise bound or even in a gray area with the cloud are going to come from somebody else besides Microsoft.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoint's ActiveVOS at www.activevos.com/insight.

Thursday, October 30, 2008

Microsoft's cloud push lacks economic expediency, sidesteps catalysts for ecology-wide adoption

Few topics among economists and business leaders engender the same enthusiasm as productivity. Doing more for less seems the universal balm for individuals, businesses and markets. If we all have growing productivity, well, then everything practically takes care of itself. You'll find few dissenters.

Ask 50 economists, however, how much IT specifically has contributed to the productivity surge since the 1970s, and you'll get 50 different answers. They know IT has been good for work efficiency, sure, but just how so and in what dollops? No clue.

Is it better, cheaper software? Is it Moore's Law of escalating power and value from micro-processors? Is it the connected hive of the local area network, or the social fabric of the Internet? The behavior shifts to "always on" data access, or knowledge sharing of ad hoc and geography-free collaboration sessions -- are they behind the productivity boom of the past (apparently closing) bull economic cycle?

Yes, of course, to all. But how and to what degree these complex causes and effects form the formula for overall productivity is as elusive as predicting the weather from butterfly wing beats. Somewhere in the fog of IT's storm into our consciousness, work habits, and business strategies lies the answer. The algorithm of IT's productivity influence over individual people and their societal actions has yet to be written. Too many ghosts. Too many machines.

Nonetheless, productivity -- or rather the expectation of whole new dimensions of productivity -- is what has been behind the hype and strengthening embrace of cloud computing concepts the past two years. In a sense, cloud computing is the unifying theory of IT productivity, which has been indisputably powerful if not complexly disjointed over the past 25 years.

Cloud computing takes many of the essential elements of IT-spurred productivity and serves them up in a whole greater than the sum of the parts. Improved utilization, higher efficiency, better packaging, tuned and refined software, exploitation of network effects, viral effects from social networking, less energy per electron server or packet delivered -- these are just a few of the foundations of cloud computing. What's different is that these variables are working much more harmoniously, with common planning and strategic architectural forethought.

A typical enterprise data center landscape is more a window into the past of IT than the future. The chilled realms of raised floors inefficiently demonstrate how design sprung from unanticipated but compelling paradigm shifts in computing stinks. The barely backwards-compatible data center of today eats up a larger chunk of money while delivering less actual improvement in productivity.

Innovation is held hostage by the need to keep the transaction processing monitor properly integrated to the middleware so the n-tier architecture can join data from the right hand to the left hand during a sales call using a mobile supercomputer generating pretty pictures that completely dim after 145 minutes.

We are all well aware of the price of rapid technological change as the progeny of helter-skelter IT adaptation and advancement over the past decades. It all probably could not have happened any differently, but it also does not need to continue like this.

Cloud computing entices and seduces because it is, after all, quite different. IT has matured, and the requirements of the workload are appreciated sufficiently to architect data centers holistically and effectively. Cloud computing unifies the most up-to-date architectural concepts around data center resources of, for, and by productivity. Total cost considerations and the careful association of all of the parts and elements -- working in concert -- these are the high-level requirements of an IT cloud. You can build it right for the workload and allow it to dynamically adjust and accept new workloads. It's more than just the next big thing. It's more than able to drag along all the old stuff, too.

When the entire support infrastructure is designed properly, with all the technical and productivity requirements aligned, then IT is transformed. Leverage standards, employ best practices, depend on efficiencies of scale -- and more than incremental change occurs. It does a lot more, more flexibly, for a lot less. Cloud offers a whole new level of productivity, directly attributable to advanced IT. Services can be assembled based on specific work needs, independent of the underlying platforms. Less waste, more haste, all around.

Why then is Microsoft tepid in its march to cloud? Why is it "software plus services," not just services? Why would such productivity improvements that clouds afford -- at a time when economic conditions demand rapid transformational advances -- be dribbled out as Microsoft has done this week at its Professional Developers Conference in Los Angeles? What exactly is Microsoft waiting for?

Most of us observers expected Microsoft to move to the cloud now, based on the success of Amazon Web Services and Google. But the apparent pace is to offer developers training wheels, little more. The pricing -- sort of important when the whole goal is about economics and productivity -- remains missing.

How can architects, CFOs, developers, ISVs, channel partners -- in essence the entire Windows global ecology of participants -- move one step forward without knowing the pricing, in terms of form, direction, and dollars and cents?

My cynical side says that Microsoft wants to accomplish two things with its Azure initiatives. One, to suck the oxygen out of the cloud market (get it, Azure ... no oxygen) and slow the pace of innovation and investment around clouds and cloud ecologies. And two, to make sure the "software plus services" transition comes more slowly than market demand might otherwise enjoy. Why swap out software (with a 60 percent margin) for services (with a 15 percent margin) faster rather than slower?

The answer, of course, is productivity. I have not been sure for many years whether Microsoft is focused on its users' productivity at anything other than a pace set by, well ... Microsoft. The cloud approach may be different from IT as usual over the past 20 years, but so far Microsoft's approach to productivity a la cloud seems about the same.

How might this all be different? How might the new, more productive chapter in IT -- of swift yet appropriate adoption of cloud supported IT resources -- get going faster?

Microsoft could spur the engine of adoption on cloud ecology use and advancement by providing stunningly compelling pricing for ISVs and enterprise developers to build and deploy their applications using Azure and platform as a service now. Microsoft would help the makers of applications succeed quickly by making the services easily available to the huge markets that Microsoft is arbiter of -- both business and consumer.

I can even see it if Microsoft is choosy and favors its tools, APIs, platforms, data formats, communications protocols, and existing partners. Make a Windows-only cloud, fine, but make it.

Apple, with its online store (which favors the Mac world in a big way) for developers of iPhone applications and services, has shown just how powerful this approach can be. Microsoft could become the best friend of every Visual Studio, PHP, and Eclipse developer and business by helping create the best applications for the least total cost, all as a service.

Microsoft could decide and declare what applications it will and won't provide as Azure services itself, allowing a huge market for others to build what's left and sell the services on a per-use basis to users (perhaps driving them to consume other Microsoft services). Deals could be made on application portability, but optimally the market should pick cloud winners based on value and reach. May the best cloud for both developers and users win -- it would be a huge win.

Redmond could help those applications that provide good value find an audience quickly. Maybe Microsoft could sell or share metadata about users' preferences and requirements so the applications are even more likely to succeed. That would include making pathways to the vast Web consumer markets, via MSN and all its Web services, available to those that build on the Azure platform. Maybe Yahoo finds its way into the mix. Microsoft could offer both advertising-subsidized and pay-per-use models, or combinations of the two, for media and entertainment companies, for example, to load their stuff up on the Azure cloud, or build their own Azure clouds. It might compete effectively against Google as a result.

To me, these only scratch the surface of the vast and rich ecology of partners and customers that would emerge from an accessible and productivity-priced Microsoft cloud. Done right, myriad specialized value-added business services and consumer services would spring up at multiple abstractions on top of the essential base services that the Microsoft cloud and APIs provide. It would be a very good business for Microsoft, but an even better business growth opportunity for all of the other players. The pie would grow, and productivity could soar. Users would get better apps and services at low and predictable cost.

There could be a vast and rich community that thrives in the Microsoft cloud's ether. Or there could be a dark Microsoft cloud of, for, and by Microsoft applications and services. Redmond could shoot for the moon again, but likely the other clouds will get in the way. Why risk the ecology play by trying to have it all Microsoft's way? That's why time is critical.

Microsoft, at least based on the tepid pace of the Azure roadmap as laid out this week, is more interested in hedging bets and protecting profits than in spurring on productivity and providing economic catalysts to rich new potential ecologies of online, services-driven businesses. Any continued delay to cloud is the giveaway of Microsoft's true intentions.

If Microsoft's own business interests prevent it from realizing the full potential of cloud computing, or lead it to try to damp down the cloud market generally, then Microsoft is a drag on the economy at a time when that is the last thing that's needed. And yet Microsoft could do cloud better than any other company on Earth.

Microsoft needs to decide whether it really wants to be in the software or the services business. Trying to have it both ways, for an indeterminate amount of precious time, in effect delaying the advancement of serious productivity, seems a terrible waste and a terrible way to affect its community.

The not trivial risk for Microsoft is that in five years it won't be leading in the software or services business any more.