Monday, July 18, 2011

WSO2 launches Stratos offerings as PaaS for open source cloud middleware

WSO2 today announced the debut of the WSO2 StratosLive platform as a service (PaaS) and the launch of WSO2 Stratos 1.5, the newest release of WSO2’s open-source cloud middleware platform software. Together, they provide comprehensive cloud middleware solutions for enabling service-oriented architecture (SOA) and composite application development and deployment in the cloud.

The Palo Alto, Calif., company offers a complete PaaS both as on-premise software and as a hosted service, running the same production-ready code wherever it best suits customers’ privacy, service-level agreement (SLA), and deployment requirements. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

StratosLive provides a complete enterprise deployment and integration platform, including application server, enterprise service bus (ESB), database, identity server, governance registry, business process manager, portal server and more.

Stratos provides the same capabilities to organizations that want the benefits of a PaaS running on their own premises. It builds on and extends WSO2's Carbon enterprise middleware platform by taking the Carbon code and adding cloud functionality for self-service provisioning, multi-tenancy, metering, and elastic scaling, among others.

With StratosLive and Stratos, central cloud features are built directly into the core platform.



All Carbon products, including the latest features from the recent Carbon 3.2 platform release, are available both as part of the Stratos cloud middleware platform and as cloud-hosted versions with instant provisioning on the StratosLive public PaaS. WSO2's approach enables developers to migrate their applications and services between on-premise servers, a private PaaS, a public PaaS, and hybrid cloud environments, providing deployment flexibility.

“The cloud is a compelling platform for enabling enterprises to combine the agility they’ve gained by employing SOAs and composite applications with an extended reach and greater cost efficiencies,” said Dr. Sanjiva Weerawarana, WSO2 founder and CEO. “At WSO2, we’re delivering on this promise by providing the only truly open and complete PaaS available today with our WSO2 StratosLive middleware PaaS and WSO2 Stratos 1.5 cloud middleware platform.”

Four new products

The launch of StratosLive and Stratos 1.5 adds four new cloud middleware products:
  • Data as a Service provides both SQL and NoSQL databases, based on MySQL and Apache Cassandra respectively. This allows users to self-provision a database in the cloud and to choose the right model for their applications.
  • Complex Event Processing as a Service is the full multi-tenant cloud version of CEP Server, which launched in June 2011 and supports multiple CEP engines, including Drools Fusion and Esper, to enable complex event processing and event stream analysis.
  • Message Broker as a Service is the full multi-tenant cloud version of Message Broker, which launched in June 2011 and supports message queuing and publish-subscribe to enable message-driven and event-driven solutions in the enterprise. It uses Apache Qpid as the core messaging engine to implement the Advanced Message Queuing Protocol (AMQP) standard.
  • Cloud Services Gateway (CSG), first launched as a separate single-tenant product, is now a fully multi-tenant product within Stratos and StratosLive.
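The two messaging patterns the Message Broker product supports, point-to-point queuing and publish-subscribe, can be sketched with a minimal in-memory broker. This is illustrative Python only, not the WSO2 or Apache Qpid API: a queued message is consumed exactly once, while a published message fans out to every subscriber.

```python
from collections import defaultdict, deque

class TinyBroker:
    """In-memory sketch of the two messaging patterns named above:
    point-to-point queues and publish-subscribe topics."""
    def __init__(self):
        self.queues = defaultdict(deque)   # one consumer takes each message
        self.topics = defaultdict(list)    # every subscriber gets a copy

    # --- queuing: one message, one consumer ---
    def send(self, queue, msg):
        self.queues[queue].append(msg)

    def receive(self, queue):
        q = self.queues[queue]
        return q.popleft() if q else None

    # --- publish-subscribe: one message, all subscribers ---
    def subscribe(self, topic, callback):
        self.topics[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.topics[topic]:
            cb(msg)

broker = TinyBroker()
broker.send("orders", {"id": 1})
assert broker.receive("orders") == {"id": 1}
assert broker.receive("orders") is None        # consumed exactly once

seen = []
broker.subscribe("price-updates", seen.append)
broker.subscribe("price-updates", seen.append)
broker.publish("price-updates", "update-1")
assert seen == ["update-1", "update-1"]        # delivered to every subscriber
```

In a real AMQP deployment the broker process, not the application, owns the queues and topic exchanges; the sketch only shows the delivery semantics an application can rely on.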
With StratosLive and Stratos, central cloud features are built directly into the core platform—including multi-tenancy, automatic metering and monitoring, auto-scaling, centralized governance and identity management, and single sign-on. The Cloud Manager in StratosLive and Stratos offers point-and-click simplicity for configuring and provisioning middleware services, so developers can get started immediately and focus on the business logic, rather than configuring and deploying software systems.

The cloud is a compelling platform for enabling enterprises to combine the agility they’ve gained by employing SOAs and composite applications with an extended reach and greater cost efficiencies.



Additionally, the Stratos cloud middleware platform features an integration layer that allows it to be installed onto any existing cloud infrastructure, such as Eucalyptus, Ubuntu Enterprise Cloud, Amazon Elastic Compute Cloud (EC2), and VMware ESX. Enterprises are never locked into a specific infrastructure provider or platform.

The availability of StratosLive and Stratos 1.5 also brings several new core platform enhancements.

WSO2 StratosLive and WSO2 Stratos 1.5 are available today. Released under the Apache License 2.0, they carry no licensing fees. Production support for Stratos starts at $24,000 per year. The StratosLive middleware PaaS is available at three paid subscription levels: SMB, Professional, and Enterprise, as well as a free demo subscription. For details on subscription pricing, visit the WSO2 website.


Friday, July 15, 2011

SaaS PPM helps Deloitte-Australia streamline top-level decision making

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast focuses on Deloitte-Australia and how its business has benefited from leveraging the software-as-a-service (SaaS) model for project and portfolio management (PPM) activities.

We spoke to Deloitte-Australia at a recent HP conference in Barcelona to explore some major enterprise software and solutions, trends and innovations making news across HP’s ecosystem of customers, partners, and developers.

To learn more about Deloitte’s innovative use of SaaS hosting for non-core applications, join Ferne King, a director within the Investment and Growth Forum at Deloitte-Australia in Melbourne. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
King: The SaaS model made sense to us, because we had a strategic direction in our firm that the intent for any non-core application was to run it as SaaS.

It’s the only solution that we found in the marketplace that would help us support and have visibility into the investments we were going to make for ourselves internally around the growth and the maintenance of what we did internally within our own firm.

Deloitte-Australia has approximately 5,000 practitioners. In 2010, our revenue was A$850 million. We provide professional services to public and private clients, and we are now globally the largest professional services firm. We utilize PPM internally within the firm, and that helps us to understand our portfolio and prioritization. The Deloitte-UK and Deloitte-America practices, in their consulting areas, use PPM to go to market and help manage and deliver investments with their client base.

Three benefits


The three benefits of PPM for us have primarily been around understanding our portfolio and linking it to our strategy. For example, our executive will have a series of business objectives they want to achieve in the Australian practice.

By utilizing PPM, we can understand what is going on within the firm that’s meeting those objectives and, more importantly for them, where the gaps are, and then they can take action on the gaps. That’s the number one priority. The number two priority is being able to communicate to our people within the practice the particulars of change.

For example, over the next quarter, what will our practitioners in the firm see as a result of this myriad of initiatives going on, whether it’s a SaaS HR system or a new product and service that they can take to market? Whatever change is coming, we can better communicate it to them within the organization.

Our third priority, where the PPM product helps with discipline, is delivery. In our project management methodology, it helps us improve our disciplines. We had a journey of 18 months of doing things manually, and then we brought in PPM to technology-enable what we were doing manually.

From a SaaS perspective, the benefit we’ve achieved is that we can focus our people on the right things. Instead of having our people focus on what hardware, what platform, what change request, or what design needs to happen, we can focus on what our to-be process and design should be. Then we basically hand that over the fence to the SaaS team, which then helps execute it.

We don’t have to stand in a queue within our own IT group and look for a change window. We can make changes every Wednesday, every Sunday, 12 months of the year, and that works for us.

A top priority

Just because [an application] is "non-core" doesn’t mean that it’s not a top priority. Our firm has approximately 2,500 applications within our Australian practice. PPM, at our executive level, is seen as one of our top 10 applications for helping our executive, our partners, and the senior groups of our firm register ideas to help our business grow and be maintained.

So it’s high value, but it’s not part of our core practice suite. It doesn’t bring in revenue and it doesn’t keep the lights on, but it helps us manage our business.

[This fits into] our roadmap of strategic enterprise portfolio management. In that journey, we're four years in, and two years into technology enablement. We undertook the journey four years ago to pursue strategic portfolio management, and we spent about 18 months to two years manually developing and understanding our methodology and the value of where we wanted to go.

In our second year, we technology-enabled that to help us execute more effectively, with speed to value and time to value, and now we are entering our third year of that maturity model.

[We have attained] fantastic results, particularly at the executive levels, and they are the ones who pay for us to create the time to work on this. Deloitte itself has undergone a transformation over the years. For anybody in the market who follows the professional services industry, Deloitte globally is 160,000 practitioners with over $26 billion of revenue in FY10. We're coming together and have been on a journey for some time to act as one.

So, if you're a client in the marketplace, you don’t have to think about what door you need to enter the Deloitte world. You enter one door and you get service from whatever service group you need.

PPM has enabled us to help the executive achieve their vision of firm-wide visibility of the enterprise investments we are making to improve our growth and support our maintenance.



If I take the example of three years ago, our tax group would only be interested in what’s happening in the tax group. Our consulting group would really only be interested in what’s happening in the consulting group.

Now that we are acting as one, the tax service line lead and the consulting service line lead would like visibility of what’s happening firm wide. PPM is now enabling us to do that.

What I would summarize there is that PPM has enabled us to help the executive achieve their vision of firm-wide visibility of the enterprise investments we are making to improve our growth and support our maintenance.

[How did we pick HP?] First of all, probably, 27 years of experience with project delivery, coming from an engineering and construction background, developing very detailed knowledge over the years about the one-on-one delivery components, and dealing with a lot of vendors over the years in the client marketplace.

So, we were well-versed in what we needed and well-versed in what was available out there in the marketplace. When we went to market looking for a partner and a vendor solution, we were very clear on what we wanted. HP was able to meet that.

I actually took my own role out of the scoring process. We helped put scripts together, scenarios for our vendors to come and demonstrate to us how we were going to meet our objectives. Then, we brought people from the business around the table with a scoring method, and HP won on that scoring method.

[I would recommend that those approaching this] understand the method or the approach that you want to use PPM for. You cannot bring in PPM and expect it to answer 80 percent of your issues. It can support and help direct the resolution of issues, but you need to understand how you expect it to do that.

An example would be: if you want to capture ideas from other business units or groups or the technology department on what they'd like to do to improve their application, improve product development, or improve any area of the business, understand the life-cycle of how you want that to be managed. Don’t expect PPM to have preset examples for you.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Thursday, July 14, 2011

How I became a REST 'convert'

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

Many of you know me as one half of the ZapThink team – an advisor, analyst, sometimes-trainer, and pundit who has been focused on XML, web services, service-oriented architecture (SOA), and now cloud computing over the past decade or so. Some of you may also know that immediately prior to starting ZapThink I was one of the original members of the UDDI Advisory Group back in 2000, when I was with ChannelWave, and I also sat on a number of standards bodies, including the RosettaNet, ebXML, and CPExchange initiatives. Furthermore, as part of the ZapThink team, I tracked the various WS-* standards from their inception to their current “mature” standing.

I’ve closely followed the ups and downs of the Web Services Interoperability (WS-I) organization and more than a few efforts to standardize such things as business process. Why do I mention all this? To let you know that I’m no slouch when it comes to understanding the full scope and depth of the web services family of standards. And yet, when push came to shove and I was tasked with implementing SOA as a developer, what did I choose? REST.

Representational State Transfer, commonly known as REST, is a style of distributed software architecture that offers an alternative to the commonly accepted XML-based web services as a means for system-to-system interaction. ZapThink has written numerous times about REST and its relationship to SOA and web services. Of course, the choice between REST and web services has nothing to do with service-oriented architecture itself, as we’ve discussed in numerous ZapFlashes in the past. The power of SOA is in loose coupling, composition, and how it enables approaches like cloud computing. It is for these reasons that I chose to adopt SOA for a project I’m currently working on. But when I needed to implement the services I had already determined were necessary, I faced a choice: use web services or REST-based styles as the means to interact with the services. For the reasons I outline below, REST was a clear winner for my particular use case.

Web services in theory and in practice

The main concepts behind web services were established in 1999 and 2000, during the height of the dot-com boom. SOAP, then known as the Simple Object Access Protocol and later just “SOAP,” is the standardized, XML-based method for interacting with a third-party service. Simple in concept, but in practice there are many ways to utilize SOAP. RPC style (we think not) or document style? How do you identify endpoints? And what about naming operations and methods? Clearly, SOAP on its own leaves too much to interpretation.

So, this is the role that the Web Services Description Language (WSDL) is supposed to fill. But writing and reading (and understanding) WSDL is a cumbersome affair. Data type matching can be a pain. Versioning is a bear. Minor server-side changes often result in different WSDL and a resulting different service interface, and on the client-side, XSD descriptions of the service are often similarly tied to a particular version of the SOAP endpoint and can break all too easily. And you still have all the problems associated with SOAP. In my attempts to simply get a service up and running, I found myself fighting more with SOAP and WSDL than doing actual work to get services built and systems communicating.
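To make the ceremony concrete, here is a minimal Python sketch of what even a trivial SOAP request body looks like, built with the standard library. The `GetQuote` operation and its namespace are hypothetical, invented for illustration rather than taken from any real WSDL:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stock"   # hypothetical service namespace

def build_envelope(symbol):
    """Wrap one trivial parameter in the Envelope/Body ceremony
    that every SOAP call requires."""
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}GetQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(env, encoding="unicode")

xml_payload = build_envelope("WSO2")

# Even the smallest request carries three levels of namespaced wrapping
# before the one piece of data it actually transmits:
parsed = ET.fromstring(xml_payload)
param = parsed.find(f"{{{SOAP_NS}}}Body/{{{SVC_NS}}}GetQuote/{{{SVC_NS}}}symbol")
assert param.text == "WSO2"
```

A real call would add an HTTP POST, a SOAPAction header, and a WSDL-derived client stub on top of this; the sketch shows only the envelope overhead the paragraph describes.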

Writing and reading (and understanding) WSDL is a cumbersome affair.

The third “leg” of the web services concept, Universal Description, Discovery and Integration (UDDI), conceptually makes a lot of sense, but in practice, hardly anyone uses it. As a developer, I couldn’t even think of a scenario where UDDI would help me in my particular project. Sure, I could artificially insert UDDI into my use case, but in the scenario where I needed loose coupling, I could get that by simply abstracting my end points and data schema. To the extent I needed run-time and design-time discoverability or visibility into services at various different states of versioning, I could make use of a registry / repository without having to involve UDDI at all. I think UDDI’s time has come and gone, and the market has proven its lack of necessity. Bye, bye UDDI.

As for the rest of the WS-* stack, these standards are far too undeveloped, under-implemented, under-standardized, inefficient, and obscure to deliver whatever value they might bring to the SOA equation, with a few select exceptions. I have found that the security-related specifications, specifically OAuth, the Service Provisioning Markup Language (SPML), the Security Assertion Markup Language (SAML), and the eXtensible Access Control Markup Language (XACML), are particularly useful, especially in a cloud environment. These specifications are not dependent on web services, and indeed, many of the largest web-based applications use OAuth and the other specs to make their REST-based environments more secure.

Why REST is ruling

I ended up using REST for a number of reasons, but the primary one is simplicity. As most advocates of REST will tell you, REST is simpler to use and understand than web services. Development with REST is easier and quicker than building WSDL files and getting SOAP to work, and this is the reason why many of the most-used web APIs are REST-based. You can easily test HTTP-based REST requests with a simple browser call. REST can also be more efficient as a protocol, since it doesn’t require a SOAP envelope for every call and can leverage JavaScript Object Notation (JSON) as a data representation format instead of the more verbose, more complex-to-process XML.
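A quick illustration of that efficiency point: the same two-field record (field names invented for the example) serialized as JSON and as a hand-written SOAP-style envelope. The XML here is a shortened stand-in for a real response, but the framing overhead it shows is representative:

```python
import json

record = {"symbol": "WSO2", "price": 4.2}

# The REST/JSON representation is just the data:
json_payload = json.dumps(record)

# The same two fields wrapped in a SOAP-style envelope (hand-written
# for comparison; a real response carries at least this much framing):
xml_payload = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><GetQuoteResponse>'
    '<symbol>WSO2</symbol><price>4.2</price>'
    '</GetQuoteResponse></soap:Body></soap:Envelope>'
)

assert len(json_payload) < len(xml_payload)      # JSON carries far less framing
assert json.loads(json_payload)["price"] == 4.2  # and parses back in one call
```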

But even more than the simplicity, I appreciated the elegance of the REST approach. The basic operation and scalability of the Web has proven the underlying premise of the fundamental REST approach. HTTP operations are standardized, widely accepted, well understood, and operate consistently. There’s no need for a REST version of the WS-I. There’s no need to communicate company-specific SOAP actions or methods – the basic GET, POST, PUT, and DELETE operations are standardized across all Service calls.
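The uniform interface described above can be sketched as a resource whose entire contract is the four standard verbs. The `orders` resource, its fields, and the status codes chosen are illustrative of the convention, not of any particular framework or API:

```python
# A resource collection whose entire contract is the four uniform verbs.
class OrdersResource:
    def __init__(self):
        self._store, self._next_id = {}, 1

    def handle(self, method, order_id=None, body=None):
        if method == "POST":                       # create
            self._store[self._next_id] = body
            self._next_id += 1
            return 201, self._next_id - 1
        if method == "GET":                        # read
            return (200, self._store[order_id]) if order_id in self._store else (404, None)
        if method == "PUT":                        # replace
            self._store[order_id] = body
            return 200, body
        if method == "DELETE":                     # remove
            return (204, self._store.pop(order_id, None))
        return 405, None                           # anything else is not allowed

orders = OrdersResource()
status, oid = orders.handle("POST", body={"item": "ESB license"})
assert status == 201
assert orders.handle("GET", oid) == (200, {"item": "ESB license"})
assert orders.handle("DELETE", oid)[0] == 204
assert orders.handle("GET", oid) == (404, None)
assert orders.handle("PATCH")[0] == 405
```

Because the verbs are fixed, a client needs no per-service operation list, which is exactly the contrast with company-specific SOAP actions drawn above.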

As most advocates of REST will tell you, REST is simpler to use and understand than web services.

Even more appealing is the fact that the vendors have not polluted REST with their own interests. The primary driver of web services adoption has been the vendors. Say what you might about the standards’ applicability outside a vendor environment, one would be very hard-pressed to utilize web services in any robust way without first choosing a vendor platform. And once you’ve chosen that platform, you’ve pretty much committed to a specific web services implementation approach, forcing third parties and others to comply with the quirks of your particular platform.

Not so with REST. Not only does the simplicity and purity of the approach eschew vendor meddling, it actually negates much of the value that vendor offerings provide. Indeed, it’s much easier (and not to mention lower cost) to utilize open source offerings in REST-based SOA approaches than more expensive and cumbersome vendor offerings. Furthermore, you can leverage existing technologies that have already proven themselves in high-scale, high-performance environments.

Focus on architecture, not on HTTP

So, how did I meld the fundamental tenets of SOA with a REST-based implementation approach? In our Web-Oriented SOA ZapFlash, we recommended using the following approach to RESTafarian styles of SOA:

  • Make sure your services are properly abstracted, loosely coupled, composable, and contracted
  • Every web-oriented service should have an unambiguous and unique URI to locate the service on the network
  • Use the URI as a means to locate as well as taxonomically define the service in relation to other services
  • Use well-established actions (such as POST, GET, PUT, and DELETE for HTTP) for interacting with services
  • Lessen the dependence on proprietary middleware to coordinate service interaction and shift to common web infrastructure to handle SOA infrastructure needs
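The second and third bullets, one unambiguous URI per service, with the path doubling as a taxonomy, can be sketched as follows. The service names and paths are invented for illustration, not taken from the ZapFlash:

```python
# Each service gets one unambiguous URI; the path also places it in a
# taxonomy, so related services share a prefix.
SERVICES = {
    "/finance/invoicing/submit": "submit-invoice handler",
    "/finance/invoicing/status": "invoice-status handler",
    "/finance/payroll/run":      "payroll handler",
    "/hr/onboarding/start":      "onboarding handler",
}

def locate(uri):
    """Resolve a URI to exactly one service, or None if unknown."""
    return SERVICES.get(uri)

def family(prefix):
    """The taxonomy falls out of the path: list every service under a branch."""
    return sorted(u for u in SERVICES if u.startswith(prefix + "/"))

assert locate("/finance/payroll/run") == "payroll handler"
assert locate("/finance/payroll/stop") is None
assert family("/finance/invoicing") == [
    "/finance/invoicing/status",
    "/finance/invoicing/submit",
]
```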

Much of the criticism of REST comes not from the interaction approach, but rather from the use of HTTP.

Much of the criticism of REST comes not from the interaction approach, but rather from the use of HTTP. Roy Fielding, the progenitor of REST, states in his dissertation that REST was initially described in the context of HTTP but is not limited to that protocol. He states that REST is an architectural style, not an implementation, and that the web and the HTTP protocol happen to be designed in that style. I chose to implement REST using the eXtensible Messaging and Presence Protocol (XMPP) as a way of doing distributed, asynchronous, REST-based Service interaction. XMPP, also known as the Jabber protocol, has already proven itself as a widely used, highly scalable protocol for secure, distributed, near-real-time messaging. XMPP-based software is deployed widely across the Internet and forms the basis of many high-scale messaging systems, including those used by Facebook and Google.

Am I bending the rules or the intent of REST by using XMPP instead of HTTP? Perhaps. If HTTP suits you, then you have a wide array of options to choose from in optimizing your implementation. Stefan Tilkov does a good job of describing how best to apply HTTP for REST use. And if HTTP doesn’t meet your needs, you don’t have to choose XMPP either: there are a number of other open-source transports for REST-style interaction, including RabbitMQ (based on the AMQP standard), ZeroMQ, and Redis.
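A toy sketch makes the transport-independence argument concrete: a REST-style request reduces to a (method, URI, body) tuple, and nothing in that tuple mentions HTTP, so any transport that can carry it will do. Here an in-memory queue stands in for a broker such as XMPP or RabbitMQ; the resource and handler are invented for illustration:

```python
import queue

# The REST-ish contract: a request is just (method, uri, body).
requests, responses = queue.Queue(), queue.Queue()

RESOURCES = {"/status": "all systems go"}   # a hypothetical resource

def server_step():
    """One dequeue-dispatch-reply cycle over the non-HTTP transport."""
    method, uri, body = requests.get()
    if method == "GET":
        responses.put((200, RESOURCES.get(uri)))
    elif method == "PUT":
        RESOURCES[uri] = body
        responses.put((200, body))
    else:
        responses.put((405, None))

requests.put(("GET", "/status", None))
server_step()
assert responses.get() == (200, "all systems go")

requests.put(("PUT", "/status", "maintenance window"))
server_step()
assert responses.get() == (200, "maintenance window")
```

The uniform verbs and addressable resources survive the transport swap unchanged, which is the point Fielding's style-versus-implementation distinction makes.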

The ZapThink take

The title of this ZapFlash is a bit of a misnomer. In order to be a convert to something you first need to be indoctrinated into another religion, and I don’t believe that REST or web services is something upon which to take a religious stance. That being said, for the past decade or so, dogmatic vendors, developers, and enterprise architects have reinforced the notion that to do SOA properly, you must use web services.

ZapThink never believed that this was the case, and my own experience now shows that SOA can be done well in practice without using web services in any significant manner. Indeed, my experience shows that it is actually easier, less costly, and potentially more scalable not to use web services unless there’s an otherwise compelling reason.

The conversation about SOA is a conversation about architecture – everything that we’ve talked about over the past decade applies just as equally when the Services are implemented using REST or Web Services on top of any protocol, infrastructure, or data schema. While good enterprise architects do their work at the architecture level of abstraction, the implementation details are left to those who are most concerned with putting the principles of SOA into practice.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

SPECIAL PARTNER OFFER


SOA and EA Training, Certification,

and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.


Monday, July 11, 2011

Enterprise architects increasingly leverage advanced TOGAF 9 for innovation, market response, and governance benefits

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Join The Open Group in Austin, Texas July 18-22 to learn more about enterprise architecture, cloud computing, and TOGAF 9. To register, go to http://www.opengroup.org/austin2011/register.htm.

Join a podcast discussion in conjunction with the latest Open Group Conference in Austin, Texas, to examine the maturing use of The Open Group Architecture Framework (TOGAF), and how enterprise architects and business leaders are advancing and exploiting the latest Version 9.

The panel explores how the full embrace of TOGAF, its principles, and methodologies are benefiting companies in their pursuit of improved innovation, responsiveness to markets, and operational governance.

Is enterprise architecture (EA) joining other business transformation agents as a part of a larger and extended strategic value? How? And what exactly are the best practitioners of TOGAF getting for their efforts in terms of business achievements?

Here to answer such questions, and delve into advanced use and expanded benefits of EA frameworks, is Chris Forde, Vice President of Enterprise Architecture and Membership Capabilities for The Open Group, who is based in Shanghai, and Jason Uppal, Chief Architect at QR Systems, based in Toronto. The panel is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Uppal: This is a time for the enterprise architects to really step up to the plate and be accountable for real performance influence on the organization’s bottom line.

If we can improve things like exploiting our assets better than we do today, improve our planning program, and have very measurable and unambiguous performance indicators that we're committing to, this is a huge step forward for enterprise architects, moving away from technology and frameworks to real problems that resonate with executives and align business and IT.

An example where EA has a huge impact in many of the organizations is ... we're able to capture the innovation that exists in the organization -- and make that innovation real, as opposed to just suggestions that are thrown in a box, and nobody ever sees.

Say you define an end-to-end process using the Architecture Development Method (ADM) in TOGAF. This gives me a way to capture that innovation at the lowest level and then evolve it over time.

Those people who are part of the innovation at the beginning see their innovation or idea progressing through the organization, as the innovation gets aligned to value statements, and value statements get aligned to their capabilities, and to the strategies and the projects.

Therefore, if I make a suggestion of some sort, that innovation or idea is seen throughout the organization through methods like the ADM, and the linkage is explicit and very visible to people. As a result, they feel comfortable that their ideas are going somewhere and not just getting stuck.

So one of the things with a framework like a TOGAF is that, on the outside, it’s a framework. But at the same time, when you apply this along with the other disciplines, it's making a big difference in the organization, because it's allowing the IT organizations to ... actually exploit the current assets that they already have.

And [TOGAF helps] make sure the new assets that they do bring into the organization are aligned to the business needs.

Forde: In the end, what you want to be seeing out of your architectural program is moving the key performance indicators (KPIs) for the business, the business levers. If that is related to cost reduction or is related to top-line numbers or whatever, that explicit linkage through to the business levers in an architecture program is critical.

Going back to the framework reference, what we have with TOGAF 9 is a number of assets, but primarily it’s a tool that’s available to be customized, and it's expected to be customized.

You can start at the top and work your way down through the framework, from this kind of über value proposition, right down through delivery to the departmental level or whatever. Or, you can come into the bottom, in the infrastructure layer, in IT for example, and work your way up. Or, you can come in at the middle. The question is what is impeding your company’s growth or your department’s growth, if those are the issues that are facing you.

If you come to the toolset with a problem, you need to focus the framework on the area that's going to help you get rapid value to solving your particular problem set. So once you get into that particular space, then you can look at migrating out from that entry point, if that's the approach, to expanding your use of the framework, the methods, the capabilities, that are implicit and explicit in the framework to address other areas.

One of the reasons that this framework is so useful in so many different dimensions is that it is a framework. It’s designed to be customized, and is applicable to many different problems.

Uppal: When we think about advanced TOGAF use ..., it allows us to focus on the current assets deployed in the organization. How do you get the most out of them? An advanced user can figure out how to standardize and scale those assets so that they become reusable in the organization.

As we move up the food chain from a very technology-centric view to a more optimized and transformed one, advanced users recognize that, with a framework like TOGAF, they have all these tools in their back pocket.

Now, depending on the stakeholder that they're working with, be that a CEO, a CFO, or a junior manager in the line of business, they can actually focus them on defining a specific capability that they are working toward and create transitional roadmaps. Once those transitional roadmaps are established, then they can drive that through.

An advanced user in the organization is somebody who has all these tools and frameworks available to them but, at the same time, is very focused on a specific value-delivery point within their scope.

It moves the conversation away from this framework debate and very quickly moves our conversation into what we do with it.



One beauty of TOGAF is that, because we get to define what the enterprise is and we are not told that we have to interview the CEO on day one, I can define an enterprise from a manager’s point of view or a CFO’s point of view and work within that framework. That, to me, is an advanced user.

... I use methods like TOGAF to define the capabilities in a business strategy that [leaders] are trying to optimize, where they are, and what they want to transition to.

Very creative

This is where a framework allows me to be very creative, defining the capabilities and the transition points, and giving a roadmap to get to those transitions. That is the cleverness and cuteness of architecture work, and where the real skill of an architect comes in: not in defining the framework, but in applying the framework to a specific business strategy.

... Because what we do in the business space -- and we have done it many times with the framework -- is look at the value chain of the organization and then map it to the capabilities required.

Once we know those capabilities, then I can squarely put that question to the executives and say, "Tell me which capability you want to be the best at. Tell me which capability you want to lead the market in. And tell me which capability you're willing to be mediocre at, just below the industry benchmark."

Once I get an understanding of which capability I want to be the best at, that's where I want to focus my energy. For the ones where I am prepared to live with being mediocre, I can put another strategy into place, ask how to outsource those things, and focus the outsourcing deal on cost and service.

This is opposed to having a very confused contract with the outsourcer, where one day I'm outsourcing for cost reasons and the next for growth reasons. It becomes very difficult for an organization to manage those contracts and for the outsourcer to provide the right support.

That conversation, at the beginning, is getting executives to commit to which capability they want to be best at. That is a good conversation for an enterprise architect.

My personal experience has been that if I get a call back from the executive, and they say they want to be best at every one of them, then I say, "Well, you really don’t have a clue what you are talking about. You can’t be super fast and super good at every single thing that you do."

One of the things that we've been looking at [at next week's conference] from the industry's point of view is that the conversation around frameworks is a done deal now, because everybody has accepted that we have good-enough frameworks. We're moving to the next phase: what we do with these frameworks.

Continuous planning

In Austin we'll be looking at how we're using the TOGAF framework to improve ongoing annual business and IT planning. We have a specific example that we are going to bring out, where we looked at an organization that was doing once-a-year planning. That was not very effective for the organization. They wanted to change to continuous planning, which means planning that happens throughout the year.

We identified four or five very specific, measurable goals that the program had, such as the accuracy of the plan, the business goals achieved by the plan, the time and cost to manage and govern the plan, and stakeholder satisfaction. Those are the areas where we are defining how a TOGAF-like framework can be applied to solve a specific problem like enterprise planning and governance.

That's something we will be bringing to our conference in Austin, at an event held on the Sunday. In the future, we'll be doing a lot more of these specific applications of a framework like TOGAF to unique sets of problems that are very tangible and quickly resonate with executives, not just in IT, but across the entire organization.

Join The Open Group in Austin, Texas July 18-22 to learn more about enterprise architecture, cloud computing, and TOGAF 9. To register, go to http://www.opengroup.org/austin2011/register.htm.

In our future conferences, we're going to be addressing that and saying what people are specifically doing with these frameworks, not to debate the framework itself, but the application of it.

Forde: Jason is going to be talking as a senior architect at the conference on the applied side of TOGAF on Sunday [July 17]. For the Monday plenary, this is basically the rundown. We have David Baker, a Principal from PricewaterhouseCoopers, talking about business-driven architecture for strategic transformations.

This is a time now for the enterprise architects to really step up to the plate and be accountable for real performance influence on the organization’s bottom line.



Following that, Tim Barnes, the Chief Architect at Devon Energy out of Canada, will cover what they are doing from an EA perspective in their organization.

Then, we're going to wrap up the morning with Mike Wolf, the Principal Architect for EA Strategy and Architecture at Microsoft, talking about moving from IT architecture to enterprise architecture.

This is a very powerful lineup of people addressing this business focus in EA and the application of it for strategic transformations, which I think are issues that many, many organizations are struggling with.

Capability-based planning

Uppal: The whole of our capability-based planning conversation was introduced in TOGAF 9, and we have more legs to go in developing that concept further, as we learn how best to do some of these things.

When I look at capability-based planning, I expect my executives to look at it and ask what the opportunities and threats are. What is it that you can get out there in the industry if you have this capability in your back pocket? Don't worry about how we are going to get it first; let's decide that it's worth getting.

Then, we focus the organization on the long haul and ask: if we don't have this capability and nobody in the industry has it, what would it do for us if we did? It provides us another view, a long-term view, of the organization. How are we going to focus our attention on the capabilities?

One of the beauties of doing EA is that when we start EA from the starting point of a strategic intent, that gives us a good 10-15 year view of what our business is going to be like. When we start architecture at the business strategy level, that gives us a six-month to five-year view.

Enterprise architects are very effective at having two views of the world -- a 5-, 10-, or 15-year view of the world, and a 6-month to 3-year view of the world. If we don’t focus on the strategic intent, we'll never know what is possible, and we would always be working on what is possible within our organization, as opposed to thinking of what is possible in the industry as a whole.

Forde: In the kinds of environment that most organizations are operating in -- government, for-profit, not-for-profit organizations -- everybody is trying to understand what it is they need to be good at and what it is their partners are very good at that they can leverage. Their choices around this are of course critical.

One of the things that you need to consider is that if you are going to give X out and give a third party the power to manage and operate it, whatever process it might be, what do you have to be good at in order to make that arrangement effective? One of the things you need to be good at is managing third parties.

One of the advanced uses of EA is applying the architecture to those management processes. As things mature, you can potentially see an effective organization managing a number of partners through an architected approach. So when we talked about what advanced users do, what I am offering is that an advanced use of EA is its application to third-party management.

Framework necessity

You need a framework. Think about what most major Fortune 500 companies in the United States do. They have multiple IT partners for application development and potentially for operations. They split the network out. They split the desktop out. This creates an amazing degree of complexity around multiple contracts. If you have an integrator, that's great, but how do you manage the integrator?

There’s a whole slew of complex problems. What we've learned over the years is that we tend to think of the original idea of “outsourcing,” or whatever term is used, in the abstract, as one activity, when in fact it might involve anywhere from 5-25 partners. Coordinating that complexity is a major issue for organizations, and taking an architected approach to that problem is an advanced use of EA.

Uppal: Chris is right. For example, there are two capabilities that an organization we worked with decided on ... that they wanted to be very, very good at.

We worked with a large concrete manufacturing company. If you're a concrete manufacturing company, your biggest cost is the cement. If you can exploit your capability to optimize the cement, substituting chemical products while getting the same performance, you can actually get a lot more return and higher margins for the same concrete.

In this organization, the concrete manufacturing process itself was a core competency. That had to be kept in-house. The infrastructure was essential to making the concrete, but it wasn't the core competency of the organization, so those things had to be outsourced.

In this organization, we had to build a process for managing the outsourcers and, at the same time, a capability and a process for becoming the best concrete manufacturer. Those two essential capabilities were identified.

An EA framework like TOGAF actually allows you to build both of those capabilities, because it doesn't care. It just says, okay, I have a capability to build, and I am going to give you a set of instructions for how you do it. The next thing is the cleverness of the architect -- how he uses his tools to actually define the best possible solutions.

Very explicit model

Our governance model is very explicit about who does what and when and how you monitor it. We extended this conversation using TOGAF 9 many times. At the end, when the capability is deployed, the initial value statement that was created in the business architecture is given back to the executive who asked for that capability.

We say, "This is what the benefits of these capabilities are, and you signed off on them at the beginning. Now you're going to find out whether you got the capability. We are going to pass this into next year's strategic planning, because as the starting point for next year's planning, this is going to be your baseline." So the governance isn't just monitoring; it's also verifying that we actually got the business results we anticipated.

... The whole cloud conversation becomes a very effective conversation within the IT organization.

When we think about cloud, we have actually done cloud before. This is not a new thing, except that before, we looked at it from a hosting point of view and a SaaS point of view. Now, cloud is going much further, where an entire capability is provided to you. That capability is not just infrastructure that somebody else runs; the entire industry's knowledge is embedded in that capability.

This is becoming a very popular thing, and rightfully so, not because it's a sexy thing to have. In healthcare, especially in countries where healthcare is socialized and not monopolized, they are sharing this knowledge in the cloud with all the hospitals. It's becoming a very productive thing, and enterprise architects are driving it, because we're thinking of capabilities, not components.

IT interaction

Forde: Under normal circumstances the IT organizations are very good at interacting with other technology areas of the business. From what I've seen with the organizations I have dealt with, typically they see slices of business processes, rather than the end-to-end process entirely.

Even within the IT organizations typically, because of the size of many organizations, you have some sort of division of responsibilities. As far as Jason’s emphasis on capabilities and business processes, of course the capabilities and processes transcend functional areas in an organization.

To the extent that a business unit or a business area has a process owner end to end, they may well be better positioned to manage the BPM outsourcing-type of things. If there's a heavy technology orientation around the process outsourcing, then you will see the IT organization being involved to one extent or another.

The real question is, where is the most effective knowledge, skill, and experience around managing these outsourcing capabilities? It may be in the IT organization or it may be in the business unit, but you have to assess where that is.

That's one of the functions of an architected approach. You need to assess what's going to make you successful in this. If what you need happens to be in the IT organization, then go with that ability. If it is more effective in the business unit, then go with that. And perhaps the answer is that you need to combine or create a new functional organization for the specific purpose of managing that activity and outsourcing need.

For most, if not all, companies, information and data are critical to their operation and planning activities on a day-to-day, month-to-month, and annual basis, and over longer time spans. So the information needs of a company are absolutely critical in any architected approach to solutions or value-add activities.

I don’t think I would accept the assumption that the IT department is best-placed to understand what those information needs are. The IT organization may be well-placed to provide input into what technologies could be applied to those problems, but if the information needs are normally being applied to business problems, as opposed to technology problems, I would suggest that it is probably the business units that are best-placed to decide what their information needs are and how best to apply them.

The technologist’s role, at least in the model I'm suggesting, is to be supportive in that and deliver the right technology, at the right time, for the right purpose.

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Wednesday, July 6, 2011

Case Study: T-Mobile's massive data center transformation journey wins award using HP ALM tools

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference, June 8 in Las Vegas. We explored some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on an award-winning applications migration and transformation -- and a grand-scale data center transition, too -- for T-Mobile. I was really impressed with the scope and size of this award-winning project set, and with how short its timeline was.

We're here with two IT executives to learn more about what T-Mobile has done to set up two data centers and how, in the process, they improved their application quality and processes using advanced application lifecycle management (ALM): Michael Cooper, Senior Director of Enterprise IT Quality Assurance at T-Mobile, and Kirthy Chennaian, Director of Enterprise IT Quality Management at T-Mobile. The interview was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: People don’t just do these sorts of massive, hundred million dollar-plus activities because it's nice to have.

Cooper: Absolutely. There are some definite business drivers behind setting up a world-class, green data center and then a separate disaster-recovery data center.

Gardner: Why did you decide to undertake both an application transformation as well as a data center transformation -- almost simultaneously?

Chennaian: Given the scope and complexity of the initiative, ensuring system availability was the major driver behind this. Quality assurance (QA) played a significant role in ensuring that both data centers were migrated simultaneously and that the applications were available in real-time, and from a QA and testing standpoint we had to meet our time-frames and deadlines.

Gardner: Let's get a sense of the scope. Tell me about T-Mobile and its stature nowadays.

Cooper: T-Mobile is a national provider of voice, data, and messaging services. Right now, we're the fourth largest carrier in the US and have about 33 million customers and $21 billion in revenue, actually a little bit more than that. So, it's a significant company.

We're a company that’s really focused on our customers, and we've gone through an IT modernization. The data center efforts were a big part of that IT modernization, in addition to modernizing our application platform.

Gardner: Let's also talk about the scope of your movement to a new data center.

Chennaian: There are two world-class data centers, one in Wenatchee, Washington, and the other in Tempe, Arizona. The primary data center is in Wenatchee, and the failover disaster-recovery data center is in Tempe.

Cooper: We were migrating more than 175 Tier 0 and Tier 1 applications, and some Tier 2 as well. It was a significant effort requiring quite a bit of planning, and the HP tools played a big part in that, especially in the QA realm.

Gardner: Now, were these customer-facing apps, internal apps, logistics? Are we talking about retail? Give me a sense of the breadth and depth of your apps.

Chennaian: Significant. We're talking critical applications that are customer-facing. We're talking enterprise applications that span across the entire organization. And, we're also talking about applications that support these critical front-end applications. So, as Michael pointed out, 175 applications needed to be migrated across both of the data centers.

For example, moving T-Mobile.com, which is a customer-facing critical application, ensuring that it was transitioned seamlessly and was available to the customer in real-time was probably one of the key examples of the criticality behind ensuring QA for this effort.

Gardner: IT is critical for almost all companies nowadays, but I can't imagine a company where technology is more essential and critical than T-Mobile, as a data and services carrier.

What's the case with the customer response? Do you have any business metrics, now that you’ve gone through this, that demonstrate not just that you're able to get better efficiency and your employees are getting better response times from their apps and data, but is there like a tangible business benefit, Michael?

Near-perfect availability

Cooper: I can't give you the exact specifics, but we've had significant increases in our system uptime and near-perfect availability in most areas. That’s been the biggest thing.

Kirthy mentioned T-Mobile.com. That’s an example where, instead of the primary and the backup, we actually have an active-active situation in the data center. So, if one goes down the other one is there, and this is significant.

A significant part of the way we used HP tools in this process was not only the functional testing with Quick Test Professional and Quality Center; we also did the performance testing with Performance Center and found some very significant issues that would otherwise have made it into production.

This was a unique situation, because we actually got to do the performance testing live in the performance environments. We had to scale up to real production-type loads and found some real issues that customers never had to face.

The other thing that we did that was unique was high-availability testing. We tested each server to make sure that if one went down, the other ones were stable and could support our customers.

Gardner: This was literally changing the wings on the airplane when it was still flying. Tell me why doing it all at once was a good thing.

Chennaian: It was the fact that we were able to leverage the additional functionality that the HP suite of products provide. We were able to deliver application availability, ensure a time-frame for the migration and leverage the ability to use automation tools that HP provides. With Quick Test Professional, for example, we migrated from version 9.5 to 10.0, and we were able to leverage the functionality with business process testing from a Quality Center standpoint.

As a whole, from an application lifecycle management and from an enterprise-wide QA and testing standpoint, it allowed us to ensure system availability and QA on a timely basis. So, it made sense to upgrade as we were undergoing this transformation.

Cooper: Good point, Kirthy. In addition to upgrading our tools and so forth, we also upgraded many of the servers to some of the latest Itanium technology. We also implemented a lot of the state-of-the-art virtualization services offered by HP, and some of the other partners as well.

Streamlined process

Using HP tools, we were able to create a regression test set for each of our Tier 1 applications in a standard way and a performance test for each one of the applications. So, we were able to streamline our whole QA process as a side-benefit of the data migration, building out these state-of-the-art data centers, and IT modernization.

Gardner: So, this really affected operations. You changed some platforms, you adopted the higher levels of virtualization, you're injecting quality into your apps, and you're moving them into an entirely new facility. That's very impressive, but it's not just me being impressed. You've won a People's Choice Award, voted by peers of the HP software community and their Customer Advisory Board. That must have felt pretty good.

Cooper: It feels excellent. In 2009, we won the IT Transformation Award. So, this isn't our first time to the party. That was for a different project. I think that in the community people know who we are and what we're capable of. It's really an honor that the people who are our peers, who read over the different submissions, decided that we were the ones that were at the top.

We've won lots of awards, but that's not what we do it for. The reason why we do the awards is for the team. It's a big morale builder for the team. Everybody is working hard. Some of these project people work night and day to get them done, and the proof of the pudding is the recognition by the industry.

Honestly, we also couldn't have done it without great executive support. Our CIO has a high belief in quality and really supports us in doing this. It's nice that we've got the industry recognition as well.

Gardner: Of course, the proof of the pudding is in the eating. You've got some metrics here, and they were pretty impressive in terms of availability, cost savings, reduction in execution time, performance and stability improvements, and higher systems availability.

Cooper: The metrics I can speak to are from the QA perspective. We were able to do the testing and we never missed one of the testing deadlines. We cut our testing time using HP tools by about 50 percent through automation, and we can pretty accurately measure that. We probably have about 30 percent savings in the testing, but the best part of it is the availability. But, because of the sensitive nature and competitive marketplace, we're not going to talk exactly about what our availability is.

Gardner: And how about your particular point of pride on this one, Kirthy?

Chennaian: For one, being recognized is an acknowledgment of all the work you do, and of your organization as a whole. Mike rightly pointed out that it boosts the morale of the organization. It also enables you to perform at a higher level. So, it's definitely a significant acknowledgment, and I'm very excited that we actually won the People's Choice Award.

Gardner: A number of other organizations across a range of industries are going to face the same kind of situation, where it's not just going to be a slow, iterative improvement process. They're going to have to go catalytic and make wholesale changes in the data center, looking for that efficiency benefit.

You've done that. You've improved on your QA and applications lifecycle benefits at the same time. With that 20-20 hindsight, what would you have done differently?

Planning and strategy

Chennaian: If I were to do this again, I think there is definitely a significant opportunity with respect to planning and investing in the overall strategy of QA and testing for such a significant transformation. There has to be a standard methodology. You have to have the right toolsets in place. You have to plan for the entire transformation as a whole. Those are significant elements in successful transformation.

Cooper: We did a lot of things right. One of the things that we did right was to augment our team. We didn’t try to do the ongoing work with the exact same team. We brought in some extra specialists to work with us or to back-fill in some places. Other groups didn’t and paid the price, but that part worked out for us.

Also, it helped to have a seat at the table and say, "It's great to do a technology upgrade, but unless we really have the customer point of view and focus on the quality, you're not going to have success."

We were lucky enough to have that executive support and the seat at the table, to really have the go/no-go decisions. I don't think we really missed one in terms of ones that we said, "We shouldn't do it this time. Let's do it next time." Or, ones where we said, "Let's go." I can't remember even one application we had to roll back. Overall, it was very good. The other thing is, work with the right tools and the right partners.

Gardner: With data center transformation, after all, it's all about the apps. You were able to maintain that focus; you didn't lose sight of the apps?

Cooper: Definitely. The applications do a couple of things. One group supports the customers directly. Those have to have really high availability, and we were able to speed them up quite a bit with the newest and latest hardware.

The other group is the apps that people don't think about as much -- the ones that support the front lines: retail, customer care, and so forth. I would say that our business customers, or internal customers, have also really benefited from this project.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.
