Wednesday, March 16, 2011

Mobile enablement presents challenges and opportunities as enterprises retool apps now for the future

This guest post comes courtesy of Stefan Andreasen, Founder/CTO, Kapow Software.

By Stefan Andreasen

Mobile adoption rates are on the rise, and if market reports are any indication, growth isn’t slowing down anytime soon. Consumers and employees alike are the driving forces behind mobile adoption, spurred by the evolution of mobile device capabilities and the speed of mobile networks.

A recent Morgan Stanley research study predicts that sales of smartphones will overtake PC sales (including both desktops and notebooks) in the next two years, supporting the demands of our always-connected society. [Disclosure: Kapow Software is a sponsor of BriefingsDirect podcasts.]

The ubiquity of smartphones and the more than 300,000 mobile apps available on Apple’s App Store, coupled with the ease and convenience of mobile computing, are putting pressure on IT to mobile-enable B2C and B2E applications to facilitate organizational efficiency and to keep up with consumer and employee demand for mobile access to applications and content.

It’s no surprise that millions of employees around the world are bringing their smartphones and mobile devices to work, resetting workplace expectations to have always-on access to the instantly available business apps that they’ve grown accustomed to from their personal lives.

According to a survey conducted by the Yankee Group, 90 percent of organizations surveyed have already enabled smartphone access to corporate email and personal information management (PIM). Yet when it comes to enabling mobile access to mission-critical enterprise apps, companies have made far less progress: only 30 percent of those surveyed provide smartphone access to customer relationship management (CRM), 20 percent to enterprise resource planning (ERP), and 18 percent to sales force automation (SFA).

CIOs scrambling

IT leaders and industry analysts are noticing CIOs scrambling to mobile-enable legacy applications to make them available on smartphones, tablets, and even GPS/navigation devices. And IT departments are feeling growing pressure to get this done in a matter of months -- not only to stay ahead of the competition but, in many cases, just to keep up.

One of the main challenges companies need to overcome when enabling mobile device access to existing data or legacy applications is the lack of “mobile ready” web service application programming interfaces (APIs) for existing applications.

Adding a service-level interface to a legacy application is a complex development project that typically involves a full or extensive rewrite of the existing application. A common problem is that, over the years, an application has been written and modified by multiple developers who have since left the company, taking their institutional knowledge with them. This situation has led many companies to essentially rewrite the application, which can take several years of coding and enormous resources and budget.

It’s essential that organizations evaluate these important factors when embarking on a mobile enablement project:
  • Do the applications you want to mobile-enable have documented APIs?
  • What components and features of your business application do you want to mobile-enable?
  • How are you taking into account form factor?
  • How will you deal with business logic and processes too complicated to be executed on a mobile device with a limited keyboard, where air time needs to be controlled, and server round trips need to be minimized?
  • How will you deal with service interruptions requiring the ability to queue processes for later execution on the back end? (A sketch of this pattern follows the list.)
  • Will you be combining data from multiple apps into one mobile application?
  • What mobile platforms do you need to support?
  • To what extent will you want to modify or extend your mobile application in the near future?
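The queuing question in particular maps to a classic store-and-forward pattern. As a minimal illustrative sketch -- not a description of any product mentioned in this post, with every name invented -- a mobile client might capture operations locally and replay them once the back end is reachable again:

    import java.util.ArrayDeque;
    import java.util.Deque;

    /**
     * Store-and-forward queue: operations captured on the device are kept
     * locally and replayed against the back end when connectivity returns.
     */
    public class OfflineOperationQueue {

        /** A unit of work captured while offline, e.g. "update order 123". */
        public interface Operation {
            /** @return true if the back-end call succeeded. */
            boolean execute();
        }

        private final Deque<Operation> pending = new ArrayDeque<>();

        /** Try the operation immediately; queue it if the back end is down. */
        public synchronized void submit(Operation op) {
            if (!op.execute()) {
                pending.addLast(op);   // preserve original order for replay
            }
        }

        /** Replay queued operations, stopping at the first that still fails. */
        public synchronized void drain() {
            while (!pending.isEmpty()) {
                if (!pending.peekFirst().execute()) {
                    return;            // still offline; try again later
                }
                pending.removeFirst();
            }
        }
    }

A scheduler or connectivity listener would call drain() whenever the network comes back, which also keeps server round trips batched rather than chatty.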
The best way to facilitate mobile enablement projects is with focused, goal-oriented, up-front planning that doesn’t underestimate the complexity of the process, especially when dealing with traditional data integration techniques.

What many companies aren’t aware of is that there is an alternative approach to developing custom-built, native apps that doesn’t require dependency on pre-existing APIs.

Known as “browser-based data integration,” this emerging approach makes existing business applications and data “mobile ready” by allowing organizations to wrap their existing web application without changing the systems that are already there.

By creating a new web service interface “wrapper” without re-writing any of the existing code, mobile access to enterprise B2C and B2E applications can be possible in days or weeks, not months or years.
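The post doesn’t detail how such a wrapper is built, and this is not a description of Kapow’s own implementation; but as a hedged sketch of the general idea -- with the legacy URL, CSS selectors, and field names all invented for illustration -- a small Java service could drive the legacy application’s HTML screens and expose the result as JSON:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;

    /**
     * Hypothetical wrapper: exposes a legacy web app's order-lookup screen
     * as a small JSON web service without changing the legacy code at all.
     */
    @Path("/orders")
    public class LegacyOrderResource {

        // Assumed legacy screen; in practice this would be configurable
        private static final String LEGACY_URL =
                "http://legacy.example.com/orders/view?id=";

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public String getOrder(@PathParam("id") String id) throws Exception {
            // Fetch the existing web page exactly as a browser would
            Document page = Jsoup.connect(LEGACY_URL + id).get();

            // Scrape just the fields the mobile app needs
            String status = page.select("#order-status").text();
            String total  = page.select("#order-total").text();

            // Return a compact, mobile-friendly payload
            return String.format(
                    "{\"id\":\"%s\",\"status\":\"%s\",\"total\":\"%s\"}",
                    id, status, total);
        }
    }

The legacy system stays untouched; the wrapper simply automates what a human user would do in a browser, which is the essence of the browser-based approach described above.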

It’s no surprise that mobile initiatives are now a top priority for every enterprise. The challenge is to approach these projects as swiftly and efficiently as possible to stay relevant and productive. By combining the proper up-front planning process with browser-based mobile enablement technologies, companies can quickly provide their mobile users with the data and apps they so desperately want and need.

This guest post comes courtesy of Stefan Andreasen, Founder/CTO, Kapow Software.


Friday, March 11, 2011

New HP Premier Services closes gap between single point of accountability and software-to-cloud sprawl

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a sponsored podcast discussion on how new models for IT support services are required to provide a single point of accountability when multiple and increasingly complex software implementations are involved.

Nowadays, the focal point for IT operational success lies not just in choosing the software and services mixture, but in managing and supporting these systems, implementations, and SLAs as an ecosystem -- and that ecosystem must be managed comprehensively, with flexibility, and for the long term.

Long before cloud and hybrid computing models become a concern, the challenge before IT is how to straddle complexity and how to corral and manage -- as a lifecycle -- the vast software implementations already on-premises.

Of course, more of these workloads are supported these days by virtualized containers and often by a service-level commitment. IT needs to get a handle on supporting multiparty software and virtualized instances, along with the complex integrations and custom extensions across and between the applications.

Who are you going to call when things go wrong, or when maintenance needs to affect one element of the stack without hosing the rest? How do you manage and broker service-level agreements (SLAs), singly and in combination?

More than ever, finger-pointing over who is accountable or responsible amid a diverse and fast-moving software environment cannot be allowed -- not in an Instant-On Enterprise.

Not only does IT need one-hand-to-shake accountability for comprehensive support more than ever, but IT departments may increasingly opt to outsource more of the routine operational tasks and software support to free up their IT knowledge resources and experts for transformation, security initiatives, and new business growth projects.

To learn how this can be better managed, we've tapped an executive from HP Software to examine an expanding set of new HP Premier Services designed to combine custom software support and consulting expertise to better deliver managed support outcomes across entire software implementations.

Anand Eswaran, Vice President, Global Professional Services at HP Software, is interviewed by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Eswaran: We're offering HP Premier Services across the entire portfolio for all solutions we put in front of customers. People may ask what's different. "Why are you able to do this today? The customer problem you are talking about sounds pretty native. Why haven’t you done this forever?"

If you look at a software organization, the segmentation between support and services is very discrete, whether inside the company or between support and a services organization outside the company, and that’s the heart of the problem.

What we're doing here is a pretty big step. You hear about "services convergence" an awful lot in the industry. People think that’s the way to go. What they mean by services convergence is that all the services you need across the customer lifecycle merges to become one, and that’s what we are doing here.

We’re merging what was customer support, which is a call center -- and that’s why it can’t take accountability for a solution. It’s good at diagnostics, but it’s not good at full-fledged solutions.

What that organization brings in is scale, infrastructure, and absolute global data center coverage. We're merging that with the Professional Services (PS) organization. When the rubber hits the road, PS is the organization, or the people, who deploy these solutions.

By merging those two, you get the best of both worlds, because you get scale, coverage, infrastructure, capability. And by virtue of a very, very extensive PS team within HP Software, we operate in 80 or 90 countries. We have coverage worldwide. That's how we're able to provide the service where we take accountability for this whole solution.

Converged IT support and professional services

What we’re announcing and launching is enhancing and elevating that support from just the product to the entire project and solution for the customer. When we deploy a solution for a customer -- which involves our technology and our software and, for the most part, a service element to actually make it a reality -- we will support the full solution.

That's the principal thing now that will allow us to not just talk about business outcomes when we go through the selling lifecycle, but it will also allow us to make those business outcomes a reality by taking full accountability for it. That is at the heart of what we are announcing -- extending customer support from a product to the project, and from a product to the full solution.

If I walk through what HP Premier Services is, that probably will shed more light on it. As I explain HP Premier Services, there are two dimensions to it.

The first dimension is the three choice points, and the first of those is what has classically been customer support. We just call it Foundation, where customer support supports the product. You have a phone line you can call. That doesn't change. That's always been there.

The second menu item in the first dimension is what we term Premier Response, and this is where we actually take that support for the product and extend it to the full project and the full solution. This is new, and it’s the first level of the extension we’re going to offer to the customer.

The third menu item takes it even further. We call it Premier Advisory. In addition to just supporting the product, which has always been there, or just extending it to support a solution and the project -- both of those things are reactive -- we can engage with the customer to be proactive about support.

That's proactive as in not just reacting to an issue, but preempting problems and preempting issues, based on our knowledge of all the customers and how they have deployed the solution. We can advise the customer, whether it's patches, whether it's upgrades, whether it's other issues we see, or whether it's a best practice they need to implement. We can get proactive, so we preempt issues. Those are the three choice points on the first dimension.

The second dimension is a different way to look at how we're extending Premier Services for the benefit of the customer. Again, the first choice point in the second dimension is called Premier Business. We have a named account manager who will work with the customer across the entire lifecycle. This is already there right now.

The second part of the second dimension is very new, and large enterprise customers will derive a lot of value from it. It’s called Premier TeamExtend. Not only will we do the first three choice points -- Foundation, support for the whole solution, and proactive support -- we will also take control, for the customer, of the entire operation of that solution.

At that point, you almost mimic a software-as-a-service (SaaS) solution. But if there are reasons a customer wouldn’t want to do SaaS or managed services, and instead wants the full solution hosted on the customer premises, we will still deploy the solution, have them realize the full benefit of it, and run and operate that solution.

Customer choice

We’re not just giving them one thing that they’re pretty much forced to take. If it’s a very mature customer, with extensive capability on all the products and IT strategies they’re putting into place, they don’t need to go to TeamExtend. They can take just Foundation plus the first tier of HP Premier Services, which is Premier Response. That’s all they need.

Choice is a very big deal for us, so that customers can actually make the decision and we can recommend to them what they should be doing.

If there is an enterprise that is so focused on competitive differentiation in the marketplace and they don't want to worry about maintaining the solutions, then they could absolutely go to Premier TeamExtend, which offers them the best of all worlds.

By virtue of that, we make anything and everything to do with the back end -- infrastructure, upgrades, and all of that -- transparent to the customer. All they care about is the business outcome. If it’s a solution we have deployed to cut outages by 3 percent and get service-level uptime to 99.99 percent, that’s what they get.

How we do it, the solutions involved, the service involved, and how we're managing it is completely transparent. The fundamental headline there is that it allows the customer to go back to 70 percent innovation and 30 percent maintenance, and completely flip the current ratio.
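For context, a quick back-of-the-envelope calculation (not part of the interview) shows how small the downtime budget behind a 99.99 percent uptime commitment really is:

    /** What does a 99.99 percent ("four nines") uptime SLA actually allow? */
    public class UptimeBudget {
        public static void main(String[] args) {
            double availability = 0.9999;
            double minutesPerYear = 365.25 * 24 * 60;   // ~525,960 minutes
            double allowedDowntime = minutesPerYear * (1 - availability);
            System.out.printf("Allowed downtime: %.1f minutes/year%n",
                    allowedDowntime);   // roughly 52.6 minutes per year
        }
    }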

Impact of cloud solutions in the support mix

The reality is that cloud is still nebulous. Different companies have different interpretations of cloud. Customers are still a little nervous about going into the cloud, because we're still not completely sure about quality, security, and all of those things. So, this is the first or second step you take before you get comfortable enough to go to the cloud.

What we’re able to do here is take complete control of that complexity and make it transparent to the customer and, in a way, quasi-deliver the same outcomes that a cloud can deliver. Cloud is a trend, and we're making sure that we actually address it before we get there.

A lot of these services are also things we're providing to the cloud service providers. So, in a way, we're making sure that people who offer that cloud service are able to leverage our services to make sure that they can offer the same outcomes back to the customer. So, it’s a full lifecycle.

In my view, and in HP Software’s view, this is a fairly groundbreaking solution. If I were to characterize everything we talked about in three words, the first would be simplify. The second would be proactive -- how can we be proactive, versus reacting to issues. And, how can we, still under the construct of the first two, offer the customers choice.

We've been in limited launch mode since June of last year. We wanted to make sure that we engage with a limited set of customers, make sure this really works, work out all the logistics, before we actually do a full public general availability launch. So, it is effective immediately.

We can also offer the same service to all the outsourcing providers or cloud service providers we work with. If you feel you're bouncing around between different organizations as you try to get control of your IT infrastructure -- or if you work with an external SI and feel frustrated that support and the SI aren't in sync -- this falls right in the sweet spot.

If you feel that you need to start moving away from just projects to business outcome based solutions you need to deploy in your IT organization, this falls right in the sweet spot for it.

If you feel that you want to spend less of your time maintaining solutions and more of your time thinking about the core business your company is in and making sure that your innovation is able to capture a bigger market share and bigger business benefits for the company you work for, and you want some organization to take accountability for the operations and maintenance of the stack you have, this falls right in the sweet spot for it.

Smaller companies

The last thing, interestingly enough, is that we see uptake from even smaller and medium-sized companies, where they do not have enough people, and they do not want to worry about maintenance of the stack based on the capability or the experience of the people they have on these different solutions -- whether it's operations, whether it's applications, whether it is security across the entire HP software stack. So, if you're on any of those four or five different use cases, this falls right in the sweet spot for all of them.

So, in summary, at the heart of it, what we're trying to do is simplify how a customer or an IT organization deals with the complexity of their stack.

The second thing is that an IT organization is always striving to flip the ratio of innovation and operations. As you look today, it is 70 percent operations and 30 percent innovation. If you get that single point of accountability, they can focus more on innovation and supporting the business needs, so that their company can take advantage of greater market share, versus operations and maintaining the stack they already have.

IT complexity is increasing by the day. Having multiple vendors accountable for different parts of the IT strategy and IT implementation is a huge problem. Because of the complexity of the solution, and because multiple organizations are accountable for different discrete parts of it, the customer is left holding the bag, trying to figure out how to navigate the complexity of the software organization. How do you pinpoint exactly where the problem is and then engage the right party?

We actually start to engage with them in solving a business problem for them. We paint the ROI that we could get.

Find out more about the new HP Premier Services launch.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Wednesday, March 9, 2011

Red Hat introduces JBoss Enterprise SOA Platform 5.1 with enterprise-class open source data virtualization

Red Hat today announced the availability of JBoss Enterprise SOA Platform 5.1, which includes new extensions for data services integration.

JBoss Enterprise Data Services Platform 5.1, a superset of JBoss Enterprise SOA Platform 5.1, is an open source data virtualization and integration platform that includes tools to create data services out of multiple data stores with different formats, presenting information to applications and business processes in an easy-to-use service. These data services become reusable assets across the enterprise.

We're beginning to see a real marketplace for open source-based integration and middleware, and in many ways the open source versions are advancing the value and variety of these services beyond where the commercial products can quickly tread. The advantages of community development and open source sharing really shine when multiple and fast-adapting integrations are involved.

What's more, as cloud and SaaS services become more common, ways of integrating data and application assets -- regardless of origins -- will need to keep pace. Standardization and inclusiveness of integration points and types may be much better served by a community approach and open source licenses than by waiting for a commercial product upgrade or costly custom integrations.

I also see enterprises, SMBs, ISVs, and cloud providers working to elevate the concept of "applications" to the business process level. And that means that decomposing, recomposing, and orchestrating services -- dare I say via SOA principles -- becomes essential, again regardless of the origins of the services, data, and assets.

Lastly, the interest and value in Big Data benefits is also roiling the landscape. The integration of data then becomes tactical, strategic, imperative and at the heart of what drives an agile and instant-on enterprise.

“Being able to integrate and synchronize useful information out of a wide range of disparate data sources remains a serious stumbling block to the enterprise,” said Craig Muzilla, vice president and general manager, Middleware Business Unit at Raleigh, N.C.-based Red Hat. “JBoss Enterprise Data Services Platform 5.1 is a flexible, standards-based integration and data virtualization solution built on JBoss Enterprise SOA Platform that delivers more efficient and cost-effective application and data integration techniques, allowing enterprises to more fully realize the value of their data.”

All businesses draw upon many different data sources and formats to run their applications. In many cases these data sources are hardwired into applications through data access frameworks that reduce agility and make control and compliance difficult. This data architecture counters the agility and cost-savings benefits delivered by service-oriented architectures (SOA) by forcing redundant data silos for each application.

Multiple data stores

Data Services Platform 5.1 aims to address these problems by virtualizing multiple data stores simultaneously, delivering data services consumable by multiple applications and business processes. By leveraging the integrated JBoss Enterprise SOA Platform 5.1, the information delivered using data virtualization can more easily be integrated into the business via the enterprise service bus (ESB) included with the platform.
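The announcement doesn't include code, but the consumption side of data virtualization is straightforward to sketch. The platform (which builds on the Teiid community project) exposes the federated, virtual schema through standard interfaces such as JDBC, so an application issues one logical query while the platform fans it out to the underlying physical sources. In the sketch below, the connection URL format, VDB name, credentials, and schema names are all assumptions for illustration:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    /**
     * Queries a virtual database over plain JDBC. The join spans what are
     * really two separate physical sources (a CRM database and an ERP
     * system, say), but the application sees one logical schema.
     * Assumes the data services JDBC driver is on the classpath.
     */
    public class DataServiceClient {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:teiid:CustomerVDB@mm://dvhost:31000"; // assumed format
            try (Connection conn = DriverManager.getConnection(url, "user", "secret");
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT c.name, o.total "
                       + "FROM crm.customers c "
                       + "JOIN erp.orders o ON c.id = o.customer_id")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1) + " -> " + rs.getBigDecimal(2));
                }
            }
        }
    }

Because consumers see only the virtual schema, the physical sources can be moved or re-partitioned without touching application code -- which is what makes these data services reusable assets.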


JBoss Enterprise SOA Platform 5.1 includes:
  • Apache CXF web services stack
  • JBoss Developer Studio 4.0, which features updated SOA tooling for ESB and data virtualization
  • A technology preview of WS-BPEL, which delivers service orchestration
  • A technology preview of the Apache Camel Gateway; Camel is a popular enterprise integration pattern framework, and the gateway brings an expanded set of adapters to JBoss Enterprise SOA Platform (a generic Camel route is sketched after this list)
  • Updated certifications -- Red Hat Enterprise Linux 6, Windows 2008, and the IBM JDK, among others
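To give a flavor of the style Camel brings -- this is a generic route sketch with invented endpoints, not the Camel Gateway preview's actual configuration -- the framework's Java DSL expresses integration patterns such as content-based filtering in a few lines:

    import org.apache.camel.builder.RouteBuilder;
    import org.apache.camel.impl.DefaultCamelContext;

    /** A generic Camel route: pick up order files, filter, and forward. */
    public class OrderFeedRoute {
        public static void main(String[] args) throws Exception {
            DefaultCamelContext context = new DefaultCamelContext();
            context.addRoutes(new RouteBuilder() {
                @Override
                public void configure() {
                    // Poll a drop directory, keep only new orders,
                    // and move them on to the accepted directory
                    from("file:/var/feeds/orders?noop=true")
                        .filter(xpath("/order[@status='new']"))
                        .to("file:/var/feeds/accepted");
                }
            });
            context.start();
            Thread.sleep(10_000);   // let the route run briefly for the demo
            context.stop();
        }
    }

Swapping either endpoint for a JMS queue, an FTP server, or a web service is a one-line change, which is the kind of adapter breadth the gateway preview is meant to bring to the ESB.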

JBoss Enterprise SOA Platform follows the JBoss Open Choice strategy of offering a choice of integration architectures, messaging platforms, and deployment options. Also, both JBoss Enterprise SOA Platform 5.1 and JBoss Enterprise Data Services Platform 5.1 are designed to leverage past and present solutions, such as SOA integration, through the ESB, event-driven architecture (EDA) and data virtualization, while building a foundation to support future integration paradigms, such as integrating cloud, hybrid, and on-premise data, services and applications.

Along with JBoss Enterprise SOA Platform 5.1, Red Hat is offering a new two-day training course, JBoss Enterprise SOA Platform – ESB Implementation, which is focused on developing and deploying ESB providers and services using JBoss Developer Studio and JBoss Enterprise SOA Platform.

For more information on JBoss Enterprise SOA Platform, visit http://www.jboss.com/products/platforms/soa/.

For more information on JBoss Enterprise Data Services Platform, visit http://www.jboss.com/products/platforms/dataservices/.


Monday, March 7, 2011

GigaSpaces announces new product for enterprise PaaS and ISV SaaS enablement

GigaSpaces Technologies announced today the upcoming release of its second-generation cloud-enablement platform, which offers an architecture aimed specifically at enterprise platform-as-a-service (PaaS) and independent software vendor (ISV) software-as-a-service (SaaS) enablement.

The addition of the newest GigaSpaces cloud-enablement platform broadens a growing field of vendors bringing to market the picks and shovels of the cloud gold rush. Targeting SaaS ISVs and enterprise PaaS makes a lot of sense, as this is where the services are being forged that will need to find cloud homes, be they on-premises, public, or hybrid.

In the history of IT, no one got fired for helping good apps get built quickly and well, and deployed widely and openly. That goes for both ISVs and custom enterprise apps. You just don't get to see that value truly delivered very often. But perhaps the transition to cloud, and the need to seduce ISVs with openness -- what GigaSpaces calls "silo free" -- will allow for a new round of choice and productivity.

Expanding on the current GigaSpaces solutions, the new products include private and hybrid cloud-based offerings:
  • "Silo-free" architecture that converges application and data environments, enabling improved cross-stack elasticity, multi-tenancy, unified SLA-driven performance, and central management, while simplifying development and operational processes.

  • User, data and application policy-driven multi-tenancy management from the web tier down to the customer and data object levels. This provides better monitoring through a console that includes views into control, security, and visibility over the multi-tenancy aspects of the application.

  • Built-in DevOps support helps uniformly manage and automate the lifecycle of the application middleware and its resources, reducing operational and development complexity, says GigaSpaces.

  • Out-of-the-box third-party middleware management (e.g. Tomcat, Cassandra, JMS) that helps automate and manage application middleware services during deployment and production.

  • Portability, multi-language and multi-middleware support, along with integration with existing processes and systems for private, public, and hybrid clouds.

Silo-free architecture

The platform has already been integrated with major strategic partners in the cloud arena, says GigaSpaces, with enterprises and SaaS providers using GigaSpaces cloud enablement in such industries as financial services, e-commerce, online gaming, healthcare, business process management, analytics, and telecommunications.

This new product offers a field-proven technology, minimizing the risks associated with migrating to the cloud, making former ‘mission impossibles’ very possible indeed.

In addition, the solution has been integrated with leading cloud-focused technologies from such partners as Cisco and Citrix.

The GigaSpaces ISV SaaS and enterprise PaaS enablement platform is scheduled for release in Q2 2011. All the new cloud-enablement features will be available as easily integrated add-ons to existing customers already using GigaSpaces eXtreme Application Platform (XAP) solutions for enterprise scaling.


Thursday, March 3, 2011

Big data consolidation race enters home stretch, as Teradata buys Aster Data

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

At this point, probably at least 90 percent or more of analytic systems/data warehouses are easily contained within the SQL-based technologies that are commercially available today. We’ll take that argument a step further: Most enterprise data warehouses are less than 5 terabytes. So why then all the excitement about big data, and why are acquisitions in this field becoming almost a biweekly thing?

To refresh the memory, barely a couple of weeks back, HP announced its intention to buy Vertica. And this morning came the news that Teradata is buying the other 89 percent of Aster Data that it doesn’t already own. Given Teradata’s 11 percent stake, the acquisition was hardly a surprise. Maybe what was surprising was the mere $263-million price tag, which prompted Neil Raden to wonder facetiously in a tweet, “That seems like a real bargain. I should have bought them myself!!!” Or, as Forrester’s James Kobielus tweeted, “Essentially, AsterData gives #Teradata the analytic application server (analytics + OLTP) they need to duke it out with Oracle Exadata.” [Disclosure: Aster Data Systems is a sponsor of BriefingsDirect podcasts.]

The irony is that when you talk about big data, for years it was synonymous with one player: Teradata. But as we’ve observed, there’s more data everywhere, and there are cheaper processors, disk, cache, and bandwidth to transport and manage it -- whether you intercept event streams, store the data, or federate to it.

Widening vendor ecosystem

In all this, Teradata has found itself part of a widening vendor ecosystem that has responded to its massively parallel technology with new variants in columnar, in-memory, solid state, NoSQL, unstructured data, and event stream technology. While Teradata was known for taking traditional historical analytics -- and, in some cases, operational data stores -- to extreme scale, others were eking out different aspects of extreme analytics: real-time or interactive analysis of structured data, parsing of social media sentiment, smarter approaches to managing civil infrastructure or homeland security through analysis of sensory data streams, fraud detection, and so on.

Teradata has hardly stood still, having broadened out its product footprint from its classic proprietary hardware to a broad array of form factors that run on commodity platforms, solid state disk, and virtual cloud, and more recently with acquisitions of MySQL appliance Kickfire and marketing analytics provider Aprimo.

Acquisition of Aster Data, probably the best pick of the remaining lot of columnar database challengers, provides Teradata yet another facet of an increasingly well-rounded product portfolio. Going forward, we expect that Teradata will continue its offerings of vertical industry data templates to extend to the columnar world.

Viewed from a market perspective, Teradata’s acquisition marks the home stretch for consolidation of the current crop of analytic database challengers, who are mostly spread in the columnar field. Dell is the last major platform player standing that has yet to make its move.

The current wave of consolidation hardly spells the end of innovation here, as there is plenty of headroom in the taming of the NoSQL world. And although the acquisition of Aster Data overlaps with HP’s Vertica deal, that makes Teradata no less attractive to an HP that seeks to broaden its enterprise software footprint.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.


Thursday, February 24, 2011

Open Group cloud panel forecasts cloud as spurring useful transition phase for enterprise architecture

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.

Welcome to a special discussion on predicting how cloud computing will actually unfold for enterprises and their core applications and services in the next few years. Part of The Open Group 2011 Conference in San Diego the week of Feb. 7, a live, on-stage panel examined the expectations of new types of cloud models -- and perhaps cloud specialization requirements -- emerging quite soon.

By now, we're all familiar with the taxonomy around public cloud, private cloud, software as a service (SaaS), platform as a service (PaaS), and my favorite, infrastructure as a service (IaaS). But we thought we would do you all an additional service and examine, firstly, where these general types of cloud models are actually gaining use and allegiance, and look at vertical industries and types of companies that are leaping ahead with cloud. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Then, second, we're going to look at why one-size-fits-all cloud services may not fit so well in a highly fragmented, customized, heterogeneous, and specialized IT world -- which is, of course, the world most of us live in.

How many of the cloud services that come with a true price benefit -- and that’s usually at scale and cheap -- will be able to replace what is actually on the ground in many complex and unique enterprise IT organizations? Can a few types of cloud work for all of them?

Here to help us better understand the quest for "fit for purpose" cloud balance and to predict, at least for some time, the considerable mismatch between enterprise cloud wants and cloud provider offerings, is our panel: Penelope Gordon, co-founder of 1Plug Corp., based in San Francisco; Mark Skilton, Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini in London; Ed Harrington, Principal Consultant in Virginia for the UK-based Architecting the Enterprise organization; Tom Plunkett, Senior Solution Consultant with Oracle in Huntsville, Alabama, and TJ Virdi, Computing Architect in the CAS IT System Architecture Group at Boeing based in Seattle. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gordon: A lot of companies don’t even necessarily realize that they're using cloud services, particularly when you talk about SaaS. There are a number of SaaS solutions that are becoming more and more ubiquitous.

I see a lot more of the buying of cloud moving out to the non-IT line of business executives. If that accelerates, there is going to be less and less focus. Companies are really separating now what is differentiating and what is core to my business from the rest of it.

There's going to be less emphasis on, "Let’s do our scale development on a platform level" and more, "Let’s really seek out those vendors that are going to enable us to effectively integrate, so we don’t have to do double entry of data between different solutions. Let's look out for the solutions that allow us to apply the governance and that effectively let us tailor our experience with these solutions in a way that doesn’t impinge upon the provider’s ability to deliver in a cost effective fashion."

That’s going to become much more important. So, a lot of the development onus is going to be on the providers, rather than on the actual buyers.

Enterprise architects need to break out of the idea of focusing on how to address the boundary between IT and the business and talk to the business in business terms.

One way of doing that that I have seen as effective is to look at it from the standpoint of portfolio management. Where you were familiar with financial portfolio management, now you are looking at a service portfolio, as well as looking at your overall business and all of your business processes as a portfolio. How can you optimize at a macro level for your portfolio of all the investment decisions you're making, and how the various processes and services are enabled? Then, it comes down to a money issue.

Shadow IT

Harrington: We're seeing a lot of cloud uptake in the small businesses. I work for a 50-person company. We have one "sort of" IT person and we do virtually everything in the cloud. We have people in Australia and Canada, here in the States, headquartered in the UK, and we use cloud services for virtually everything across that. I'm associated with a number of other small companies and we are seeing big uptake of cloud services.

We talked about line management IT getting involved in acquiring cloud services. If you think we've got this thing called "shadow IT" today, wait a few years. We're going to have a huge problem with shadow IT.

From the architect’s perspective, there's lot to be involved with and a lot to play with. There's an awful lot of analysis to be done -- what is the value that the cloud solution being proposed is going to be supplying to the organization in business terms, versus the risk associated with it? Enterprise architects deal with change, and that’s what we're talking about. We're talking about change, and change will inherently involve risk.

The federal government has done some analysis. In particular, the General Services Administration (GSA), has done some considerable analysis on what they think they can save by going to, in their case, a public cloud model for email and collaboration services. They've issued a $6.7 million contract to Unisys as the systems integrator, with Google being the cloud services supplier.

So, the debate over the benefits of cloud, versus the risks associated with cloud, is still going on quite heatedly.

Skilton: From personal experience, there are probably three areas of adaptation of cloud into businesses. For sure, there are horizontal common services to which what you'd call a homogeneous cloud solution could be applied, common to a number of business units or operations across a market.

But we're starting to see, increasingly, the need for customization to meet the vertical competitive needs of a company or the decisions within that large company. So, differentiation and business models are still there; they are still in platform cloud as they were in the pre-cloud era.

But, the key thing is that we're seeing a different kind of potential that a business can do now with cloud -- a more elastic, explosive expansion and contraction of a business model. We're seeing fundamentally the operating model of the business growing, and the industry can change using cloud technology.

So, there are two things going on: the business and the technologies are both changing because of the cloud.

... There are two more key points. There's a missing architecture practice that needs to be there, which is workload analysis, so that you design applications to fit specific infrastructure containers and you've got a bridge between the application service and the infrastructure service. There needs to be a piece of work by enterprise architects (EAs) that starts to bring that together as a deliberate design for applications to be able to operate in the cloud. And the PaaS platform is a perfect environment.

The second thing is that there's a lack of policy management in terms of technical governance, partly because of a lack of understanding. There needs to be more of a matching exercise going on. The key thing is that this needs to evolve.

Part of the work we're doing in The Open Group with the Cloud Computing Work Group is to develop new standards and methodologies that bridge those gaps between infrastructure, PaaS, platform development, and SaaS.

Plunkett: Another place we're seeing a lot of growth with regard to private clouds is actually on the defense side. The U.S. Defense Department is looking at private clouds, but they also have to deal with this core and context issue. The requirements for a [Navy] shipboard system are very different from the land-based systems.

Ships have to deal with narrow bandwidth and going disconnected. They also have to deal with coalition partners, or perhaps they're providing humanitarian assistance and dealing with organizations we wouldn’t normally consider military. So they have to deal with lots of information assurance issues, and they have completely different governance concerns than we normally think about for public clouds.

We talked about the importance of governance increasing as the IT industry went into SOA. Well, cloud is going to make it even more important. Governance throughout the lifecycle, not just at the end, not just at deployment, but from the very beginning.

You mentioned variable workloads. Another place where we are seeing a lot of customers approach cloud is when they are starting a new project. Because then, they don’t have to migrate from the existing infrastructure. Instead everything is brand new. That’s the other place where we see a lot of customers looking at cloud, your greenfields.

Virdi: I think what we're really looking [to cloud] for is speed to put new products into the market, or to evolve the products we already have, and how to optimize business operations as well as reduce cost. These may be parallel across any vertical industry, where all these things are probably going to be working as a cloud solution.

How to measure and create a new product or solution is the really cool thing you'd be looking for in the cloud. And it has proven pretty easy to put a new solution into the market. So, speed is also the big thing there.

All these business decisions are going to be coming upstream, and business executives need to be more aware about how cloud could be utilized as a delivery model. The enterprise architects and someone with a technical background needs to educate or drive them to make the right decisions and choose the proper solutions.

It has an impact on how you want to use the cloud, as well as on how you get out of it, in case you want to move to a different cloud vendor or provider. All those things come into play upstream, rather than downstream.

You probably also have to figure out how you plan to adapt to the cloud. You don't want to start with a Big Bang. You want to start in incremental steps, small steps, and test out what you really want to do. If that works, then go do the other things after that.

Gordon: One example in talking about core and context is when you look in retail. You can have two retailers like a Walmart or a Costco, where they're competing in the same general space, but are differentiating in different areas.

Walmart is really differentiating on the supply chain, and so it’s not a good candidate for public cloud computing solutions. That might possibly be a candidate for private cloud computing.

But that’s really where they're going to invest in the differentiating, as opposed to a Costco, where it makes more sense for them to invest in their relationship with their customers and their relationship with their employees. They're going to put more emphasis on those business processes, and they might be more inclined to outsource some of the aspects of their supply chain.

Hard to do it alone

Skilton: The lessons that we're learning in running private clouds for our clients is the need to have a much more of a running-IT-as-a-business ethos and approach. We find that if customers try to do it themselves, either they may find that difficult, because they are used to buying that as a service, or they have to change their enterprise architecture and support service disciplines to operate the cloud.

Also, fundamentally the data center and network strategies need to be in place to adopt cloud. From my experience, the data center transformation or refurbishment strategies or next generation networks tend to be done as a separate exercise from the applications area. So a strong, strong recommendation from me would be to drive a clear cloud route map to your data center.

Harrington: Again, we're back to the governance, and certification of some sort. I'm not in favor of regulation, but I am in favor of some sort of third-party certification of services that consumers can rely upon safely. But, I will go back to what I said earlier. It's a combination of governance, treating the cloud services as services per se, and enterprise architecture.

Plunkett: What we're seeing with private cloud is that it’s actually impacting governance, because one of the things that you look at with private cloud is charge-back between different internal customers. This is forcing these organizations to deal with complexity, money, and business issues that they don't really like to do.

Nowadays, it's mostly vertical applications, where you've got one owner who is paying for everything. Now, we're actually going back to, as we were talking about earlier, dealing with some of the tricky issues of SOA.

Securing your data

Virdi: Private clouds actually allow you to make the business more modular. Your capabilities are going to be a little bit more modular, and interoperability testing can happen in the private cloud. Then you can actually take those same modular functions, utilize the public cloud, and work with other commercial off-the-shelf (COTS) vendors that can package this as new holistic solutions.

Configuration and change management -- how in the private cloud we are adapting to it and supporting different customer segments is really the key. This could be utilized in the public cloud too, as well as how you are really securing your information and data or your business knowledge. How you want to secure that is key, and that's why the private cloud is there. If we can adapt to or mimic the same kind of controls in the public cloud, maybe we'll have more adoptions in the public cloud too.

Gordon: I also look at it in a little different way. For example, in the U.S., you have the National Security Agency (NSA). For a lot of what you would think of as their non-differentiating processes, for example payroll, they can't use ADP. They can't use that SaaS for payroll, because they can't allow the identities of their employees to become publicly known.

Anything that involves their employee data and all the rest of the information within the agency has to be kept within a private cloud. But, they're actively looking at private cloud solutions for some of the other benefits of cloud.

In one sense, I look at it and say that private cloud adoption to me tells a provider that this is an area that's not a candidate for a public-cloud solution. But, private clouds could also be another channel for public cloud providers to be able to better monetize what they're doing, rather than just focusing on public cloud solutions.

Impact on mergers and acquisitions

Plunkett: Not to speak on behalf of Oracle, but we've gone through a few mergers and acquisitions recently, and I do believe that having a cloud environment internally helps quite a bit. Specifically, TJ made the earlier point about modularity. Well, when we're looking at modules, they're easier to integrate. It’s easier to recompose services, and all the benefits of SOA really.

Gordon: If you're going to effectively consume and provide cloud services, you become much more rigorous about your change management and configuration management, and you then apply that out at a larger process level. Some of this comes back to our earlier discussions about the extra discipline that comes into play.

So, if you define certain capabilities within the business in a much more modular fashion, then, when you go through that growth and add on people, you have documented procedures and processes. It’s much easier to bring someone in and say, "You're going to be a product manager, and that job role is fungible across the business."

That kind of thinking, the cloud constructs applied up at a business architecture level, enables the kind of business expansion that we are looking at.

Harrington: [As for M&As], it depends a lot on how close the organizations are, how close their service portfolios are, to what degree has each of the organizations adapted the cloud, and is that going to cause conflict as well. So I think there is potential.

Skilton: Right now, I'm involved in merging in a cloud company that we bought last year in May ... It’s kind of a mixed blessing with cloud. With our own cloud services, we acquire these new companies, but we still have the same IT integration problem to then exploit that capability we've acquired.

Each organization in the commercial sector can have different standards, and then you still have that interoperability problem that we have to translate to make it benefit, the post merger integration issue. It’s not plug and play yet, unfortunately.

But, the upside is that I can bundle that service that we acquired, because we wanted to get that additional capability, and rewrite design techniques for cloud computing. We can then launch that bundle of new service faster into the market.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.


Monday, February 21, 2011

The Open Trusted Technology Provider Framework Aims at Nothing Less Than Securing the Global IT Supply Chain

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.

By Andras Szakal

Nearly two months ago, we announced the formation of The Open Group Trusted Technology Forum (OTTF), a global standards initiative among technology companies, customers, government and supplier organizations to create and promote guidelines for manufacturing, sourcing, and integrating trusted, secure technologies.

The OTTF’s purpose is to shape global procurement strategies and best practices to help reduce threats and vulnerabilities in the global supply chain. I’m proud to say that we have just completed our first deliverable toward achieving our goal: The Open Trusted Technology Provider Framework (O-TTPF) whitepaper.

The framework outlines industry best practices that contribute to the secure and trusted development, manufacture, delivery and ongoing operation of commercial software and hardware products. Even though the OTTF has only recently been announced to the public, the framework and the work that led to this whitepaper have been in development for more than a year: first as a project of the Acquisition Cybersecurity Initiative, a collaborative effort facilitated by The Open Group between government and industry verticals under the sponsorship of the U.S. Department of Defense (OUSD (AT&L)/DDR&E).

The framework is intended to benefit technology buyers and providers across all industries and across the globe concerned with secure development practices and supply chain management. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

More than 15 member organizations joined efforts to form the OTTF as a proactive response to the changing cyber security threat landscape, which has forced governments and larger enterprises to take a more comprehensive view of risk management and product assurance. Current members of the OTTF include Atsec, Boeing, Carnegie Mellon SEI, CA Technologies, Cisco Systems, EMC, Hewlett-Packard, IBM, IDA, Kingdee, Microsoft, MITRE, NASA, Oracle, and the U.S. Department of Defense (OUSD(AT&L)/DDR&E), with the forum operating under the stewardship and guidance of The Open Group.

Over the past year, OTTF member organizations have been hard at work collaborating, sharing and identifying secure engineering and supply chain integrity best practices that currently exist. These best practices have been compiled from a number of sources throughout the industry including cues taken from industry associations, coalitions, traditional standards bodies and through existing vendor practices. OTTF member representatives have also shared best practices from within their own organizations.

From there, the OTTF created a common set of best practices distilled into categories and eventually categorized into the O-TTPF whitepaper. All this was done with a goal of ensuring that the practices are practical, outcome-based, aren’t unnecessarily prescriptive and don’t favor any particular vendor.

The framework

The diagram below outlines the structure of the framework, divided into categories that outline a hierarchy of how the OTTF arrived at the best practices it created.

[Figure: the framework's hierarchy of trusted technology provider categories]
Trusted technology provider categories

Best practices were grouped by category because the types of technology development, manufacturing or integration activities conducted by a supplier are usually tailored to suit the type of product being produced, whether it is hardware, firmware, or software-based. Categories may also be aligned by manufacturing or development phase so that, for example, a supplier can implement a secure engineering/development method if necessary.

Provider categories outlined in the framework include:
  • Product engineering/development method
  • Secure engineering/development method
  • Supply chain integrity method
  • Product evaluation method
  • Establishing conformance and determining accreditation
In order for the best practices set forth in the O-TTPF to have a long-lasting effect on securing product development and the supply chain, the OTTF will define an accreditation process. Without an accreditation process, there can be no assurance that a practitioner has implemented practices according to the approved framework.

After the framework is formally adopted as a specification, The Open Group will establish conformance criteria and design an accreditation program for the O-TTPF. The Open Group currently manages multiple industry certification and accreditation programs, operating some independently and some in conjunction with third party validation labs. The Open Group is uniquely positioned to provide the foundation for creating standards and accreditation programs. Since trusted technology providers could be either software or hardware vendors, conformance will be applicable to each technology supplier based on the appropriate product architecture.

At this point, the OTTF envisions a multi-tiered accreditation scheme, which would allow for many levels of accreditation including enterprise-wide accreditations or a specific division. An accreditation program of this nature could provide alternative routes to claim conformity to the O-TTPF.

Over the long-term, the OTTF is expected to evolve the framework to make sure its industry best practices continue to ensure the integrity of the global supply chain. Since the O-TTPF is a framework, the authors fully expect that it will evolve to help augment existing manufacturing processes rather than replace existing organizational practices or policies.

There is much left to do, but we’re already well on the way to ensuring the technology supply chain stays safe and secure. If you’re interested in shaping the Trusted Technology Provider Framework best practices and accreditation program, please join us in the OTTF.

Download the O-TTPF white paper, or read the O-TTPF in full here.

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.
