Friday, March 11, 2011

New HP Premier Services closes gap between single point of accountability and software-to-cloud sprawl

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a sponsored podcast discussion on how new models for IT support services are required to provide a single point of accountability when multiple and increasingly complex software implementations are involved.

Nowadays, the focal point for IT operational success lies not just in choosing the software and services mixture, but also in managing and supporting these systems, implementations, and SLAs as an ecosystem -- and that ecosystem must be managed comprehensively, with flexibility, and for the long term.

Long before cloud and hybrid computing models become a concern, the challenge before IT is how to straddle complexity and how to corral and manage -- as a lifecycle -- the vast software implementations already on-premises.

Of course, more of these workloads are supported these days by virtualized containers and often by a service-level commitment. IT needs to get a handle on supporting multiparty software and virtualized instances, along with the complex integrations and custom extensions across and between the applications.

Who are you going to call when things go wrong, or when maintenance needs to touch one element of the stack without hosing the rest? How do you manage and broker at the service-level agreement (SLA) level -- or across multiple SLAs?

More than ever, finger-pointing over who is accountable or responsible amid a diverse and fast-moving software environment cannot be allowed -- not in an Instant-On Enterprise.

Not only does IT need a one-hand-to-shake approach to comprehensive support more than ever, but IT departments may increasingly need to outsource more of the routine operational tasks and software support to free up their IT knowledge resources and experts for transformation, security initiatives, and new business growth projects.

To learn how this can be better managed, we've tapped an executive from HP Software to examine an expanding set of new HP Premier Services designed to combine custom software support and consulting expertise to better deliver managed support outcomes across entire software implementations.

Anand Eswaran, Vice President, Global Professional Services at HP Software, is interviewed by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Eswaran: We're offering HP Premier Services across the entire portfolio for all solutions we put in front of customers. People may ask what's different. "Why are you able to do this today? The customer problem you are talking about sounds pretty native. Why haven’t you done this forever?"

If you look at a software organization, the segmentation between support and services is very discrete, whether inside the company or when support works with a services organization outside the company, and that’s the heart of the problem.

What we're doing here is a pretty big step. You hear about "services convergence" an awful lot in the industry. People think that’s the way to go. What they mean by services convergence is that all the services you need across the customer lifecycle merge to become one, and that’s what we are doing here.

We're merging what was customer support, which is a call center -- and that’s why they can't take accountability for a solution. They are good at diagnostics, but they're not good at full-fledged solutions. We're merging that organization.

What that organization brings in is scale, infrastructure, and absolute global data center coverage. We're merging that with the Professional Services (PS) organization. When the rubber hits the road, PS is the organization, or the people, who deploy these solutions.

By merging those two, you get the best of both worlds, because you get scale, coverage, infrastructure, capability. And by virtue of a very, very extensive PS team within HP Software, we operate in 80 or 90 countries. We have coverage worldwide. That's how we're able to provide the service where we take accountability for this whole solution.

Converged IT support and professional services

What we're announcing and launching, and what we're talking about, is enhancing and elevating that support from just the product to the entire project and solution for the customer. This is where, when we deploy a solution for a customer -- which involves our technology, our software and, for the most part, a service element to actually make it a reality -- we will support the full solution.

That's the principal thing now that will allow us to not just talk about business outcomes when we go through the selling lifecycle, but it will also allow us to make those business outcomes a reality by taking full accountability for it. That is at the heart of what we are announcing -- extending customer support from a product to the project, and from a product to the full solution.

If I walk through what HP Premier Services is, that probably will shed more light on it. As I explain HP Premier Services, there are two dimensions to it.

The first dimension is the three choice points, and the first of those is what has classically been customer support. We just call it Foundation, where customer support supports the product. You have a phone line you can call. That doesn't change. That's always been there.

The second menu item in the first dimension is what we term Premier Response, and this is where we actually take that support for the product and extend it to the full project and the full solution. This is new, and it is the first level of the extension we are going to offer to the customer.

The third menu item takes it even further. We call it Premier Advisory. In addition to just supporting the product, which has always been there, or just extending it to support a solution and the project -- both of those things are reactive -- we can engage with the customer to be proactive about support.

That's proactive as in not just reacting to an issue, but preempting problems and preempting issues, based on our knowledge of all the customers and how they have deployed the solution. We can advise the customer, whether it's patches, whether it's upgrades, whether it's other issues we see, or whether it's a best practice they need to implement. We can get proactive, so we preempt issues. Those are the three choice points on the first dimension.

The second dimension is a different way to look at how we're extending Premier Services for the benefit of the customer. Again, the first choice point in the second dimension is called Premier Business. We have a named account manager who will work with the customer across the entire lifecycle. This is already there right now.

The second part of the second dimension is very new, and large enterprise customers will derive a lot of value from it. It's called Premier TeamExtend. Not only will we do the first three choice points -- Foundation, support for the whole solution, and proactive support -- we will also extend and take control, for the customer, of the entire operation of that solution.

At that point, you almost mimic a software-as-a-service (SaaS) solution. But if there are reasons a customer wouldn't want to do SaaS or managed services, and instead wants to host the full solution on-site in the customer premises, we will still deploy the solution, have them realize the full benefit of it, and run and operate that solution for them.

Customer choice

We're not just giving them one thing that they're pretty much forced to take. If it's a very mature customer, with extensive capability on all the products and IT strategies that they're putting into place, they don't need to go to TeamExtend. They can maybe just take Foundation with the first extension of HP Premier Services, which is Premier Response. That's all they need to take.

Choice is a very big deal for us, so that customers can actually make the decision and we can recommend to them what they should be doing.

If there is an enterprise that is so focused on competitive differentiation in the marketplace and they don't want to worry about maintaining the solutions, then they could absolutely go to Premier TeamExtend, which offers them the best of all worlds.

By virtue of that, we make anything and everything to do with the back end -- infrastructure, upgrades, and all of that -- transparent to the customer. All they care about is the business outcome. If it's a solution we have deployed to cut outages by 3 percent and get service-level uptime up to 99.99 percent, that's what they get.

How we do it, the solutions involved, the service involved, and how we're managing it is completely transparent. The fundamental headline there is that it allows the customer to go back to 70 percent innovation and 30 percent maintenance, and completely flip the current ratio.
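
As a side note on what an uptime figure like 99.99 percent actually allows, here is a minimal, purely illustrative sketch (in Java) that converts an availability target into an annual downtime budget. It is not part of any HP offering; the numbers simply show why "four nines" is a demanding service level.

```java
public class SlaDowntimeBudget {
    // Convert an availability target (e.g. 99.99 percent) into an annual downtime allowance.
    public static void main(String[] args) {
        double availabilityPercent = 99.99;              // the uptime target quoted above
        double minutesPerYear = 365.25 * 24 * 60;        // roughly 525,960 minutes in a year
        double allowedDowntime = minutesPerYear * (1 - availabilityPercent / 100.0);
        System.out.printf("%.2f%% uptime leaves about %.0f minutes of downtime per year%n",
                availabilityPercent, allowedDowntime);   // prints about 53 minutes
    }
}
```

In other words, a 99.99 percent commitment leaves less than an hour of unplanned downtime per year, which is part of why single-point accountability for the whole stack matters.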

Impact of cloud solutions in the support mix

The reality is that cloud is still nebulous. Different companies have different interpretations of cloud. Customers are still a little nervous about going into the cloud, because we're still not completely sure about quality, security, and all of those things. So, this is the first or second step you take before you get comfortable to get to the cloud.

What we're able to do here is take complete control of that complexity and make it transparent to the customer -- and in a way -- to quasi-deliver the same outcomes which a cloud can deliver. Cloud is a trend, and we're making sure that we actually address it before we get there.

A lot of these services are also things we're providing to the cloud service providers. So, in a way, we're making sure that people who offer that cloud service are able to leverage our services to make sure that they can offer the same outcomes back to the customer. So, it’s a full lifecycle.

In my view, and in HP Software’s view, this is a fairly groundbreaking solution. If I were to characterize everything we talked about in three words, the first would be simplify. The second would be proactive -- how can we be proactive, versus reacting to issues. And the third would be choice -- how can we, still under the construct of the first two, offer customers choice.

We've been in limited launch mode since June of last year. We wanted to make sure that we engage with a limited set of customers, make sure this really works, work out all the logistics, before we actually do a full public general availability launch. So, it is effective immediately.

We can also offer the same service to all the outsourcing providers or cloud service providers we work with. If you feel you're bouncing around between different organizations as you try to get control of your IT infrastructure, or if you work with an external SI and do not feel that support and that SI are sufficiently in sync, and you feel frustrated about it, this falls right in the sweet spot.

If you feel that you need to start moving away from just projects to business outcome based solutions you need to deploy in your IT organization, this falls right in the sweet spot for it.

If you feel that you want to spend less of your time maintaining solutions and more of your time thinking about the core business your company is in and making sure that your innovation is able to capture a bigger market share and bigger business benefits for the company you work for, and you want some organization to take accountability for the operations and maintenance of the stack you have, this falls right in the sweet spot for it.

Smaller companies

The last thing, interestingly enough, is that we see uptake from even smaller and medium-sized companies, where they do not have enough people, and they do not want to worry about maintenance of the stack based on the capability or the experience of the people they have on these different solutions -- whether it's operations, whether it's applications, whether it is security across the entire HP software stack. So, if you're on any of those four or five different use cases, this falls right in the sweet spot for all of them.

So, in summary, at the heart of it, what we're trying to do is simplify how a customer or an IT organization deals with the complexity of its stack.

The second thing is that an IT organization is always striving to flip the ratio of innovation and operations. As you look today, it is 70 percent operations and 30 percent innovation. If you get that single point of accountability, they can focus more on innovation and supporting the business needs, so that their company can take advantage of greater market share, versus operations and maintaining the stack they already have.

IT complexity is increasing by the day. Having multiple vendors accountable for different parts of the IT strategy and IT implementation is a huge problem. Because of the complexity of the solution, and because multiple organizations are accountable for different discrete parts of it, the customer is left holding the bag, having to figure out how to navigate that complexity. How do you pinpoint exactly where the problem is and then engage the right party?

We actually start to engage with them in solving a business problem for them. We paint the ROI that we could get.

Find out more about the new HP Premier Services launch.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Wednesday, March 9, 2011

Red Hat introduces JBoss Enterprise SOA Platform 5.1 with enterprise-class open source data virtualization

Red Hat today announced the availability of JBoss Enterprise SOA Platform 5.1, which includes new extensions for data services integration.

JBoss Enterprise Data Services Platform 5.1, a superset of JBoss Enterprise SOA Platform 5.1, is an open source data virtualization and integration platform that includes tools to create data services out of multiple data stores with different formats, presenting information to applications and business processes in an easy-to-use service. 
These data services become reusable assets across the enterprise.

We're beginning to see a real marketplace for open source-based integration and middleware, and in many ways the open source versions are advancing the value and variety of these services beyond where the commercial products can quickly tread. The advantages of community development and open source sharing really shine when multiple and fast-adapting integrations are involved.

What's more, as cloud and SaaS services become more common, ways of integrating data and applications assets -- regardless of origins -- will need to keep pace. Standardization and inclusiveness of integration points and types may be much better served by a community approach, and open source licenses, than by waiting for a commercial product upgrade or costly custom integrations.

I also see enterprises, SMBs, ISVs and cloud providers working to elevate the concept of "applications" more to the business-process level. And that means that decomposing, recomposing, and orchestrating services -- dare I say via SOA principles -- becomes essential, again, regardless of the origins of the services, data and assets.

Lastly, the interest and value in Big Data benefits is also roiling the landscape. The integration of data then becomes tactical, strategic, imperative and at the heart of what drives an agile and instant-on enterprise.

“Being able to integrate and synchronize useful information out of a wide range of disparate data sources remains a serious stumbling block to the enterprise,” said Craig Muzilla, vice president and general manager, Middleware Business Unit at Raleigh, N.C.-based Red Hat. “JBoss Enterprise Data Services Platform 5.1 is a flexible, standards-based integration and data virtualization solution built on JBoss Enterprise SOA Platform that delivers more efficient and cost-effective application and data integration techniques, allowing enterprises to more fully realize the value of their data.”

All businesses draw upon many different data sources and formats to run their applications. In many cases these data sources are hardwired into applications through data access frameworks that reduce agility and make control and compliance difficult. This data architecture counters the agility and cost-savings benefits delivered by service-oriented architectures (SOA) by forcing redundant data silos for each application.

Multiple data stores

Data Services Platform 5.1 aims to address these problems by virtualizing multiple data stores simultaneously, delivering data services consumable by multiple applications and business processes. By leveraging the integrated JBoss Enterprise SOA Platform 5.1, the information delivered using data virtualization can more easily be integrated into the business via the enterprise service bus (ESB) included with the platform.
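
To make the idea of "data services consumable by multiple applications" concrete, here is a minimal sketch of a client reading from a virtual view over standard JDBC, in the style of the Teiid project on which the Data Services Platform is based. The driver class, connection URL format, virtual database name, and view name are illustrative assumptions, not details taken from Red Hat's documentation.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class DataServiceClient {
    public static void main(String[] args) throws Exception {
        // Hypothetical Teiid-style endpoint exposing a virtual database ("CustomerVDB") that
        // federates, say, a relational order table and a web-service-backed customer feed.
        Class.forName("org.teiid.jdbc.TeiidDriver");                  // assumed driver class
        String url = "jdbc:teiid:CustomerVDB@mm://dvhost:31000";      // assumed URL format
        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             Statement stmt = conn.createStatement();
             // The virtual view joins the underlying sources; the client neither knows nor cares.
             ResultSet rs = stmt.executeQuery(
                     "SELECT customerName, totalOrders FROM CustomerOrderSummary")) {
            while (rs.next()) {
                System.out.println(rs.getString("customerName") + " -> " + rs.getInt("totalOrders"));
            }
        }
    }
}
```

The point is that the consuming application sees one plain SQL view; where the data actually lives, and how it is joined, is the data virtualization layer's concern.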


JBoss Enterprise SOA Platform 5.1 includes:
  • Apache CXF web services stack
  • JBoss Developer Studio 4.0, which features updated SOA tooling for ESB and data virtualization
  • A technology preview of WS-BPEL, which delivers service orchestration
  • A technology preview of Apache Camel Gateway, which is a popular enterprise integration pattern framework that brings an expanded set of adapters to JBoss Enterprise SOA Platform (see the route sketch after this list)
  • Updated certifications -- Red Hat Enterprise Linux 6, Windows 2008, and IBM JDK, among others
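
The Apache Camel Gateway preview mentioned in the list above brings enterprise integration patterns to the platform. As a flavor of what that style of integration looks like, here is a minimal, self-contained content-based router written in the Camel Java DSL. It is a generic sketch, not code from the JBoss distribution; the directory, XPath expression, and endpoint names are made up for illustration, and in an ESB deployment the log endpoints would typically be JMS queues or services.

```java
import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class OrderRoutingExample {
    public static void main(String[] args) throws Exception {
        CamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Content-based router: poll a directory for XML order files
                // and route each one according to the customer tier attribute.
                from("file:data/orders?noop=true")
                    .choice()
                        .when(xpath("/order[@tier = 'gold']"))
                            .to("log:priority-orders")
                        .otherwise()
                            .to("log:standard-orders");
            }
        });
        context.start();
        Thread.sleep(10000);   // let the route poll briefly, then shut down
        context.stop();
    }
}
```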

JBoss Enterprise SOA Platform follows the JBoss Open Choice strategy of offering a choice of integration architectures, messaging platforms, and deployment options. Also, both JBoss Enterprise SOA Platform 5.1 and JBoss Enterprise Data Services Platform 5.1 are designed to leverage past and present solutions, such as SOA integration, through the ESB, event-driven architecture (EDA) and data virtualization, while building a foundation to support future integration paradigms, such as integrating cloud, hybrid, and on-premise data, services and applications.

Along with JBoss Enterprise SOA Platform 5.1, Red Hat is offering a new two-day training course, JBoss Enterprise SOA Platform – ESB Implementation, which is focused on developing and deploying ESB providers and services using JBoss Developer Studio and JBoss Enterprise SOA Platform.

For more information on JBoss Enterprise SOA Platform, visit http://www.jboss.com/products/platforms/soa/.

For more information on JBoss Enterprise Data Services Platform, visit http://www.jboss.com/products/platforms/dataservices/.

You may also be interested in:

Monday, March 7, 2011

GigaSpaces announces new product for enterprise PaaS and ISV SaaS enablement

GigaSpaces Technologies announced today the upcoming release of its second-generation cloud-enablement platform, which offers an architecture aimed specifically at enterprise platform-as-a-service (PaaS) and independent software vendor (ISV) software-as-a-service (SaaS) enablement.

The addition of the newest GigaSpaces cloud-enablement platform broadens a growing field of vendors that are bringing to market the picks and shovels of the cloud gold rush. Targeting SaaS ISVs and the PaaS value makes a lot of sense, as this is where the services are being forged that will need to find cloud homes, be they on-premises, public or hybrid.

In the history of IT, no one got fired for helping good apps get built quickly and well, and deployed widely and openly. That goes for both ISVs and for custom enterprise apps. You just don't get to see that value truly delivered too often. But perhaps the transition to cloud, and the need for ISVs to be seduced with openness -- what GigaSpaces calls "silo free" -- will allow for a new round of choice and productivity.

Expanding on current GigaSpaces solutions, the new products include private and hybrid cloud-based offerings:
  • "Silo-free" architecture that is a converged for more application and data environments, enabling improved cross-stack elasticity, multi-tenancy, unified SLA-driven performance, central management and simplifying development and operational processes.

  • User, data and application policy-driven multi-tenancy management from the web tier down to the customer and data object levels. This provides better monitoring through a console that includes views into control, security, and visibility over the multi-tenancy aspects of the application.

  • Built-in DevOps support helps uniformly manage and automate the lifecycle of the application middleware and its resources, reducing operational and development complexity, says GigaSpaces.

  • Out-of-the-box third-party middleware management (e.g. Tomcat, Cassandra, JMS) that helps automate and manage application middleware services during deployment and production.

  • Portability, multi-language and multi-middleware support, along with integration with existing processes and systems for private, public, and hybrid clouds.
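
As background on the programming model the new cloud-enablement platform builds on, here is a minimal sketch of writing to and reading from a GigaSpaces XAP in-memory data grid using the OpenSpaces API. The space name, the sample Order class, and the embedded-space URL are illustrative assumptions for demonstration only; they are not drawn from the forthcoming product's documentation.

```java
import org.openspaces.core.GigaSpace;
import org.openspaces.core.GigaSpaceConfigurer;
import org.openspaces.core.space.UrlSpaceConfigurer;
import com.gigaspaces.annotation.pojo.SpaceId;

public class DataGridSketch {

    // A simple space entry: XAP POJOs need a no-argument constructor and an id property.
    public static class Order {
        private String id;
        private Double amount;

        public Order() { }
        public Order(String id, Double amount) { this.id = id; this.amount = amount; }

        @SpaceId(autoGenerate = false)
        public String getId() { return id; }
        public void setId(String id) { this.id = id; }

        public Double getAmount() { return amount; }
        public void setAmount(Double amount) { this.amount = amount; }
    }

    public static void main(String[] args) {
        // "/./orderSpace" starts an embedded space instance, convenient for a local demo.
        GigaSpace gigaSpace = new GigaSpaceConfigurer(
                new UrlSpaceConfigurer("/./orderSpace").space()).gigaSpace();

        gigaSpace.write(new Order("o-1", 42.0));   // write an entry into the grid

        Order template = new Order();              // template matching: null fields act as wildcards
        template.setId("o-1");
        Order result = gigaSpace.read(template);
        System.out.println("Read back order amount: " + result.getAmount());
    }
}
```

The announcement positions the new cloud-enablement features as add-ons to this existing XAP model rather than as a replacement for it.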

Silo-free architecture

The platform has already been integrated with major strategic partners in the cloud arena, says GigaSpaces, with enterprises and SaaS providers using GigaSpaces cloud enablement in such industries as financial services, e-commerce, online gaming, healthcare, business process management, analytics, and telecommunications.

This new product offers a field-proven technology, minimizing the risks associated with migrating to the cloud, making former ‘mission impossibles’ very possible indeed.

In addition, the solution has been integrated with leading cloud-focused technologies from such partners as Cisco and Citrix.

The GigaSpaces ISV SaaS and enterprise PaaS enablement platform is scheduled to be released in Q2 2011. All the new cloud-enablement features will be available to existing customers already using GigaSpaces eXtreme Application Platform (XAP) solutions for enterprise scaling as easily integrated add-ons.

You may also be interested in:

Thursday, March 3, 2011

Big data consolidation race enters home stretch, as Teradata buys Aster Data

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

At this point, probably at least 90 percent or more of analytic systems/data warehouses are easily contained within the SQL-based technologies that are commercially available today. We’ll take that argument a step further: Most enterprise data warehouses are less than 5 terabytes. So why then all the excitement about big data, and why are acquisitions in this field becoming almost a biweekly thing?

To refresh the memory, barely a couple of weeks back, HP announced its intention to buy Vertica. And this morning came the news that Teradata is buying the other 89 percent of Aster Data that it doesn’t already own. Given Teradata’s 11 percent stake, the acquisition was hardly a surprise. Maybe what was surprising was the mere $263-million price tag, which led Neil Raden to wonder facetiously in a tweet, “That seems like a real bargain. I should have bought them myself!!!” Or, as Forrester’s James Kobielus tweeted, “Essentially, AsterData gives #Teradata the analytic application server (analytics + OLTP) they need to duke it out with Oracle Exadata.” [Disclosure: Aster Data Systems is a sponsor of BriefingsDirect podcasts.]

The irony is that when you talk about big data, for years it was synonymous with one player: Teradata. But as we’ve observed, there’s more data everywhere and there’s cheaper processing, disk, cache, and bandwidth to transport and manage it -- whether you intercept event streams, store the data, or federate to it.

Widening vendor ecosystem

In all this, Teradata has found itself part of a widening vendor ecosystem that has responded to its massively parallel technology with new variants in columnar, in-memory, solid state, NoSQL, unstructured data, and event stream technology. While Teradata was known for taking traditional historical analytics -- and, in some cases, operational data stores -- to extreme scale, others were eking out different aspects of extreme analytics, whether real-time or interactive analysis of structured data, parsing of social media sentiment, taking smarter approaches to managing civil infrastructure or homeland security through analysis of sensory data streams, fraud detection, and so on.

Teradata has hardly stood still, having broadened out its product footprint from its classic proprietary hardware to a broad array of form factors that run on commodity platforms, solid state disk, and virtual cloud, and more recently with acquisitions of MySQL appliance maker Kickfire and marketing analytics provider Aprimo.

Acquisition of Aster Data, probably the best pick of the remaining lot of columnar database challengers, provides Teradata yet another facet of an increasingly well-rounded product portfolio. Going forward, we expect that Teradata will continue its offerings of vertical industry data templates to extend to the columnar world.

Viewed from a market perspective, Teradata’s acquisition marks the home stretch for consolidation of the current crop of analytic database challengers, who are mostly spread in the columnar field. Dell is the last major platform player standing that has yet to make its move.

The current wave of consolidation hardly spells the end of innovation here, as there is plenty of headroom in the taming of the NoSQL world. And although the acquisition of Aster Data overlaps with HP’s Vertica deal, that makes Teradata no less attractive for an HP that seeks to broaden out its enterprise software footprint.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Thursday, February 24, 2011

Open Group cloud panel forecasts cloud as spurring useful transition phase for enterprise architecture

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.

Welcome to a special discussion on predicting how cloud computing will actually unfold for enterprises and their core applications and services in the next few years. Part of The Open Group 2011 Conference in San Diego the week of Feb. 7, a live, on-stage panel examined the expectations of new types of cloud models -- and perhaps cloud specialization requirements -- emerging quite soon.

By now, we're all familiar with the taxonomy around public cloud, private cloud, software as a service (SaaS), platform as a service (PaaS), and my favorite, infrastructure as a service (IaaS). But we thought we would do you all an additional service and examine, firstly, where these general types of cloud models are actually gaining use and allegiance, and look at vertical industries and types of companies that are leaping ahead with cloud. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Then, second, we're going to look at why one-size-fits-all cloud services may not fit so well in a highly fragmented, customized, heterogeneous, and specialized IT world -- which is, of course, the world most of us live in.

How many of the cloud services that come with a true price benefit -- and that’s usually at scale and cheap -- will be able to replace what is actually on the ground in many complex and unique enterprise IT organizations? Can a few types of cloud work for all of them?

Here to help us better understand the quest for "fit for purpose" cloud balance and to predict, at least for some time, the considerable mismatch between enterprise cloud wants and cloud provider offerings, is our panel: Penelope Gordon, co-founder of 1Plug Corp., based in San Francisco; Mark Skilton, Director of Portfolio and Solutions in the Global Infrastructure Services with Capgemini in London; Ed Harrington, Principal Consultant in Virginia for the UK-based Architecting the Enterprise organization; Tom Plunkett, Senior Solution Consultant with Oracle in Huntsville, Alabama; and TJ Virdi, Computing Architect in the CAS IT System Architecture Group at Boeing based in Seattle. The discussion was moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gordon: A lot of companies don’t even necessarily realize that they're using cloud services, particularly when you talk about SaaS. There are a number of SaaS solutions that are becoming more and more ubiquitous.

I see a lot more of the buying of cloud moving out to the non-IT line of business executives. If that accelerates, there is going to be less and less focus. Companies are really separating now what is differentiating and what is core to my business from the rest of it.

There's going to be less emphasis on, "Let’s do our scale development on a platform level" and more, "Let’s really seek out those vendors that are going to enable us to effectively integrate, so we don’t have to do double entry of data between different solutions. Let's look out for the solutions that allow us to apply the governance and that effectively let us tailor our experience with these solutions in a way that doesn’t impinge upon the provider’s ability to deliver in a cost effective fashion."

That’s going to become much more important. So, a lot of the development onus is going to be on the providers, rather than on the actual buyers.

Enterprise architects need to break out of the idea of focusing on how to address the boundary between IT and the business and talk to the business in business terms.

One way of doing that that I have seen as effective is to look at it from the standpoint of portfolio management. Where you were familiar with financial portfolio management, now you are looking at a service portfolio, as well as looking at your overall business and all of your business processes as a portfolio. How can you optimize at a macro level for your portfolio of all the investment decisions you're making, and how the various processes and services are enabled? Then, it comes down to a money issue.

Shadow IT

Harrington: We're seeing a lot of cloud uptake in the small businesses. I work for a 50-person company. We have one "sort of" IT person and we do virtually everything in the cloud. We have people in Australia and Canada, here in the States, headquartered in the UK, and we use cloud services for virtually everything across that. I'm associated with a number of other small companies and we are seeing big uptake of cloud services.

We talked about line management IT getting involved in acquiring cloud services. If you think we've got this thing called "shadow IT" today, wait a few years. We're going to have a huge problem with shadow IT.

From the architect’s perspective, there's lot to be involved with and a lot to play with. There's an awful lot of analysis to be done -- what is the value that the cloud solution being proposed is going to be supplying to the organization in business terms, versus the risk associated with it? Enterprise architects deal with change, and that’s what we're talking about. We're talking about change, and change will inherently involve risk.

The federal government has done some analysis. In particular, the General Services Administration (GSA), has done some considerable analysis on what they think they can save by going to, in their case, a public cloud model for email and collaboration services. They've issued a $6.7 million contract to Unisys as the systems integrator, with Google being the cloud services supplier.

So, the debate over the benefits of cloud, versus the risks associated with cloud, is still going on quite heatedly.

Skilton: From personal experience, there are probably three areas of adaptation of cloud into businesses. For sure, there are horizontal common services to which what you'd call the homogeneous cloud solution could be applied, common to a number of business units or operations across a market.

But we're starting to increasingly see the need for customization to meet the vertical competitive needs of a company or the decisions within that large company. So, differentiation and business models are still there; they persist in platform cloud just as they did in the pre-cloud era.

But, the key thing is that we're seeing a different kind of potential that a business can do now with cloud -- a more elastic, explosive expansion and contraction of a business model. We're seeing fundamentally the operating model of the business growing, and the industry can change using cloud technology.

So, there are two things going on: the business and the technologies are both changing because of the cloud.

... There are two more key points. There's a missing architecture practice that needs to be there, which is a workload analysis, so that you design applications to fit specific infrastructure containers, and you've got a bridge between the application service and the infrastructure service. There needs to be a piece of work by enterprise architects (EAs) that starts to bring that together as a deliberate design for applications to be able to operate in the cloud. And the PaaS platform is a perfect environment.

The second thing is that there's a lack of policy management in terms of technical governance, partly because of a lack of understanding. There needs to be more of a matching exercise going on. The key thing is that that needs to evolve.

Part of the work we're doing in The Open Group with the Cloud Computing Work Group is to develop new standards and methodologies that bridge those gaps between infrastructure, PaaS, platform development, and SaaS.

Plunkett: Another place we're seeing a lot of growth with regard to private clouds is actually on the defense side. The U.S. Defense Department is looking at private clouds, but they also have to deal with this core and context issue. The requirements for a [Navy] shipboard system are very different from the land-based systems.

Ships have to deal with narrow bandwidth and going disconnected. They also have to deal with coalition partners, or perhaps they are providing humanitarian assistance and dealing even with organizations we wouldn’t normally consider military. So they have to deal with lots of information assurance issues and have completely different governance concerns than we normally think about for public clouds.

We talked about the importance of governance increasing as the IT industry went into SOA. Well, cloud is going to make it even more important. Governance throughout the lifecycle, not just at the end, not just at deployment, but from the very beginning.

You mentioned variable workloads. Another place where we are seeing a lot of customers approach cloud is when they are starting a new project. Because then, they don’t have to migrate from the existing infrastructure. Instead everything is brand new. That’s the other place where we see a lot of customers looking at cloud, your greenfields.

Virdi: I think what we are really looking [to cloud] for is speed to put new products into the market or evolve the products that we already have, how to optimize business operations, and how to reduce cost. These may apply in parallel across vertical industries, where all these things are probably going to be working as a cloud solution.

How to measure and create a new product or solution is the really cool thing you would be looking for in the cloud. And it has proven pretty easy to put a new solution into the market. So, speed is also the big thing in there.

All these business decisions are going to be coming upstream, and business executives need to be more aware of how cloud could be utilized as a delivery model. The enterprise architects and those with a technical background need to educate or drive them to make the right decisions and choose the proper solutions.

It has an impact on how you want to use the cloud, as well as how you get out of it, in case you want to move to different cloud vendors or providers. All those things come into play upstream, rather than downstream.

You probably also have to figure out how you want to plan to adapt to the cloud. You don’t want to start with a Big Bang. You want to start in incremental steps, small steps, and test out what you really want to do. If that works, then go do the other things after that.

Gordon: One example in talking about core and context is when you look in retail. You can have two retailers like a Walmart or a Costco, where they're competing in the same general space, but are differentiating in different areas.

Walmart is really differentiating on the supply chain, and so it’s not a good candidate for public cloud computing solutions. That might possibly be a candidate for private cloud computing.

But that’s really where they're going to invest in the differentiating, as opposed to a Costco, where it makes more sense for them to invest in their relationship with their customers and their relationship with their employees. They're going to put more emphasis on those business processes, and they might be more inclined to outsource some of the aspects of their supply chain.

Hard to do it alone

Skilton: The lesson that we're learning in running private clouds for our clients is the need to have much more of a running-IT-as-a-business ethos and approach. We find that if customers try to do it themselves, either they find it difficult, because they are used to buying that as a service, or they have to change their enterprise architecture and support service disciplines to operate the cloud.

Also, fundamentally the data center and network strategies need to be in place to adopt cloud. From my experience, the data center transformation or refurbishment strategies or next generation networks tend to be done as a separate exercise from the applications area. So a strong, strong recommendation from me would be to drive a clear cloud route map to your data center.

Harrington: Again, we're back to the governance, and certification of some sort. I'm not in favor of regulation, but I am in favor of some sort of third-party certification of services that consumers can rely upon safely. But, I will go back to what I said earlier. It's a combination of governance, treating the cloud services as services per se, and enterprise architecture.

Plunkett: What we're seeing with private cloud is that it’s actually impacting governance, because one of the things that you look at with private cloud is charge-back between different internal customers. This is forcing these organizations to deal with complexity, money, and business issues that they don't really like to do.

Nowadays, it's mostly vertical applications, where you've got one owner who is paying for everything. Now, we're actually going back to, as we were talking about earlier, dealing with some of the tricky issues of SOA.

Securing your data

Virdi: Private clouds actually allow you to make the business more modular. Your capability is going to be a little bit more modular, and interoperability testing could happen in the private cloud. Then you can actually use those same kinds of modular functions, utilize the public cloud, and work with other commercial off-the-shelf (COTS) vendors that really package this as new holistic solutions.

Configuration and change management -- how in the private cloud we are adapting to it and supporting different customer segments is really the key. This could be utilized in the public cloud too, as well as how you are really securing your information and data or your business knowledge. How you want to secure that is key, and that's why the private cloud is there. If we can adapt to or mimic the same kind of controls in the public cloud, maybe we'll have more adoptions in the public cloud too.

Gordon: I also look at it in a little different way. For example, in the U.S., you have the National Security Agency (NSA). For a lot of what you would think of as their non-differentiating processes, for example payroll, they can't use ADP. They can't use that SaaS for payroll, because they can't allow the identities of their employees to become publicly known.

Anything that involves their employee data and all the rest of the information within the agency has to be kept within a private cloud. But, they're actively looking at private cloud solutions for some of the other benefits of cloud.

In one sense, I look at it and say that private cloud adoption to me tells a provider that this is an area that's not a candidate for a public-cloud solution. But, private clouds could also be another channel for public cloud providers to be able to better monetize what they're doing, rather than just focusing on public cloud solutions.

Impact on mergers and acquisitions

Plunkett: Not to speak on behalf of Oracle, but we've gone through a few mergers and acquisitions recently, and I do believe that having a cloud environment internally helps quite a bit. Specifically, TJ made the earlier point about modularity. Well, when we're looking at modules, they're easier to integrate. It’s easier to recompose services, and all the benefits of SOA really.

Gordon: If you are going to effectively consume and provide cloud services, you do become much more rigorous about your change management and your configuration management, and you then apply that out at a larger process level. Some of this comes back to some of the discussions we were having about the extra discipline that comes into play.

So, if you define certain capabilities within the business in a much more modular fashion, then, when you go through that growth and add on people, you have documented procedures and processes. It’s much easier to bring someone in and say, "You're going to be a product manager, and that job role is fungible across the business."

That kind of thinking, the cloud constructs applied up at a business architecture level, enables the kind of business expansion that we are looking at.

Harrington: [As for M&As], it depends a lot on how close the organizations are, how close their service portfolios are, to what degree has each of the organizations adapted the cloud, and is that going to cause conflict as well. So I think there is potential.

Skilton: Right now, I'm involved in merging in a cloud company that we bought last year in May ... It’s kind of a mixed blessing with cloud. With our own cloud services, we acquire these new companies, but we still have the same IT integration problem to then exploit that capability we've acquired.

Each organization in the commercial sector can have different standards, and then you still have that interoperability problem that we have to translate to make it benefit, the post merger integration issue. It’s not plug and play yet, unfortunately.

But, the upside is that I can bundle that service that we acquired, because we wanted to get that additional capability, and rewrite design techniques for cloud computing. We can then launch that bundle of new services faster into the market.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: The Open Group.

You may also be interested in:

Monday, February 21, 2011

The Open Trusted Technology Provider Framework Aims at Nothing Less Than Securing the Global IT Supply Chain

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.

By Andras Szakal

Nearly two months ago, we announced the formation of The Open Group Trusted Technology Forum (OTTF), a global standards initiative among technology companies, customers, government and supplier organizations to create and promote guidelines for manufacturing, sourcing, and integrating trusted, secure technologies.

The OTTF’s purpose is to shape global procurement strategies and best practices to help reduce threats and vulnerabilities in the global supply chain. I’m proud to say that we have just completed our first deliverable toward achieving our goal: The Open Trusted Technology Provider Framework (O-TTPF) whitepaper.

The framework outlines industry best practices that contribute to the secure and trusted development, manufacture, delivery and ongoing operation of commercial software and hardware products. Even though the OTTF has only recently been announced to the public, the framework and the work that led to this whitepaper have been in development for more than a year: first as a project of the Acquisition Cybersecurity Initiative, a collaborative effort facilitated by The Open Group between government and industry verticals under the sponsorship of the U.S. Department of Defense (OUSD (AT&L)/DDR&E).

The framework is intended to benefit technology buyers and providers across all industries and across the globe concerned with secure development practices and supply chain management. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

More than 15 member organizations joined efforts to form the OTTF as a proactive response to the changing cyber security threat landscape, which has forced governments and larger enterprises to take a more comprehensive view of risk management and product assurance. Current members of the OTTF include Atsec, Boeing, Carnegie Mellon SEI, CA Technologies, Cisco Systems, EMC, Hewlett-Packard, IBM, IDA, Kingdee, Microsoft, MITRE, NASA, Oracle, and the U.S. Department of Defense (OUSD(AT&L)/DDR&E), with the forum operating under the stewardship and guidance of The Open Group.

Over the past year, OTTF member organizations have been hard at work collaborating, sharing and identifying secure engineering and supply chain integrity best practices that currently exist. These best practices have been compiled from a number of sources throughout the industry including cues taken from industry associations, coalitions, traditional standards bodies and through existing vendor practices. OTTF member representatives have also shared best practices from within their own organizations.

From there, the OTTF created a common set of best practices distilled into categories and eventually categorized into the O-TTPF whitepaper. All this was done with a goal of ensuring that the practices are practical, outcome-based, aren’t unnecessarily prescriptive and don’t favor any particular vendor.

The framework

The diagram below outlines the structure of the framework, divided into categories that outline a hierarchy of how the OTTF arrived at the best practices it created.

Trusted technology provider categories

Best practices were grouped by category because the types of technology development, manufacturing or integration activities conducted by a supplier are usually tailored to suit the type of product being produced, whether it is hardware, firmware, or software-based. Categories may also be aligned by manufacturing or development phase so that, for example, a supplier can implement a secure engineering/development method if necessary.

Provider categories outlined in the framework include:
  • Product engineering/development method
  • Secure engineering/development method
  • Supply chain integrity method
  • Product evaluation method
  • Establishing conformance and determining accreditation
In order for the best practices set forth in the O-TTPF to have a long-lasting effect on securing product development and the supply chain, the OTTF will define an accreditation process. Without an accreditation process, there can be no assurance that a practitioner has implemented practices according to the approved framework.

After the framework is formally adopted as a specification, The Open Group will establish conformance criteria and design an accreditation program for the O-TTPF. The Open Group currently manages multiple industry certification and accreditation programs, operating some independently and some in conjunction with third party validation labs. The Open Group is uniquely positioned to provide the foundation for creating standards and accreditation programs. Since trusted technology providers could be either software or hardware vendors, conformance will be applicable to each technology supplier based on the appropriate product architecture.

At this point, the OTTF envisions a multi-tiered accreditation scheme, which would allow for many levels of accreditation including enterprise-wide accreditations or a specific division. An accreditation program of this nature could provide alternative routes to claim conformity to the O-TTPF.

Over the long-term, the OTTF is expected to evolve the framework to make sure its industry best practices continue to ensure the integrity of the global supply chain. Since the O-TTPF is a framework, the authors fully expect that it will evolve to help augment existing manufacturing processes rather than replace existing organizational practices or policies.

There is much left to do, but we’re already well on the way to ensuring the technology supply chain stays safe and secure. If you’re interested in shaping the Trusted Technology Provider Framework best practices and accreditation program, please join us in the OTTF.

Download the O-TTPF paper, or read the O-TTPF in full here.

This guest post is courtesy of Andras Szakal, IBM Distinguished Engineer and Director of IBM's Federal Software Architecture team.

You may also be interested in:

Friday, February 18, 2011

Explore the role and impact of the Open Trusted Technology Forum to help ensure secure IT products in global supply chains

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Get the free white paper. Sponsor: The Open Group.

Join a panel discussion that examines The Open Group’s new Open Trusted Technology Forum (OTTF), established in December to find ways to safely conduct global procurement and supply-chain commerce among and between technology acquirers and buyers.

The forum is tasked with providing transparency, collaboration, innovation, and more trust in the partners and market participants in an IT supplier environment. The goal is for the OTTF to lead to reduced business risk for global supply activities in the IT field. [Get the new free OTTF white paper.]

Presented in conjunction with The Open Group Conference held in San Diego, the week of Feb. 7, the panel of experts examines how the OTTF will function, what its new framework will be charged with providing, and ways that participants in the global IT commerce ecosystem can become involved with and perhaps use the OTTF’s work to their advantage.

Here with us to delve into the mandate and impact of the Open Trusted Technology Forum is the panel: Dave Lounsbury, Chief Technology Officer for The Open Group; Steve Lipner, Senior Director of Security Engineering Strategy in Microsoft’s Trustworthy Computing Group; Andras Szakal, Chief Architect in IBM’s Federal Software Group and an IBM distinguished engineer; and Carrie Gates, Vice President and Research Staff Member at CA Labs. The discussion is moderated by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Lounsbury: The OTTF is a group that came together under the umbrella of The Open Group to identify and develop standards and best practices for trusting supply chains. It's about how one consumer in a supply chain could trust their partners and how they will be able to indicate their use of best practices in the market, so that people who are buying from the supply chain or buying from a specific vendor will be able to know that they can procure this with a high level of confidence.

This actually started a while ago at The Open Group by a question from the U.S. Department of Defense (DoD), which faced the challenge of buying commercial off-the-shelf product. Obviously, they wanted to take advantage of the economies of scale and the pace of technology in the commercial supply chain, but realized that means they're not going to get purpose-built equipment, that they are going to buy things from a global supply chain.

If you pick up any piece of technology, for example, it will be designed in the US, assembled in Mexico, and built in China. So we need that international and global dimension in production of this set of standards as well.

They asked, "What would we look for in these things that we are buying to know that people have used good engineering practices and good supply chain management practices? Do they have a good software development methodology? What would be those indicators?"

Now, that was a question from the DoD, but everybody is on somebody’s supply chain. People buy components. The big vendors buy components from smaller vendors. Integrators bring multiple systems together.

So, this is a really broad question in the industry. Because of that, we felt the best way to address this was bring together a broad spectrum of industry to come in, identify the practices that they have been using -- your real, practical experience -- and bring that together within a framework to create a standard for how we would do that.

Szakal: In today’s environment, we're seeing a bit of a paradigm shift. We're seeing technology move out of the traditional enterprise infrastructure. We're seeing these very complex value chains be created. We're seeing cloud computing.

Smarter infrastructures

We're actually working to create smarter infrastructures that are becoming more intelligent, automated, and instrumented, and they are very much becoming open-loop systems.

As technology becomes more pervasive and gets integrated into these environments, into the critical infrastructure, we have to consider whether they are vulnerable and how the components that have gone into these solutions are trustworthy.

Governments worldwide are asking that question. They're worried about critical infrastructure and the risk of using commercial, off-the-shelf technology -- software and hardware -- in a myriad of ways, as it gets integrated into these more complex solutions.

Part of our focus here is to help our constituents, government customers and critical infrastructure customers, understand how the commercial technology manufacturers, the software development manufacturers, go about engineering and managing their supply chain integrity.

[The OTTF] is about all types of technology. Software obviously is a particularly important focus, because it’s at the center of most technology anyway. Even if you're developing a chip, a chip has some sort of firmware, which is ultimately software. So that perception is valid to a certain extent, but no, not just software, hardware as well.

Our vision is that we want to leverage some of the capability that's already out there. Most of us go through Common Criteria evaluations, and that is actually listed as a best practice for validating security functions and products.

Where we are focused, from a [forthcoming] accreditation point of view, affects more than just security products. That's important to know. However, we definitely believe that the community of assessment labs out there that already conducts security evaluations, whether country-specific or Common Criteria, needs to be leveraged. We'll endeavor to do that and integrate them into both the membership and the thinking of the accreditation process.

Lounsbury: The Open Group provides the framework under which both buyers and suppliers at any scale could come together to solve a common problem -- in this case, the question of providing trusted technology best practices and standards. We operate a set of proven processes that ensure that everyone has a voice and that all these standards go forward in an orderly manner.

The new OTTF white paper actually lays out the framework. The work of the forum is to turn that framework into an Open Group standard and populate it. That will provide the standards and best-practice foundation for this conformance program. [Get the new free OTTF white paper.]

We're just getting started on the vision for a conformance program. One of the challenges here is that first, not only do we have to come up with the standard and then come up with the criteria by which people would submit evidence, but you also have to deal with the problem of scale.

If we really want to address this problem of global supply chains, we're talking about a very large number of companies around the world. It’s a part of the challenge that the forum faces.

Accrediting vendors

Part of the work that they've embarked on is, in fact, to figure out how we wouldn't necessarily do that kind of conformance one-on-one, but how we would accredit either vendors themselves -- those that have their own quality processes, as a big vendor would -- or third parties who can do assessments and then help provide the evidence for that conformance.

We're getting ahead of ourselves here, but there would be a certification authority that would verify that all the evidence is correct and grant some certificate that says that they have met some or all of the standards.

We provide infrastructure for doing that ... The Open Group operates industry-based conformance programs -- certification programs -- that allow someone who is not a member to come in, indicate their conformance to the standard, and give evidence that they're using the best practices there.
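
As a purely illustrative aside -- none of this reflects how The Open Group actually runs its conformance programs -- the basic idea described here, a certification authority checking submitted evidence against a standard's requirements and granting a certificate for what is met, can be sketched in a few lines of Python. The requirement names and evidence keys below are invented for illustration.

    # Hypothetical sketch of the conformance idea described above: a
    # certification authority compares submitted evidence against the
    # requirements of a standard and records which ones were met.
    # Requirement names and evidence keys are invented for illustration.

    REQUIRED_PRACTICES = {
        "threat_modeling": "a documented threat model for each release",
        "code_review": "peer review records for all changes",
        "supply_chain_integrity": "attestations from upstream suppliers",
    }

    def review_submission(evidence: dict) -> dict:
        """Return, for each required practice, whether evidence was submitted."""
        return {practice: bool(evidence.get(practice)) for practice in REQUIRED_PRACTICES}

    def grant_certificate(evidence: dict) -> str:
        """A toy 'certification authority': check evidence against requirements."""
        results = review_submission(evidence)
        if all(results.values()):
            return "certificate granted: all required practices evidenced"
        missing = ", ".join(p for p, ok in results.items() if not ok)
        return f"not granted; missing evidence for: {missing}"

    # Example submission: two of the three hypothetical requirements are met.
    print(grant_certificate({
        "threat_modeling": ["threat-model-v1.pdf"],
        "code_review": ["review-log.csv"],
    }))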

Lipner: Build with integrity really means that the developer who is building a technology product, whether it be hardware or software, applies best practices and understood techniques to prevent the inclusion of security problems, holes, bugs, in the product -- whether those problems arise from some malicious act in the supply chain or whether they arise from inadvertent errors. With the complexity of modern software, it’s likely that security vulnerabilities can creep in.

So, what build with integrity really means is that the developer applies best practices to reduce the likelihood of security problems arising, as much as commercially feasible.

And not only that, but any given supplier has processes for convincing himself that upstream suppliers, component suppliers, and people or organizations that he relies on, do the same, so that ultimately he delivers as secure a product as possible.

Creating a discipline

One of the things we think that the forum can contribute is a discipline that governments and potentially other customers can use to say, "What is my supplier actually doing? What assurance do I have? What confidence do I have?"

To the extent that the process is successful, customers will really value the certification, and that will open markets, or create preferences in markets, for organizations that have sought and achieved the certification.

Obviously, there will be effort involved in achieving [pending OTTF] certification, but that will be related to real value, more trust, more security, and the ability of customers to buy with confidence.

The challenge that we'll face as a forum going forward is to make the processes deterministic and cost-effective. I can understand what I have to do. I can understand what it will cost me. I won't get surprised in the certification process and I can understand that value equation. Here's what I'm going to have to do and then here are the markets and the customer sets, and the supply chains it's going to open up to me.

International trust

Gates: This all helps tremendously in improving trust internationally. We're looking at developing a framework that can be applied regardless of which country you're coming from. So, it is not a US-centric framework that we'll be using and adhering to.

We're looking for a framework so that each country, regardless of its government, regardless of the consumers within that country, all of them have confidence in what it is that we're building, that we're building with integrity, that we are concerned about both malicious acts or inadvertent errors.

And each country has its own bad guys, so by adhering to an international standard we can say that we're looking out for the bad guys of every country and ensuring that what we provide is the best possible software.

If you refer to our white paper, we start to address that there. We're looking at a number of different metrics across the board. For example, what do you have for documentation practices? Do you do code reviews? There are a number of different best practices already in the field that people are using. Anyone who wants to be certified can go and look at this document and say, "Yes, we are following these best practices" or "No, we are missing this. Is it something that we really need to add? What kind of benefit will it provide to us beyond the certification?"

Lounsbury: The white paper’s name, "The Open Trusted Technology Provider Framework" was quite deliberately chosen. There are a lot of practices out there that talk about how you would establish specific security criteria or specific security practices for products. The Open Trusted Technology Provider Forum wants to take a step up and not look at the products, but actually look at the practices that the providers employ to do that. So it's bringing together those best practices.

Now, good technology providers will use good practices when they're looking at their products, but we want to make sure that they're following all of the necessary standards and best practices across the spectrum, not just, "Oh, I did this in this product."

Again, I refer everybody to the white paper, which is available on The Open Group website. You'll see there that we've divided these kinds of best practices into four broad categories: product engineering and development methods, secure engineering development methods, supply chain integrity methods, and product evaluation methods.

Under each of those categories, we'll be looking at the attributes that are necessary, and then identifying the underlying standards or bits of evidence that people can submit to indicate their conformance.
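
To make that category-and-attribute structure a little more concrete, here is a minimal illustrative sketch of the hierarchy as data. The four category names come from the discussion above; the attributes and evidence entries are partly drawn from the conversation (code reviews, threat modeling, open-source lineage) and partly hypothetical placeholders -- they are not the OTTF standard itself.

    # Illustrative only: the framework's category -> attribute -> evidence
    # shape, using the four categories named above. Attribute and evidence
    # entries are examples for discussion, not OTTF content.

    framework = {
        "Product engineering and development methods": {
            "Documentation practices": "design documents kept under version control",
            "Code reviews": "peer review required before changes are merged",
        },
        "Secure engineering development methods": {
            "Threat assessment and threat modeling": "threat model produced per release",
            "Lineage of open source": "inventory of third-party components",
        },
        "Supply chain integrity methods": {
            "Supplier assurance": "upstream suppliers evidence the same practices",
        },
        "Product evaluation methods": {
            "Security evaluation": "Common Criteria or country-specific evaluation",
        },
    }

    for category, attributes in framework.items():
        print(category)
        for attribute, evidence in attributes.items():
            print(f"  - {attribute}: {evidence}")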

I want to underscore this point about the question of the cost to a vendor. The objective here is to raise best practices across the industry and make the best practice commonplace. One of the great things about an industry-based conformance program is that it gives you the opportunity to take the standards and those categories that we've talked about as they are developed by OTTF and incorporate those in your engineering and development processes.

So you're baking in the quality as you go along, and not trying to have an expensive thing going on at the end.

Szakal: [To attain such quality and lowered risk] we have three broad categories here and we've broken each of the categories into a set of principles, what we call best practice attributes. One of those is secure engineering. Within secure engineering, for example, one of the attributes is threat assessment and threat modeling. Another would be to focus on lineage of open-source. So, these are some of the attributes that go into these large-grained categories.

Unpublished best practices

Steve and I have talked a lot about this. He worked on the secure engineering initiative, the SDLC initiative, within Microsoft. I worked on and was co-author of the IBM Secure Engineering Framework. So, these are living, published examples of some of the best practices out there, though each is specific to its company. There are others, and in many cases companies have addressed this internally, as part of their practices, without publishing them.

Part of the challenge that we're seeing, and part of the reason that Microsoft and IBM went to the length of publishing theirs, is that government customers and critical-infrastructure customers were asking what the industry practices and best practices were.

What we've done here is take the best practices in the industry and bring them together in a way that's non-vendor-specific. You're not looking to IBM, and you're not having to look at other vendors' methods of implementing these practices; it gives you a vendor-neutral way of addressing them based on outcome.

We believe that this is going to actually help vendors mature in these specific areas. Governments will recognize that, to a certain degree, the industry is not disorderly -- that we do actually have a view on what it means to develop a product in a secure engineering manner, and that we have supply chain integrity initiatives out there. Those are very important.

We're not simply focused on a bunch of security controls here. This is about industry practices for supply chain integrity, as well as our internal manufacturing practices around the actual process of engineering and software development.

That's a very important point. This is not a traditional security standard, in the sense of a hundred security controls that you should always go out and implement. You're going to have certain practices that make sense in certain situations, depending on the context of the product you're manufacturing.

Gates: In terms of getting started, the white paper is an excellent resource for understanding how the OTTF is thinking about the problem. How are we structuring things? What are the high-level attributes that we're looking at? Then, dig down further and ask, "How are we actually addressing the problem?"

We had mentioned threat modeling, which for some -- if you're not security-focused -- might be a new thing to think about, as an example, in terms of your supply chain. What are the threats to your supply chain? Who might be interested, if you're looking at malicious attack, in inserting something into your code? Who are your customers and who might be interested in potentially compromising them? How might you go about protecting them?

The security mindset is a little bit different, in that you tend to be thinking about who is it that would be interested in doing harm and how do you prevent that.

It's not a normal way of thinking about problems. Usually, people have a problem, they want to solve it, and security is an add-on afterward. We're asking that they start that thinking now and include it as part of their process.
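
To give a flavor of the supply-chain threat-modeling questions just raised -- who might want to do harm, against what, and how you would protect against it -- here is a minimal, hypothetical sketch. The actors, vectors, and mitigations are invented for illustration and are not taken from the OTTF material.

    # Hypothetical sketch: enumerating supply-chain threats as data, in the
    # spirit of the questions above. All entries are invented examples.
    from dataclasses import dataclass

    @dataclass
    class SupplyChainThreat:
        actor: str        # who might be interested in doing harm
        target: str       # which asset or customer could be compromised
        vector: str       # how something malicious might be inserted
        mitigation: str   # how you intend to protect against it

    threats = [
        SupplyChainThreat(
            actor="malicious insider at a component supplier",
            target="firmware shipped to critical-infrastructure customers",
            vector="tampered build artifact substituted before delivery",
            mitigation="require signed builds and verify hashes on receipt",
        ),
        SupplyChainThreat(
            actor="compromised open-source dependency",
            target="application code that consumes the dependency",
            vector="backdoored update pulled automatically at build time",
            mitigation="pin dependency versions and review open-source lineage",
        ),
    ]

    for t in threats:
        print(f"{t.actor} -> {t.target}: mitigate by {t.mitigation}")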

Lipner: We talk about security assurance, and assurance is really what the OTTF is about: providing developers and suppliers with ways to achieve that assurance, and providing their customers with ways to know that they have done so. This is really not about adding some security band-aid onto a technology or a product. It's about the fundamental attributes and assurance of the product or technology that's being produced.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Get the free white paper. Sponsor: The Open Group.

You may also be interested in: