Saturday, July 14, 2012

Here's how to better leverage TOGAF to deliver DoDAF capabilities and benefits

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

This guest post comes courtesy of Chris Armstrong, President of Armstrong Process Group, Inc.

By Chris Armstrong

In today’s environment of competing priorities and constrained resources, companies and government agencies have an even greater need to understand how to balance those priorities, leverage existing investments, and align their critical resources to realize their business strategy. Sound appealing?

It turns out that this is the fundamental goal of establishing an Enterprise Architecture (EA) capability. In fact, we have seen some of our clients position EA as the Enterprise Decision Support capability – that is, providing an architecture-grounded, fact-based approach to making business and IT decisions.

Many government agencies and contractors have been playing the EA game for some time -- often in the context of mandatory compliance with architecture frameworks, such as the Federal Enterprise Architecture (FEA) and the Department of Defense Architecture Framework (DoDAF).

These frameworks often focus significantly on taxonomies and reference models that organizations are required to use when describing their current state and their vision of a future state. We’re seeing a new breed of organizations that are looking past contractual compliance and want to exploit the business transformation dimension of EA.

In the Department of Defense (DoD) world, this is in part due to the new “capability driven” aspect of DoDAF version 2.0, where an organization aligns its architecture to a set of capabilities that are relevant to its mission.

The addition of the Capability Viewpoint (CV) in DoDAF 2 enables organizations to describe their capability requirements and how their organization supports and delivers those capabilities. The CV also provides models for representing capability gaps and how new capabilities are going to be deployed over time and managed in the context of an overall capability portfolio.

Critical difference

Another critical difference in DoDAF 2 is the principle of “fit-for-purpose,” which allows organizations to select which architecture viewpoints and models to develop based on mission/program requirements and organizational context. One fundamental consequence is that an organization is no longer required to create all the models for each DoDAF viewpoint. Instead, it selects the models and viewpoints that are relevant to developing and deploying its new and evolved capabilities.

While DoDAF 2 does provide some brief guidance on how to build architecture descriptions and subsequently leverage them for capability deployment and management, many organizations are seeking a more well-defined set of techniques and methods based on industry standard best practices.

This is where the effectiveness of DoDAF 2 can be significantly enhanced by integrating it with The Open Group Architecture Framework (TOGAF) version 9.1, in particular the TOGAF Architecture Development Method (ADM). The ADM not only describes how to develop descriptions of the baseline and target architectures, but also provides considerable guidance on how to establish an EA capability and how to perform architecture roadmapping and migration planning.

Most important, the TOGAF ADM describes how to drive the realization of the target architecture through integration with the systems engineering and solution delivery lifecycles. Lastly, TOGAF describes how to sustain an EA capability through the operation of a governance framework to manage the evolution of the architecture. In a nutshell, DoDAF 2 provides a common vocabulary for architecture content, while TOGAF provides a common vocabulary for developing and using that content.

I hope that those of you in the Washington, D.C. area will join me at The Open Group Conference beginning July 16, where we’ll continue the discussion of how to deliver DoDAF capabilities using TOGAF. For those of you who can’t make it, I’m pleased to announce that The Open Group will also be delivering a livestream of my presentation (free of charge) on Monday, July 16 at 2:45 p.m. ET.

Hope to see you there!

This guest post comes courtesy of Chris Armstrong, President of Armstrong Process Group, Inc. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2012. All rights reserved.

Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.


Friday, July 13, 2012

The Open Group Trusted Technology Forum is leading the way to securing global IT supply chains

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.
 
This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains.

We're joined in advance by some of the main speakers at the conference to examine the latest efforts to make global supply chains for technology providers more secure, verified, and therefore trusted. We'll examine the advancement of The Open Group Trusted Technology Forum (OTTF) to gain an update on the effort's achievements, and to learn more about how technology suppliers and buyers can expect to benefit.

The expert panel consists of Dave Lounsbury, Chief Technical Officer at The Open Group; Dan Reddy, Senior Consultant Product Manager in the Product Security Office at EMC Corp.; Andras Szakal, Vice President and Chief Technology Officer at IBM's U.S. Federal Group and also the Chair of the OTTF; and Edna Conway, Chief Security Strategist for Global Supply Chain at Cisco. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: Why is this an important issue, and why is there a sense of urgency in the markets?

Lounsbury: The Open Group has a vision of boundaryless information flow, and that necessarily involves interoperability. But interoperability doesn't have the effect that you want, unless you can also trust the information that you're getting, as it flows through the system.

Therefore, it’s necessary that you be able to trust all of the links in the chain that you use to deliver your information. One thing that everybody who watches the news would acknowledge is that the threat landscape has changed. As systems become more and more interoperable, we get more and more attacks on the system.

As the value that flows through the system increases, there’s a lot more interest in cyber crime. Unfortunately, in our world, there's now also the issue of incursions in cyberspace that are politically motivated -- whether officially state-sponsored or not.

So there is an increasing awareness on the part of government and industry that we must protect the supply chain, both through increasing technical security measures, which are handled in lots of places, and in making sure that the vendors and consumers of components in the supply chain are using proper methodologies to make sure that there are no vulnerabilities in their components.

I'll note that the demand we're hearing is increasingly for work on standards in security. That’s top of everybody's mind these days.

Reddy: One of the things that we're addressing is the supply chain item that was part of the Comprehensive National Cybersecurity Initiative (CNCI), which spans the work of two presidents. Initiative 11 was to develop a multi-pronged approach to global supply chain risk management. That really started the conversation, especially in the federal government as to how private industry and government should work together to address the risks there.

In the OTTF, we've tried to create a clear, measurable way to address supply-chain risk. It’s been really hard to even talk about supply chain risk, because you have to start by getting common agreement about what the supply chain is, and then talk about how to deal with risk by following best practices.

Szakal: One of the observations that I've made over the last couple of years is that this group of individuals, who are now part of this standards forum, have grown in their ability to collaborate, define, and rise to the challenges, and work together to solve the problem.

Standards process

Technology supply chain security and integrity are not necessarily a set of requirements or an initiative that has been taken on by the standards committee or standards groups up to this point. The people who are participating in this aren't your traditional IT standards gurus. They had to learn the standards process. They had to understand how to approach the standardization of best practices, which is how we approach solving this problem.

It’s sharing information. It’s opening up across the industry to share best practices on how to secure the supply chain and how to ensure its overall integrity. Our goal has been to develop a framework of best practices and then ultimately take those codified best practices and instantiate them into a standard, which we can then assess providers against. It’s a big effort, but I think we’re making tremendous progress.

Gardner: Because The Open Group Conference is taking place in Washington, D.C., what’s the current perception in the U.S. Government about this in terms of its role?

Szakal: The government has always taken a prominent role, at least to help focus the attention of the industry.


Now that they’ve corralled the industry and they’ve got us moving in the right direction, in many ways, we’ve fought through many of the intricate complex technology supply chain issues and we’re ahead of some of the thinking of folks outside of this group because the industry lives these challenges and understands the state of the art. Some of the best minds in the industry are focused on this, and we’ve applied some significant internal resources across our membership to work on this challenge.

So the government is very interested in it. We’ve had collaborations all the way from the White House across the Department of Defense (DoD) and within the Department of Homeland Security (DHS), and we have members from the government space in NASA and DoD.

It’s very much a collaborative effort, and I'm hoping that it can continue to be so and be utilized as a standard that the government can point to, instead of coming up with their own policies and practices that may actually not work as well as those defined by the industry.

Conway: Our colleagues on the public side of the public-private partnership that is addressing supply-chain integrity have recognized that we need to do it together.

More importantly, you need only listen to a statement from EU Commissioner Algirdas Semeta -- one which I know has often been quoted, but it’s worth noting again. He recently said that in a globalized world, no country can secure the supply chain in isolation. He recognized, again quoting, that national supply chains are ineffective and too costly unless they’re supported by enhanced international cooperation.

Mindful focus

The one thing that we bring to bear here is a mindful focus on the fact that we need a public-private partnership to comprehensively address supply chain integrity across the information and communications technology industry, internationally. That has been very important in our focus. We want to be a one-stop shop of best practices that the world can look at, so that we continue to benefit from commercial technology that is built once, or on a limited basis, and sold globally.

Combining that international focus and the public-private partnership is something that's really coming home to roost in everyone’s minds right now, as we see security value migrating away from an end point and looking comprehensively at the product lifecycle or the global supply chain.

Lounsbury: I had the honor of testifying before the U.S. House Energy and Commerce Committee's Subcommittee on Oversight and Investigations, on the view from within the U.S. Government on IT security.


It was very gratifying to see that the government does recognize this problem. We had witnesses in from the DoD and the Department of Energy (DoE). I was there because I was one of the two voices from industry that the government wants to tap into to get the industry’s best practices into the government.

It was even more gratifying to see that the concerns raised in the hearings were exactly the ones that the OTTF is pursuing. How do you validate a long and complex global supply chain in the face of a very wide threat environment, recognizing that it can’t be done by any single country? And it really needs to be not a process that you apply at a single point, but a standard that raises the security bar for all the participants in your supply chain.

So it was really good to know that we’re on track, and that the U.S. Government -- and, as we’ve heard from Edna, the European governments, and I suspect all world governments -- are looking at exactly how to tap into this industry activity.

Gardner: Where are we in the progression of the OTTF?

Lounsbury: In the last 18 months, there has been a tremendous amount of progress. The thing that I'll highlight is that early in 2012, the OTTF published a snapshot of the standard. A snapshot is what The Open Group uses to give a preview of what we expect the standard will contain. It fleshes out two areas, one on tainted products and one on counterfeit products -- the standards and best practices needed to secure a supply chain against those two vulnerabilities.

So that’s out there. People can take a look at that document. Of course, we would welcome their feedback on it. We think other people have good answers too. Also, if they want to start using that as guidance for how they should shape their own practices, then that would be available to them.

Normative guidance

That’s the top development topic inside the OTTF itself. Of course, in parallel with that, we're continuing to engage in an outreach process and talking to government agencies that have a stake in securing the supply chain, whether as part of government policy or other efforts to steer the government toward making the right decisions. In terms of exactly where we are, I'll defer to Edna and Andras on the top priority in the group.

Gardner: Edna, what’s been going on at OTTF and where do things stand?

Conway: We decided that this was, in fact, a comprehensive effort that was going to grow over time and change as the challenges change. We began by looking at two primary areas -- counterfeit and taint -- in the communications technology arena. In doing so, we first identified a set of best practices, which were referenced briefly inside that snapshot.

Where we are today is adding the diligence, extracting the knowledge and experience from the broad spectrum of participants in the OTTF, to establish a set of rigorous conformance criteria. These strike a balance between flexibility in how one goes about showing compliance with those best practices, and assurance to the end customer that there is sufficient rigor to ensure certain requirements are met meticulously, but most importantly comprehensively.
We have a practice right now where we're going through each and every requirement or best practice and thinking through the broad spectrum of the development stage of the lifecycle, as well as the end-to-end nodes of the supply chain itself.

This is to ensure that there are requirements that would establish conformance that could be pointed to, by both those who would seek accreditation to this international standard, as well as those who would rely on that accreditation as the imprimatur of some higher degree of trustworthiness in the products and solutions that are being afforded to them, when they select an OTTF accredited provider.

Gardner: Andras, I'm curious where in an organization like IBM these issues are most enforceable. Where within the private sector do the knowledge and the expertise reside?

Szakal: Speaking for IBM, we recently celebrated our 100th anniversary in 2011. We’ve had a little more time than some folks to come up with a robust engineering and development process, which harkens back to the IBM 701 and the beginning of the modern computing era.

Integrated process

We have what we call the integrated product development process (IPD), which all products -- hardware and software -- follow. And we have a very robust quality assurance team, the QSE team, which ensures that folks are following the practices that are called out. Within each line of business, there are specific requirements that apply more directly to the architecture of a particular product offering.

For example, the hardware group obviously has additional standards that it has to follow during the course of development that are specific to hardware development and the associated supply chain, and that is true of the software team as well.

The product development teams are integrated with the supply chain folks, and we have what we call the Secure Engineering Framework, of which I was an author, and the Secure Engineering Initiative, which we have continued to evolve for quite some time now, to ensure that we are effectively engineering and sourcing components and following these Open Trusted Technology Provider Standard (O-TTPS) best practices.

In fact, the work that we've done here in the OTTF has helped to ensure that we're focused in all of the same areas that Edna’s team is with Cisco, because we’ve shared our best practices across all of the members here in the OTTF, and it gives us a great view into what others are doing, and helps us ensure that we're following the most effective industry best practices.


Gardner: Dan, at EMC, is the Product Security Office something similar to what Andras explained for how IBM operates? Perhaps you could just give us a sense of how it’s done there?

Reddy: At EMC, in our Product Security Office, we house the enabling expertise to define how to build our products securely. We're interested in building that in as early as possible, throughout the entire lifecycle. We work with all of our product teams to measure where they are and to help them define their path forward as they look at each release of their products. And we’ve done a lot of work in sharing our practices within the industry.

One of the things this standard does for us, especially in the area of the supply chain, is give us a way to communicate what our practices are to our customers. Customers are looking for that kind of assurance, and rather than having one-by-one conversations with each customer about what our practices are, this allows us to demonstrate measurement and conformance against a standard to our own customers.

Also, as we flip it around and take a look at our own suppliers, we want to be able to encourage suppliers, which may be small suppliers, to conform to a standard, as we go and select who will be our authorized suppliers.

Gardner: Dave, what would you suggest for those various suppliers around the globe to begin the process?

Publications catalog


Lounsbury: Obviously, the thing I would recommend right off is to go to The Open Group website, go to the publications catalog, and download the snapshot of the OTTF standard. That gives a good overview of the two areas of best practices for protection from tainted and counterfeit products we’ve mentioned on the call here.

That’s the starting point, but of course the reason it’s very important for the commercial world to lead this is that commercial vendors face commercial market pressures and have to respond to threats quickly. So the other part of this is how to stay involved and up to date.

And of course The Open Group offers two ways to do that. You can come to our quarterly conferences, where we do regular presentations on this topic. In fact, the Washington meeting is themed on supply chain security.

Of course, the best way to do it is to actually be in the room as these standards evolve to meet the current and changing threat environment. So joining The Open Group and the OTTF is absolutely the best way to be on the cutting edge of what's happening, and to take advantage of the great information you get from the companies represented on this call, who have invested years and years, as Andras said, in making their own best practices and learning from them.

Gardner: Edna, what's on the short list of next OTTF priorities?


Conway: You’ve heard us talk about CNCI, and the fact that cybersecurity is on everyone’s mind today. While taint embodies that to some degree, we probably need to think about partnering in a more comprehensive way under the resiliency and risk umbrella that you heard Dan talk about -- really thinking about embedding security into a resilient supply chain or a resilient enterprise approach.

In fact, to give that some forethought, we have invited to the upcoming conference a colleague with whom I've worked for a number of years, a leading expert in enterprise resiliency and supply chain resiliency, to join us and share his thoughts.

He is a professor at MIT, and his name is Yossi Sheffi. Dr. Sheffi will be with us. It's from that kind of information sharing, as we think in a more comprehensive way, that we begin to gather the expertise that not only resides today globally in different pockets, whether it be academia, government, or private enterprise, but also to think about what the next generation is going to look like.

Resiliency, as it was known five years ago, is nothing like supply chain resiliency today and where we want to take it into the future. You need only look at the U.S. National Strategy for Global Supply Chain Security to understand that. When it was announced in January of this year at Davos by Secretary Napolitano of the DHS, she made it quite clear that we're now putting security at the forefront, and resiliency is a part of that security endeavor.

So that mindset is a change, given the reliance ubiquitously on communications, for everything, everywhere, at all times -- not only critical infrastructure, but private enterprise, as well as all of us on a daily basis today. Our communications infrastructure is essential to us.

Thinking about resiliency

Given that security has taken top ranking, we’re probably at the beginning of this stage of thinking about resiliency. It's not just about continuity of supply, and not just about prevention of the kinds of cyber incidents we’re worried about, but also about being cognizant of the nation-state or personal concerns that arise from parties engaging in malicious activity, whether for political, religious, or other reasons.

Or, as you know, some of them are just interested in seeing whether or not they can challenge the system, and that causes loss of productivity and a loss of time. In some cases, there are devastating negative impacts to infrastructure.


Szakal: There's another area, too, that I'm highly focused on but have set aside for the moment, and that's the continued development and formalization of the framework itself -- continuing to collect best practices from the industry and providing methods by which vendors can submit and externalize those best practices. So those are a couple of areas that I think will keep me busy for the next 12 months, easily.

Gardner: What do IT vendor companies gain if they do this properly?

Secure by Design

Szakal: Especially now, in this day and age, any time you approach security as part of the lifecycle -- what we at IBM call Secure by Design -- you're going to be ahead of the market in some ways. You're going to be in a better place. All of these best practices that we’ve defined are additive in effect. However, the very nature of technology as it exists today is that it will probably be another 50 or so years before we see a perfect security paradigm in the way that we all think about it.

So the researchers are going to be ahead of all of the providers in many ways in identifying security flaws and helping us to remediate those practices. That’s part of what we're doing here, trying to make sure that we continue to keep these practices up to date and relevant to the entire lifecycle of commercial off-the-shelf technology (COTS) development.

So that’s important, but you also have to be realistic about the best practices as they exist today. The bar is going to move as we address future challenges.
Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.


Monday, July 9, 2012

The Open Group and MIT experts detail new advances in ID management to help reduce cyber risk

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership interview comes in conjunction with The Open Group Conference in Washington, D.C., beginning July 16. The conference will focus on enterprise architecture (EA), enterprise transformation, and securing global supply chains.

We're joined in advance by some of the main speakers at the July 16 conference to examine the relationship between controlled digital identities in cyber risk management. Our panel will explore how the technical and legal support of ID management best practices have been advancing rapidly. And we’ll see how individuals and organizations can better protect themselves through better understanding and managing of their online identities.

The panelists are Jim Hietala, the Vice President of Security at The Open Group; Thomas Hardjono, Technical Lead and Executive Director of the MIT Kerberos Consortium, and Dazza Greenwood, President of the CIVICS.com consultancy, and lecturer at the MIT Media Lab. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:

Gardner: What is ID management, and how does it form a fundamental component of cyber security?

Hietala: ID management is really the process of identifying folks who are logging onto computing services, assessing their identity, authenticating them, and authorizing them to access various services within a system. It’s something that’s been around in IT since the dawn of computing, and it keeps evolving in terms of new requirements and new issues for the industry to solve.

Particularly as we look at the emergence of cloud and software-as-a-service (SaaS) services, you have new issues for users in terms of identity, because we all have to create multiple identities for every service we access.

You have issues for the providers of cloud and SaaS services, in terms of how they provision, where they get authoritative identity information for the users, and even for enterprises who have to look at federating identity across networks of partners. There are a lot of challenges there for them as well.
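
For readers newer to the space, Hietala's definition boils down to two distinct checks -- proving who you are, then deciding what you may do. Here is a minimal, illustrative Python sketch of that separation; the in-memory user store, names, and roles are hypothetical stand-ins for a real directory service or identity provider:

```python
import hashlib
import hmac

# Hypothetical in-memory user store; a real deployment would use a
# directory service or identity provider instead.
_SALT = b"a-per-user-random-salt"  # fixed here only for brevity
USERS = {
    "alice": {
        "salt": _SALT,
        "pw_hash": hashlib.pbkdf2_hmac("sha256", b"correct horse", _SALT, 100_000),
        "roles": {"reporting", "payments"},
    },
}

def authenticate(username, password):
    """Verify the claimed identity against its stored credential."""
    user = USERS.get(username)
    if user is None:
        return False
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), user["salt"], 100_000)
    return hmac.compare_digest(candidate, user["pw_hash"])

def authorize(username, service):
    """Decide whether the authenticated identity may access a service."""
    user = USERS.get(username)
    return user is not None and service in user["roles"]

if authenticate("alice", "correct horse") and authorize("alice", "payments"):
    print("access granted")
```

The federation problems discussed below are what happen when the user store in this sketch no longer lives with the service doing the checking.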

Key theme

Figuring out who is at the other end of that connection is fundamental to all of cyber security. As we look at the conference that we're putting on this month in Washington, D.C., a key theme is cyber security -- and identity is a fundamental piece of that.

You can look at things that are happening right now in terms of trojans, bank fraud, scammers, and attackers wire-transferring money out of companies’ bank accounts, and other things you can point to.

There are failures in client security and in the customers’ security mechanisms on client devices, but I think there are also identity failures. New approaches are needed for financial institutions to adopt to prevent some of those things from happening. I don’t know if I’d use the word "rampant," but they are clearly happening all over the place right now. So there is a high need to move quickly on some of these issues.


Gardner: Are we at a plateau? Or has ID management been a continuous progression over the past decade?

Hardjono: It’s been at least a decade since the industry began addressing identity and identity federation. Someone in the audience might recall the Liberty Alliance -- Project Liberty, in its early days.

One notable thing about the industry is that the efforts have been somewhat piecemeal, and the industry as a whole is now reaching the point where a true, correct identity is absolutely needed in transactions, at a time of so many so-called Internet scams.

Gardner: Dazza, is there a casual approach to this, or a professional need? By that, I mean that we see a lot of social media activities, Facebook for example, where people can have an identity and may or may not be verified. That’s sort of the casual side, but it sounds like what we’re really talking about is more for professional business or eCommerce transactions, where verification is important. In other words, is there a division between these two areas that we should consider before we get into it more deeply?

Greenwood: Rather than thinking of it as a division, a spectrum would be a more useful way to look at it. On one side, you have, as you mentioned, a very casual use of identity online, where it may be self-asserted. It may be that you've signed a posting or an email.

On the other side, of course, the Internet and other online services are being used to conduct very high-value, highly sensitive, or mission-critical interactions and transactions all the time. When you get toward that end of the spectrum, a lot more information is needed about the identity -- authenticating that it really is that person, as Thomas was starting to foreshadow. The authorization, workflow permissions, and accesses are also incredibly important.

In the middle, you have a lot of gradations, based partly on the sensitivity of what’s happening, and partly on culture and context as well. When people are operating within organizations or contexts that are well-known and well-understood -- or where there is already a lot of not just technical, but business, legal, and cultural understanding of what happens if something goes wrong -- there are the right kinds of supports and risk-management processes.

There are different ways that this can play out. It’s not always just a matter of higher security. It’s really higher confidence, and more trust based on a variety of factors. But the way you phrased it is a good way to enter this topic, which is, we have a spectrum of identity that occurs online, and much of it is more than sufficient for the very casual or some of the social activities that are happening.

Higher risk

But as the economy and our society move into a digital age, ever more fully and at ever-higher speeds, much more important, higher-risk, higher-value interactions are occurring. So we have to revisit how we have been addressing identity -- and give it more attention and more careful design of the architectures and rules around it. Then we’ll be able to make that transition more gracefully and with less collateral damage, and really get to the benefits of going online.

Gardner: What’s happening to shore this up and pull it together? Let’s look at some of the big news.

Hietala: I think the biggest recent news is the U.S. National Strategy for Trusted Identities in Cyberspace (NSTIC) initiative. It clearly shows that a large government, the United States government, is focused on the issue and is willing to devote resources to furthering an ID management ecosystem and construct for the future. To me, that’s the biggest recent news.

At a crossroads

Greenwood: We're just now at a crossroads where industry, government, and increasingly the population in general are finally understanding that there is a different playing field. In the way that we interact, the way we work, the way we do healthcare, the way we do education, the way our social groups cohere and communicate, big parts are happening online.

In some cases, it happens online through the entire lifecycle. What that means now is that a deeper approach is needed. Jim mentioned NSTIC as one of those examples. There are a number of those to touch on that are occurring because of the profound transition that requires a deeper treatment.

NSTIC is the US government’s roadmap to go from its piecemeal approach to a coherent architecture and infrastructure for identity within the United States. It could provide a great model for other countries as well.

People can reuse their identity, and we can start to address what you're talking about with identity theft -- other people taking your ID and, more to the point, how to prove you are who you said you were in order to get that ID back. That’s not always so easy after identity theft, because we don’t yet have an underlying, effective identity structure in the United States.

I just came back from a World Economic Forum meeting in the United Kingdom. I was very impressed by what their cabinet officers are doing with an identity-assurance scheme in large-scale procurement. It's very consistent with the NSTIC approach in the United States. They can get tens of millions of their citizens using secure, well-authenticated identities across a number of transactions, while always keeping privacy, security, and individual autonomy at the forefront.


There are a number of technology and business milestones occurring as well. The Open Identity Exchange (OIX) is a great group that’s beginning to bring industry and other sectors together to look at their approaches and technology. We’ve had the Security Assertion Markup Language (SAML) -- Thomas is co-chair of the committee -- and that’s getting a facelift.

That approach is being brought to mass scale with OpenID Connect, which builds on OpenID and OAuth. There are a great number of technology innovations coming online.

Legally, there are also some very interesting, newsworthy harbingers. Some of it is really just deeper usage of statutes that were passed a few years ago -- the Uniform Electronic Transactions Act and the Electronic Signatures in Global and National Commerce Act, among others, in the U.S.

There are the eSignature Directive and others in Europe and the rest of the world that have enabled interactions online and dealt with identity and signatures, but have left it to the private sector and to culture which technologies, approaches, and solutions we’ll use.

Now, we're not only getting one-off solutions, but architectures for a number of different solutions, so that whole sectors of the economy and segments of society can more fully go online. Practically everywhere you look, you see news and signs of this transition that’s occurring, an exciting time for people interested in identity.

Gardner: What’s most new and interesting from your perspective on what’s being brought to bear on this problem, particularly from a technology perspective?

Two dimensions

Hardjono: It's along two dimensions. The first one is within the Kerberos Consortium. We have a number of people coming from the financial industry. They all have the same desire: to scale their services to the global market -- basically, to sign up new customers abroad, outside the United States. In wanting to do so, they're facing a question of identity: how do we assert that somebody in a country is truly who they say they are?

That introduces a number of difficult technical problems. The second dimension, closer to home and maybe at a smaller scale, is user consent -- the next big thing. The OpenID and OpenID Connect specifications have been completed, and people can do single sign-on using technology such as OAuth 2.0.

The question is how an attribute provider -- banks, telcos, and so on, who have data about me -- can share that data with other partners in the industry, and across sectors of the industry, with my express consent, in a digital manner.
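
To make that consent step concrete, here is a minimal sketch of the first leg of an OAuth 2.0 authorization-code flow -- building the redirect that asks the user to approve sharing specific attributes. The endpoint, client ID, redirect URI, and scope names are hypothetical placeholders, not any particular provider's values:

```python
import secrets
from urllib.parse import urlencode

# All of these values are hypothetical placeholders, not a real provider's.
AUTHORIZE_ENDPOINT = "https://idp.example.com/oauth2/authorize"
CLIENT_ID = "partner-app-123"
REDIRECT_URI = "https://partner.example.com/callback"

def build_consent_url(requested_scopes):
    """Build the URL that sends the user to the attribute provider,
    where they can grant or deny the requested scopes."""
    state = secrets.token_urlsafe(16)  # anti-CSRF value, checked on the callback
    params = {
        "response_type": "code",             # authorization-code grant
        "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI,
        "scope": " ".join(requested_scopes),
        "state": state,
    }
    return AUTHORIZE_ENDPOINT + "?" + urlencode(params), state

# A bank asking the user to consent to sharing two attributes with a partner:
url, state = build_consent_url(["account_tier", "verified_address"])
print(url)
```

The user's grant or denial happens at the provider; only after approval does the partner receive a code to exchange for a token scoped to exactly those attributes.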

Gardner: Tell us a bit about the MIT Core ID approach and how this relates to the Jericho Forum approach.

Greenwood: I would defer to Jim of The Open Group to speak more authoritatively on the Jericho Forum, which is a part of The Open Group. But, in general, the Jericho Forum is a group of experts in the security field, from industry and more broadly, who have done some great work in the past on deperimeterized security and some other foundational work.


In the last few years, they've been really focused on identity, coming to realize that identity is at the center of what one would have to solve in order to have a workable approach to security. It's necessary, but not sufficient, for security. We have to get that right.

To their credit, they've come up with a remarkably good list of simple understandable principles, that they call the Jericho Forum Identity Commandments, which I strongly commend to everybody to read.

It puts forward a vision of an approach to identity that is very consistent with an approach I've been exploring here at MIT for some years. A person would have a core identity, a core ID, and from that could create more than one persona. You may have a work persona, an eCommerce persona, maybe a social-networking persona, and so on. Some people may want a separate political persona.

You could cluster all of the accounts, interactions, services, attributes, and so forth directly related to each of those individual personas, and not be in the situation we're almost blindly backing into right now, where, with a lot of the solutions in the market, different aspects of your life will merge -- sometimes unintentionally or even counter-intentionally.

Good architecture

Sometimes, that’s okay. Sometimes, in fact, we need to be able to keep different parts of life separate. That’s part of privacy and can be part of security. It's also just part of autonomy, and it's good architecture. So the Jericho Forum has the commandments.

Many years ago, at MIT, we had a project called the Identity Embassy here in the Media Lab, where we put forward some simple prototypes and ideas, ways you could do that. Now, with all the recent activity we mentioned earlier toward full-scale usage of architectures for identity in US with NSTIC and around the world, we're taking a stronger, deeper run at this problem.

Thomas and I have been collaborating across different parts of MIT, putting out what we think is a very exciting and workable way that you can -- in a high-security manner, but also quite usably -- have these core identifiers for individuals and inextricably link them to personas, while escaping any link back to the core ID or across the different personas, so that you get the benefits when you want them while keeping the personas separate.

Also it allows for many flexible business models and other personalization and privacy services as well, but we can get into that more in the fullness of time. But, in general, that’s what’s happening right now and we couldn’t be more excited about it.
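
One way to picture the core-ID-and-personas property Greenwood describes is to derive each persona identifier from the core ID with a keyed one-way function, so that personas cannot be linked to each other, or back to the core ID, without the key. This is only an illustrative sketch of the concept, with made-up names -- not the actual MIT or Jericho Forum design:

```python
import hashlib
import hmac

class CoreIdentity:
    """Holds a secret known only to the individual (or a custodian).
    Each persona identifier is a keyed one-way derivation, so relying
    parties that see different personas cannot link them to each other,
    or back to the core ID, without the key."""

    def __init__(self, secret_key: bytes):
        self._key = secret_key

    def persona_id(self, label: str) -> str:
        digest = hmac.new(self._key, label.encode(), hashlib.sha256)
        return digest.hexdigest()[:16]

me = CoreIdentity(secret_key=b"keep-this-secret-offline")
print("work persona:      ", me.persona_id("work"))
print("eCommerce persona: ", me.persona_id("ecommerce"))
print("political persona: ", me.persona_id("political"))
```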

Hardjono: For a global infrastructure for core identities to develop, we definitely need collaboration between the governments of the world and the private sector. Looking at this problem, we searched back in history for an analogy, and the best one we could find was the rollout of the DNS infrastructure and IP address assignment.
It's not perfect and it has its critics, but the idea that you could split IP addresses into blocks and have them sold and resold by private industry has really allowed the Internet to scale. It is hitting limitations, but of course IPv6 is on the horizon -- indeed, it's here today.

So we were thinking along the same philosophy, where core identifiers could be arranged in blocks and handed out to the private sector, which could assign, sell, or manage them on behalf of people -- those who are Internet savvy, and those who perhaps are not, such as my mom. So we have a number of challenges in that phase.
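
Hardjono's analogy suggests a registry model much like IP address allocation: carve the identifier space into contiguous blocks and delegate them to private-sector registrars. A toy sketch, with block sizes and registrar names made up purely for illustration:

```python
class CoreIdRegistry:
    """Toy registry that hands out contiguous blocks of core
    identifiers to registrars, in the spirit of IP address allocation.
    Block size and numbering are invented for illustration only."""

    def __init__(self, block_size=1_000_000):
        self.block_size = block_size
        self.next_start = 0
        self.allocations = {}  # registrar -> list of (start, end) blocks

    def allocate_block(self, registrar: str):
        start = self.next_start
        end = start + self.block_size - 1
        self.next_start = end + 1
        self.allocations.setdefault(registrar, []).append((start, end))
        return start, end

registry = CoreIdRegistry()
print(registry.allocate_block("registrar-a"))  # (0, 999999)
print(registry.allocate_block("registrar-b"))  # (1000000, 1999999)
```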

Gardner: Does this relate to the MIT Model Trust Framework System Rules project?

Greenwood: The Model Trust Framework System Rules project that we are pursuing in MIT is a very important aspect of what we're talking about. Thomas and I talked somewhat about the technical and practical aspects of core identifiers and core identities. There is a very important business and legal layer within there as well.

So these trust framework system rules are ways to begin to approach the complete interconnected set of dimensions necessary to roll out these kinds of schemes at the legal, business, and technical layers.


They come from very successful examples in the past, where organizations have federated ID with more traditional approaches such as SAML and other approaches. There are some examples of those trust framework system rules at the business, legal, and technical level available.

Right now they're available at CIVICS.com, and soon, when we release our MIT model under a Creative Commons approach, we'll have taken a lot of the best of what’s come before and codified it in a rational way. Business, legal, and technical rules can then be aligned in a more granular way to fit well, producing a model that we think will be very helpful for the identity solutions of today that are looking to federate according to NSTIC and similar models. It would absolutely be applicable to the core identity and persona architecture and infrastructure that Thomas, I, and the Jericho Forum are postulating.

Hardjono: Looking back 10-15 years, we engineers came up with all sorts of solutions and standardized them. What’s really missing is the business models, business cases, and of course the legal side.

How can a business make revenue out of managing identity-related aspects, managing attributes, and so on? And how can they do so in a manner that doesn’t violate the user’s privacy, but is still user-centric, in the sense that the user needs to give consent and can withdraw it? And we're trying to develop an infrastructure where everybody is protected.

Gardner: The Open Group is a global organization focused on the collaborative process behind the establishment of standards. It sounds like these are some important aspects that you can bring to your audience, and start the collaboration and discussion that could lead to fuller implementation. Is that the plan, and is that what we're expecting to hear more of at the conference next month?

Hietala: It is the plan, and we do get a good mix at our conferences and events of folks from all over the world, from government organizations and large enterprises as well. So it tends to be a good mixing of thoughts and ideas from around the globe on whatever topic we're talking about -- in this case identity and cyber security.

At the Washington Conference, we have a mix of discussions. The kick-off one is by a fellow by the name of Joel Brenner, who has written a book, America the Vulnerable, which I would recommend. He was inside the National Security Agency (NSA), and he's been involved in fighting a lot of the cyber attacks. He has really good insight into what's actually happening on the threat side and on defending against the threat. So that will be a very interesting discussion. [Read an interview with Joel Brenner.]

Then, on Monday, we have conference presentations in the afternoon looking at cyber security and identity, including Thomas and Dazza presenting on some of the projects that they’ve mentioned.

Cartoon videos

Then, we're also bringing to that event for the first time, a series of cartoon videos that were produced for the Jericho Forum. They describe a lot of the commandments that Dazza mentioned in a more approachable way. So they're hopefully understandable to laymen, and folks with not as much understanding about all the identity mechanisms that are out there. So, yeah, that’s what we are hoping to do.

Gardner: Perhaps we could now better explain what NSTIC is and does?

Greenwood: The best person to speak about NSTIC in the United States right now is probably President Barack Obama, because he is the person who signed the policy. Our president and the administration have taken a needed, and I think very well-conceived, approach to getting industry involved with other stakeholders in creating the architecture that’s going to be needed for identity in the United States -- and as a model for the world, including how to interact with other models.


Jeremy Grant is in charge of the program office, and he is very accessible. So if people want more information, they can find Jeremy online easily at nist.gov/nstic. And nstic.us also has more information.

In general, NSTIC is a strategy document and a roadmap for how a national ecosystem can emerge, comprising a governing body. They're beginning to put that together this very summer, with 13 different stakeholder groups -- industry, government, state and local government, academia, privacy groups, individuals (which is terrific), and so forth -- each of which will self-organize and elect or appoint a person.

That governance group will come up with more of the details in terms of what the accreditation and trust marks look like, the types of technologies and approaches that would be favored according to the general principles I hope everyone reads within the NSTIC document.

At a lower level, Congress has appropriated more than $10 million to work with the White House on a number of pilots, each under a million and a half dollars for a year or two, where individual proofs of concept, technologies, or approaches to trust frameworks will be piloted and put out where they can be used in the market.

In general, by this time two months from now, we’ll know a lot more about the governing body, once it’s been convened, and about the pilots, once those contracts have been awarded and grants have been concluded. What we can say right now is that the way it’s going to come together is with trust framework system rules -- the same exact type of entity that we are building a model of -- to help facilitate people's understanding, with templates and well-thought-through structures that they can pull down and, in turn, use as a starting point.

Circle of trust

So industry-by-industry, sector-by-sector, but also what we call circle of trust by circle of trust, folks will come up with their own specific rules to define exactly how they will meet these requirements. They can get a trust mark, be interoperable with other trust-framework-consistent rules, and eventually you'll get a clustering of those, which will lead to an ecosystem.

The ecosystem is not one-size-fits-all. It’s a lot of systems that interoperate in a healthy way and can adapt and evolve over time. A lot more, as I said, is available on nstic.us and nist.gov/nstic, and it's exciting times. It’s certainly the best government document I have ever read, and I'm very excited to see how it comes out.

Gardner: What’s coming down the pike that’s going to make this yet more important?

Hietala: I would turn to the threat and attacks side of the discussion and say that, unfortunately, we're likely to see more headlines of organizations being breached, of identities being lost, stolen, and compromised. I think it’s going to be more bad news that's going to drive this discussion forward. That’s my take based on working in the industry and where it’s at right now.

Hardjono: I mentioned user consent going forward. I think this is increasingly becoming an important, if small, step for the industry to address and resolve, through efforts like the User-Managed Access (UMA) working group within the Kantara Initiative.

Folks are trying to solve the problem of how to share resources. How can I legitimately share not only my photos and data on Flickr, but also allow my bank to share some of my attributes with partners of the bank, with my consent? It’s a small step, but it’s a pretty important step.

Greenwood: Keep your eyes on UMA out of Kantara. Keep looking at OASIS, as well, and the work that’s coming with SAML and some of the Model Trust Framework System Rules.

Most important thing

In my mind the most strategically important thing that will happen is OpenID Connect. They're just finalizing the standard now, and there are some reference implementations. I'm very excited to work with MIT, with our friends and partners at MITRE Corporation and elsewhere.

That’s going to allow mass scales of individuals to have more ready access to identities that they can reuse in a great number of places. Right now, it's a little bit catch-as-catch-can. You’ve got your Google ID or Facebook, and a few others. It’s not something that a lot of industries or others are really quite willing to accept or understand yet.

They've done a complete rethink of that, and use the best lessons learned from SAML and a bunch of other federated technology approaches. I believe this one is going to change how identity is done and what’s possible.

They’ve done such a great job on it, I might add. It fits hand in glove with the types of Model Trust Framework System Rules approaches, with a layer of UMA on top, and is completely consistent with a future architecture and infrastructure where people would have a core ID and more than one persona, which could be expressed as OpenID Connect credentials that are reusable by design across great numbers of relying parties -- getting us where we want to be with single sign-on.
So it's exciting times. If there's one thing you have to look at, I’d say do a Google search and get updates on OpenID Connect, and watch how it evolves.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.
Register for The Open Group Conference
July 16-18 in Washington, D.C. Watch the live stream.

Tuesday, July 3, 2012

Roundtable: Revlon and SAP executives describe accretive benefits from aggressive cloud adoption

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

The latest BriefingsDirect roundtable discussion focuses on two prime examples of organizations that have gleaned huge benefits from high degrees of virtualization and aggressive cloud computing adoption.

Join executives from Revlon and SAP, who recently participated in a VMware-organized media roundtable event in San Francisco. The event, attended by industry analysts and journalists, demonstrated how mission-critical applications supported by advanced virtualization strategies are transforming businesses.

The discussion examines the full implications of IT virtualization, and how accretive benefits are being realized -- from bringing speed to business requests, to enhancing security, to strategic disaster recovery (DR), and to unprecedented agility in creating and exploiting applications and data delivery value.

Our guests are David Giambruno, Senior Vice President and CIO of Revlon, and Heinz Roggenkemper, Executive Vice President of Development at SAP Labs. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What's going on with your internal cloud at SAP, and why is the speed and agility so important for you?

Roggenkemper: If you look at SAP, you find literally thousands of development systems. You find a lot of training systems. You find systems that support sales activities for pre-sales. You find systems that support our consulting organization in developing customer solutions.

From a developer's perspective, the first order of business is to get access to a system fast. Developers, by themselves, don’t care that much about cost. They want the system and they want it now. For development managers and management in general, it’s a different story.

For training, it's important that the systems are reliable and available. Of course again for management, it's the cost perspective. For people in custom development, they need the right system quickly to build up the correct environment for the particular project that they're working on.

Better supported

Also, these requirements are much better supported in the virtualized environment than they were before. We can give them the systems quickly. We can give them the systems reliably. We can give them the systems with good performance, and, from a corporate perspective, do it at a much better cost than we did before.

Our business agility and ability to respond to market drivers is greatly improved by this.

Gardner: How does the training application demonstrate some of the more productive aspects of cloud?

Roggenkemper: The most interesting part about that is that you don’t need a vanilla system, but a system that is prepared for a particular class, with the correct set of data. You need a system that can be reset to a controlled state very quickly after the end of a training class, so that it’s ready for the next one.

So there are two aspects to it. One is the reliable infrastructure on which the systems run, and the second is getting the correct system for that particular class ready in a short period of time.
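
That reset-between-classes requirement maps naturally onto virtual-machine snapshots: capture the prepared class image once, then revert to it after every session. Here is a sketch of the pattern using the libvirt Python bindings -- SAP's environment here is VMware, so treat this only as an illustration of the idea, with hypothetical VM and snapshot names:

```python
import libvirt  # libvirt Python bindings; the environment discussed above is VMware

VM_NAME = "training-vm-01"    # hypothetical VM name
BASELINE = "class-baseline"   # snapshot taken with the class data preloaded

def reset_training_vm():
    """Revert a training VM to its prepared baseline between classes."""
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.lookupByName(VM_NAME)
        snap = dom.snapshotLookupByName(BASELINE, 0)
        dom.revertToSnapshot(snap, 0)
    finally:
        conn.close()

reset_training_vm()
```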

Gardner: Are there unintended consequences or unintended benefits that come from this cloud model?




Roggenkemper: The thing that comes to my mind is that it allows us to take advantage of new computing infrastructure more quickly. We reduce the use of power, which is always a good thing.

Gardner: This idea of agility in producing these applications speaks to the concept of IT as a service. Do you see it that way?

Roggenkemper: Absolutely. And obviously, what we use internally benefits our customers as well. To have these systems available in a much shorter period of time for the customer’s development environment is as important for them as it is for us.

Future plans


Gardner: And a question about future plans. It sounds as if this works for you. Then the virtual desktop infrastructure (VDI) approach of delivering entire client environments with apps, data, and full configuration would be a natural progression. Is that something that you're looking at or perhaps you're already doing?

Roggenkemper: Some things we're already doing. We have a hefty set of terminal services in our environment as well, which people take full advantage of, especially if they're on the road or working from home.

Gardner: David, I was very interested to hear you say that advances in pervasive virtualization and cloud methods are transforming how IT operates, giving you the ability, as you said, of saying "yes" when your business leaders come calling. What have you been able to say "yes" to that exemplifies this shift in IT?

Giambruno: We've increased our project throughput over the past couple of years by 300 percent. So my job is to say "yes." I'm just here to help. I'm a service, and services are supposed to deliver. What this cloud ecosystem has delivered for us is the ability to say yes and get more done -- faster, better, cheaper.

The correlating effect is that we've seen this massive increase in our ability to deliver projects for the business, because that's really what business alignment is: I do what they want, and I give them some counsel along the way.

The second piece is that we've seen a 70 percent reduction in the time it takes us to deliver applications, because we have all of these applications available to us in the test and development site, which is part of our DR.

So this ability to move massive amounts of information -- where everything is just a file -- bring it up, and let our development teams at it, has added this whole speed, accuracy, and ability to deliver back to the business.

It’s probably easier to quantify it this way. We have 531 applications running on our internal cloud. Our internal cloud makes roughly 15,000 automated application moves a month. Our transaction rate is roughly 14,000 transactions a second. Our data change rate is between 17 and 30 terabytes a week. Over 90 percent of our corporate workload sits on our internal cloud, and it runs most of our footprint globally.

Gardner: We're talking about mission-critical apps here -- ERP, manufacturing, warehousing, business intelligence. Did you start with mission-critical apps or did you end up there? How did you progress?

Trust, but verify


Giambruno: I have a couple of "isms" that I live by. The first one is "Crawl, Walk, Run," and the second one is "Trust, but Verify." When we started our journey roughly five years ago, we started with "Crawl" -- very much "Crawl" -- and "Trust, but Verify." At Revlon, we didn't spend any more to put this in. We changed how we spent our money.

We were going through a server refresh, and instead of buying all the servers, we only bought roughly 20 percent. With the balance of that money, we bought the VMware licenses. We started putting in our storage area network (SAN) and the other core component pieces, and we took some of our low-hanging-fruit file systems and started moving all of that over.

As we did that, we started sharing with the business. We showed them what we were doing and that it still worked. Then we started the "Walk" phase of putting applications on it. We actually ran north of six nines.

System availability went up. Performance went up. And after "Crawl, Walk, Run" and "Trust, but Verify," it became "Just Keep Going." We accelerated the whole process, and we have these things we call "fuzzies" -- things we can do for the business that they weren't expecting. Every couple of months, we would start delivering new capabilities.

One of the big things we did was internalize all of our DR. We kept taking external money we were spending and giving it back to the business -- essentially investing in ourselves -- because at Revlon I'm not going to be a profit center.

For Revlon, growth is driven by giving R&D more money to develop new products for our consumers, and by giving marketing more money to tell that product story, get it out to our channels, and use the media to talk about our glamorous products.

What we've done is focus on those things -- taking the complexity out, but delivering capability to the business, while either avoiding or saving money that the business can now use to grow.

Gardner: You've been able to keep your costs at or below the previous levels. Do you credit that to virtualization, to cloud, to the entire modernization?

Giambruno: To me, it's the interaction of the entire ecosystem. It is a system. Virtualization is a huge part of that; that's where it all started. As you look through the transition, it's really been interesting. I'm going to segue back to the "saying yes" piece and what it's allowed us to be.

We have this thing called Oneness. I always talk about being the Southwest [Airlines] of computing, and I live inside of a very simple triangle. The triangle has three sides, obviously. One side is our application inventory, another is our infrastructure capabilities, and the third is my skill-sets.

Saying yes

If you're inside that space, I can say yes very quickly. What's happened inside that space has helped us contain cost. When we first started, our ratio was one physical server to seven virtual machines. A couple of years later, we're at 1:35 -- roughly a fivefold increase in capacity without any commensurate increase in cost. I give credit to my team for owning the technology and wielding it for the benefit of the business, to get the most out of it.
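The consolidation arithmetic is easy to check:

\[
\frac{35\ \text{VMs per physical host}}{7\ \text{VMs per physical host}} = 5
\]

Each physical server now carries about five times the workload it did at the start, which is how capacity grows without a commensurate increase in spend.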

The frame of reference that keeps us grounded is that we make lipstick. It's really about how much money we can save and how well we can wield that technology to deliver value and do more with less. That's what enables our company to grow.

We love simplicity and we have this Southwest computing model of taking a very complex ecosystem and making it simple to use. To a large degree it's kind of like an iPad, where the business wants to touch it, but they don’t care what’s going on underneath.

It's our job to deliver that experience and capability back to the business, without them having to think about it. I just want them to ask; we're here to help, and we can figure out a way to deliver it and keep exercising our technical capabilities to wield the technology to do more.

Gardner: What are some of the upsides on the data when it comes to this ecosystem approach?

Giambruno: One of the things we had was a big gestalt moment after our cloud went live: we literally had all of our data in one place.

One of the big challenges historically was that we had all these applications geographically dispersed. The ability to touch them, feel them, get access -- access controls, all of these things -- was monumentally challenging. At Revlon, as we went to the Southwest or Oneness model, we organized our access controls and those little things globally.

So when we had all this data and all these applications sitting in one place, with the ability to look at them and understand them, we started a fairly big effort around our master data model. We're structuring our data on the way in, so when we query the data, we already know where it is, what it does, and its relationships, instead of trying to mine through unstructured data and make sense of it. It's become this big data structure.
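Giambruno doesn't describe the implementation, but "structuring data on the way in" can be pictured as resolving every incoming record against master-data keys at ingest time, so queries join on known relationships instead of mining raw text afterward. A minimal, hypothetical Python sketch -- the master tables and field names are invented for illustration:

```python
# Hypothetical sketch of "structuring data on the way in": each record
# is resolved against master data at ingest, so its relationships
# (product, market) are known before anyone queries it.
MASTER_PRODUCTS = {"REV-001": "ColorStay Foundation"}  # invented master table
MASTER_MARKETS = {"US", "VE", "UK"}                    # invented market list

def ingest(record: dict, store: list) -> None:
    sku, market = record.get("sku"), record.get("market")
    if sku not in MASTER_PRODUCTS:
        raise ValueError(f"unknown SKU {sku!r}: fix master data first")
    if market not in MASTER_MARKETS:
        raise ValueError(f"unknown market {market!r}")
    # Attach the resolved master-data keys up front; downstream queries
    # never have to rediscover what this record relates to.
    store.append({**record, "product_name": MASTER_PRODUCTS[sku]})

store: list = []
ingest({"sku": "REV-001", "market": "US", "units": 1200}, store)
```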

I'd say we "chewed glass." We spent a couple of years chewing glass, structuring all this data, because the change rate is so big -- but there's value in information to the business. I joke that, in case you've missed it, we're in the information age. How well we can wield our information and give our leadership team information to act on is a differentiator. This big-data effort and the master data model are what we see as the golden egg going forward -- the thing that can really make a difference for the business.

Symbiotic relationship

Gardner: How does disaster recovery (DR) play into this larger set of values?

Giambruno: We've actually done this. No one was hurt, but last year our factory in Venezuela burned. It was on a Sunday afternoon, and that site had what we call a "drib." If you look at VMware's architecture, they have "data center in a box" -- I always joke that we're years ahead of them on that. We use dribs, strategically placed throughout the world, where we push capacity for our cloud. They largely run dark.

So our drib "phoned home" that it was getting hot, and we were notified that the building was on fire. The failover took us an hour and 45 minutes, and most of that time was spent finding one of my global storage guys, who was at the beach. We found Ben and got him to do his part, which was to tell the cloud to move from Venezuela to our disaster site in New Jersey.

So we joke that our model for DR is that we just copy everything. We don't even think about tiering or anything. It's this model: sometimes a Casio is just better than a Rolex. Simplicity rules, and not having to think about it ensures that we have all the data available. Again, it goes back to our cloud and virtualization. Everything is just a file. We just copy the deltas all the time. We never stop.
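"Everything is just a file, copy the deltas, never stop" describes continuous delta replication. Revlon's replication is presumably array- or hypervisor-based; as a stand-in for the idea, here is a sketch that ships only changed data to the DR site with rsync (paths and hostname are placeholders):

```python
# Hypothetical sketch of "copy the deltas all the time": continuously
# sync only what changed to the DR site. Paths and host are placeholders;
# rsync stands in for Revlon's actual replication layer.
import subprocess
import time

SRC = "/datastores/prod/"
DEST = "dr-newjersey.example.com:/datastores/dr/"

while True:
    # -a preserves metadata, -z compresses, --delete mirrors removals;
    # rsync's delta algorithm transfers only the changed portions of files.
    subprocess.run(["rsync", "-az", "--delete", SRC, DEST], check=True)
    time.sleep(60)  # "we never stop": re-sync every minute
```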

For us, it was available in less than 15 minutes. We went in, broke the synchronization, made sure everything was up to date, and told our F5s and our Infoblox appliances that Venezuela is now New Jersey. Everything swung over, we brought everything up, and we contacted the business units to test and verify everything.
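Telling the F5s and Infoblox boxes that "Venezuela is now New Jersey" is, at bottom, a traffic and DNS repoint. We don't know Revlon's actual configuration; here is a generic sketch of the DNS half using dnspython, with the zone, record name, server, and address all invented:

```python
# Hypothetical sketch of the DNS half of a site failover: repoint the
# Venezuela service name at the New Jersey VIP. Zone, names, and
# addresses are invented for illustration.
import dns.query
import dns.update

update = dns.update.Update("example.com")
# Replace the A record so "erp-ve" now resolves to the New Jersey site.
update.replace("erp-ve", 60, "A", "203.0.113.20")
response = dns.query.tcp(update, "10.0.0.53")  # authoritative DNS server
print(response.rcode())  # 0 (NOERROR) means the swing took
```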

Then we brought up all the virtual desktops, and we used Riverbed's mobile client, which we e-mailed to everyone. People either worked from home, or we had some very good partners who gave us office space where people could use the computers there. They loaded the Riverbed mobile client on those computers, brought up the virtual desktops, and went to work -- and the business didn't go away.

This is a real-world example of how you can do it, and it wasn't a lot of effort. It's this whole idea of simplicity, where you're just not putting the complexity into the system. I always go back to this iPad view of the world, where the business just wants to know what's available and we will do the rest underneath.

This high degree of virtualization lets us move all of this data around the world -- for DR, for development, and for a myriad of other uses -- and we keep finding new ways to use this capability.

Redundancy and expense

And some of the other unintended consequences are interesting. You talk about redundancy and expense -- two is one, and one is none, in a data center. Do you really need to be fully redundant, when, if something happens, you can just switch to the other data center?

I only need one core switch, or whatever it may be. You start to challenge all these old precepts of uptime, because it's almost cheaper -- less expensive -- for me to just roll the compute over for a little while and get the failure fixed, given that I have a four-hour service-level agreement (SLA) with my vendors for repairs.

You can start to question a lot of the "old ways of doing things," or what was the standard, and figure out new ways to operate. One of the things I love about my job is that you can question yourself and figure out what you can do next.

Gardner: Tell us about this extended business-process value that you're starting to explore.

Giambruno: One of the things we realized is that we could start extending our cloud. We spend a lot of time managing security and VPNs, and the audits that have to go around that.

If I could just push out a piece of my application, or make it available to them, they could update their data; we could reduce the number of APIs and connections, and all of that complexity that goes out there, and extend our MDM.

Then we can interface our MDM through our cloud to do some of this translation for us, so that they can enter data -- or we can take it from their systems -- at our cloud edge, securely and in context, and bring it back into our systems.
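As a concrete way to picture that cloud edge, here is a minimal, hypothetical endpoint where a supplier posts an update that is validated against master data before it flows inward. The framework, route, and validation rules are our illustration, not Revlon's design:

```python
# Hypothetical cloud-edge endpoint: a supplier posts an update, it is
# validated against master data in context, and only then is accepted
# for the internal systems. All names and rules are invented.
from flask import Flask, request, jsonify

app = Flask(__name__)
KNOWN_SUPPLIERS = {"acme-packaging"}   # stand-in for real authentication
MASTER_SKUS = {"REV-001", "REV-002"}   # stand-in for the MDM

@app.route("/edge/supplier/<supplier>/lead-time", methods=["POST"])
def update_lead_time(supplier):
    if supplier not in KNOWN_SUPPLIERS:
        return jsonify(error="unknown supplier"), 403
    body = request.get_json(force=True)
    if body.get("sku") not in MASTER_SKUS:
        return jsonify(error="SKU not in master data"), 400
    # A real system would queue the validated update for the internal
    # ERP rather than answer directly.
    return jsonify(accepted=True, sku=body["sku"]), 202

if __name__ == "__main__":
    app.run(port=8080)
```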

We think there are huge possibilities around automating and simplifying. But at the end of the day, it's about collaboration with our community of vendors and suppliers, and enabling them to interact with us easily.

So you're always trying to foster those relationships and get whatever synergies you can. If we make it easier for them to interact with us from a systems perspective, it just makes everybody happier. We've got some projects slated for deployment this year. Maybe in a year, if you come back, I can tell you how well we've done, or what we've done. But one of the things we're looking at is how we can really change how we operate as a company.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.
