Wednesday, July 2, 2014

Standards and APIs: How to best manage identity and security in the mobile era

The advent of the application programming interface (API) economy has created a huge, pressing need for organizations to both open up and secure access to mobile applications, data, and services anytime, anywhere, and from any device.

Awash in inadequate passwords and battling subsequent security breaches, business and end-users alike are calling for improved identity management and federation technologies. They want workable standards to better chart the waters of identity management and federation, while preserving the need for enterprise-caliber risk remediation and security.

Meanwhile, the mobile tier is becoming an integration point for scads of cloud services and APIs, yet unauthorized access to data remains common. Mobile applications are not yet fully secure, and identity control that meets audit requirements is hard to come by. And so developers are scrambling to find the platforms and tools to help them manage identity and security, too.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.

Clearly, the game has changed for creating new and attractive mobile processes, yet the same old requirements around security, management, interoperability, and openness remain unmet.

BriefingsDirect assembled a panel of experts to explore how to fix these pressing needs: Bradford Stephens, the Developer and Platforms Evangelist in the CTO’s Office at Ping Identity; Ross Garrett, Senior Director of Product Marketing at Axway; and Kelly Grizzle, Principal Software Engineer at SailPoint Technologies. The sponsored panel discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We are approaching the Cloud Identity Summit 2014 (CIS), which is coming up on July 19 in Monterey, Calif. There's a lot of frustration with finding identity services that meet the needs of both developers and enterprise operators. So let’s talk a little bit about what’s going on with APIs and identity.

What are the trends in the market that keep this problem pressing? Why is it so difficult to solve?

Interaction changes

Stephens: Well, as soon as we've settled on a standard, the way we interact with computers changes. It wasn’t that long ago that if you had Active Directory and SAML and you hand-wrote security endpoints for your security products, you were pretty much covered.

But in the last three or four years, we've gone to a world where mobile is more important than web. Distributed systems are more important than big iron. And we communicate with APIs instead of channels and SDKs, and that requires a whole new way of thinking about the problem.

Garrett: Ultimately, APIs are becoming the communication framework, the fabric, in which all of the products that we touch today talk to each other. That, by extension, presents a new identity challenge. That’s a big part of why we've seen some friction and schizophrenia around the types of identity technologies that are available to us.

So we see waves of different technologies come and go, depending on what is the flavor of the month. That has caused some frustration for developers, and will definitely come up during our Cloud Identity Summit in a couple of weeks.

Grizzle: APIs are becoming exponentially more important in the identity world now. As Bradford alluded to, the landscape is changing. There are mobile devices as well as software-as-a-service (SaaS) providers out there who are popping up new services all the time. The common thread between all of them is the need to be able to manage identities. They need to be able to manage the security within their system. It makes total sense to have a common way to do this.

APIs are key for all the different devices and ways that we connect to these service providers. Becoming standards based is extremely important, just to be able to keep up with the adoption of all these new service providers coming on board.

Gardner: As we describe this as the API economy, I suppose it’s just as much a marketplace and therefore, as we have seen in other markets, people strive for predominance. There's jockeying going on. Bradford, is this a matter of an architectural shift? Is this a matter of standards? Or is this a matter of de-facto standards? Or perhaps all of the above?

Stephens: It’s getting complex quickly. I think we're settling on standards, like it or not, mostly positively. I see most people settling on at least OAuth 2.0 as a standard token format, and OpenID Connect for conveying authentication and identity information, but I think that’s about as far as we get.

There's a lot of struggle among established vendors vying to implement these protocols. They try to bridge the gap between the old world of, say, SAML and Active Directory, and the new world of SCIM, OAuth, and OpenID Connect. The standards are pretty settled, at least for the next two years, but the tools, how we implement them, and how much work it takes developers to implement them, are going to change a lot, and hopefully for the better.

Evolving standards

Garrett: We have identified a number of new standards that are bridging this new world of API-oriented connectivity. Learning from the past of SAML and legacy single sign-on infrastructure, we definitely need some good technology choices.

The standards seem to be leading the way. But by the same token, we should keep a close eye on how fast the market moves relative to the standards. We've all seen things like OAuth progress slower than some of the implementations out there. This means the ratification of the standard was happening after many providers had actually implemented it. It's the same for OpenID Connect.

We are in line there, but the actual standardization process doesn’t always keep up with where the market wants to be.

Gardner: We've seen this play out before that the standards can lag. Getting consensus, developing the documentation and details, and getting committees to sign off can take time, and markets move at their own velocity. Many times in the past, organizations have hedged their bets by adopting multiple standards or tracking multiple ways of doing things, which requires federation and integration.

Kelly, are there big tradeoffs with standards and APIs? How do we mitigate the risk and protect ourselves by both adhering to standards and staying agile in the market?

Grizzle: That’s kind of tricky. You're right in that standards tend to lag. That’s just part and parcel of the standardization process. It’s like trying to pass a bill through Congress. It can go slow.

Something that we've seen some of these standards do right, from OAuth and from the SCIM perspective, is that both of those have started their early work with a very loose standardization process, going through not one of the big standards bodies, but something that can be a little bit more nimble. That’s how the SCIM 1.0 and 1.1 specs came out, and they came out in a reasonable time frame to get people moving on it.

Now that things have moved to the Internet Engineering Task Force (IETF), development has slowed down a little bit, but people have something to work with and are able to keep up with the changes going on there.

I don’t know that people necessarily need to adopt multiple standards to hedge their bets, but by taking what’s already there and keeping a pulse on the things that are going to change, as well as the standard being forward-thinking enough to allow some extensibility within it, service providers and clients, in the long run, are going to be in a pretty good spot.

Quick primer

Gardner: We've talked a few technical terms so far, and just for the benefit of our audience, I'd like to do a quick primer, perhaps with you, Bradford. To start: OAuth, which is with the IETF now. Could you quickly tell the audience what OAuth is, what it’s doing, and why it’s important when we talk about APIs, security, and mobile?

Stephens: OAuth is the foundation protocol for authorization when it comes to APIs for web applications. OAuth 2 is much more flexible than OAuth 1.

Basically, it allows applications to ask for access to stuff. That sounds very vague, but it’s really powerful once you start getting the right tokens for your workflows. And it provides the foundation for everything else we do with identity and APIs.

The best example I can think of is when you log into Facebook, and Facebook asks whether you really want this app to see your birthday, all your friends’ information, and everything else. Being able to communicate all that over OAuth 2.0 is a lot easier than it was with OAuth 1.0 a few years ago.
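
To make that flow concrete, here is a minimal sketch in Python of the OAuth 2.0 authorization-code exchange Stephens is describing. Every URL, credential, and scope below is a hypothetical placeholder, not any particular provider's real endpoint:

```python
# A minimal sketch of the OAuth 2.0 authorization-code flow (RFC 6749).
# Every URL, credential, and scope below is a hypothetical placeholder.
import urllib.parse

import requests

AUTHORIZE_URL = "https://idp.example.com/oauth2/authorize"
TOKEN_URL = "https://idp.example.com/oauth2/token"
CLIENT_ID = "my-mobile-app"
CLIENT_SECRET = "s3cret"
REDIRECT_URI = "https://app.example.com/callback"

# Step 1: send the user's browser to the authorization endpoint, where
# the provider shows its consent screen ("do you want this app to see
# your birthday, your friends' information, ...?").
params = {
    "response_type": "code",
    "client_id": CLIENT_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": "profile",            # what the app is asking to access
    "state": "random-csrf-token",  # protects the callback against CSRF
}
print("Visit:", AUTHORIZE_URL + "?" + urllib.parse.urlencode(params))

# Step 2: after the user consents, the provider redirects back with
# ?code=...; the app exchanges that short-lived code for a token.
def exchange_code(code):
    resp = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": code,
        "redirect_uri": REDIRECT_URI,
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
    })
    resp.raise_for_status()
    return resp.json()  # access_token, token_type, expires_in, ...
```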

Gardner: How about OpenID Connect? This is with the OpenID Foundation. How does that relate, and what is it?

Stephens: If OAuth is the medium, OpenID Connect can be described as the content of the message. It’s not the message itself. That’s usually done with a token, a JavaScript Object Notation (JSON) Web Token, but OpenID Connect provides the actual identity information.

When you access an API and you authenticate, you choose a scope, and one of the most common scopes is OpenID Profile. This OpenID Profile will just have things like your username, maybe your address, various other pieces of identity information, and it describes who the "you" is, who you are.
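
Decoded, the claims inside an ID token look something like the sketch below. The values are invented for illustration; a real token arrives as a signed JSON Web Token after an OAuth 2.0 flow that requested the openid and profile scopes:

```python
# Sketch: the decoded claims of an OpenID Connect ID token.
# All values are invented for illustration.
id_token_claims = {
    "iss": "https://idp.example.com",  # who issued the token
    "sub": "24400320",                 # stable identifier for the user
    "aud": "my-mobile-app",            # the client it was issued to
    "exp": 1404302400,                 # expiry, as Unix time
    "name": "Alice Example",           # claims unlocked by "profile"
    "preferred_username": "alice",
}

# An API that receives and verifies this token can answer
# "who is calling?" without ever handling the user's password.
print(id_token_claims["sub"], id_token_claims["name"])
```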

Gardner: And SCIM, you mentioned that Kelly, and I know you have been involved with it. So why don’t you take the primer for SCIM, and I believe it’s Simple Cloud Identity Management?

Grizzle: That's the historical name for it, Simple Cloud Identity Management. When we took the standard to the IETF, we realized that the problems that we were solving were a little bit broader than just the cloud and within the cloud. So the acronym now stands for the System for Cross-domain Identity Management.

That’s kind of a mouthful, but the concept is pretty simple. SCIM is really just an API and a schema that allow you to manage identities and identity-related information. And by manage them, I mean create identities in systems, delete them, update them, change entitlements and group memberships, and things like that.
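
As a rough illustration of "an API and a schema," creating an identity against a SCIM 2.0-style endpoint might look like the sketch below. The host, bearer token, and user attributes are hypothetical placeholders, and SCIM 1.x used a different schema URN:

```python
# Sketch: creating an identity through a SCIM 2.0-style REST call.
# Host, token, and user data are hypothetical placeholders.
import requests

resp = requests.post(
    "https://scim.example.com/v2/Users",
    headers={"Authorization": "Bearer <access-token>",
             "Content-Type": "application/scim+json"},
    json={
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": "bob",
        "name": {"givenName": "Bob", "familyName": "Builder"},
        "active": True,
    },
)
resp.raise_for_status()
print(resp.json()["id"])  # the provider assigns the new identity's id
```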

Gardner: From your perspective, Kelly, what is the relationship then between OAuth and SCIM?

Managing identities

Grizzle: OAuth, as Bradford mentioned, is primarily geared toward authorization, and answers the question, "Can Bob access this top-secret document?" SCIM is really not in the authorization and authentication business at all. SCIM is about managing identities.

OAuth assumes that an identity is already present. SCIM is able to create that identity. You can create the user "Bob." You can say that Bob should not have access to that top-secret document. Then, if you catch Bob doing some illicit activity, you can quickly disable his account through a SCIM call. So they fit together very nicely.
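
That "quickly disable his account" step maps naturally onto a SCIM PATCH that flips the standard active attribute. A minimal sketch, again with placeholder host, token, and user ID:

```python
# Sketch: disabling Bob's account with a SCIM 2.0 PATCH operation.
# The host, bearer token, and resource ID are hypothetical.
import requests

requests.patch(
    "https://scim.example.com/v2/Users/<bobs-scim-id>",
    headers={"Authorization": "Bearer <access-token>",
             "Content-Type": "application/scim+json"},
    json={
        "schemas": ["urn:ietf:params:scim:api:messages:2.0:PatchOp"],
        "Operations": [
            {"op": "replace", "path": "active", "value": False}
        ],
    },
).raise_for_status()
```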

Gardner: In the real world, developers like to be able to use APIs, but they might not be familiar with all the details that we've just gone through on some of these standards and security approaches.

How do we make this palatable to developers? How do we make this something that they can implement without necessarily getting into the nitty-gritty? Are there some approaches to making this a bit easier to consume as a developer?

Stephens: As a developer who's relatively new to this field -- I worked in databases for three years -- I've had personal experience of how hard it is to wrap your head around all the standards and all these flows. The best thing we can do is have tool providers give them tools in their native language, or in the way developers work with things.

This means well-documented, interactive APIs -- things like Swagger -- and lots of real-world code examples. Once you've actually done the process of authenticating through OAuth, getting a JSON Web Token, and getting an OpenID Connect profile, it’s really simple to see how it all works together, if you do it all through a SaaS platform that handles all the nitty-gritty, like user creation.
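
For instance, once the platform hands the application that JSON Web Token, the server-side check can be a few lines with a library such as PyJWT. The issuer, audience, and signing key here are hypothetical placeholders:

```python
# Sketch: validating a JSON Web Token server-side with PyJWT.
# Issuer, audience, and the public key are hypothetical placeholders.
import jwt  # pip install PyJWT

def verify(token, public_key):
    # Checks the RS256 signature and the standard exp/aud/iss claims,
    # raising jwt.InvalidTokenError if anything is wrong.
    return jwt.decode(
        token,
        public_key,
        algorithms=["RS256"],
        audience="my-mobile-app",
        issuer="https://idp.example.com",
    )
```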

If you have to roll your own, though, there's not a lot of information out there besides white papers and blog posts. It’s just a nightmare. I tried to roll my own. You should never roll your own.

So having SaaS platforms do all this stuff, instead of having to read documents, means that developers can focus on building their applications, and just understand that they have this token and it carries the information they need, without worrying about the mechanics of OAuth and OpenID Connect.

I don’t really care how it all works together; I just know that I have this token and it has the information I need. And it’s really liberating, once you finally get there.

So I guess the best thing we can do is provide really great tools that solve the identity-management problems.

Tools: a key point

Garrett: Tools, that’s the key point here. Whether we like it or not, developers tend to be kind of lazy sometimes and they certainly don’t have the time or the energy to understand every facet of the OAuth specification. So providing tools that can wrap that up and make it as easy to implement as possible is really the only way that we get to really secure mobile applications or any API interaction. Because without a deep understanding of how this stuff works, you can make pretty fundamental errors.

Having said that, at least we've started to take steps in the right direction with the standards. OAuth is built at least with the idea of mobile access in mind. It’s leveraging REST and JSON types, rather than SOAP and XML types, which are really way too heavyweight for mobile applications.

So the standards, in their own right, have taken us in the right direction, but we absolutely need tools to make it easy for developers.

Grizzle: Tools are of the utmost importance, and some of the identity providers and people with skin in the game, so to speak, are helping to create these tools and to open-source them, so that they can be used by other people.

Another thing that Ross touched on was keeping the simplicity in the spec. The things that we're addressing -- authorization, authentication, and managing identities -- are not always simple concepts. So in the standards that are being created, finding the right balance of complexity versus completeness and flexibility is a tough line to walk.

With SCIM, as you said, the first letter of the acronym used to stand for Simple. It’s still a guiding principle that we use to try to keep these interactions as simple as possible. SCIM uses REST and JSON, just like some of these other standards, and developers are familiar with those. Putting the burden on the right parties for implementation is very important, too. Making it easy on clients, the ones who are going to be implementing these a lot, is pretty important.

Gardner: Do these standards do more than help the API economy settle out and mature? Cloud providers or SaaS providers want to provide APIs and they want the mobile apps to consume them. By the same token, the enterprises want to share data and want data to get out to those mobile tiers. So is there a data-management or brokering benefit that goes along with this? Are we killing multiple birds with one set of standards?

Garrett: The real issue here, when we think about the new types of products and services that the API economy is helping us deliver, is around privacy and ultimately customer confidence. Putting the user in control of who gets to access which parts of my identity profile, or how contextual information about me can perhaps make identity decisions easier, allows us to lock down, or better understand, these privacy concerns that the world has.

Identity isn’t the most glamorous thing to talk about -- except when it all goes wrong and some huge leak makes the news headlines, or some other security breach loses credit-card numbers or people’s usernames and passwords.

Hand in hand

In terms of how identity services and the API economy are developing, the two go hand in hand. Unless people are absolutely certain about how their information is being used, they will simply choose not to use these services. That’s why all the work that the API management vendors and the identity management vendors are doing to bring those together is so important.

Gardner: You mentioned that identity might not be sexy or top of mind, but how else can you manage all these variables on an automated or policy-driven basis? When we move to the mobile tier, we're dealing with multiple networks. We're dealing with multiple services ... cloud, SaaS, and APIs. And then we're linking this back to enterprise applications. How other than identity can this possibly be managed?

Stephens: Identity is often thought of as usernames and passwords, but it’s evolving really quickly to be so much more. This is something I harp on a lot, but it’s really quickly becoming that who we are online is more important than who we are in real life. How I identify myself online is more important than the driver's license I carry in my wallet.

As you know, your driver’s license is like a real-life token of information that describes what you're allowed to do in your life. That’s part of your identity. Anybody who has lost their license knows that, without that, there's not a whole lot you can do.

Bringing that analogy back to the Internet, what you're able to access, and what access you're able to give other people or other applications to change important things, like your Facebook posts or your tweets, or to go through your email and help categorize it, is important. All these little tasks that help define how you live are all part of your identity. And it’s important that developers understand that, because any connected application is going to have to have a deep sense of identity.

Gardner: Let me pose the same question, but in a different way. When you do this well, when you can manage identity, when you can take advantage of these new standards that extend into mobile requirements and architectures, with the API economy in mind, what do you get? What does it endow you with? What can you do that perhaps you couldn’t do if you were stuck in some older architectures or thinking?

Grizzle: Identity is key to everything we do. Like Bradford was just saying, the things that you do online are built on the trust that you have with who is doing them. There are very few services out there where you want completely anonymous access. Almost every service that you use is tied to an identity.

So it’s of paramount importance to get a common language between these. If we don’t move to standards here, it's just going to be a major cost problem, because there are a ton of different providers and clients out there.

If every provider tries to roll their own identity infrastructure, without relying on standards, then, as a client, if I need to talk to two different identity providers, I need to write to two different APIs. It’s just an explosive problem, with the amount that everything is connected these days.

So it’s key. I can’t see how the system will stand up and move forward efficiently without these common pieces in place.

Use cases

Gardner: Do we have any examples along these same lines of what do you get when you do this well and appropriately based on what you all think is the right approach and direction? We've been talking at a fairly abstract level, but it really helps solidify people’s thinking and understanding when they can look at a use-case, a named situation or an application.

Stephens: If you want a good example of how OAuth delegation works, building a Facebook app, or just working through the Facebook app documentation, is pretty straightforward. It gives you a good idea of what it means to delegate certain authorization.

Likewise, Google is very good. It’s very integrated with OAuth and OpenID Connect when it comes to building things on Google App Engine.

So if you want to secure an API that you built using Google Cloud on Google App Engine, which is trivial to do, Google Cloud Endpoints provides a really good example. In fact, there's a button you can hit in their examples called Use OAuth, and that OAuth transports the OpenID Connect profile. That’s a pretty easy way to go about it.

Garrett: I'll just take a simple consumer example, and we've touched on this already. In the past, every individual service or product offered only its own identity solution, so I had to create a new identity profile for every product or service that I was using. This has been the case for a long time in the consumer web, and in the enterprise setting as well.

So we have to be able to solve that problem and offer a way to reuse existing identities. It involves taking technologies like OpenID Connect, which are totally hidden from the end user, and simply saying that you can use an existing identity, your LinkedIn or Facebook credentials, for example, to access some new product. That takes a lot of burden away from the consumer. Ultimately, it provides us a better security model end to end.

What these new identity service providers have been offering has, behind the scenes, been making our lives more secure. Even though some people might shy away from using their Facebook identity across multiple applications, in many ways it’s actually better to, because that’s really one centralized place where I can actually see, audit, and adjust the way that I'm presenting my identity to other people.

That’s a really great example of how these new technologies are changing the way we interact with products everyday.

Standardized approach

Grizzle: At SailPoint, the company that I work for, we have a client, a large chip maker, who has seen the identity problem and really been bitten by it within their enterprise. They have somewhere around 3,500 systems that have to be able to talk to each other, exchange identity data, and things like that.

The issue is that every time they acquire a new company or bring a new group into the fold, that company has its own set of systems that speak their own language, and it takes forever to get them integrated into their IT organization there.

So they've said that they're not going to support every app that these people bring into the IT infrastructure. They're going with SCIM and they are saying that all these apps that come in, if they speak SCIM, then they'll take ownership of those and pull them into their environment. It should just plug in nice and easy. They're doing it just because of a resourcing perspective. They can't keep up with the amount of change to their IT infrastructure and keep everything automated.

Gardner: I want to quickly look at the Cloud Identity Summit that’s coming up. It sounds like a lot of these issues are going to be top of mind there. We're going to hear a lot of back and forth and progress made.

Does this strike you, Bradford, as a tipping point of some sort, that this event will really start to solidify thinking and get people motivated? How do you view the impact of this summit on cloud identity?

Stephens: At CIS, we're going to see a lot of talk about real-world implementation of these standards. In fact, I'm running the Enterprise API track and I'll be giving a talk on end-to-end authentication using JAuth, OAuth, and OpenID Connect. This year is the year that we show that it's possible. Next year, we'll be hearing a lot more about people using it in production.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ping Identity.


Thursday, June 26, 2014

How Capgemini's UK financial services unit helps clients manage risk using big data analysis

When Capgemini's business information management (BIM) practices unit needed to provide big data capabilities to its insurance company customers, it had to deliver the right information to businesses much faster, from the bottom up.

That means an improved technical design and an architectural way of delivering information through business intelligence (BI) and analytics. The ability to bring together structured and unstructured data -- and be able to slice and dice that data in a rapid fashion; not only deploy it, but also execute rapidly for organizations out there -- was critical for Capgemini.

And that's because Capgemini's Financial Services Global Business Unit, based in the United Kingdom, must drive better value through its principal-level and senior-level consultants as they work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with a lot of those COO- and CFO-level customers.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Our next innovation case study interview therefore highlights how Capgemini is using big data and analysis to help its client organizations better manage risk.

To learn first-hand how big data and analysis help Capgemini's Global 500 clients identify the most pressing insights from huge data volumes, BriefingsDirect interviewed Ernie Martinez, Business Information Management Head at the Capgemini Financial Services Global Business Unit in London. The discussion, at the HP Discover conference in Barcelona, is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Risk has always been with us. But is there anything new, pressing, or different about the types of risks that your clients are trying to reduce and understand?

Martinez: I don't think it's as much about what's new within the risk world, as much as it's about the time it takes to provision the data so companies can make the right decisions faster, therefore limiting the amount of risk they may take on in issuing policies or taking on policies with new clients.

Gardner: In addition to the risk issue, of course, there is competition. The speed of business is picking up, and we’re still seeing difficult economic climates in many markets. How do you step into this environment and find a technology that can improve things? What have you found?

Martinez: There is the technology aspect of delivering the right information to business faster. There is also the business-driven way of delivering that information faster to business.

Bottom up

The BIM practice is a global practice. We’re ranked in the upper right-hand quadrant by Gartner as one of the best BIM practices out there, with about 7,000 BIM resources worldwide.

Our focus is on driving better value to the customer. So we have principal-level and senior-level consultants who work with group-level CEOs in the financial services, insurance, and capital markets arenas. Their main focus is to drive strategy and roadmap, consulting work, enterprise information architecture, and enterprise information strategy with a lot of those COO- and CFO-level customers.

We then drive more business into the technical design and architectural way of delivering information in business intelligence (BI) and analytics. When you talk about integrating information across the enterprise, it's about defining what the road to good looks like for an organization and what key initiatives the organization must undertake to be able to get there.

This is where our technical design, business analysis, and data analysis consultants fit in. They’re actually going in to work with the business to define what it needs to see out of its information to help it make better decisions.

Gardner: Of course, the very basis of this is to identify the information, find the information, and put the information in a format that can be analyzed. Then, do the analysis, speed this all up, and manage it at scale and at the lowest possible cost. It’s a piece of cake, right? Tell us about the process you go through and how you decide what solutions to use and where the best bang for the buck comes from?

Martinez: Our approach is to take that senior-level expertise in big data and analytics, bring that into our practice, put that together with our business needs across financial services, insurance, and capital markets, and begin to define valid use cases that solve real business problems out there.

We’re a consulting organization, and I expect our teams to be subject-matter experts on what's happening in the space and also to have a good handle on the business problems our customers are facing. If that’s true, then we should be able to outline some valid use cases that are going to solve some specific problems for business customers out there.

In doing so, we’ll define that use case. We’ll do the research to validate that indeed it is a business problem that's real. Then we’ll build the business case that outlines that if we do build this piece of intellectual property (IP), we believe we can go out and proactively affect the marketplace and help customers out there. This is exactly what we did with HP and the HAVEn platform.

Why Capgemini and our BIM practices jumped in with a partnership with HP and Vertica in the HAVEn platform is really about the ability to deliver the right information to business faster from the bottom up. That means the infrastructure and the middleware by which we serve that data to business. From the top down, we work with business in a more iterative fashion in delivering value quickly out of the data that they are trying to harvest.

Wide applicability

Gardner: So we’re talking about a situation where you want wide applicability of the technology across many aspects of what you're doing, so that it makes sense economically, but of course it also has to be the right tool for the job; that is, it has to go deep and wide. You’re in a proof-of-concept (POC) stage. How did you come to that? What were some of the chief requirements you had for striking that right balance of deep and wide?

Martinez: We, as an organization, believe that our goal as BI and analytics professionals is to deliver the right information faster to business. In doing so, you look at the technologies out there that are positioned to do that. You look at the business partners that have the mentality to actually execute in that manner. And then you look at an organization, like ours, whose sole purpose is to mobilize quickly and deliver value to the customer.

I think it was a natural fit. When you look at HP Vertica in the HAVEn platform, the ability to integrate social media data through Autonomy and then of course through Vertica and Hadoop -- the integration of the entire architecture -- gives us the ability to do many things.

But number one, it's the ability to bring in structured and unstructured data, and be able to slice and dice that data in a rapid fashion; not only deploy it, but also execute rapidly for organizations out there.

Over the course of the last six months of 2013, that conversation began to blossom into a relationship. We all work together as a team, and we think we can mobilize not just the application or the solution that we’re thinking about, but the entire infrastructure, and deliver it to our customers quickly. That's where we’re at.

What that means is that once we partnered and got the go ahead with HP Vertica to move forward with the POC, we mobilized a solution in less than 45 days, which I think shows the value of the relationship from the HP side as well as from Capgemini.

Gardner: Down the road, after some period of implementation, there are general concerns about scale when you’re dealing with big data. Because you’re near the beginning of this, how do you feel about the ability for the platform to work to whatever degree you may need?

Martinez: Absolutely no concern at all. Being here at HP Discover has certainly solidified in my mind that we’re betting on the right horse with their ability to scale. If you heard some of the announcements coming out, they’re talking about the ability to take on big data. They’re using Vertica and the HAVEn network.

There’s absolutely zero question in my mind that organizations out there can leverage this platform and grow with it over time. Also, it gives us the ability to be able to do some things that we couldn’t do a few years back.

Business value

Gardner: Ernie, let's get back to the business value here. Perhaps you can identify some of the types of companies that you think would be in the best position to use this. How will this hit the road? What are the sweet spots in the market, the applications you think are the most urgent and the right fit for this?

Martinez: When you talk about the largest insurers around the world, whether Zurich, Farmers in the US, or Liberty Mutual, you name it, these are some of our friendly customers that we're talking to and that are providing feedback to us on this solution.

We’ll incorporate that feedback. We’ll then take that to some targeted customers in North America, UK, and across Europe, that are primed and in need of a solution that will give them the ability to not only assess risk more effectively, but reduce the time to be able to make these type of decisions.

Reducing the time to provision data reduces costs by integrating data across multiple sources, whether that's customer sentiment from the Internet, from Twitter and other areas, or what customers are doing around their current policies. It allows insurers to identify customers that they might want to go after. It will increase their market share and reduce their costs. It gives them the ability to do many more things than they were able to do in the past.

Gardner: And Capgemini is in the position of mastering this platform and being able to extend the value of that platform across multiple clients and business units. Therefore, that reduces the total cost of that technology, but at the same time, you’re going to have access to data across industries, and perhaps across boundaries that individual organizations might not be able to attain.

So there's a value-add here in terms of your penetration into the industry and then being able to come up with the inferences. Tell me a little bit about how the access-to-data benefit works for you?

Martinez: If you take a look at the POC, or the use case that the POC was built on, it was built on commercial insurance risk assessment. If you take a look at the underlying architecture around commercial insurance risk, our goal was to build an architecture that would serve the use case that HP bought into, but at the same time, flatten out that data model and that architecture to also bring in better customer analytics for commercial insurance risk.

So we’ve flattened out that model and we’ve built the architecture so we could go after additional business, and more clients, across not just commercial insurance, but also general insurance. Then, you start building the customer analytics capability into that underlying architecture, and it gives us the ability to go from the insurance market over to the financial services market, as well as into the capital markets area.

Gardner: All the data in one place makes a big difference.

Martinez: It makes a huge difference, absolutely.

Future plans

Gardner: Tell us a bit about the future. We’ve talked about a couple of aspects of the HAVEn suite. Autonomy, Vertica, and Hadoop seem to be on everyone's horizon at some point or another, due to scale and efficiencies. Have you already been using Hadoop, or how do you expect to get there?

Martinez: We haven’t used Hadoop, but certainly, with its capability, we plan to. I’ve done a number of different strategies and roadmaps in engaging with larger organizations, from American Express to the largest retailer in the world. In every case, they have a lot of issues around how they’re processing the massive amounts of data that are coming into their organization.

When you look at the extract, transform, load (ETL) processes by which they are taking data from systems of record, trying to massage that data and move it into their large databases, they are having issues around load and meeting load windows.

The HAVEn platform, in itself, gives us the ability to leverage Hadoop, maybe take out some of that processing pre-ETL, and then, before we go into the Vertica environment, be able to take out some of that load and make Vertica even more efficient than it is today, which is one of its biggest selling points. It certainly is in our plans.
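
To make the pre-ETL idea concrete, here is a minimal sketch of a Hadoop Streaming mapper that could thin out raw records before the survivors are bulk-loaded into Vertica. The tab-separated record layout and field positions are invented for illustration:

```python
#!/usr/bin/env python
# mapper.py -- hypothetical Hadoop Streaming mapper that trims raw,
# tab-separated policy records down to (policy_id, claim_amount)
# pairs and drops malformed rows before the Vertica load.
import sys

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if len(fields) < 5:
        continue  # discard malformed records up front
    policy_id, claim_amount = fields[0], fields[4]
    sys.stdout.write("%s\t%s\n" % (policy_id, claim_amount))
```

The reduced output can then be bulk-loaded into Vertica with a COPY statement, keeping the heavy row-level cleanup out of the load window.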

Gardner: Another announcement here at Discover has been around converged infrastructure, where they’re trying to make the hardware-software efficiency and integration factor come to bear on some of these big-data issues. Have you thought about the deployment platform as well as the software platform?

Martinez: You bet. At the beginning of this interview, we talked about the ability to deliver the right information faster to business. This is a culture that organizations absolutely have to adopt if they're going to be able to manage the amount of data at the speed at which that data is coming to their organizations. To have a partner like HP that is talking about the convergence of software and infrastructure, all at the same time, to help companies manage this better is one of the biggest reasons why we're here.

We, as a consulting organization, can provide the consulting services and solutions that are going to help deliver the right information, but without that infrastructure, without that ability to be able to integrate faster and then be able to analyze what's happening out there, it’s a moot point. This is where this partnership is blossoming for us.

Gardner: Before we sign off, Ernie, now that you have gone through this understanding and have developed some insights into the available technologies and made some choices, is there any food for thought for others who might just be beginning to examine how to enter big data, how to create a common platform across multiple types of business activities? What did you not think of before that you wish you had known?

Lessons learned

Martinez: If I look back at lessons learned over the last 60 to 90 days within this process, it’s one thing to say that you're mobilizing the team from the bottom up, meaning from the infrastructure and the partnership with HP, as well as from the top down, with your business needs, finding the right business requirements and then actually building to that solution.

In most cases, we’re dealing with individuals. While we might talk about an entrepreneurial way of delivering solutions into the marketplace, we need to challenge ourselves, and all of the resources that we bring into the organization, to actually have that mentality.

What I’ve learned is that while we have some very good tactical individuals, having that entrepreneurial way of thinking and actually delivering that information is a different mindset altogether. It's about mentoring our resources that we currently have, bringing in that talent that has more of an entrepreneurial way of delivering, and trying to build solutions to go to market into our organization.

I didn’t really think about the impact of our current resources and how it would affect them. We were a little slow as we started the POC. Granted, we did this in 45 days, so that’s the perfectionist coming out in me, but I’d say it did highlight a couple of areas within our own team that we can improve on.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, June 24, 2014

The Open Group Amsterdam panel delves into how to best gain business value from Open Platform 3.0

The next BriefingsDirect panel discussion defines new business values from the massive Open Platform 3.0 shift that combines the impacts and benefits of big data, cloud, Internet of things, mobile and social.

Our discussion comes to you from The Open Group Conference held on May 13, 2014 in Amsterdam, where the focus was on enabling boundaryless information flow.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

To learn more about making Open Platform 3.0 a business benefit in an architected fashion, please join moderator Stuart Boardman, a Senior Business Consultant at KPN and Open Platform 3.0 Forum co-chairman; Dr. Chris Harding, Director for Interoperability at The Open Group, and Open Platform 3.0 Forum Director; Lydia Duijvestijn, Executive Architect at IBM Global Business Services in The Netherlands; Andy Jones, Technical Director for EMEA at SOA Software; TJ Virdi, Computing Architect in the Systems Architecture Group at Boeing and also a co-chair of the Open Platform 3.0 Forum; Louis Dietvorst, Enterprise Architect at Enexis in The Netherlands; Sjoerd Hulzinga, Charter Lead at KPN Consulting; and Frans van der Reep, Professor at the Inholland University of Applied Sciences.

Here are some excerpts:

Boardman: Welcome to the session about obtaining value from Open Platform 3.0, and how we're actually going to get value out of the things that we want to implement from big data, social, and the Internet-of-Things, etc., in collaboration with each other. 

We're going to start off with Chris Harding, who is going to give us a brief explanation of what the platform is, what we mean by it, what we've produced so far, and where we're trying to go with it. 

He'll be followed by Lydia Duijvestijn, who will give us a presentation about the importance of non-functional requirements (NFRs). If we talk about getting business value, those are absolutely central. Then, we're going to go over to a panel discussion with additional guests. 

Without further ado, here's Chris Harding, who will give you an introduction to Open Platform 3.0. 

Purpose of architecture

Harding: Hello, everybody. It's a great pleasure to be here in Amsterdam. I was out in the city by the canals this morning. The sunshine was out, and it was like moving through a set of picture postcards. 

It's a great city. As you walk through, you see the canals, the great buildings, the houses to the sides, and you see the cargo hoists up in the eaves of those buildings. That reminds you that the purpose of the arrangement was not to give pleasure to tourists, but because Amsterdam is a great trading city, that is a very efficient way of getting goods distributed throughout the city. 

That's perhaps a reminder to us that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well. 

Probably when those canals were first thought of, it was not obvious that this was the right thing to do for Amsterdam. Certainly it would not be obvious that this was the right layout for that canal network, and that is the exciting stage that we're at with Open Platform 3.0 right now.

We have developed a statement of needs, objectives, and principles, and a number of use cases. We started off with the idea that we were going to define a platform to enable enterprises to get value from new technologies such as cloud computing, social computing, mobile computing, big data, the Internet-of-Things, and perhaps others.

We developed a set of business use cases to show how people are using and wanting to use those technologies. We developed an Open Group business scenario to capture the business requirements. That then leads to the next step. All these things sound wonderful, all these new technologies sound wonderful, but what is Open Platform 3.0? 

Though we don't have the complete description of it yet, it is beginning to take shape. That's what I am hoping to share with you in this presentation, our current thoughts on it. 

Looking historically, the first platform, you could say, was the operating system -- the Unix operating system. The reason The Open Group, X/Open in those days, got involved was that we had companies complaining, "We are locked into a proprietary operating system or proprietary operating systems. We want applications portability." The value delivered through a common application environment, which was what The Open Group specified for Unix, was to prevent vendor lock-in.

The second platform is the World Wide Web. That delivers a common services environment, whether through web pages accessed from your browser or through web services, where programs can similarly retrieve information from, or input information to, the service.

The benefit that that has delivered is universal deployment and access. Pretty much anyone or any company anywhere can create a services-based solution and deploy it on the web, and everyone anywhere can access that solution. That was the second platform. 

Common environment

The way Open Platform 3.0 is developing is as a common architecture environment, a common environment in which enterprises can do architecture, not as a replacement for TOGAF. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. 

Open Platform 3.0 is more about what kind of architecture you will create, and by the definition of a common environment for doing this, the big business benefit that will be delivered will be integrated solutions. 

Yes, you can develop a solution, anyone can develop a solution, based on services accessible over the World Wide Web, but will those solutions work together out of the box? Not usually. Very rarely. 

There is an increasing need, which we have come upon in looking at the Open Platform 3.0 technologies. People want to use these technologies together. There are solutions developed for those technologies independently of each other that need to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently. That's what I am going to talk about.

The Open Group has recently published its first thoughts on Open Platform 3.0, that's the White Paper. I will be saying what’s in that White Paper, what the platform will do -- and because this is just the first rough picture of what Open Platform 3.0 could be like -- how we're going to complete the definition. Then, I will wrap up with a few conclusions. 

So what is in the current White Paper? Well, what we see as being eventually in the Open Platform 3.0 standards are a number of things. You could say that a lot of these are common architecture artifacts that can be used in solution development, and that's why I'm talking about a common architecture environment.

The statement of needs, objectives, and principles is not one of those artifacts, of course; it's why we're doing it.

Definition of key terms: clearly you have to share an understanding of the key terms if you're going to develop common solutions or integrable solutions. 

Stakeholders and their concerns: an important feature of an architecture development. An understanding of the stakeholders and their concerns is something that we need in the standard. 

A capabilities map that shows what the products and services in the platform do.

And basic models that show how those platform components work with each other and with other products and services. 

Explanation: this is an important point and one that we haven’t gotten to yet, but we need to explain how those models can be combined to realize solutions. 

Standards and guidelines

Finally, it's not enough to just have those models; there needs to be the standards and guidelines that govern how the products and services interoperate. These are not standards that The Open Group is likely to produce. They will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, probably in some cases, coordinate with the appropriate bodies to see that they are developed.

van der Reep
What we have in the White Paper is an initial statement of needs, objectives, and principles; definitions of some key terms; our first-pass list of stakeholders and their concerns; and maybe half a dozen basic models. These come from an analysis of the business use cases for Open Platform 3.0 that were developed earlier.

These are just starting points, and it's incomplete. Each of those sections is incomplete in itself, and of course we don't have the complete set of sections. It's all subject to change. 

This is one of the basic models that we identified in the snapshot. It's the Mobile Connected Device Model, and it comes up quite often. You can see that the stack on the left is a mobile device. It has a user, and it has a platform, which would quite likely be Android or iOS. And it has infrastructure that supports the platform. It’s connected to the World Wide Web, because that’s part of the definition of mobile computing.

On the right, you see, and this is a frequently encountered pattern, that you don't just use your mobile phone for running an app. Maybe you connect it to a printer. Maybe you connect it to your headphones. Maybe you connect it to somebody's payment terminal. You might connect it to various things. You might do it through USB. You might do it through Bluetooth. You might do it by near field communication (NFC).

But you're connecting to some device, and that device is being operated possibly by yourself, if it was headphones; and possibly by another organization if, for example, it was a payment terminal and the user of the mobile device has a business relationship with the operator of the connected device.

That’s the basic model. It's one of the basic models that came up in the analysis of use cases, which is captured in the White Paper. As you can see, it's fundamental to mobile computing and also somewhat connected to the Internet-of-Things.

That's the kind of thing that's in the current White Paper; it is a specific example of the models there. Let’s move on to what the platform is actually going to do.

There are three slides in this section. This slide is probably familiar to people who have watched presentations on Open Platform 3.0 previously. It captures our understanding of the need to obtain information from these new technologies, the social media, the mobile devices, sensors, and so on, the need to process that information, maybe on the cloud, and to manage it, stewardship, query and search, all those things. 

Ultimately, and this is where you get the business value, it delivers it in a form where there is analysis and reasoning, which enables enterprises to take business decisions based on that information.

So that’s our original picture of what Open Platform 3.0 will do. 

IT as broker

This next picture captures a requirement that we picked up in the development of the business scenario. A gentleman from Shell gave an excellent presentation this morning. One of the things you may have picked up from him was that the IT department is becoming a broker.

Traditionally, you would have had the business use in the business departments and pretty much everything else on that slide in the IT department, but two things are changing. One, the business users are getting smarter, more able to use technology; and two, they want to use technology either themselves or to have business technologists closely working with them.

Systems provisioning and management is often going out to cloud service providers, and the programming, integration, and helpdesk is going to brokers, who may be independent cloud brokers. This is the IT department in a broker role, you might say. 

But the business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, it's not a good defense to say, "Well, our broker did it that way." You are responsible. 

Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved. And that’s a way of using IT that Open Platform 3.0 must and will accommodate. 

Finally, I mentioned the integration of independently developed solutions. This next slide captures how that can be achieved. Both of these, by the way, are from the analysis of business use cases. 

You'll also notice they are done in ArchiMate, and I'll give ArchiMate a little plug at this point, because we have found it actually very useful in doing this analysis.

But the point is that if those solutions share a common model, then it's much easier to integrate them. That's why we're looking for Open Platform 3.0 to define the common models that you need to access the technologies in question.

It will also have common artifacts, such as architectural principles, stakeholders, definitions, descriptions, and so on. If the independently developed architectures use those, it will mean that they can be integrated more easily.

So how are we going to complete the definition of Open Platform 3.0? This slide comes from our business use cases’ White Paper and it shows the 22 use cases we published. We've added one or two since publication. They cover a whole range of areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on. Those use cases touch on a wide variety of areas.

You can see that we've started an analysis of those use cases. This is an ArchiMate picture showing how our first business use case, The Mobile Smart Store, could be realized. 

Business layer

And as you look at that, you see common models. If you notice, that is pretty much the same as the TOGAF Technical Reference Model (TRM) from the year dot. We've added a business layer. I guess that shows that we have come architecturally a little way in that direction since the TRM was defined. 

But you also see that the same model actually appears in the same use case in a different place, and it appears all over the business use cases.

But you can also see there that the Mobile Connected Device Model has appeared in this use case and is appearing in other use cases. So as we analyze those use cases, we're finding common models that can be identified, as well as common principles, common stakeholders, and so on. 

So we have a development cycle, whereby the use cases provide an understanding. We'll be looking not only at the ones we have developed, but also at things like the healthcare presentation that we heard this morning. That is really a use case for Open Platform 3.0 just as much as any of the ones that we have looked at. We'll be doing an analysis of those use cases and the specification and we'll be iterating through that. 

The White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately The Open Platform 3.0 standard, and no doubt, more than one version of that standard.

In conclusion, Open Platform 3.0 provides a common environment for architecture development. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially new technologies. 

Cognitive computing, for example, has been suggested as another technology that Open Platform 3.0 might, in due course, accommodate. What would that lead to? It would lead to additional use cases and further analysis, which would no doubt identify some basic models for cognitive computing to be added to the platform.

Open Platform 3.0 enables enterprise IT to be user-driven. This is really the revolution on that slide that showed the IT department becoming a broker, and devolvement of IT to cloud suppliers and so on. That's giving users the ability to drive IT directly themselves, and the platform will enable that. 

It will deliver the ability to integrate solutions that have been independently developed, with independently developed architectures, and to do that within a business ecosystem, because businesses typically exist within one or more business ecosystems. 

Those ecosystems are dynamic. Partners join, partners leave, and businesses cannot necessarily standardize the whole architecture across the ecosystem. It would be nice to do so, but by the time you finish the job, the business opportunity would be gone. 

So integration of independently developed architectures is crucial to the world of business ecosystems and to delivering value within them.

Iterative process

The platform will deliver that and is being developed through an iterative process of understanding the content, analyzing the use cases, and documenting the common features, as I have explained.

The development is being done by The Open Platform 3.0 Forum, whose members are representatives of Open Group member organizations. The Forum is not only defining the platform; it's also working on standards and guides in the technology areas.

For example, we have formed a group to develop a White Paper on big data. If you want to learn about that, Ken Street, one of the co-chairs, is at this conference. We also have cloud projects and other projects.

But not only are we doing the development within the Forum; we welcome input and comments from other individuals within and outside The Open Group, and from other industry bodies. That's part of the purpose of publishing the White Paper and of giving this presentation: to obtain that input and comment.

If you need further information, here's where you can download the White Paper. You have to give your name and email address and have an Open Group ID, and then it's free to download.

If you are looking for deeper information on what the Forum is doing, the Forum Plato page, which is the next URL, is the place to find it. Nonmembers get some information there; Forum members can log in and get more information on our work in progress. 

If your organization is not a member of The Open Group, you can find out about Open Group membership from that URL. So thank you very much for your attention.

Boardman: Next is Lydia Duijvestijn, who is one of those people whom, years ago when I first got involved in this business, we used to call Technical Architects, back when the term meant something. The Technical Architect was the person who made sure that the system actually did what the business needed it to do, that it performed, that it was reliable, and that it was trustworthy.

That's one of her preoccupations. Lydia is going to give us a short presentation about some ideas that she is developing and is going to contribute to The Open Platform 3.0. 

Quality of service

Duijvestijn: Like Stuart said, I'm an architect by profession, as well as a conventional performance engineer. I lead a worldwide community within IBM for the performance competency, and I've been working for a couple of years with the Dutch Research Institute on projects around quality of service. That basically is my focus area within the business. I work for Global Services within IBM.

What I want to achieve with this presentation is for you to get a better awareness of what non-functional requirements and quality-of-service characteristics are, and why they won't just appear out of the blue when the new world of Platform 3.0 comes along. They are getting more and more important.

I will zoom in very briefly on three categories: performance and scalability, availability and business continuity, and security and privacy. I'm not going to talk in detail about these topics. I could do that for hours, but we don't have the time.

Then, I'll briefly start the discussion on how that reflects into Platform 3.0. The goal is that when we're here next year at the same time, maybe we'll have formed a stream around it and have many more ideas; for now, it's just the beginning.

This is a recap, basically, of what a non-functional requirement is. We have to start the presentation with that, because maybe not everybody knows it. Non-functionals basically are qualities or constraints that must be satisfied by the IT system. But normally, they're not the highest priority. Normally, it's functionality first and then the rest. We find out about the rest later, when the thing goes into production, and then it's too late.

So what sorts of non-functionals do we have? We have run-time non-functionals, things that can be observed at run-time, such as performance and availability. We also have non-run-time non-functionals, things that cannot be observed at run-time, such as maintainability, but they are all very important for the system.

Then, we have constraints, limitations that you have to be aware of. It looks as if, in the new world, there are no limitations and the cloud is endless, but in fact that's not true.

Non-functionals are fairly often seen as a risk. If you don't pay attention to them, very nasty things can happen. You can lose business. You can lose your image. And many other things can happen to you. Working on non-functionals isn't seen as something positive; it's seen as a risk if you don't do it, but it's a significant risk.

We've seen occasions where a system was developed that was really doing what it should do in terms of functionality. Then, it was rolled into production, all these different users came along, and the website completely collapsed. The company was in the newspapers, and it was a very bad place to be in. 

As an example, I took this picture at Badaling Station, near the Great Wall. I use it in my performance class. It depicts a mismatch between the workload pattern and the available capacity.

What happens is that you take the train in the morning and walk over to the Great Wall. Then you've seen it, you're completely fed up with it, and you want to go back, but you have to wait until 3 o'clock for the first train. The Chinese are very patient people, so they accept that. In the Netherlands, people would start shouting and screaming, asking for better.

Basic mismatch

This is an example from real life, where you can have a very dissatisfied user because there is a mismatch between the workload arrival pattern and the available capacity.

But it can get much worse. Here we have listed a number of newspaper quotes resulting from security incidents. This is something that really bothers companies, and it is also a non-functional. It's really very important, especially when we go towards always on, always accessible, anytime, anywhere. This is really a big issue.

There are many, many non-functional aspects, as you can see. The guy on this slide can't make sense of them. He doesn't know how to balance them, because it's not as if you can have them all. If you put too much focus on one, it can be bad for another. So you really have to balance and prioritize.

Not all non-functionals are equally important. We picked three of them for our conference in February: performance, availability and security. I now want to talk about performance. 

Everybody recognizes this picture: Usain Bolt winning the 100 meters in London. Why did I put this up? Because it very clearly shows what performance is all about. There are three attributes that are important.

You have the response time; basically, that is the 100-meter time from start to finish.

You have the throughput, that is, the number of items that can be processed within a time limit: if this is an eight-lane track, you can have only eight runners at the same time. And the capacity is the eight-lane track itself. They are all dependent on each other. It's very simple, but you have to be aware of all of them when you start designing your system. So this is performance.
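
Little's Law ties these three attributes together: the average number of requests in flight equals throughput multiplied by response time. As a minimal sketch, here is that arithmetic in Python; the figures are illustrative, not from the presentation.

    # Little's Law: requests in flight = throughput x response time.
    # Illustrative numbers, not from the talk.
    response_time_s = 0.25     # average time to serve one request (seconds)
    throughput_rps = 32.0      # requests completed per second

    concurrency = throughput_rps * response_time_s   # = 8.0 in flight

    # If each "lane" (worker thread, connection, runner) serves one
    # request at a time, this is the capacity you must provision:
    print(f"Lanes needed: {concurrency:.0f}")        # -> Lanes needed: 8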

Now, let's go to availability. That is really a very big point today. With the coming of the Internet in the '90s, availability became really important. We saw that when companies started opening up their mainframes to the Internet: the mainframes weren't designed to be open all the time; they had scheduled downtime. Companies such as eBay, Amazon, and Google are setting the standard.

We come to a company, and they ask us to do performance engineering. We ask them what their non-functional requirements are. They tell us that it has to be as fast as Google.

Well, you're not doing the same thing as Google; you're doing something completely different. Your infrastructure isn't as commoditized as Google's. So how are you going to achieve that? But that is the perception. That is what they want. They see that coming their way.

Big challenge

They're using mobile devices, and they want the same in the company. That is the standard. And disaster recovery is slowly going away: the recovery time objective (RTO) and recovery point objective (RPO) are going to zero. It's really a challenge. It's a big challenge.
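
To see why zero downtime is such a stretch, it helps to translate availability targets into permitted downtime. The Python sketch below is illustrative arithmetic only, not anything from the talk: each extra nine of availability cuts the allowed downtime per year by a factor of ten.

    # Allowed downtime per year for increasing availability targets.
    MINUTES_PER_YEAR = 365 * 24 * 60   # 525,600

    for availability in (0.99, 0.999, 0.9999, 0.99999):
        downtime_min = MINUTES_PER_YEAR * (1 - availability)
        print(f"{availability:.3%} available -> {downtime_min:7.1f} min/year down")

    # 99.000% available ->  5256.0 min/year down
    # 99.900% available ->   525.6 min/year down
    # 99.990% available ->    52.6 min/year down
    # 99.999% available ->     5.3 min/year down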

The future is never-down, technology-independent availability, and that is very important for customer satisfaction. This is a big thing.

Now, a little bit about security incidents. I'm not a security specialist; this was prepared by one of my colleagues. Her presentation shows that nothing is secure, nothing, and you have all these incidents. This comes from a report that tracked over several months what sorts of incidents were happening. When you see this, you really get frightened.

Is there a secure site? Maybe, they say, but in fact, no, nothing is secure. This is also very important, especially nowadays, when we're sharing more and more personal information over the net. It's really important to think about this.

What does this have to do with Platform 3.0? I think I answered it already, but let's make it a little bit more specific. Open Platform 3.0 has a number of constituents, and Chris has introduced that to you. 

I want to highlight the following clouds, the ones with the big letters in them: Internet-of-Things, social, mobile, cloud, and big data. Let's briefly try to figure out what each means in terms of non-functionals.

In the Internet of Things, we have all these devices and sensors creating huge amounts of data, collected by very many different devices all over the place.

If this is about healthcare, you can understand that privacy must be ensured. Security and privacy are very important in that respect. And they don't come for free; we have to design them into the systems.

Now, big data. We have the four Vs there: Volume, Variety, Velocity, and Veracity. That already suggests a high focus on non-functionals: volume and velocity point to performance, veracity points to security, and availability matters too, because you need this information instantaneously. When decisions have to be made based on it, it has to be there.

So non-functionals are really important for big data. We wrote a white paper about this, and it's very highly rated. 

Cloud has the specific capability of handling multi-tenant environments. So we have to make sure that the information of one tenant doesn't end up in another tenant's environment; that's a very important security problem again. And different workloads come in parallel, because all these tenants have their own specific types of workloads. We have to handle and balance that, and that's a performance problem.
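
One common way to enforce that isolation is to force every data access through a tenant-scoped accessor, so the tenant filter can never be omitted. The sketch below is a hypothetical illustration of the pattern; the store, field names, and tenant IDs are all invented.

    # Hypothetical sketch: the tenant filter is applied unconditionally,
    # before any caller-supplied predicate, so it cannot be bypassed.
    class TenantScopedStore:
        def __init__(self, records):
            self._records = records      # every record carries a tenant_id

        def query(self, tenant_id, predicate=lambda r: True):
            return [r for r in self._records
                    if r["tenant_id"] == tenant_id and predicate(r)]

    store = TenantScopedStore([
        {"tenant_id": "acme", "order": 1},
        {"tenant_id": "globex", "order": 2},
    ])
    print(store.query("acme"))   # -> [{'tenant_id': 'acme', 'order': 1}]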

Non-functional aspects

Again, there are a lot of non-functional aspects. For mobile and social, the issue is that you have to be always on, always there, accessible from anywhere. In social especially, you want to share your photos, your personal data, with your friends. So it's security again.

It's actually very important in Platform 3.0, and it doesn't come for free. We have to design it into our model.

That's basically my presentation. I hope that you enjoyed it and that it has made you aware of this important problem. I hope that, in the next year, we can start really thinking about how to incorporate this in Platform 3.0. 
Boardman: Let me introduce the panelists: Andy Jones of SOA Software, TJ Virdi from Boeing, Louis Dietvorst from Enexis, Sjoerd Hulzinga from KPN, and Frans van der Reep from Inholland University. 

We want the panel to think about what they've just heard and what they would like Platform 3.0 to do next. What is actually going to be the most important, the most useful, for them? It's not necessarily the things we have thought of.

Jones: The subject of interoperability, the semantic layer, is going to be a permanent and long-running problem. We're seeing some industries, for example clinical-trials data, where there is movement in that area. Some financial-services businesses are trying to abstract their information models, but without semantic alignment, the vision of the platform is going to be difficult to achieve.

Dietvorst: In my vision of Platform 3.0 and what it should support, I am very much in favor of giving the consumer, the asking party, the lead: empower them. If you develop this kind of platform thinking, you should do it with your stakeholders, not for your stakeholders. And I wonder how we can attract those kinds of stakeholders so that they become co-creators. I don't know the answer.

Male Speaker: Neither do I, but I feel that what The Open Group should be doing next on the platform is, just as my neighbor said, to keep the business perspective, the user perspective, continuously in focus, because basically that's the only reason you're doing it.

In the presentation just now from Lydia about NFRs, you need to keep in mind that one of the most difficult, but also most important, parts of the model ought to be security and the blind spots around it. I don't disagree that these are NFRs, but they are probably the most important requirements. It's where you start. That would be my idea of what to do next.

Not platform, but ecosystem

Male Speaker: Three remarks. First, I have the impression that this is not a platform but an ecosystem. So one should correct the wording; that's number one.

Second, I would stress the business case. Why should I buy this? What problem does it solve? I don't know yet.

The third point: as The Open Group, I would welcome a lobby to make IT vendors, in a formal sense, product-reliable like other industries -- cars, for example. That would do a lot for the security problem the last lady talked about. IT vendors are not reliable. They are not responsible. That should change in order for this to become a grownup industry.

Virdi: I agree with what's been said, but I will categorize what I am looking for, from a Boeing perspective, into three elements of what the platform should be doing: how enterprises can create new business opportunities, how they can optimize their current business processes, and how they can optimize the operational aspects.

So if there is a way to expedite these by having some standardized way to do things, Open Platform 3.0 would be a great forum to do that. 

Boardman: Okay, thanks. Louis made the point that we need to go to the stakeholders and find out what they want. Of course, we would love it if everybody in the world were a member of The Open Group, but we realize that that isn't going to be the case tomorrow; perhaps the day after, who knows. In the meantime, we're very interested in getting the perspectives of a wider audience.

So if you have things you would like to contribute, things you would like to challenge us with, questions, or points where you want more understanding, but particularly if you have ideas to contribute, you should feel free to do that. Get in touch, probably via Chris, but you could also get in touch with either TJ or me as co-chairs, and put in your ideas. Anybody who contributes anything will be recognized. That was a reasonable statement, wasn't it, Chris? You're official Open Group?

If there is anybody down there who has a question for this panel, raise your hand.

Duijvestijn: Your remark was that IT vendors are not reliable, but I think that you have to distinguish the layers of the stack. In the bottom layers, in the infrastructure, there is a lot of reliability. Everything is very well known and has been developed over a long time.

If you look at the Gartner reports about incidents in performance and availability, what you see is that most of them happen because of process problems and application problems. That is where the focus has to be. Regarding the availability of applications, nobody ever publishes their book rate.

Boardman: Would anybody like to react to that?

Male Speaker: I totally agree with what Lydia was just saying. As soon as you go up in the stack, that's where the variation starts. That's where we need to make sure that we provide some kind of capabilities to manage it easily, so that the business can deliver business solutions in an expedited way. That's where we're actually targeting it.

The lower we go in the stack, the more it's already commoditized. So we're just trying to see how far up we can go and standardize those things.

Two discussions

Male Speaker: I think there are two discussions mixed together here: one is about the reliability of the total [IT process], and the other is about where the fault is in a [specific IT stack]. Those are two different discussions.

I totally agree that IT, or at least IT suppliers, need to focus more on the reliability of the service as a whole. The customers aren't interested in where in the stack the problem is. Whether the fault is in the platform or in the presentation layer is a non-issue for the customer. The issue is that the service should be reliable as a whole, and I totally agree that IT has a long way to go in that department.
Boardman: I'm going to move on to another question, because an interesting one came up on the Tweets: "Do you think that Open Platform 3.0 will change how enterprises work, creating new lines of business applications? What impact do you see?" An interesting question. Would anybody like to endeavor to answer it?

Male Speaker: That's an excellent question, actually. When creating new lines of business applications, what we're really looking for is semantic interoperability. How can you bridge the gap between social-media information and business information, so you can utilize what's happening in social media? Can you migrate that into the business context and make knowledge and information transfer more agile?

For example, in the morning we were talking about HL7 as being very heavyweight for healthcare systems. There may need to be some kind of easy way to transform and share information, those kinds of things. If we provide those kinds of capabilities in the platform, that will make new line-of-business applications easier to build, and it will have an impact on current systems as well.

Jones: We are seeing a trend towards line of business apps being composed from micro-apps. So there's less ownership of their own resources. And with new functionality being more focused on a particular application area, there's less utility bundling. 

It also leads on to the question of what happens to the existing line-of-business apps. How will they exist in an enterprise that is trying to pursue a Platform 3.0 kind of strategy? Lydia's point about the importance of NFRs brings to light the question of applications that don't meet the NFRs appropriate to the new world, and how you retrofit and constrain their behavior so that they play well in that kind of architecture. This is an interesting problem for most enterprises.
Boardman: There's another completely different granularity question here. Is there a concept of small virtualization, a virtual machine on a watch or phone? 

Male Speaker: On phones and similar devices, we have to make a compartmentalized area, kind of like a sandbox. So you can consider that a virtualization of an area where you do things and then tear it apart.

It's not the same as traditional virtualization, but it's creating a sandbox in smart devices, where enterprises can utilize some of their functionality without mingling it with what is called personal device data. Those things are actually part of the concept and could be utilized in that way.

Architectural framework

Question: My question about virtualization is linked to whether this is just an architectural framework. When I hear the word platform, it's something I try to build something on, and I don't think this is something I build on. Can you comment on the validity of the use of the word platform here?

Male Speaker: I don't care that much what it's called. If I can use it in whatever I'm doing and it produces a positive outcome for me, I'm okay with it. I gave my presentation on the Internet-of-Things, or the Internet of everything, or the Internet of everywhere, or the Thing of Net, or the Internet of People. Whatever you want to call it, just name it; if you can identify the object that's important to you, that's okay with me. The same goes for Platform 3.0 or whatever.

I'm happy with whatever you want to call it. Those kinds of discussions don't really contribute to the value that you want to produce with this effort. So I am happy with anything. You don't agree?

Male Speaker: A large part of architecture is about having clear understandings of terms and what they mean.

Male Speaker: Let me augment what was just said, and I think Dr. Harding was also alluding to this. We are at the stage where we're defining what Platform 3.0 is. One thing for sure is that we're going to be targeting how you can build that architectural environment.

Whether it will have frameworks or anything else is still to be determined. What we're really trying to do is provide some kind of capabilities that will expedite enterprises building their business solutions on it. Whether it's a pure translation of a platform per se is still to be determined.

Boardman: The Internet-of-Things is still a very fuzzy definition. Here we're also looking at fuzzy definitions, and it's something that we constantly get asked questions about. What do we mean by Platform 3.0? 

The reason this question is important, and I also think Sjoerd's answer to it is important, is that there are two aspects to the problem: what things do we need to tie down and define because we are architects, and what things can we simply live with? As long as I know that his fish is my bicycle, I'm okay.

It's one of the things we're working on. One of the challenges we have in the Forum is what exactly we are going to try and tie down in the definition, and what not. Sorry, I had to slip that one in.

I wanted to ask about trust and how important you see that issue being. My attention was drawn to it because I just saw a post saying that the European Court of Justice has announced that Google has to make it possible for any person or organization who asks to have Google erase all information that Google has stored anywhere about them.

I wonder whether these kinds of trust issues are going to become critical for the success of this kind of ecosystem, because whether we call it a platform or not, it is an ecosystem.

Trust is important

Male Speaker: I'll try to start an answer. Trust has been a very important part ever since the Internet became the backbone of all of those processes, all of those systems, and all of those data exchanges. The trouble is that it's very easy to compromise that trust, as we have seen with the NSA revelations exposed by Snowden. So yes, trust ought to be a part of it, but trust is probably pretty fragile the way we're approaching it right now.

Do I have a solution to that problem? No, I don't. Maybe it will come in this new ecosystem. I don't see it explicitly being addressed, but I am assuming that, between all those little clouds, there ought to be some kind of a trust relationship. That's my start of an answer.

Jones: Trust is going to be one of those permanently difficult questions. In historical times, the types of organizations with the highest trust ratings might have been democratic governments and possibly banks, neither of which has been doing particularly well in that area over the last five years.

It's going to be an ethical question for organizations that gather and hold data on behalf of their consumers. We know that if you put a set of terms and conditions in front of consumers, they will probably click "agree" without reading it. So you have to decide what trust you're going to ask for and what trust you think you can deliver on.

Data ownership and data usage are going to be quite complex. For example, in clinical trials you have a set of data that can be identified against a named individual. That seems clear enough. But you can then anonymize that set of data, so that it is known to relate to a single individual but can no longer identify who. Is that as private?

That data can then be summarized across groups of individuals to create an ensemble dataset. At what level of privacy are we then? It quickly goes beyond the scope of reason and understanding of the consumers themselves. So the responsibility for ethical behavior appears to lie with the experts, which is always quite a dangerous place.
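
Those three levels can be made concrete with a small, hypothetical sketch: identified records, pseudonymized records that still relate to one person each, and an aggregated ensemble figure. The field names are invented, and hashing a name is not strong anonymization by itself; this only illustrates the levels being described.

    import hashlib
    from statistics import mean

    identified = [
        {"name": "Alice", "systolic_bp": 128},   # identifiable individual
        {"name": "Bob",   "systolic_bp": 141},
    ]

    # Level 2: still one row per individual, but no longer says who.
    # (A real system would need salting and k-anonymity checks.)
    pseudonymized = [
        {"subject": hashlib.sha256(r["name"].encode()).hexdigest()[:8],
         "systolic_bp": r["systolic_bp"]}
        for r in identified
    ]

    # Level 3: an ensemble figure relating to no single individual.
    ensemble = {"n": len(identified),
                "mean_systolic_bp": mean(r["systolic_bp"] for r in identified)}

    print(pseudonymized)
    print(ensemble)   # -> {'n': 2, 'mean_systolic_bp': 134.5}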

Male Speaker: We probably all agree that trust management is a key aspect when we're converging different solutions from so many partners and suppliers. When we're talking about the Internet of data, the Internet-of-Things, social, and mobile, no one organization will provide all the solutions from scratch.

So we may be utilizing stuff from different organizations, across organizational boundaries. Extending the organizational boundaries requires a very strong trust relationship, and that is very significant when you are trying to do it.

Boardman: There was a question that went through a little while ago. I'm noticing some of these questions are more questions to The Open Group than to our panel, but one I felt I could maybe turn around. The question was: "What kind of guidelines is the Forum thinking of providing?"

What I'd like to do is turn that around to the panel and ask: what do you think it would be useful for us to produce? What would you like a guideline on? There will be lots of things where you would think you don't need that, you'll figure it out for yourself. But what would actually be useful to you if we were to produce some guidelines or something that could be accepted as a standard?

Does it work?

Male Speaker: Just go to a number of companies out there and test whether it works. 

Male Speaker: In terms of guidelines, you put it very well about semantic interoperability: how do you exchange information between different participants in an ecosystem, or between things built on a platform?

The other thing is how you can standardize things that are yet to be standardized. There's unstructured data, and there are things that need to be interrogated through that unstructured data. What are the guiding principles and guidelines for doing those things? Maybe in those areas Platform 3.0, with participation from the Forum members, can advance and work on it.

Jones: I think contract, composition, and accumulation. If an application is delivering a service to its end users by combining dozens of complementary services, each of which has a separate contract, what contract can it then offer to its end user?
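
One concrete facet of that question is availability: if the application depends on every component service in series, the availability it can promise is at best roughly the product of theirs. A small illustrative calculation, not from the panel:

    # Composite availability of serially-dependent services.
    def composite_availability(parts):
        result = 1.0
        for availability in parts:
            result *= availability
        return result

    # Thirty services, each promising 99.9% availability:
    print(f"{composite_availability([0.999] * 30):.4%}")   # -> 97.0431%

Thirty services at three nines compose to barely better than two nines, which is one reason the composite-contract question is so hard.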

Boardman: Does the platform plan to define guidelines and directions for defining application programming interfaces (APIs) and data models for specific domains? Also, how are you integrating with major industry reference models?

Just for information, some of this is the work of other parts of The Open Group, around industry domain reference models and that kind of thing. But in general, one of the things we've said from the Forum is that, as much as possible, we want to collate what is out there in terms of standards: APIs, data models, open data, etc.

We're desperate not to reproduce anybody else's work. So we're looking to see what's out there, and the guideline would, as far as possible, help you understand what is available in which domain, whether that's a functional domain, a technical domain, or whatever. I just thought I would answer those, because we can't really ask the panel that.

We said that the session would be about realizing business value, and we've talked around issues related to that, depending on your own personal take. But I'd like to ask the members of the panel, and I'd like all of you to try and come up with an answer: what do you see as the things that are critical to being able to deliver business value in this kind of ecosystem?

I keep saying ecosystem, not to be nice to Frans (I'm never nice to Frans), but because I think it captures what we're talking about better. So do you want to start, TJ? What are you looking for in terms of value?

Virdi: No single organization can tap into all the advancement that's happening in technologies, processes, and other areas that business could utilize so quickly. The expectation that businesses provide new solutions in real time, with information exchange and all those things, is the norm now.

We can provide some of those as a baseline, as foundational aspects that businesses can use to realize the new things we're seeing in social media and other places, where information is exchanged very quickly and the payloads are very small.

So keeping the integrity of information, and sharing it with the right people, at the right time, and in the right venue, is really the key when we provide those kinds of enabling capabilities.

Ease of change

Jones: In Lydia’s presentation, at the end, she added the ease of use requirement as the 401st. I think the 402nd is ease of change and the speed of change. Business value pretty much relies on dynamism, and it will become even more so. Platforms have to be architected in a way that they are sufficiently understood that they can change quickly, but predictably, maintaining the NFRs. 

Dietvorst: One of the reasons I would want to adopt this new ecosystem is that it gives me enough confidence that it is a reliable product. What we know from the energy-system innovations we've done over the last three or four years is that the way you enable and empower communities is to let them build up trust themselves, locally: you and your neighbor, or people who are close in proximity. Then it's very easy to build trust.

Some call it social evidence: I know you, you know me, so I trust you. You are my neighbor, and together we build a community. But the greater the distance, the less easy it is to trust each other. That's something you need to build into the whole concept. How do you get trust if it is a global concept? It seems hardly possible.

van der Reep: This ecosystem, or whatever you're going to call it, needs to address change, the rate of change. "Change is life" is a well-known saying, but lightning-fast change is the fact of life right now, with things like social and mobile specifically.

One Twitter storm and the world has a very different view of your company, of your business. Literally, it can happen in minutes. This development ought to address that and provide the relevant hooks, if you will, for businesses to deal with it. So the rate of change is what I would like to see addressed in Platform 3.0, the ecosystem.

Male Speaker: It should be cheap and reliable, it should allow for change, for example Cognition-as-a-Service, and it should hide complexity for those "stupid businesspeople" and make it simple. 

Boardman: I want to pick up on something that Frans just said, because it connects to a question I was going to ask anyway. People sometimes ask us why the particular five technologies we have named in the Forum: cloud, big data, social, mobile, and the Internet-of-Things. It's a good question, because it is fundamental to our ideas in the Forum that it's not just about those five things. Other things can come along and be adopted.

One of the things that we had played with at the beginning and decided not to include, just on the basis of a feeling about lack of maturity, was cognitive computing. Then, here comes Frans and just mentions cognitive things. 

I want to ask the panel: "Do you have a view on cognitive computing? Where is it? When we can expect it to be something we could incorporate? Is it something that should be built into the platform, or is it maybe just tangential to the platform?" Any thoughts? 

Male Speaker: I did a speech on this last week. In order to create meaningful customer interaction, in what we used to call the call center or whatever, that is where the cognition comes in. That's a very big market, and there's no reason not to include it in the lower levels of the platform and make it available in the cloud.

We already have lots of examples in the Netherlands of ICT devices recognizing emotions from speech. By recognizing emotion, you can optimize the match between the company and the customer, and you can hide complexity. I think there's a big market for that.

What the business wants

Virdi: We need to look at it in the context of what business wants to do with it. It could enable things that I would consider proprietary, which may not be part of the platform for others to utilize. So we have to balance out what enabling things we can provide as a foundational base for everyone to utilize, and what value companies can build on top of it. We probably have to do a little further assessment on that.

Male Speaker: I'd like to follow up on this notion of cognitive computing, the notion that maybe objects are self-aware, as opposed to being dumb -- self-aware being an object, a sensor that’s aware of its neighbor. When a neighbor goes away, it can find other neighbors. Quite simple as opposed to a bar code. 

We see that all the time. We have kids who are civil engineers, and they pour sensors into concrete all the time. In terms of cost, in terms of being able to have the discussion, it's something that's in front of us all the time. So, at this point, should we think about at least the binary distinction between self-aware sensors and dumb sensors?

Male Speaker: From an aviation perspective, there are some areas where dumb devices will be there, as well as active devices. There are passive sensor devices that you just interrogate on request, and there are active devices that constantly send sensor messages. Both are there for businesses to utilize in creating new business solutions.

Both of them are going to be there, and it depends on what the business needs are to support those things. We could probably provide some ways to standardize some of those, alongside other specifications. For example, there's the ATA, for aviation; they're doing that already. Also, in healthcare, HL7 is looking at how smart sensor devices can exchange information as well. So some work is already happening in the industry.

There are many business solutions that have already been built on those, though maybe they're a little more proprietary. So a platform could provide a standard base for exchanging that information. Some of it may come down to guidelines on how you exchange information with those active and passive sensor devices.
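
The passive/active distinction can be sketched as two interfaces: a passive sensor answers only when polled, while an active one pushes readings on its own schedule. The Python below is a hypothetical illustration, not anything from the ATA or HL7 specifications.

    import random
    import threading
    import time

    class PassiveSensor:
        def read(self):                      # interrogated on request
            return random.uniform(10.0, 20.0)

    class ActiveSensor:
        def __init__(self, on_reading, period_s=0.1):
            self._on_reading = on_reading    # push callback
            self._period_s = period_s

        def start(self):                     # pushes on its own schedule
            def loop():
                for _ in range(3):           # bounded for the example
                    self._on_reading(random.uniform(10.0, 20.0))
                    time.sleep(self._period_s)
            threading.Thread(target=loop, daemon=True).start()

    polled = PassiveSensor()
    print("polled:", polled.read())

    pushed = ActiveSensor(lambda v: print("pushed:", v))
    pushed.start()
    time.sleep(0.5)                          # let the pushes arrive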

Jones: I'm certainly all in favor of devices in the field being able to tell you what they're doing and how they think they're feeling. I have an interest in complex consumer devices in retail and other field locations, especially self-service kiosks. In that field, quite a lot of effort has been spent trying to infer the states of devices from their behavior, rather than just having them tell you what's going on, which would be so much easier.

Male Speaker: Of course, it depends on where the boundary is between aware and not aware. If there is a thermometer in the field and it sends data that it's 15 degrees centigrade, for example, do I really want to know whether it thinks it's chilly or not? I'm not really sure.

I'd have to think about it for a long time to get a clear answer on whether there's a benefit to self-aware devices in those kinds of applications. I can understand that there will be an advantage in self-aware sensor devices, but I struggle a little to see a pattern or similarity across those circumstances.

I could come up with use cases, but I don't think it's easy to come up with a set of rules that determines whether or not a self-aware device is applicable in a particular situation. It's a good question. I think it deserves more thought, but I can't come up with a better answer right now.

Case studies

Skilton: I just wanted to add to the embedded question, because I thought it was a very good one. Three case studies happened to me recently. I was doing some work with Rolls Royce around MH370, the flight that went down. One of the key things about the flight was that the engines had telemetry built in. TJ, you're more qualified to talk about this than I am, but essentially there was information embedded in the telemetry of the plane's technology.

As we know from the mass media reports, analysts were able to work out from some of that data what was potentially going on in the flight. Clearly, it was the data from the satellite connection that was used to project that it was going south rather than north.

So one of the lessons there was that smart information built into the object was of value. Clearly, there was a lesson learned there. 

With Coca-Cola, for example, what's very interesting in retail is that a lot of shops now have embedded sensors in the cooler systems or in products in the warehouse or in stock. Now you're getting that kind of intelligence coming back over RFID into the supply chain to do backfilling, reordering, and things like that. So I see all of this as smart.

Another one is image recognition when you go into a car park. Your face is scanned, whether you want it or not, and potentially they can serve advertising in context. These are all smart feedback loops going on in these ecosystems right now.

There are real equations of value in doing that. I was just looking at the Open Automotive Alliance; we've done some work with them around connected-car forecasts. Embedded technology in the dashboard is coming in the next three to five years with BMW, Jaguar Land Rover, and Volvo. All the major car players are doing this right now.

So Open Platform 3.0, for me, is riding that wave: understanding where the intelligence and the feedback mechanisms work within each supply chain and each context, whether in the plane, in the shop, or wherever, as intelligence starts to get built in.

We talk about big data and small data at the university where I work. At the moment, we're moving out of a big-data era, which is analytics, static, analyzing the process in situ; it's most likely the Amazon sort of purchasing recommendations or the advertisements that you see in your browser today.

We're moving to a small-data era, in which you have data very much in the context of what's going on at that moment. I would expect this with embedded technologies: the feedback loops are going to happen within each of the traditional supply chains and will start to build that strength.

The issue for The Open Group is to capture the standards of interoperability and connectivity, much as Boeing is already leading with the automotive and airline sectors. It's riding that wave, because the value of bringing that feedback into context, the small-data context, is where the future lies.

Infrastructure needed

Male Speaker: I totally agree. Not only are devices and individual components getting smarter, but that requires infrastructure to be there to utilize that sensing information properly. From the perspective of Platform 3.0 guidelines or specifications, determining how you can utilize devices that are already smart alongside others that are still considered legacy, and how you can bridge that gap, would be a good thing to do.
Boardman: Would anyone like to add anything, closing remarks?

Jones: Everybody's perspective and everybody's context is going to be slightly different. We talked about whether it's a platform or a framework. In the end, there will be a universal Platform 3.0, but everybody will still have a different view and a different perspective of what it does and what it means to them.

Male Speaker: My suggestion would be that, if you're going to continue with this ecosystem, try to build it up locally, in a locally controlled environment, where you can experiment and see what happens. Do it in many places in the world at the same time, and let the results be the proof of the pudding.

Male Speaker: Whatever you're going to call it, keep the 3.0; that sounds snappy. But just get the beneficiaries in, get the businesses in, and get the users in.

Male Speaker: The more open it is, the more of a commodity it will be, which means no single company can profit from it alone. In the end, human interaction and stewardship will enter the market. If you come to London City Airport and have to find your way onto the Tube, there is a human being there who helps you into the system. That becomes very important as well. I think you need both: stewardship and these kinds of ecosystems that hide complexity.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.
