Tuesday, June 24, 2014

The Open Group Amsterdam panel delves into how to best gain business value from Open Platform 3.0

The next BriefingsDirect panel discussion defines new business values from the massive Open Platform 3.0 shift that combines the impacts and benefits of big data, cloud, Internet of things, mobile and social.

Our discussion comes to you from The Open Group Conference held on May 13, 2014 in Amsterdam, where the focus was on enabling boundaryless information flow.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

To learn more about making Open Platform 3.0 a business benefit in an architected fashion, please join moderator Stuart Boardman, a Senior Business Consultant at KPN and Open Platform 3.0 Forum co-chairman; Dr. Chris Harding, Director for Interoperability at The Open Group, and Open Platform 3.0 Forum Director; Lydia Duijvestijn, Executive Architect at IBM Global Business Services in The Netherlands; Andy Jones, Technical Director for EMEA at SOA Software; TJ Virdi, Computing Architect in the Systems Architecture Group at Boeing and also a co-chair of the Open Platform 3.0 Forum; Louis Dietvorst, Enterprise Architect at Enexis in The Netherlands; Sjoerd Hulzinga, Charter Lead at KPN Consulting, and Frans van der Reep, Professor at the Inholland University of Applied Sciences.

Here are some excerpts:

Boardman: Welcome to the session about obtaining value from Open Platform 3.0, and how we're actually going to get value out of the things that we want to implement from big data, social, and the Internet-of-Things, etc., in collaboration with each other. 

We're going to start off with Chris Harding, who is going to give us a brief explanation of what the platform is, what we mean by it, what we've produced so far, and where we're trying to go with it. 

He'll be followed by Lydia Duijvestijn, who will give us a presentation about the importance of non-functional requirements (NFRs). If we talk about getting business value, those are absolutely central. Then, we're going to go over to a panel discussion with additional guests. 

Without further ado, here's Chris Harding, who will give you an introduction to Open Platform 3.0. 

Purpose of architecture

Harding: Hello, everybody. It's a great pleasure to be here in Amsterdam. I was out in the city by the canals this morning. The sunshine was out, and it was like moving through a set of picture postcards. 

It's a great city. As you walk through, you see the canals, the great buildings, the houses to the sides, and you see the cargo hoists up in the eaves of those buildings. That reminds you that the purpose of the arrangement was not to give pleasure to tourists, but because Amsterdam is a great trading city, that is a very efficient way of getting goods distributed throughout the city. 

That's perhaps a reminder to us that the primary purpose of architecture is not to look beautiful, but to deliver business value, though surprisingly, the two often seem to go together quite well. 

Probably when those canals were first thought of, it was not obvious that this was the right thing to do for Amsterdam. Certainly it would not be obvious that this was the right layout for that canal network, and that is the exciting stage that we're at with Open Platform 3.0 right now.

We started off with the idea that we were going to define a platform to enable enterprises to get value from new technologies such as cloud computing, social computing, mobile computing, big data, the Internet-of-Things, and perhaps others. Since then, we have developed a statement of need and a number of use cases.

We developed a set of business use cases to show how people are using and wanting to use those technologies. We developed an Open Group business scenario to capture the business requirements. That then leads to the next step. All these things sound wonderful, all these new technologies sound wonderful, but what is Open Platform 3.0? 

Though we don't have the complete description of it yet, it is beginning to take shape. That's what I am hoping to share with you in this presentation, our current thoughts on it. 

Looking historically, the first platform, you could say, was operating systems -- the Unix operating system. The reason why The Open Group, X/Open in those days, got involved was because we had companies complaining, "We are locked into a proprietary operating system or proprietary operating systems. We want applications portability." The value delivered through a common application environment, which was what The Open Group specified for Unix, was to prevent vendor lock-in. 

The second platform is the World Wide Web. That delivers a common services environment, either through web pages accessed from your browser or through web services, from which programs can similarly retrieve information or to which they can submit it.

The benefit that that has delivered is universal deployment and access. Pretty much anyone or any company anywhere can create a services-based solution and deploy it on the web, and everyone anywhere can access that solution. That was the second platform. 

Common environment

The way Open Platform 3.0 is developing is as a common architecture environment, a common environment in which enterprises can do architecture, not as a replacement for TOGAF. TOGAF is about how you do architecture and will continue to be used with Open Platform 3.0. 

Open Platform 3.0 is more about what kind of architecture you will create, and by the definition of a common environment for doing this, the big business benefit that will be delivered will be integrated solutions. 

Yes, you can develop a solution, anyone can develop a solution, based on services accessible over the World Wide Web, but will those solutions work together out of the box? Not usually. Very rarely.

There is an increasing need, which we have come upon in looking at The Open Platform 3.0 technologies. People want to use these technologies together. There are solutions developed for those technologies independently of each other that need to be integrated. That is why Open Platform 3.0 has to deliver a way of integrating solutions that have been developed independently. That's what I am going to talk about.

The Open Group has recently published its first thoughts on Open Platform 3.0 in a White Paper. I will be saying what's in that White Paper, what the platform will do and, because this is just the first rough picture of what Open Platform 3.0 could be like, how we're going to complete the definition. Then, I will wrap up with a few conclusions.

So what is in the current White Paper? Well, what we see as being eventually in the Open Platform 3.0 standards are a number of things. You could say that a lot of these are common architecture artifacts that can be used in solution development, and that's why I'm talking about a common architecture environment.

A statement of need, objectives, and principles: that is not an artifact as such, of course; it's why we're doing it.

Definition of key terms: clearly you have to share an understanding of the key terms if you're going to develop common solutions or integrable solutions. 

Stakeholders and their concerns: an important feature of an architecture development. An understanding of the stakeholders and their concerns is something that we need in the standard. 

A capabilities map that shows what the products and services do that are in the platform. 

And basic models that show how those platform components work with each other and with other products and services. 

Explanation: this is an important point and one that we haven’t gotten to yet, but we need to explain how those models can be combined to realize solutions. 

Standards and guidelines

Finally, it's not enough to just have those models; there needs to be the standards and guidelines that govern how the products and services interoperate. These are not standards that The Open Group is likely to produce. They will almost certainly be produced by other bodies, but we need to identify the appropriate ones and, probably in some cases, coordinate with the appropriate bodies to see that they are developed.

What we have in the White Paper is an initial statement of needs, objectives, and principles; definitions of some key terms; our first-pass list of stakeholders and their concerns; and maybe half a dozen basic models. These are in an analysis of the use cases, the business use cases, for Open Platform 3.0 that were developed earlier. 

These are just starting points, and it's incomplete. Each of those sections is incomplete in itself, and of course we don't have the complete set of sections. It's all subject to change. 

This is one of the basic models that we identified in the snapshot. It's the Mobile Connected Device Model, and it comes up quite often. You can see that the stack on the left is a mobile device. It has a user, and it has a platform, quite likely Android or iOS. And it has infrastructure that supports the platform. It's connected to the World Wide Web, because that's part of the definition of mobile computing.

On the right, you see, and this is a frequently encountered pattern, that you don't just use your mobile phone for running an app. Maybe you connect it to a printer. Maybe you connect it to your headphones. Maybe you connect it to somebody's payment terminal. You might connect it to various things. You might do it through USB. You might do it through Bluetooth. You might do it by near field communication (NFC).

But you're connecting to some device, and that device is being operated possibly by yourself, if it was headphones; and possibly by another organization if, for example, it was a payment terminal and the user of the mobile device has a business relationship with the operator of the connected device.

That’s the basic model. It's one of the basic models that came up in the analysis of use cases, which is captured in the White Paper. As you can see, it's fundamental to mobile computing and also somewhat connected to the Internet-of-Things.
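
To make that model concrete, here is a minimal, hypothetical sketch of it in code. The class and field names are invented for illustration; the White Paper itself expresses the model as an ArchiMate diagram, not as code.

```python
# A minimal, hypothetical rendering of the Mobile Connected Device
# Model described above. All names here are invented for
# illustration; the White Paper uses an ArchiMate diagram instead.

from dataclasses import dataclass

@dataclass
class MobileDevice:
    user: str
    platform: str               # e.g. "Android" or "iOS"
    infrastructure: str         # what supports the platform
    web_connected: bool = True  # part of the definition of mobile computing

@dataclass
class ConnectedDevice:
    kind: str      # "printer", "headphones", "payment terminal", ...
    link: str      # "USB", "Bluetooth", or "NFC"
    operator: str  # the user themselves, or another organization

phone = MobileDevice(user="Alice", platform="Android", infrastructure="handset")
terminal = ConnectedDevice(kind="payment terminal", link="NFC",
                           operator="a merchant")

# The architecturally interesting part is the business relationship
# between phone.user and terminal.operator when they differ.
print(phone, terminal, sep="\n")
```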

That's the kind of thing that's in the current White Paper; it's a specific example of the models the White Paper contains. Let's move on to what the platform is actually going to do.

There are three slides in this section. This slide is probably familiar to people who have watched presentations on Open Platform 3.0 previously. It captures our understanding of the need to obtain information from these new technologies, the social media, the mobile devices, sensors, and so on, the need to process that information, maybe on the cloud, and to manage it, stewardship, query and search, all those things. 

Ultimately, and this is where you get the business value, it delivers it in a form where there is analysis and reasoning, which enables enterprises to take business decisions based on that information.

So that’s our original picture of what Open Platform 3.0 will do. 

IT as broker

This next picture captures a requirement that we picked up in the development of the business scenario. A gentleman from Shell gave an excellent presentation this morning. One of the things you may have picked up from it was that the IT department is becoming a broker.

Traditionally, you would have had the business users in the business departments and pretty much everything else on that slide in the IT department, but two things are changing. One, the business users are getting smarter and more able to use technology; and two, they want to use technology either themselves or to have business technologists working closely with them.

Systems provisioning and management are often going out to cloud service providers, and the programming, integration, and helpdesk work is going to brokers, who may be independent cloud brokers. This is the IT department in a broker role, you might say.

But the business still needs to retain responsibility for the overall architecture and for compliance. If you do something against your company’s principles, it's not a good defense to say, "Well, our broker did it that way." You are responsible. 

Similarly, if you break the law, your broker does not go to jail, you do. So those things will continue to be more associated with the business departments, even as the rest is devolved. And that’s a way of using IT that Open Platform 3.0 must and will accommodate. 

Finally, I mentioned the integration of independently developed solutions. This next slide captures how that can be achieved. Both of these, by the way, are from the analysis of business use cases. 

You'll also notice they are done in ArchiMate, and I will give ArchiMate a little plug at this point, because we have found it very useful in doing this analysis.

But the point is that if those solutions share a common model, then it's much easier to integrate them. That's why we're looking for Open Platform 3.0 to define the common models that you need to access the technologies in question.

It will also have common artifacts, such as architectural principles, stakeholders, definitions, descriptions, and so on. If the independently developed architectures use those, it will mean that they can be integrated more easily.

So how are we going to complete the definition of Open Platform 3.0? This slide comes from our business use cases White Paper, and it shows the 22 use cases we published; we've added one or two since publication. They cover a whole range of areas: multimedia, social networks, building energy management, smart appliances, financial services, medical research, and so on.

You can see that we've started an analysis of those use cases. This is an ArchiMate picture showing how our first business use case, The Mobile Smart Store, could be realized. 

Business layer

And as you look at that, you see common models. If you notice, that is pretty much the same as the TOGAF Technical Reference Model (TRM) from the year dot. We've added a business layer. I guess that shows that we have come architecturally a little way in that direction since the TRM was defined. 

But you also see that the same model actually appears in the same use case in a different place, and it appears all over the business use cases.

But you can also see there that the Mobile Connected Device Model has appeared in this use case and is appearing in other use cases. So as we analyze those use cases, we're finding common models that can be identified, as well as common principles, common stakeholders, and so on. 

So we have a development cycle, whereby the use cases provide an understanding. We'll be looking not only at the ones we have developed, but also at things like the healthcare presentation that we heard this morning. That is really a use case for Open Platform 3.0 just as much as any of the ones that we have looked at. We'll be analyzing those use cases, feeding the results into the specification, and iterating through that cycle.

The White Paper represents the very first pass through that cycle. Further passes will result in further White Papers, a snapshot, and ultimately The Open Platform 3.0 standard, and no doubt, more than one version of that standard.

In conclusion, Open Platform 3.0 provides a common environment for architecture development. This enables enterprises to derive business value from social computing, mobile computing, big data, the Internet-of-Things, and potentially new technologies. 

Cognitive computing, for example, has been suggested as another technology that Open Platform 3.0 might, in due course, accommodate. What would that lead to? It would lead to additional use cases and further analysis, which would no doubt identify some basic models for cognitive computing, which would be added to the platform.

Open Platform 3.0 enables enterprise IT to be user-driven. This is really the revolution on that slide that showed the IT department becoming a broker, and devolvement of IT to cloud suppliers and so on. That's giving users the ability to drive IT directly themselves, and the platform will enable that. 

It will deliver the ability to integrate solutions that have been independently developed, with independently developed architectures, and to do that within a business ecosystem, because businesses typically exist within one or more business ecosystems. 

Those ecosystems are dynamic. Partners join, partners leave, and businesses cannot necessarily standardize the whole architecture across the ecosystem. It would be nice to do so, but by the time you finish the job, the business opportunity would be gone. 

So integration of independently developed architectures is crucial to the world of business ecosystems and delivering value within them.

Iterative process

The platform will deliver that and is being developed through an iterative process of understanding the content, analyzing the use cases, and documenting the common features, as I have explained.

The development is being done by The Open Platform 3.0 Forum, made up of representatives of Open Group members. They are defining the platform. And the Forum is not only defining the platform; it's also working on standards and guides in the technology areas.

For example, we have formed a group to develop a White Paper on big data. If you want to learn about that, Ken Street, who is one of the co-chairs, is at this conference. And we also have cloud projects and other projects.

But not only are we doing the development within the Forum; we welcome input and comments from other individuals within and outside The Open Group and from other industry bodies. That's part of the purpose of publishing the White Paper and giving this presentation: to obtain that input and comment.

If you need further information, here's where you can download the White Paper from. You have to give your name and email address and have an Open Group ID and then it's free to download. 

If you are looking for deeper information on what the Forum is doing, the Forum Plato page, which is the next URL, is the place to find it. Nonmembers get some information there; Forum members can log in and get more information on our work in progress. 

If your organization is not a member of The Open Group, you can find out about Open Group membership from that URL. So thank you very much for your attention.

Boardman: Next is Lydia Duijvestijn, who is one of these people who, years ago when I first got involved in this business, we used to call Technical Architects, when the term meant something. The Technical Architect was the person who made sure that the system actually did what the business needed it to do, that it performed, that it was reliable, and that it was trustworthy. 

That's one of her preoccupations. Lydia is going to give us a short presentation about some ideas that she is developing and is going to contribute to The Open Platform 3.0. 

Quality of service

Duijvestijn: Like Stuart said, my profession is being an architect, but I'm also a conventional performance engineer. I lead a worldwide community within IBM for the performance competency. I've been working for a couple of years with the Dutch Research Institute on projects around quality of service. That is basically my focus area within the business. I work for Global Business Services within IBM.

What I want to achieve with this presentation is for you to get a better awareness of what non-functional requirements, non-functional characteristics, or quality-of-service characteristics are, and why they won't just appear out of the blue when the new world of Platform 3.0 comes along. They are becoming more and more important.

I will zoom in very briefly on three categories: performance and scalability, availability and business continuity, and security and privacy. I'm not going to talk in detail about these topics. I could do that for hours, but we don't have the time.

Then, I'll briefly start the discussion on how this reflects into Platform 3.0. The goal is that when we're here next year at the same time, maybe we will have formed a stream around it and we will have many more ideas, but for now, it's just the beginning.

This is a recap, basically, of what a non-functional requirement is. We have to start the presentation with that, because maybe not everybody knows it. Non-functional requirements basically are qualities or constraints that must be satisfied by the IT system. But normally, they're not the highest priority. Normally, it's functionality first and then the rest. We find out about the rest later, when the thing goes into production, and then it's too late.

So what sorts of non-functionals do we have? We have run-time non-functionals, things that can be observed at run-time, such as performance and availability. We also have non-run-time non-functionals, things that cannot be observed at run-time, such as maintainability, but which are still very important for the system.

Then, we have constraints, limitations that you have to be aware of. It looks like, in the new world, there are no limitations, that the cloud is endless, but in fact that's not true.

Non-functionals are fairly often seen as a risk. If you don't pay attention to them, very nasty things can happen. You could lose business. You could lose your image. And many other things could happen to you. Working on them isn't seen as something positive; rather, neglecting them is a significant risk.

We've seen occasions where a system was developed that was really doing what it should do in terms of functionality. Then, it was rolled into production, all these different users came along, and the website completely collapsed. The company was in the newspapers, and it was a very bad place to be in. 

As an example, I took this picture at Badaling Station, near the Great Wall. I use it in my performance class. It depicts a mismatch between the workload pattern and the available capacity.

What happens here is that you take the train in the morning and walk over to the Great Wall. Then you've seen it, you're completely fed up with it, and you want to go back, but you have to wait until 3 o'clock for the first train. The Chinese are very patient people, so they accept that. In the Netherlands, people would start shouting and screaming, asking for better.

Basic mismatch

This is an example from real life, where you can have a very dissatisfied user because there was a mismatch between the workload, the arrival pattern, and available capacity. 

But it can get much worse. Here we have listed a number of newspaper quotes resulting from security incidents. This is something that really bothers companies. It is also non-functional, and it's very important, especially as we go toward always on, always accessible, anytime, anywhere. This is really a big issue.

There are many, many non-functional aspects, as you can see. This guy can't make sense of them. He doesn't know how to balance them, because you can't have them all. If you put too much focus on one, it could be bad for another. So you really have to balance and prioritize.

Not all non-functionals are equally important. We picked three of them for our conference in February: performance, availability and security. I now want to talk about performance. 

Everybody recognizes this picture. This was Usain Bolt winning his 100 meters in London. Why did I put this up? Because it very clearly shows what it's all about in performance. There are three attributes that are important.

You have the response time: basically, the 100-meter time from start to finish.

You have the throughput: the number of items that can be processed within a given time. If this is an eight-lane track, you can have only eight runners at the same time. And the capacity is basically the fact that this is an eight-lane track. All three are dependent on each other. It's very simple, but you have to be aware of all of them when you start designing your system. So this is performance.
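
The interdependence she describes is often summarized by Little's Law: the average number of items in flight equals throughput times response time. Here is a minimal back-of-the-envelope sketch; the workload numbers are invented for illustration, not figures from the talk.

```python
import math

# Little's Law: items in flight = throughput x response time.
# A back-of-the-envelope capacity check; all numbers below are
# illustrative assumptions.

arrival_rate = 200.0   # requests per second offered to the system
response_time = 0.25   # seconds per request, end to end

concurrency = arrival_rate * response_time   # avg requests in flight = 50

lanes_per_server = 16  # concurrent requests one server can handle
servers_needed = math.ceil(concurrency / lanes_per_server)

print(f"~{concurrency:.0f} requests in flight -> {servers_needed} servers")
```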

Now, let's go to availability. That is really a very big point today. With the coming of the Internet in the '90s, availability became really important. We saw that when companies started opening up their mainframes to the Internet: they weren't designed to be open all the time; they relied on scheduled downtime. Companies such as eBay, Amazon, and Google are setting the standard.

We come to a company, and they ask us for our performance engineering. We ask them what their non-functional requirements are. They tell us that it has to be as fast as Google.

Well, you're not doing the same thing as Google; you are doing something completely different. Your infrastructure doesn’t look as commodity as Google's does. So how are you going to achieve that? But that is the perception. That is what they want. They see that coming their way.

Big challenge

They're using mobile devices, and they want the same in the company. That is the standard now, and disaster recovery is slowly going away: recovery time and recovery point objectives (RTO/RPO) are going to zero. It's really a challenge. It's a big challenge.

The future is never-down, technology-independent service, and it's very important for customer satisfaction. This is a big thing.
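
To see why "never down" is such a demanding target, it helps to translate availability percentages into allowed downtime. A small worked example, using conventional "nines" targets rather than figures from the talk:

```python
# Translating availability targets ("nines") into allowed downtime,
# to show why never-down -- RTO/RPO of zero -- is such a hard target.
# The targets below are conventional examples, not from the talk.

MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.99, 0.999, 0.9999, 0.99999):
    downtime = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.3%} available -> {downtime:8.1f} min/year down")

# 99% still allows ~88 hours a year; five nines allows ~5 minutes.
```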

Now, a little bit about security incidents. I'm not a security specialist. This was prepared by one of my colleagues. Her presentation shows that nothing is secure, nothing, and you have all these incidents. This comes from a report that tracks over several months what sort of incidents are happening. When you see this, you really get frightened. 

Is there a secure site? Maybe, they say, but in fact, no, nothing is secure. This is also very important, especially nowadays. We're sharing more and more personal information over the net. It's really important to think about this. 

What does this have to do with Platform 3.0? I think I answered it already, but let's make it a little bit more specific. Open Platform 3.0 has a number of constituents, and Chris has introduced that to you. 

I want to highlight the following clouds, the ones with the big letters in them: Internet-of-Things, social, mobile, cloud, and big data. Let's take these and briefly try to figure out what they mean in terms of non-functionals.

In the Internet of Things, we have all these devices and sensors creating huge amounts of data, collected by very many different devices all over the place.

If this is about healthcare, you can understand that privacy must be ensured. Security and privacy are very important in that respect. And they don't come for free. We have to design them into the systems.

Now, big data. We have the four Vs there: Volume, Variety, Velocity, and Veracity. That already suggests a high focus on non-functionals: volume and velocity point to performance, veracity points to security, and there is also availability, because you need this information instantaneously. When decisions have to be made based on it, it has to be there.

So non-functionals are really important for big data. We wrote a white paper about this, and it's very highly rated. 

Cloud has the specific characteristic of handling multi-tenant environments. So we have to make sure that the information of one tenant doesn't end up in another tenant's environment. That's a very important security problem again. And there are different workloads coming in parallel, because all these tenants have their own specific types of workloads. We have to handle and balance them. That's a performance problem.
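
As a minimal sketch of how that isolation is often enforced at the data-access layer (the names are invented, and this is illustrative only, not from the presentation):

```python
# Minimal sketch of tenant isolation at the data-access layer --
# one common way to keep one tenant's data out of another's
# environment. Illustrative only.

class TenantScopedStore:
    def __init__(self):
        self._rows = []          # list of (tenant_id, record) pairs

    def put(self, tenant_id, record):
        self._rows.append((tenant_id, record))

    def query(self, tenant_id):
        # Every read is filtered by tenant: there is deliberately
        # no API that returns rows across tenants.
        return [r for t, r in self._rows if t == tenant_id]

store = TenantScopedStore()
store.put("tenant-a", {"order": 1})
store.put("tenant-b", {"order": 2})
assert store.query("tenant-a") == [{"order": 1}]   # no cross-tenant leakage
```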

Non-functional aspects

Again, there are a lot of non-functional aspects. For mobile and social, the issue is that you have to be always on, always there, accessible from anywhere. In social especially, you want to share your photos, your personal data, with your friends. So it's security and privacy again.

It's actually very important in Platform 3.0 and it doesn’t come for free. We have to design it into our model. 

That's basically my presentation. I hope that you enjoyed it and that it has made you aware of this important problem. I hope that, in the next year, we can start really thinking about how to incorporate this in Platform 3.0. 

Boardman: Let me introduce the panelists: Andy Jones of SOA Software, TJ Virdi from Boeing, Louis Dietvorst from Enexis, Sjoerd Hulzinga from KPN, and Frans van der Reep from Inholland University.

We want the panel to think about what they've just heard and what they would like Platform 3.0 to do next. What is actually going to be the most important, the most useful, for them? It's not necessarily the things we have thought of.

Jones: The subject of interoperability, the semantic layer, is going to be a permanent and long-running problem. We're seeing some industries, for example clinical-trials data, where there is movement in that area. Some financial-services businesses are trying to abstract their information models, but without semantic alignment, the vision of the platform is going to be difficult to achieve.

Dietvorst: In my vision of Platform 3.0 and what it should support, I am very much in favor of giving the consumer, the asking party, the lead: empower them. If you develop this kind of platform thinking, you should do it with your stakeholders and not for your stakeholders. And I wonder how we can engage those stakeholders so that they become co-creators. I don't know the answer.

Male Speaker: Neither do I, but I feel that what The Open Group should be doing next on the platform is, just as my neighbor said, to keep the business perspective, the user perspective, continuously in focus, because basically that's the only reason you're doing it.

In the presentation just now from Lydia about NFRs, you need to keep in mind that one of the most difficult, but also most important, parts of the model ought to be security and the blind spots around it. I don't disagree that these are NFRs. They are probably the most important requirements. It's where you start. That would be my idea of what to do next.

Not platform, but ecosystem

Male Speaker: Three remarks. First, I have the impression this is not a platform, but an ecosystem. So one should change the wording; that's number one.

Second, I would stress the business case. Why should I buy this? What problem does it solve? I don't know yet.

The third point: as The Open Group, I would welcome a lobby to make IT vendors product-liable in a formal sense, like other industries -- cars, for example. That would do a lot for the security problem the last lady talked about. IT vendors are not liable. They are not responsible. That should change in order to be a grownup industry.

Virdi: I agree with what's been said, but I will categorize what I'm looking for, from a Boeing perspective, into three elements of what the platform should be doing: how enterprises can create new business opportunities, how they can optimize their current business processes, and how they can optimize the operational aspects.

So if there is a way to expedite these by having some standardized way to do things, Open Platform 3.0 would be a great forum to do that. 

Boardman: Okay, thanks. Louis made the point that we need to go to the stakeholders and find out what they want. Of course, we would love it if everybody in the world were a member of The Open Group, but we realize that that isn't going to be the case tomorrow -- perhaps the day after, who knows. In the meantime, we're very interested in getting the perspectives of a wider audience.

So if you have things you would like to contribute, things you would like to challenge us with, questions to further your understanding, but particularly if you have ideas to contribute, you should feel free to do that. Get in touch, probably via Chris, but you could also get in touch with either TJ or me as co-chairs, and put in your ideas. Anybody who contributes anything will be recognized. That was a reasonable statement, wasn't it, Chris? You're official Open Group.

Does anybody down there have a question for this panel? Raise your hand.

Duijvestijn: Your remark was that IT vendors are not reliable, but I think that you have to distinguish the layers of the stack. In the bottom layers, in the infrastructure, there is a lot of reliability. Everything is very much known and has been developed for a long time.

If you look at the Gartner reports about incidents in performance and availability, what you see is that most of these happen because of process problems and application problems. That is where the focus has to be. Regarding the availability of applications, nobody ever publishes their bug rate.

Boardman: Would anybody like to react to that?

Male Speaker: I totally agree with what Lydia was just saying. As soon as you go up in the stack, that's where the variation starts. That's where we need to make sure that we provide some kind of capabilities to manage it easily, so the business has an expedited way to deliver business solutions on top of it. That's what we're actually targeting.

The lower layers of the stack are already commoditized. So we're just trying to see how far up we can go and standardize those things.

Two discussions

Male Speaker: I think there are two discussions mixed together: one discussion is about the reliability of the total [IT process], the other about where the fault is in a [specific IT stack]. Those are two different discussions.

I totally agree that IT, or at least IT suppliers, need to focus more on the reliability of the service as a whole. The customers aren't interested in where in the stack the problem is. It should be reliable as a whole; whether the fault is in the platform or in the presentation layer is a non-issue. The issue is that it should be reliable, and I totally agree that IT has a long way to go in that department.

Boardman: I'm going to move on to another question, because an interesting one came up on the Tweets. The question is: "Do you think that Open Platform 3.0 will change how enterprises work, creating new lines of business applications? What impact do you see?" An interesting question. Would anybody like to endeavor to answer that?

Male Speaker: That's an excellent question, actually. In creating new lines of business applications, what we're really looking for is semantic interoperability. How can you bridge the gap between social and business media kinds of information, so you can utilize what's happening in social media? Can you migrate that into a business context and make it a more agile knowledge or information transfer?

For example, in the morning we were talking about HL7 as being very heavyweight for healthcare systems. There may need to be some kind of easy way to transform and share information -- those kinds of things. If we provide those capabilities in the platform, that will make new line-of-business applications easier to build, and it will have an impact on current systems as well.
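
To make that concrete, here is a toy sketch of such a lightweight transformation. All field names are invented for illustration, and real HL7 mappings are far richer than a flat field map.

```python
# Hypothetical sketch of the kind of lightweight transformation the
# panelist alludes to: mapping a record from one schema to another
# via a declarative field map. Field names are invented.

FIELD_MAP = {                   # target field  <-  source field
    "patient_id":  "pid",
    "family_name": "surname",
    "measurement": "value",
}

def transform(record: dict) -> dict:
    # Rename each source field to its target name.
    return {target: record[source] for target, source in FIELD_MAP.items()}

source_record = {"pid": "12345", "surname": "Jansen", "value": 37.2}
print(transform(source_record))
# {'patient_id': '12345', 'family_name': 'Jansen', 'measurement': 37.2}
```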

Jones: We are seeing a trend towards line of business apps being composed from micro-apps. So there's less ownership of their own resources. And with new functionality being more focused on a particular application area, there's less utility bundling. 

It also leads on to the question of what happens to the existing line-of-business apps. How will they exist in an enterprise that is trying to pursue a Platform 3.0 kind of strategy? Lydia's point about the importance of NFRs raises the question of applications that don't meet NFRs appropriate to the new world, and how you retrofit and constrain their behavior so that they play well in that kind of architecture. This is an interesting problem for most enterprises.

Boardman: There's another, completely different, granularity question here. Is there a concept of small virtualization -- a virtual machine on a watch or phone?

Male Speaker: On phones, we have to make a compartmentalized area, kind of like a sandbox. So you can consider that a virtualization of an area where you would be doing things and then tearing it down.

It's not quite the same as virtualization, but it's creating a sandbox in smart devices, where enterprises could use some of their functionality without mingling it with what is called personal device data. Those things are actually part of the concept and could be utilized in that way.

Architectural framework

Question: My question about virtualization is linked to whether this is just an architectural framework. When I hear the word platform, it's something I try to build something on, and I don't think this is something I build on. Could you comment on the validity of the use of the word platform here?

Male Speaker: I don't care that much what it is called. If I can use it in whatever I am doing and it produces a positive outcome for me, I'm okay with it. I gave my presentation on the Internet-of-Things, or the Internet of Everything, or the Internet of Everywhere, or the Thing of Net, or the Internet of People. Whatever you want to call it, just name it; if you can identify the object that's important to you, that's okay with me. The same goes for Platform 3.0 or whatever.

I'm happy with whatever you want to call it. Those kinds of discussions don't really contribute to the value that you want to produce with this effort. So I am happy with anything. You don't agree?

Male Speaker: A large part of architecture is about having clear understandings and what they mean.

Male Speaker: Let me augment what was just said, and I think Dr. Harding was also alluding to this. We are at the stage of defining what Platform 3.0 is. One thing for sure is that we're going to be targeting how you can build that architectural environment.

Whether it will have frameworks or anything else is still to be determined. What we're really trying to do is provide some kind of capabilities that would expedite enterprises' building their business solutions on top of it. Whether that is a platform in the pure sense is still to be determined.

Boardman: The Internet-of-Things still has a very fuzzy definition. Here we're also looking at fuzzy definitions, and it's something that we constantly get asked questions about: what do we mean by Platform 3.0?

The reason this question is important, and I also think Sjoerd's answer to it is important, is that there are two aspects to the problem: what things do we need to tie down and define because we are architects, and what things can we simply live with? As long as I know that his fish is my bicycle, I'm okay.

It's one of the things we're working on. One of the challenges we have in the Forum is what exactly are we going to try and tie down in the definition and what not? Sorry, I had to slip that one in. 

I wanted to ask about trust and how important you see that issue as being. My attention was drawn to this because I just saw a post that the European Court of Justice has announced that Google has to make it possible for any person or organization who asks for it to have Google erase all information that it has stored anywhere about them.

I wonder whether these kinds of trust issues are going to become critical for the success of this kind of ecosystem, because whether we call it a platform or not, it is an ecosystem.

Trust is important

Male Speaker: I'll try to start an answer. Trust has been a very important part of this ever since the Internet became the backbone of all of those processes, all of those systems, and all of those data exchanges. The trouble is that it's very easy to compromise that trust, as we have seen with the news about the NSA exposed by Snowden. So yes, trust ought to be a part of it, but trust is probably pretty fragile the way we're approaching it right now.

Do I have a solution to that problem? No, I don't. Maybe it will come in this new ecosystem. I don't see it explicitly being addressed, but I am assuming that, between all those little clouds, there ought to be some kind of a trust relationship. That's my start of an answer.

Jones: Trust is going to be one of those permanently difficult questions. In historical times, maybe the types of organizations that were highest in trust ratings would have been perhaps democratic governments and possibly banks, neither of which have been doing particularly well in the last five years in that area. 

It’s going to be an ethical question for organizations who are gathering and holding data on behalf of their consumers. We know that if you put a set of terms and conditions in front of your consumers, they will probably click on "agree" without reading it. So you have to decide what trust you're going to ask for and what trust you think you can deliver on. 

Data ownership and data usage are going to be quite complex. For example, in clinical-trials data, you have a set of data that can be identified against a named individual. That sounds quite clear, but you can then anonymize that set of data so that it is known to relate to a single individual but can no longer identify whom. Is that as private?

That data can then be summarized across groups of individuals to create an ensemble dataset. At what level of privacy are we then? It seems to quickly go beyond the scope of reason and understanding of the consumers themselves. So the responsibility for ethical behavior appears to lie with the experts, which is always quite a dangerous place.

Male Speaker: We probably all agree that trust management is a key aspect when we are converging different solutions from so many partners and suppliers. When we're talking about Internet of data, Internet-of-Things, social, and mobile, no one organization would be providing all the solutions from scratch. 

So we may be utilizing stuff from different organizations or different organizational boundaries. Extending the organizational boundaries requires a very strong trust relationship, and it is very significant when you are trying to do that.

Boardman: There was a question that went through a little while ago. I'm noticing some of these questions are more questions to The Open Group than to our panel, but one I felt I could maybe turn around. The question was: "What kind of guidelines is the Forum thinking of providing?"

What I'd like to do is turn that around to the panel and ask: what do you think it would be useful for us to produce? What would you like a guideline on? There will be lots of things where you would think you don't need that, you'll figure it out for yourself. But what would actually be useful to you if we were to produce some guidelines or something that could be accepted as a standard?

Does it work?

Male Speaker: Just go to a number of companies out there and test whether it works. 

Male Speaker: In terms of guidelines, semantic interoperability, which was mentioned earlier, is put very well. How do you exchange information between different participants in an ecosystem, or between things built on a platform?

The other thing is how you can standardize things that are yet to be standardized. There's unstructured data, and there are things that need to be interrogated through that unstructured data. What are the guiding principles and guidelines that would do those things? Maybe in those areas Platform 3.0, with participation from the Forum members, can advance and work on it.

Jones: I think contract composition and accumulation. If an application is delivering service to its end users by combining dozens of complementary services, each of which has a separate contract, what contract can it then offer to its end user?
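
One concrete face of that question is availability. Contracts composed in series multiply; a small illustrative calculation (the SLA figures below are invented, not from the discussion):

```python
from math import prod  # Python 3.8+

# Composed serially, availability contracts multiply. All SLA
# figures here are invented for illustration.

service_slas = [0.999] * 12 + [0.9995] * 8   # 20 upstream contracts

composite = prod(service_slas)
print(f"Composite availability: {composite:.4f}")   # ~0.9841

# Twenty individually respectable SLAs leave the composed application
# able to promise only ~98.4% -- more than five days of downtime a year.
```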

Boardman: Does the platform plan to define guidelines and directions for defining application programming interfaces (APIs) and data models for specific domains? Also, how are you integrating with major industry reference models?

Just for information, some of this belongs to other parts of The Open Group's work, around industry domain reference models and that kind of thing. But in general, one of the things we've said from the Forum is that, as much as possible, we want to collate what is out there in terms of standards -- APIs, data models, open data, etc.

We're desperate not to go and reproduce anybody else’s work. So we are looking to see what’s out there, so the guideline would, as far as possible, help to understand what was available in which domain, whether that was a functional domain, technical domain, or whatever. I just thought I would answer those because we can’t really ask the panel that.

We said that the session would be about dealing with realizing business value, and we've talked around issues related to that, depending on your own personal take. But I'd like to ask the members of the panel, and I'd like all of you to try and come up with an answer to it: What do you see are the things that are critical to being able to deliver business value in this kind of ecosystem?

I keep saying ecosystem, not to be nice to Frans, I am never nice to Frans, but because I think that that captures what we are talking about better. So do you want to start TJ? What are you looking for in terms of value? 

Virdi: No single organization would be able to tap into all the advancement that's happening in technologies, processes, and other areas that business could utilize so quickly. The expectation that businesses provide new solutions in real-time, with information exchange and all those things, is the norm now.

We can provide some of those as a baseline -- foundational aspects that businesses can use to realize the new things we're seeing in social media and other places, where information is exchanged very quickly and the payload of each exchange is very small.

So keeping the integrity of information, as well as sharing the information with the right people at the right time and in the right venue, is really the key when we can provide those kind of enabling capabilities.

Ease of change

Jones: In Lydia’s presentation, at the end, she added the ease of use requirement as the 401st. I think the 402nd is ease of change and the speed of change. Business value pretty much relies on dynamism, and it will become even more so. Platforms have to be architected in a way that they are sufficiently understood that they can change quickly, but predictably, maintaining the NFRs. 

Dietvorst: One of the reasons I would want to adopt this new ecosystem is that it gives me enough confidence that it is a reliable product. What we know from the energy-system innovations we've done over the last three or four years is that the way you enable and empower communities is to let them build up the trust themselves, locally -- you and your neighbor, or people who are close in proximity. Then, it's very easy to build trust.

Some call it social evidence: I know you, you know me, so I trust you. You are my neighbor and together we build a community. But the greater the distance, the less easy it is to trust each other. That's something you need to build into the whole concept. How do you get that trust if it's a global concept? It seems hardly possible.

van der Reep: This ecosystem, or whatever you're going to call it, needs to handle change -- the rate of change. "Change is life" is a well-known saying, but lightning-fast change is the fact of life right now, with things like social and mobile specifically.

One Twitter storm and the world has a very different view of your company, of your business. Literally, it can happen in minutes. This development ought to address that, and also provide the relevant hooks, if you will, for businesses to deal with that. So the rate of change is what I would like to see addressed in Platform 3.0, the ecosystem. 

Male Speaker: It should be cheap and reliable, it should allow for change -- for example, Cognition-as-a-Service -- and it should hide complexity from those "stupid businesspeople" and make it simple.

Boardman: I want to pick up on something that Frans just said, because it connects to a question I was going to ask anyway. People sometimes ask us why we named the particular five technologies in the Forum: cloud, big data and big-data analysis, social, mobile, and the Internet-of-Things. It's a good question, because it's fundamental to our ideas in the Forum that it's not just about those five things. Other things can come along and be adopted.

One of the things that we had played with at the beginning and decided not to include, just on the basis of a feeling about lack of maturity, was cognitive computing. Then, here comes Frans and just mentions cognitive things. 

I want to ask the panel: "Do you have a view on cognitive computing? Where is it? When we can expect it to be something we could incorporate? Is it something that should be built into the platform, or is it maybe just tangential to the platform?" Any thoughts? 

Male Speaker: I did a speech on this last week. In order to create meaningful customer interaction -- what we used to call the call center, or whatever -- that is where the cognition comes in. That's a very big market, and there's no reason not to include it in the lower levels of the platform and to make it into the cloud.

We already have lots of examples in the Netherlands of ICT devices recognizing emotions from speech. By recognizing emotion, you can optimize the matching of the company with the customer, and you can hide complexity. I think there's a big market for that.

What the business wants

Virdi: We need to look at it in the context of what business wants to do with it. It could enable things that I consider proprietary, which may not be part of the platform for others to utilize. So we have to balance out what enabling things we can provide as a foundation for everyone to utilize, and what companies can build on top of it for the value it provides. We probably have to do a little further assessment of that.

Male Speaker: I'd like to follow up on this notion of cognitive computing -- the notion that maybe objects are self-aware, as opposed to being dumb. Self-aware being an object, a sensor, that's aware of its neighbor; when a neighbor goes away, it can find other neighbors. Quite simple, as opposed to a bar code.

We see that all the time. We have kids who are civil engineers, and they pour sensors into concrete all the time. In terms of cost, in terms of being able to have the discussion, it's something that's in front of us all the time. So at this point, should we at least think about the binary distinction between self-aware sensors and dumb sensors?

Male Speaker: From an aviation perspective, there are some areas where dumb devices will be there, as well as active devices. There are some passive sensor devices that you just interrogate on request, and there are some devices that are active, constantly sending sensor messages. Both are there for businesses to utilize in creating new business solutions.

Both of them are going to be there, and it depends on what the business needs are to support those things. Probably we could provide ways to standardize some of those, along with some other specifications. For example, there's the ATA for aviation; they're doing that already. Also, in healthcare, HL7 is looking at smart sensor devices exchanging information as well. So some work is already happening in the industry.

There are so many business solutions that have already been built on those, though maybe they're a little more proprietary. So a platform could provide a standard base for exchanging that information. Some of it may come down to guidelines on how you can exchange information with those active and passive sensor devices.

Jones: I'm certainly all in favor of devices in the field being able to tell you what they're doing and how they think they're feeling. I have an interest in complex consumer devices in retail and other field locations, especially self-service kiosks, and in that field quite a lot of effort has been spent trying to infer the states of devices by their behavior, rather than just having them tell you what's going on, which should be so much easier. 

Male Speaker: Of course, it depends on where the boundary is between aware and not aware. If there is a thermometer in the field and it sends data that it's 15 degrees centigrade, for example, do I really want to know whether it thinks it's chilly or not? I'm not really sure about it.

I'd have to think about it a long time to get a clear answer on whether there's a benefit in self-aware devices in those kinds of applications. I can understand that there would be an advantage in self-aware sensor devices, but I struggle a little to see any pattern or similarities in those circumstances.

I could come up with use cases, but I don’t think it's very easy to come up with a certain set of rules that leads to the determination whether or not a self-aware device is applicable in that particular situation. It's a good question. I think it deserves some more thought, but I can't come up with a better answer than that right now.

Case studies

Skilton: I just wanted to add to the embedded question, because I thought it was a very good one. Three case studies happened to me recently. I was doing some work with Rolls-Royce around MH370, the flight that went down. One of the key things about that flight was that the engines had telemetry built in. TJ, you're more qualified to talk about this than I am, but essentially there was information embedded in the telemetry of the technology of the plane.

As we know from the mass media that reported on it, analysts were able to work out from some of that data, potentially, what was going on in the flight. Clearly, it was the satellite-link data that was used to project that it was going south rather than north.

So one of the lessons there was that smart information built into the object was of value. Clearly, there was a lesson learned there. 

With Coca-Cola, for example, what's very interesting in retail is that a lot of the shops now have embedded sensors in the cooler systems or in products that are in the warehouse or in stock. Now you're getting that kind of intelligence, over RFID, coming back into the supply chain to do backfilling, reordering, and things like that. All of this, I see, is smart.

Another one is image recognition when you go into a car park. Your face is scanned, whether you want it or not, and potentially they can serve advertising in context. These are all smart feedback loops that are going on in these ecosystems right now. 

There are real equations of value in doing that. I was just looking at the Open Automotive Alliance. We've done some work with them around connected-car forecasts. Embedded technology in the dashboard is going to be coming in the next three to five years with BMW, Jaguar Land Rover, and Volvo. All the major car players are doing this right now. 

So Open Platform 3.0, for me, is riding that wave of understanding where the intelligence and the feedback mechanisms work within each of the supply chains and within each of the contexts -- in the plane, in the shop, or wherever -- and starting to get intelligence built in. 

We talk about big data and small data at the university where I work. At the moment, we're in a big-data era of static analytics, analyzing the process in situ -- think of Amazon-style purchasing recommendations or the advertisements you see in your browser today. 

We're moving to a small-data era, where you have data very much in the context of the events going on at that time. I would expect this with embedded technologies: the feedback loops are going to happen within each of the traditional supply chains and will start to build that strength.

The issue for The Open Group is to capture those standards of interoperability and connectivity, much like what Boeing is already leading in the airline sector, and likewise in the automotive sector. It's about riding that wave, because the value of bringing that feedback into context -- the small-data context -- is where the future lies. 

Infrastructure needed

Male Speaker: I totally agree. Not only are the devices or individual components getting smarter, but that requires infrastructures to be in place to utilize that sensing information in a proper way. From the perspective of Platform 3.0 guidelines or specifications, determining how you can utilize some devices that are already smart alongside others that are still considered legacy, and how you can bridge that gap, would be a good thing to do.
Boardman: Would anyone like to add anything, closing remarks?

Jones: Everybody's perspective and everybody's context is going to be slightly different. We talked about whether it's a platform or a framework. In the end, a universal Platform 3.0 will be built, but everybody will still have a different view and a different perspective of what it does and what it means to them. 

Male Speaker: My suggestion would be that, if you're going to continue with this ecosystem, try to build it up locally, in a locally controlled environment, where you can experiment and see what happens. Do it in many places in the world at the same time, and let the results be the proof of the pudding. 

Male Speaker: Whatever you're going to call it -- keep it 3.0; that sounds snappy -- just get the beneficiaries in, get the businesses in, and get the users in.

Male Speaker: The more open it is, the more of a commodity it will be. That means no single company can profit from it. In the end, human interaction and stewardship will enter the market. If you come to London City Airport and have to find your way onto the Tube, there is a human being there who helps you into the system. That becomes very important as well. I think you need to do both: stewardship and these kinds of ecosystems that spread complexity. 

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Wednesday, June 18, 2014

Big data meets the supply chain — SAP’s Supplier InfoNet and Ariba Network combine to predict supplier risk

The next BriefingsDirect case study interview explores how improved visibility, analytics, and predictive responses are improving supply-chain management. We'll now learn how SAP's Supplier InfoNet, coupled with the Ariba Network, allows for new levels of transparency in predictive analytics that reduce risk in supplier relationships.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

BriefingsDirect had an opportunity to uncover more about how the intelligent supply chain is evolving at the recent 2014 Ariba LIVE Conference in Las Vegas when we spoke to David Charpie, Vice President of Supplier InfoNet at SAP, and Sundar Kamakshisundaram, Senior Director of Solutions Marketing at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: We’ve brought two things together here, SAP’s Supplier InfoNet and Ariba Network. What is it about these two that gives us the ability to analyze or predict, and therefore reduce, risk?

Charpie: To be able to predict and understand risk, you have to have two major components together. One of them is actually understanding this multi-tiered supply chain. Who is doing business with whom, all the way down the line, from the customer to the raw material in a manufacturing sense? To do that you need to be able to bring together a very large graph, if you will, of how all these companies are inter-linked.

Charpie
And that is ultimately what the Ariba Network brings to bear. With over 1.5 million companies that are inter-linked and transacting with each other, we can really see what those supply chains look like.

The second piece of it is to bring together, as Sundar talked about, lots of information of all kinds to be able to understand what’s happening at any point within that map. The kinds of information you need to understand are sometimes as simple as who is the company, what do they make, where are they located, what kind of political, geopolitical issues are they dealing with?

The more complex issues are things around precisely what product they are making, with what kind of performance requirements, and how they're actually doing that on a customer-by-customer basis. What we find is that suppliers don't behave the same for everybody.

So InfoNet and the network have come together to bring those two perspectives, all the data about how companies perform and what they are about with this interconnectedness of how companies work with each other. That really brings us to the full breadth of being able to address this issue about risk.
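
To illustrate the first component, here is a minimal sketch, with made-up company names, of how a multi-tiered supply chain can be walked as a directed graph to surface the sub-tier suppliers a buyer ultimately depends on. It is an illustration of the idea, not how the Ariba Network actually stores its 1.5 million interlinked companies:

```python
from collections import deque

# Buyer-to-supplier edges (hypothetical companies), company -> direct suppliers
supply_graph = {
    "Acme Corp": ["Distributor A", "Distributor B"],
    "Distributor A": ["Component Maker X"],
    "Distributor B": ["Component Maker X", "Raw Materials Y"],
    "Component Maker X": ["Raw Materials Y"],
}


def multi_tier_suppliers(buyer):
    """Breadth-first walk; maps each reachable supplier to its shallowest tier."""
    tiers, queue = {}, deque([(buyer, 0)])
    while queue:
        company, tier = queue.popleft()
        for supplier in supply_graph.get(company, []):
            if supplier not in tiers:
                tiers[supplier] = tier + 1
                queue.append((supplier, tier + 1))
    return tiers


print(multi_tier_suppliers("Acme Corp"))
# {'Distributor A': 1, 'Distributor B': 1, 'Component Maker X': 2, 'Raw Materials Y': 2}
```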

Gardner: Sundar, we have a depth of transactional history. We have data, we have relationships, and now we’re applying that to how supply chains actually behave and operate. How does this translate into actual information? How does the data go from your systems to someone who is trying to manage their business process?

Kamakshisundaram
Kamakshisundaram: A very good question. If you take a step back and understand the different data points you need to analyze to predict risk, they fall into two different buckets. The first bucket is around the financial metrics that you typically get from any of the big content providers you have in place. We can understand how the supplier is performing, based on current data, and exactly what they’re doing financially, if they’re a public company.

The second aspect, through the help of Ariba Network or Supplier InfoNet, is the ability to understand the operational and the transactional relationship a supplier has in place to predict how the supplier is going to behave six-to-eight months from now.

For example, you may be a large retailer or a consumer packaged goods (CPG) organization working with a very large trucking company. This particular trucking company may be doing really well, with great historical financial information, which basically puts it in very good shape.

Financial viability

But if only one-third of its business is from retail and CPG and the remaining two-thirds comes from some of the more challenged industries, all of a sudden the operational and financial viability of that transportation link may not look good. Though the carrier's historical financials may be in good shape, you can't really predict whether the supplier will have the working capital -- the cash available to run the business and maintain the operation -- to keep going in a sustainable manner.

How do Ariba, the Ariba Network, and InfoNet help? By taking all the information across this multitude of variables -- not only the financial metrics, but also the operational metrics -- and modeling the supply chain.

You don't limit yourself to the first tier or second tier, but go all the way down the multi-tier supply chain, and also look at the interactions that some of these suppliers may have with their customers. It will help you understand whether a particular supplier will be able to supply the right product and get it to your docks at the right time.

Without having this inter-correlation of network data well laid out in a multi-tier supply chain, it would have been almost impossible to predict what is going to happen in this particular supply-chain example.

Gardner: What sort of trends or competitive pressures are making companies seek better ways to identify, acquire, or manage information and data to have a better handle on their supply chains?

Kamakshisundaram: The pressures are multifaceted. To start with, many organizations are faced with globalization pressure. Finding the right suppliers who can actually supply both the product and service at the right time is a second challenge. And the third challenge many companies grapple with right now is the ability to balance savings and cost reductions with risk mitigation.

These two opposing variables have to be kept in check in order to drive sustainable savings to the bottom line. These challenges, coupled with supply-chain disruptions, are making it difficult not only to find suppliers, but also to get the right product at the right time.

Gardner: When we talk about risk in a supply-chain environment what are we really talking about? Risk can be a number of things in a number of different directions.

Many variables

Kamakshisundaram: Risk, at a very high level, is composed of many different variables. Many of us understand that risk is a function of, number one, the supply. If you don’t have the right supplier, if you don’t have the right product at the right time, you have risk.

And, there is the complexity involved in finding the suppliers to address needs in different parts of the world. You may have a supplier in North America, but if you really want to expand your market share in the Far East, especially in China, you need to have the right supply chain to do that.

Companies have traditionally looked at historical information to predict risk. That is no longer enough, because supply chains are becoming more and more complex. Supply chains are affected by a number of globalized variables, including the ability to have suppliers in different parts of the world, and other challenges that will make risk more difficult to predict in the long run.

Gardner: Where do you see the pressures to change or improve how supply-chain issues are dealt with, and how do you also define the risks that are something to avoid in supply-chain management?

Charpie: When we think about risk we’re really thinking about it from two dimensions. One of them is environmental risk. That is, what are all the factors outside of the company that are impacting performance?

That can be as varied as wars on one hand, right down to natural disasters and political events that can disrupt how companies manage their supply base and keep the kind of cost structure they're looking for.

The other kind are more inherent operational types of risks. These are the things like on-time performance risk, as Sundar was referring to. What do we have in terms of quality? What do we have in terms of product and deliverables, and do they meet the needs of the customer?

As we look at these two kinds of risks, we've seen increasing amounts of disruption, because we're in a time when supply chains are getting much longer, leaner, and more complex to manage. As a result, over 40 percent of disruptions right now are caused by interruptions further down the supply chain -- tier two, tier three, all the way to tier N.

So now we need a different way of managing suppliers than we had in the past. Just working with them and talking to them about how they do things and what they do isn’t enough. We need to understand how they’re actually managing their suppliers, and so on, down the line.

Predicting risk

Gardner: So, David, it sounds as if there's an algorithmic component, or a scorecard, that generates this analysis. Is that the right way to look at it, or is it just making the data available for other people to reach conclusions that then allow them to reduce their risk?

Charpie: There absolutely is an algorithmic component to this. In fact, what we do in Supplier InfoNet and with the Ariba Network is to run machine-learning models. These are models that behave more like the human brain than like some of the statistical math we learned when we were back in high school and college.

What it looks for is patterns of behavior, and as Sundar said, we’re looking at how a company has performed in the past with all of their customers. How is that changing? What other variables are changing at the same time or what kinds of events are going on that may be influencing them?

We talked about environmental risk a bit ago. We capture information from about 160,000 newswire sources on a daily basis and, on an automated basis, are able to extract what that article is about, who it’s about, and what the impact on supply chain could be.

By integrating that with the transactional history of the Ariba Network and by integrating that with all the linkage on who does business with whom, we can start to see a pattern of behavior. That pattern of behavior can then help us understand what’s likely to happen moving forward.
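
As a rough illustration of that extraction step, here is a minimal sketch using a naive keyword tagger. The production system would rely on real entity extraction and machine learning, and the categories and keywords here are invented:

```python
# Invented impact categories and keywords; real systems use entity extraction.
IMPACT_KEYWORDS = {
    "natural_disaster": ["earthquake", "flood", "typhoon", "hurricane"],
    "financial_distress": ["bankruptcy", "default", "layoffs"],
    "geopolitical": ["sanctions", "strike", "export ban"],
}


def tag_article(text):
    """Return every impact category whose keywords appear in the article."""
    text = text.lower()
    return [impact for impact, words in IMPACT_KEYWORDS.items()
            if any(word in text for word in words)]


article = "Flood halts production at electronics plant; layoffs expected."
print(tag_article(article))   # ['natural_disaster', 'financial_distress']
```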

To make it a little more concrete, let’s take Sundar’s example of a company having financial trouble. If I take a company, for example, under $100 million, what we have found is that if we see a company that begins to deliver late, within three months of that begins to have quality problems, and within two months or less begins to have cash-flow problems and can’t pay their bills on time, we may be seeing the beginning of a company that’s about to have a financial disaster.

Interestingly, what we find is that the pattern that really means something comes after those three events. If the company suddenly begins paying its bills on time, that's the worst indicator there possibly could be. It's very counterintuitive, but the models tell us that when that happens, we're on the verge of someone who will go bankrupt within two to three months.
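
Here is a minimal sketch, over made-up event data, of the sequence Charpie describes. The actual system uses machine-learning models rather than hand-written rules like these:

```python
from datetime import date

# (event_type, date) observations for one supplier, oldest first -- made up
events = [
    ("late_delivery", date(2014, 1, 10)),
    ("quality_problem", date(2014, 3, 2)),   # within three months of late delivery
    ("late_payment", date(2014, 4, 15)),     # cash-flow trouble within two months
    ("on_time_payment", date(2014, 5, 20)),  # the sudden reversal: the red flag
]


def months_between(a, b):
    return (b - a).days / 30.0


def bankruptcy_warning(events):
    """True if late delivery -> quality problems -> cash-flow problems ->
    sudden on-time payment occurs within the described windows."""
    by_type = {etype: when for etype, when in events}
    needed = ["late_delivery", "quality_problem", "late_payment", "on_time_payment"]
    if not all(e in by_type for e in needed):
        return False
    ld, qp, lp, otp = (by_type[e] for e in needed)
    return (months_between(ld, qp) <= 3        # quality slips after deliveries do
            and months_between(qp, lp) <= 2    # then the cash runs short
            and otp > lp)                      # then bills are suddenly paid on time


print(bankruptcy_warning(events))   # True
```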

Delivery model

Gardner: Now I can see why this wasn’t something readily available until fairly recently. We needed to have a cloud infrastructure delivery model. We needed to have the data available and accessible. And then we needed to have a big data capability to drive real-time analysis across multiple tiers on a global scale.

So here we are at Ariba LIVE 2014. What are we going to hear, and when can people start to actually use this? Where are we on the timeline for delivering this really compelling value?

Kamakshisundaram: Both Supplier InfoNet and the Ariba Network are available today for customers to leverage. With the help of SAP's innovation team, we're planning to bring in additional solutions that not only help customers with real-time risk modeling, but also offer more predictive analytical capability.

Charpie: In terms of the business benefits of what we are offering, the features that really bring to life this notion of integrating the Ariba Network with InfoNet are, first and foremost, an ability to push alerts to our customers on a proactive basis, to let them know when something is happening within their supply chain that could be impacting them in any way whatsoever.

That is, they can set their own levels. They can set what interests them. They can identify the suppliers they want to track -- from a handful up to the entire supply base. We will track those on an automated basis and give them updates to keep them abreast of what's happening.

Second, we’re also going to give them the ability to monitor the entire supply base, from a heat-map perspective, to strategically see the hot pockets -- by industry, by spend, or by geography -- that they need to pay particular attention to.

Third, we're also going to bring them this automated capability to look at those 160,000 newswire sources and tell them which newswires they need to pay attention to, so they can determine what kind of actions they can take, based on the activity that they see.

We’re also going to bring those predictions to them. We have the ability now to look at and predict performance and disruption and deliver those also as alerts, as well as deeper analytics. By leveraging the power of HANA, we’re able to bring real-time analysis to the customer.

They have those tools today, and so it creates a totally personalized experience, where they can look at big data the way they want to, the way they believe risk should be measured and monitored, and use that information right there and then for themselves.

Sharing environment

Last, they also have the ability to do this in an environment where they can share with each other, with their suppliers, and with others in the network, if they choose. What I mean by that is the model that we have used within Supplier InfoNet is very much like you see in Facebook.

When you have a supplier and you would like to see more of their supply base, you request access, much like friending someone on Facebook. They will open up the portion -- some, a little, or none -- of their supply base that they would like you to have access to. Once you have that, you can get alerts on them, you can manage them, and you can get input on them as well.
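
As a sketch of that "friending" model, with hypothetical names: a supplier explicitly grants each buyer visibility into all, some, or none of its supply base, and nothing is visible until access is granted.

```python
from typing import Optional


class Supplier:
    def __init__(self, name, sub_suppliers):
        self.name = name
        self.sub_suppliers = sub_suppliers
        self.grants = {}   # buyer -> set of sub-suppliers the buyer may see

    def grant(self, buyer: str, visible: Optional[list] = None) -> None:
        """Open up all (default), some, or none of the supply base."""
        allowed = set(visible) if visible is not None else set(self.sub_suppliers)
        self.grants[buyer] = allowed & set(self.sub_suppliers)

    def visible_to(self, buyer: str) -> set:
        return self.grants.get(buyer, set())   # nothing until access is granted


trucking = Supplier("Trucking Co", ["Fuel Vendor", "Parts Vendor", "Tire Vendor"])
trucking.grant("Retailer", visible=["Fuel Vendor", "Parts Vendor"])  # partial share

print(trucking.visible_to("Retailer"))   # {'Fuel Vendor', 'Parts Vendor'}
print(trucking.visible_to("Stranger"))   # set()
```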

So there’s an ability for the community to work together, and that’s really the key piece that we see in the future, and it’s going to continue to expand and grow as we take InfoNet and the Network out to the market.

Kamakshisundaram: If you take a step back, you can see why companies haven’t been able to do something like this in the past. There were analytical models available. There were tools and technologies available, but in order to build a model that will help customers identify a multi-tier supply chain risk, you need a community of suppliers who are able to participate and provide information which will continue to help understand where the risk points are.

As David mentioned, where is your heat map, and what does it say? It also points to how you not only collect the information, but what kind of processes you have to put in place to mitigate those risks.

In certain industries, we see certain trends, whether it’s automotive or aerospace. A lot of the suppliers that are critical in these industries are cross-industry. Focusing on a certain industry and having the suppliers only in that particular industry will give you only a portion of that information to understand and predict risk.

And this is where a community where participants actively share information and insights for the greater good helps. And this is exactly what we’re trying to do with the Ariba Network and Supplier InfoNet.

Gardner: I’m trying to help our listeners solidify their thinking of how this would work in a practical sense in the real world. David, do you have any use-case scenarios that come to mind that would demonstrate the impact and the importance and reinforce this notion that you can’t do this without the community involvement?

Case study

Charpie: Let’s start with a case study. I’m going to talk about one of our customers that is a relatively small electronics distributor.

They signed on to use InfoNet and the Ariba Network to better understand what was happening down the multiple tiers of their supply chain. They wanted to make sure that they could deliver to their ultimate customers, a set of aerospace and defense contractors. They knew what they needed, when they needed it, and the quality that was required.

To manage that and find out what was going to happen, they loaded up Supplier InfoNet, began to get the alerts, and began to react to them. They found very quickly that they were able to find savings in three different areas that ultimately they could pass on to their customers through lower prices.

One of them was that they were able to reduce the amount of time their folks would spend just firefighting the risks that would come up when they didn’t have information ahead of time. That saved about 20 percent on an annual basis.

Second, they also found that they were able to reduce the amount of inventory obsolescence by almost 15 percent on an annual basis as a result of that.

And third, they found that they were avoiding shortages that historically cut their revenues by about 5 percent, because previously they couldn't deliver product that was often demanded on short notice. With InfoNet, all of these benefits were realized and became practical to achieve.

Their own perspective on this, relative to the second part of your question, was that they couldn't do this on their own, and that no one else could either. As they like to say, "I certainly wouldn't share my supply base with my competitor." The idea is that we can take those supply bases in aggregate, anonymize them, and make sure the information is cleansed in such a way that no one can tell who the contributing parties are.

The fact that they ultimately have control of what people see and what they don’t allows them to have an environment where they feel like they can trust it and act on it, and ultimately, they can. As a result, they’re able to take advantage of that in a way that no one could on their own.

We’ve even had a few of the aerospace and defense folks who tried to build this on their own. All of them ultimately came back because they said they couldn’t get the benchmark data and the aggregate community data. They needed an independent third party doing it, and SAP and Ariba are a trusted source for doing that.

Gardner: For those folks here at Ariba LIVE who are familiar with one or other of these services and programs or maybe not using either one, how do they start? They’re saying, “This is a very compelling value in the supply chain, taking advantage of these big-data capabilities, recognizing that third party role that we can’t do on our own.” How do they get going on this?

Two paths

Charpie: There are two paths you can take. One of them is that you can certainly call us. We would be more than happy to sit down and go through this and look at what your opportunities are by examining your supply base with you.

Second is to look at this a bit on your own and be reflective. We often take customers through a process where we sit down and look at the supply risks and disruptions they've had in the past and, based on that, categorize them into the types of disruptions they've seen. What was based on quality? What was based on sub-tier issues? What was based on environmental events like natural disasters? Then we group them.

Then we say, let's reflect: if you had known these problems were going to happen -- as Sundar said, three, six, or eight months ahead -- could you have done something that would have impacted the business, saved money, driven more revenue, whatever the outcome may be?

If the answer to those questions is yes, then we’ll take those particular cases where the impact is understood and where an early warning system would have made a difference financially. We’ll analyze what that really looks like and what the data tells us. And if we can find a pattern within that data, then we know going in that you're going to be successful with the Network and with InfoNet before you ever start.
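
That reflective exercise can be pictured with a minimal sketch, using made-up disruption history: bucket past disruptions by cause and total the impact that an early warning could plausibly have avoided.

```python
from collections import defaultdict

# (category, cost in USD, would an early warning have helped?) -- made up
past_disruptions = [
    ("quality", 120_000, True),
    ("sub_tier", 250_000, True),
    ("natural_disaster", 400_000, False),   # a warning alone wouldn't have helped
    ("quality", 80_000, True),
]

impact = defaultdict(lambda: {"count": 0, "avoidable_usd": 0})
for category, cost, avoidable in past_disruptions:
    impact[category]["count"] += 1
    if avoidable:
        impact[category]["avoidable_usd"] += cost

for category, stats in sorted(impact.items()):
    print(category, stats)
# natural_disaster {'count': 1, 'avoidable_usd': 0}
# quality {'count': 2, 'avoidable_usd': 200000}
# sub_tier {'count': 1, 'avoidable_usd': 250000}
```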

Gardner: This also strikes me as something that doesn’t fall necessarily into a traditional bucket, as to who would go after these services and gain value from them. That is to say, this goes beyond procurement and just operations, and it enters well into governance, risk, and compliance (GRC).

Who should be looking at this in a large organization or how many different types of groups or constituencies in a large organization should be thinking about this unique service?

Kamakshisundaram: We have found that it depends on the vertical and the industry. Typically, it all starts with procurement, trying to make sure they can assure supply and get the right suppliers.

Very quickly, procurement also continues to work with supply chain. So you have procurement, supply chain, and depending on how the organization is set up, you also have finance involved, because you need all these three areas to come together.

This is one of the projects where you need complete collaboration and trust within the internal procurement organization, supply chain/operations organization, and finance organization.

As David mentioned, when we talk to aerospace, as well as automotive or even heavy industrial or machinery companies, some of these organizations already are working together. If you really think about how product development is done, procurement participates at the start of the black-box process, where they actually are part and parcel of the process. You also have finance involved.

Assurance of supply

To really understand and manage risk in your supply chain, especially for components that go into your end product and make up significant revenue for your organization, supplier management continues all the way through, even after you actually have assurance of supply.

The second type of customers we have worked with are in the business services/financial/insurance companies, where the whole notion around compliance and risk falls under a chief risk officer or under the risk management umbrella within the financial organization.

Again, here in this particular case, it's not just the finance organization that's responsible for predicting, monitoring, and managing risk. In fact, finance organizations work collaboratively with the procurement organization to understand who their key suppliers are, collect all the information required to accurately model and predict risk, so that they can execute and mitigate risk.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Tuesday, June 17, 2014

Latest ServiceNow update makes turning any awkward process into a managed service available to more workers

IT service management (ITSM) has long been a huge benefit to complex and exception-rich IT operations by helping to standardize, automate and apply a common system-of-record approach to tasks, incidents, assets, and workflows.

ServiceNow has been growing rapidly as a software-as-a-service (SaaS) provider of ITSM, but clearly sees a larger opportunity — making service creation, use, and management a benefit to nearly all workers for any number of business processes.

It's one of those rare instances where IT has been more mature and methodical in solving complexity than many other traditional business functions. Indeed, siloed and disjointed "productivity applications" that require lots of manual effort have been a driver to bring service orientation to the average business process.

Just as in IT operations and performance monitoring, traditional applications in any business setting can soon reach a point of inflexibility, break down, and therefore fail to scale. Despite human productivity efforts -- via shuffling emails, spreadsheets, phone calls, sticky pads and text messages -- processes bog down. Exceptions are boondoggles. Tasks go wanting. Customers can sense it all through lackluster overall performance.

So ServiceNow this week launched its Eureka version of its online service management suite with new features aimed at letting non-technical folks build custom applications and process flows, just like the technical folks in IT have been doing for years. Think of it as loosely coupled interactions that span many apps and processes for the rest of us.

Available globally

Now available globally, the fifth major release of ServiceNow includes more than 100 changes and new modules, and has a new user interface (UI) that allows more visualization and drag-and-drop authoring and is more "mobile friendly," says Dave Wright, Chief Strategy Officer at ServiceNow, based in Santa Clara, CA.

“Enterprise users just can’t process work fast enough,” says Wright. “So our Service Creator uses a catalog and a new UI to allow workers to design services without IT.”

IT does, however, get the opportunity to vet and manage these services, and can decide what does or doesn't get into the service catalog. Those of us who have been banging the SOA drum for years well predicted this level of user-driven services and self-service business process management.
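
That division of labor is easy to picture. Here is a minimal sketch -- hypothetical, and not the ServiceNow API -- of a catalog where business users propose services and IT decides what gets in:

```python
from dataclasses import dataclass, field


@dataclass
class ServiceCatalog:
    proposed: list = field(default_factory=list)
    approved: list = field(default_factory=list)

    def propose(self, service: str) -> None:
        """A non-technical user submits a candidate service."""
        self.proposed.append(service)

    def review(self, service: str, approve: bool) -> None:
        """IT decides whether the service enters the catalog."""
        self.proposed.remove(service)
        if approve:
            self.approved.append(service)


catalog = ServiceCatalog()
catalog.propose("Conference-room booking")
catalog.propose("Unvetted data export")
catalog.review("Conference-room booking", approve=True)
catalog.review("Unvetted data export", approve=False)
print(catalog.approved)   # ['Conference-room booking']
```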

I, for one, am very keen to see how well enterprises pick up on this, especially as the cloud-deployed nature of ServiceNow can allow for extended enterprise process enablement and even a federated approach to service catalogs. Not only are internal processes hard to scale; workflows and processes that span multiple companies and providers are also a huge sticking point.

Systems integrators and consultancies may not like it as much, but the time has come for an organic means of automating tasks and complexity that most power users can leverage and innovate on.

With this new release, it's clear that ServiceNow has a dual strategy. First, it's expanding its offerings to core IT operators, across the traditional capabilities of application lifecycle management, IT operations management, IT service management, project management, and change management. There are many features in the new release that target this core IT user.

Additionally, ServiceNow has its sights on a potentially much larger market, the Enterprise Service Management (ESM) space. This is where today's release is more wholly focused, with things like visualization, task boards, a more social way of working, and use of HTML5 for the services interface, giving the cloud-delivered features native support and adaptability across devices. There is also a full iOS client on the App Store.

Indeed, this shift to ESM is driving the ServiceNow roadmap. I attended last month’s Knowledge 14 conference in Las Vegas, and came away thinking that this level of services management could be a sticky on-ramp to a cloud relationship for enterprises. Other cloud on-ramps include public cloud infrastructure as a service (IaaS), hybrid cloud platforms and management, business SaaS apps like Salesforce and Workday, and data lifecycle and analytics services. [Disclosure: ServiceNow paid my travel expenses to the user conference.]

Common data model

But as a cloud service, ServiceNow, if it attracts a large clientele outside of IT, could prove sticky too. That’s because all the mappings and interactions for more business processes would be within its suite — with the common data model shared by the entire ServiceNow application portfolio.

The underlying portfolio of third-party business apps and data are still important, of course, but the ways that enterprises operate at the process level — the very rules of work across apps, data and organizations — could be a productivity enhancement offer too good to refuse if they solve some major complexity problems.

Strategically, the cloud provider that owns the process solution also owns the relationship with the manager corps at companies. And if the same cloud owns the relationship with IT processes -- via the same common data model -- well, then, that's where a deep, abiding, and lasting cloud business could long dwell. Oh, and it's all paid for on an as-needed, per-user, OpEx basis.

Specifically, the new ServiceNow capabilities include:
  • Service Creator -- a new feature that allows non-technical business users to create service-oriented applications faster than ever before
  • Form Designer -- a new feature that enables rapid creation and modification of forms with visual drag-and-drop controls
  • Facilities Service Automation -- a new application that routes requests to the appropriate facilities specialists and displays incidents on floor plan visualizations
  • Visual Task Boards -- a new feature to organize services and other tasks using kanban-inspired boards that foster collaboration and increase productivity
  • Demand Management -- a new application that consolidates strategic requests from the business to IT and automates the steps in the investment decision process
  • CIO Roadmap -- a new timeline visualization feature that displays prioritized investment decisions across business functions
  • Event Management -- a new application that collects and transforms infrastructure events from third-party monitoring tools into meaningful alerts that trigger service workflows
  • Configuration Automation -- an application that controls and governs infrastructure configuration changes, enhanced to work in environments managed with Chef data center automation.
For more, see Wright's blog post on today's news.


Wednesday, June 11, 2014

Big data should eclipse cloud as priority for enterprises

Big data is big -- but just how big may surprise you. 

According to a new QuinStreet survey, 77 percent of respondents consider big-data analytics a priority. Another 72 percent cite enhancing the speed and accuracy of business decisions as a top benefit of big-data analytics. And 71 percent of mid-sized and large firms are planning for -- if they are not already active in -- big-data initiatives.

And based on what I'm hearing this week at the HP Discover conference, much of the zeitgeist has shifted from an emphasis on cloud benefits to the more meaningful and long-term implications of big data improvements. 

I recently discussed in a BriefingsDirect podcast how big data's big payoff has arrived as customer experience insights drive new business advantages. But there are also some interesting case studies worth pointing out as we look at the big momentum behind big data. Despite the hype, big data may deliver productivity benefits better, bigger, and earlier than cloud for enterprises and small and medium-sized businesses (SMBs) alike.

Auto racing powerhouse NASCAR, for example, has engineered a way to learn more about its many fans -- and their likes and dislikes -- using big data analysis. The result is that they can rapidly adjust services and responses to keep connected best to those fans across all media and social networks.

BriefingsDirect had an opportunity to learn first-hand how NASCAR engages with its audiences using big data and the latest analysis platforms when we interviewed Steve Worling, Senior Director of IT at NASCAR, based in Daytona Beach, Fla. at the recent HP Discover 2013 Conference in Barcelona. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Listen to what Worling said: “As we launch a new car this year, our Gen-6 Car, what is the engagement or sentiment from our fans? We’ve been able to do some deep analytic research on what that is and get valuable information to be able to hand GM, who launched this car with us this year and say, ‘This is the results of the news’ instantly -- a lot of big data.”

Nimble Storage

Meanwhile, Nimble Storage is leveraging big data and the cloud to produce data performance optimization on the fly. It turns out that high-performing, cost-effective big-data processing helps to make the best use of dynamic storage resources by taking in all the relevant storage activities data, analyzing it and then making the best real-time choices for dynamic hybrid storage optimization.

BriefingsDirect recently sat down with optimized hybrid storage provider Nimble Storage to hear their story on the use of HP Vertica as their data analysis platform of choice. Yes, it’s the same Nimble that this year had a highly successful IPO. The expert is Larry Lancaster, Chief Data Scientist at Nimble Storage Inc. in San Jose, California. The discussion is, again, moderated by me.

Listen to how Nimble gets the analysis at the speed, scale, and cost it requires. Lancaster explains how he uses HP Vertica to drive results:

“When you start thinking about collecting as many different data points as we like to collect, you have to recognize that you're going to end up with a couple of choices on a row store. Either you're going to have very narrow tables, and a lot of them, or else you're going to be wasting a lot of I/O overhead, retrieving entire rows where you just need a couple of fields.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

That was what piqued his interest at first. But as he began to use it more and more at Glassbeam, where he was previously CTO, he realized that the performance benefits you could gain by using HP Vertica properly were another order of magnitude beyond what you would expect just with the column-store efficiency.

“That's because of certain features that Vertica allows, such as something called pre-join projections. We can drill into that sort of stuff more if you like, but, at a high level, it lets you maintain the normalized logical integrity of your schema while having, under the hood, optimized denormalized query performance physically on disk.”
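
Lancaster's point about column stores can be illustrated with a toy sketch. Plain Python lists stand in for storage here, not Vertica itself; the idea is that computing over one field touches only that field's column in a column layout, instead of dragging every field of every row off disk:

```python
# Row store: each record kept whole; fetching one field still scans whole rows.
row_store = [
    {"device": "array-1", "iops": 900, "latency_ms": 2.1, "cache_hit": 0.93},
    {"device": "array-2", "iops": 750, "latency_ms": 3.4, "cache_hit": 0.88},
]

# Column store: each field kept contiguously; fetch only the column you need.
column_store = {
    "device": ["array-1", "array-2"],
    "iops": [900, 750],
    "latency_ms": [2.1, 3.4],
    "cache_hit": [0.93, 0.88],
}

# Average latency, row-store style: every field of every row is touched.
avg_row = sum(r["latency_ms"] for r in row_store) / len(row_store)

# Column-store style: only the latency column is touched.
col = column_store["latency_ms"]
avg_col = sum(col) / len(col)

assert avg_row == avg_col            # same answer, far less data read
assert abs(avg_row - 2.75) < 1e-9
```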

Healthcare industry

The healthcare industry is also turning to big-data analytics platforms to gain insight and awareness for improved patient outcomes. Indeed, analytics platforms and new healthcare-specific solutions together are offering far greater insight and intelligence into how healthcare providers are managing patient care, cost, and outcomes.

To learn how, BriefingsDirect sat down with Patrick Kelly, Senior Practice Manager at the Avnet Services Healthcare Practice, and Paul Muller, Chief Software Evangelist at HP, to examine the impact that big-data technologies and solutions are having on the highly dynamic healthcare industry. I moderated the discussion.

Muller said dealing with large volumes of sensitive personally identifiable information (PII) is not just a governance issue; it's a question of morals and of making sure that we do the right thing by the people who trust us not just with their physical care, but with how they present in society. 

“Medical information can be sensitive when available not just to criminals but even to prospective employers, members of the family, and others,” he said. “The other thing we need to be mindful of is that we've got to not just collect the big data, but secure it. We've got to be really mindful of who's accessing what, when they're accessing it, whether they're accessing it appropriately, and whether they've done something like taking a copy or moving it elsewhere that could indicate malicious intent. It's also critical we think about big data in the context of health from a 360-degree perspective.”

So with all this in mind, how big will big data get? It's not clear. The challenges are as big as big data itself, but the QuinStreet survey suggests respondents are pressing forward, with 45 percent expecting data volumes to grow 45 percent in the next two years alone.


Thursday, June 5, 2014

Perfecto Mobile goes to cloud-based testing so developers can build the best apps faster

We have surely entered a golden age of mobile apps development, not just for app store wares, but across all kinds of enterprise and productivity applications. The notion of mobile-first has altered the development landscape so much that the very notion of software development writ large will never be the same.

With the shift comes a need for speed, but not so much so that security and performance requirements suffer. How to maintain the balance between rapid delivery and quality assurance falls to the testing teams. Into the fray comes cloud-based testing efficiencies.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

Our next innovation case study interview therefore highlights how Perfecto Mobile is using a variety of cloud-based testing tools to help its developers rapidly create the best mobile apps for both enterprises and commercial deployment.

BriefingsDirect had an opportunity to learn first-hand how rapid cloud testing begets better mobile development when we interviewed Yoram Mizrachi, CTO and Founder of Perfecto Mobile, based in Woburn, Mass. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about the state of the mobile development market. How fast is it growing, and who are building mobile apps these days?

Mizrachi
Mizrachi: Everyone is building mobile applications today. We have not gone into a single company that doesn’t have anything on mobile. It’s like what happened on the web 15 years ago. Mobile is moving fast. Even today, we have customers with more transactions on mobile than any other channel that they’re offering, including web or making calls. Mobile is here.

Gardner: So that’s a big challenge for companies that perhaps are used to a development cycle that took a lot longer, where they had more time to do testing and quality assurance. Mobile development seems to be speeding up. Is there a time crunch that they’re concerned about?

Mizrachi: Absolutely. In mobile, there are two factors that come into play. The first one is that everyone today is expecting things to happen much faster. So everyone is talking about agile and DevOps, and crunching the time for a version from a few months, maybe even a year, into a few weeks.

Bigger problem

With mobile, there's a bigger problem. The market itself is moving faster. Looking at the mobile market, you see hundreds of mobile models being launched every year. Apple is releasing many models, and Android vendors are releasing a tremendous number of new models every year. The challenge for enterprises is how to release faster on one side, while still maintaining decent quality across the wide range of devices available.

Gardner: So that’s a big challenge in terms of coming up with a test environment for each of those iterations.

Of course, we’re also seeing mobile first, where they’re going to build mobile, and it's changing the whole nature of development. It's a very dynamic and busy time for developers and enterprises. Tell us about Perfecto Mobile and how you’re helping them to manage these difficult times.

Mizrachi: Yes, it is mobile first. Many of our existing customers, as I mentioned, have more transactions on mobile than anything else. Today, they’re building an interface for their customers starting from mobile. This means there are tremendous issues that they need to handle, starting with automation. If automation was nice to have on traditional web -- with mobile it’s no longer a question. Building a robust and continuous automated testing environment is a must in mobile.

Gardner: Now, we’re talking about not only different targets for mobile, but we’re talking about different types of applications. There’s Android, Apple, native, HTML 5, Web, hybrid. How wide a landscape of types of apps are you supporting with your testing capabilities?

Mizrachi: When you look at the market today, mobile is moving very fast, and you’re right, there are lots of solutions available in the market. One of the things that Perfecto Mobile is bringing to the market is the fact that we support them all. We support native, hybrid applications, Web services, iOS, Android, and any other platform. All of this is provided as a cloud service. We enable our customers to worry a little bit less about the environment and a little bit more about the actual testing.

Gardner: Tell us how you’re doing this? I know that you are a software-as-a-service (SaaS) provider and that the testing that you provide is through a cloud-based model. A lot of organizations have traditionally done their own testing or used some tools that may have been SaaS-provided. How are companies viewing going purely to a SaaS model for their testing with their mobile apps?

Mizrachi: The nice thing about what we do with cloud is that it solves a huge logistical problem for the enterprises. We're providing a managed solution for those physical devices. So it's many things.

One of them is just physically managing those devices and enabling access to them from anywhere in the world. For example, if I'm a U.S.-based company, I can have my workforce and my testing located anywhere in the world without needing to worry about the logistics of managing devices, offshoring, or anything like that. Our customers are utilizing this cloud model so they don't have to change their existing processes when moving into mobile.
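
The logistics point can be sketched briefly. The client class and its methods below are hypothetical stand-ins, not Perfecto Mobile's actual API; the shape of the idea is that one test suite runs remotely against a matrix of cloud-hosted physical devices:

```python
DEVICE_MATRIX = [
    {"os": "iOS", "version": "7.1", "model": "iPhone 5s"},
    {"os": "Android", "version": "4.4", "model": "Galaxy S5"},
]


class CloudDeviceClient:                    # hypothetical, illustration only
    def acquire(self, device: dict) -> str:
        """Check a real device out of the hosted pool; return a handle."""
        return f"{device['model']} ({device['os']} {device['version']})"

    def run_test(self, handle: str, test: str) -> bool:
        print(f"running {test} on {handle}")
        return True                         # stubbed result


def run_suite(client, tests):
    """Run the same suite against every device in the matrix, remotely."""
    for device in DEVICE_MATRIX:
        handle = client.acquire(device)
        for test in tests:
            assert client.run_test(handle, test)


run_suite(CloudDeviceClient(), ["login_flow", "checkout_flow"])
```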

ALM integration

Gardner: And in order to be able to use cloud amid a larger application lifecycle, you must also offer application lifecycle management (ALM) or at least integrate with ALM, source code management, and other aspects of development. How does that work?

Mizrachi: Our approach was to not reinvent the wheel. Looking at the large enterprises, we figured out that the existing ALM solutions in the market, led by HP, are already there, and the right approach is to integrate with or extend them into mobile, not to replace them.

What we have is an extension to the ALM products, in such a way that you, as a customer, don't have to change your existing processes and practices in order to move to mobile. You'll have a lot of issues when moving into mobile, and we don't believe that changing the processes should be one of them.

Gardner: Of course, with HP having some 65 percent of the market for ALM and a major market presence in a lot of other testing and business service management capabilities, it was a no-brainer for you to integrate with HP. But you've gone beyond that. You're using HP yourself for your own testing. Tell us how you came to do that.

Mizrachi: HP has the largest market in ALM, and looking at our customers in Fortune 500 companies, it was really obvious that we needed to utilize, integrate, or extend HP ALM tools in order to provide a market with the best solution.

Internally, of course, we're using the HP suites, including Unified Functional Testing (UFT), Performance Center, and LoadRunner, to manage our own development.

One of the things I'm quite proud of is that we, as a company, have proof of success in the market, with hundreds of customers already using us and tens of thousands of hours of automation being utilized every month.

We have customers with thousands of automated scripts running continuously to validate their applications. It's a competitive environment, obviously, but with Perfecto Mobile, the value that we bring to the table is that we have a proven solution used today by the largest Fortune 500 companies in finance, retail, travel, and utilities -- and they have been using us not for months, but for years.

Gardner: Where do you see this going next? Is there a platform-as-a-service (PaaS) opportunity where we’re going to do not just testing but development and deployment ultimately? If you are in the cloud for more and more of what you do in development and deployment, it makes sense to try to solidify and unify across a cloud from start to finish.

Mizrachi: I’m obviously a little bit biased, but, yes, my belief is that the software development life cycle (SDLC) is moving to the cloud. If you want to go ahead, you don’t really have a choice. One of the major failures in SDLC is setup of the environment. If you don’t have the right environment, just in time, you will fail to deliver regardless of the tool that you have.

Just in time

Moving to the cloud means that you have everything that you need just in time. It's available for you. Someone has to make sure this solution is available with a given service-level agreement (SLA) and all of that. This is what Perfecto Mobile is doing of course, but I believe the entire market is going into that. Software development is moving to the cloud. This is quite obvious.

For our customers, the top insurance and top financial banks customers, healthcare organizations, all of them, security is extremely important, and of course it is for us. Our hosting solution is a SOC 2-certified solution. We have dedicated personnel for security and we make sure that our customers enjoy the highest level of privacy and, of course, security -- physical security, network security, and all the tools and processes in place.

Gardner: And, as we know, HP has been doing testing in the cloud successfully for more than 10 years and moving aggressively in that space early on.

Mizrachi: We're enjoying the fact that our research and development center and HP's research and development center are close by, so the development of the two products is closely aligned. We have weekly or biweekly meetings between the product and R&D teams to make sure that the two tools are moving together.

SDLC, as you mentioned, is a lifecycle. It's not only about one time testing; it's ongoing. And post-deployment, when moving into production, you need to see that what you’re offering to the market on the real device is actually what you expect. That’s extremely important.
As the mobile market matures, organizations are relying more on mobile to assure and increase their revenue. So making sure the mobile offering is up and running and meets the right key performance indicators (KPIs) on an ongoing basis is extremely important. The integration that we've made with BSM utilizes an existing, extremely mature product on the monitoring side and extends it with cloud-based real mobile devices for application monitoring.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Tuesday, June 3, 2014

SAP’s Ariba teams with eBay to improve rogue B2B procurement for buyers, sellers and enterprises

It remains one of the last bastions of enterprise spend over which companies have little or no control. Yet companies have been loath to tamper with how their employees and managers buy ad-hoc goods -- known as indirect spend, shadow purchasing, or "spot buying."

Now, SAP’s Ariba cloud is bringing the best of flexible, innovative spot-buying practices into a more controlled and sanctioned process by teaming with eBay and its B2B marketplace for an integrated, yet dynamic, approach to those indirect purchases not covered by contracts and formal invoicing.

Such scattered, and often unmonitored, spot buying amounts to 15 to 20 percent of a typical enterprise’s total purchasing. And so it provides a huge opportunity for improvement, the type that cloud, big-data analytics, and a marketplace of marketplaces approach can best solve. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

The Ariba Network Spot Buy service was announced today in Orlando at Sapphire by SAP CEO Bill McDermott. “The most intractable CEO issue of our time is complexity,” McDermott said in a keynote address Tuesday. “It’s getting worse and worse. We see a dream for a simpler SAP, and a simpler customer experience.”

Long before the Web, the Thomas Register and vertical industry buyers' catalogs were the mainstays for how many business goods were discovered and procured. These purchases were often made with no contracts, no bids, and no invoices. A material or product was needed, and so it was bought and paid for -- fast.

The Web -- and especially Internet search -- only increased the ability of workers in need to find and buy whatever they required to get their jobs done. Because these buys were deemed "emergency" purchases, or amounted to smaller totals, the rogue process essentially flew under the corporate radar.

Under the new Ariba Network Spot Buy service, major and public online marketplaces are brought into the procurement process inside of SAP and Ariba applications and services. eBay is the first, but Ariba expects to extend the process efficiency to other online B2B markets, said Joe Fox, Vice President of Business Network Strategy at SAP.

Pilot program

The new indirect procurement approach, which will operate as a pilot program between August and December this year, before general availability, will allow those buying through the integrated Ariba Network Spot Buy services to use eBay’s PayPal service to transfer and manage funding, said Fox.

Consistently updated content about B2B goods (services support will come later) will be available to users inside of their existing procurement applications, including Ariba, SAP and later third-party enterprise resource planning (ERP) applications, explained Fox. The users can search inside their Ariba apps, including soon-to-be-delivered mobile versions, alongside of their traditional purchasing app services, he said.

“It’s consumerizing business,” said Fox, adding that users gain convenience and access inside of procurement apps and processes while enjoying ad hoc flexibility and one-click, no-invoice payments from converged markets, catalogs and sanctioned search. Enterprises, on the other hand, gain a new ability to monitor spot buying, analyze it, and provide guidance and curation of what goods should be available to buy — and on what general terms. “It’s the best of Web-based buying but with some corporate control,” said Fox.

Eventually, as multiple marketplaces become seamlessly available to procurement apps users, deeper analysis — via SAP’s HANA big-data infrastructure on which all Ariba apps and cloud services are being deployed — will allow business to determine if redundancy or waste indicates that the sourcing should be done differently.

The net net, said Fox, is that more unmonitored spending can fall under spot buying, even as some spot buying can move to more formal procurement where bids, negotiation and payment efficiencies such as dynamic discounting can play a role. What’s more, analytics can be applied to a whole new area of spend, amounting to higher productivity over many billions of dollars of B2B spending per year worldwide.
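
A minimal sketch of that kind of spend analysis, with made-up purchases: count and total spot buys per item, and flag categories with enough repeat spend to justify formal sourcing. The threshold is an invented heuristic, not anything Ariba has described.

```python
from collections import Counter, defaultdict

# (item, amount in USD) -- hypothetical ad-hoc purchases across one quarter
spot_buys = [
    ("safety gloves", 120.0), ("safety gloves", 95.0), ("safety gloves", 110.0),
    ("badge printer", 600.0),
    ("packing tape", 40.0), ("packing tape", 38.0),
]

counts, totals = Counter(), defaultdict(float)
for item, amount in spot_buys:
    counts[item] += 1
    totals[item] += amount

# Invented heuristic: three or more repeat buys suggests contract sourcing.
for item, n in counts.items():
    if n >= 3:
        print(f"consider formal sourcing: {item} ({n} buys, ${totals[item]:.2f})")
# consider formal sourcing: safety gloves (3 buys, $325.00)
```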

“We are not going to build any marketplaces," said Fox. “We are facilitating access — with controls and filters — to all the public and third-party content from various markets. It’s basically unlimited appropriate content for buyers and seekers.”

These marketplaces will also allow those selling goods and products to gain improved access into the B2B environments (such as the SAP installed base globally) as a new way to go seller-direct with information and content about their wares. New business models and relationships are no doubt bound to develop around that.

Fox said no other business app or procurement services providers have anything like the new offering, one that targets rogue and unmonitored buying by workers using open and aggregated mainstream markets for B2B goods.
