Wednesday, October 31, 2012

BMC's MyIT puts IT and business services into the hands of employees with app store ease

BMC Software this week launched MyIT, an enterprise IT help desk solution that empowers employees to take more personal control over their IT services and to get the right type of help they need -- anytime, anywhere, from any device. 
 
Frustration with company IT departments is a widely shared experience.  Forrester Research reports that just 35 percent of business decision-makers say IT provides “high quality, timely end user support.” What’s more, employees are increasingly circumventing their IT organizations in search of faster IT support and problem resolution.

Moreover, studies show that the friction between users and IT help capabilities saps as much as 20 percent of productivity away from workers. That's a day a week when things go wrong.

“The IT people and non-IT people sometimes talk two different languages, and it’s hard to cross that barrier. In fact, a lot of times there’s this unfounded fear of IT because the users typically don’t get the information they need, or don’t understand it when it is given to them,” said Robert Stinnett, senior analyst at Carfax.

What's largely been missing is a focus on the complete IT help desk process -- from the users' point of view. Too often, help comes in the form of a technology fix for a specific product, leaving users to act as their own integrators, if they can. And many employees find that they manage their personal IT services better using online resources than their IT experience at work provides. [Disclosure: BMC is a sponsor of BriefingsDirect podcasts.]

To improve on this, MyIT delivers a personalized portfolio of technology and services to each employee, including a content locker, mobile corporate app store, and other location-aware services and solutions. MyIT also integrates with BMC’s Remedy IT Service Management suites and will bring the power of the larger Business Service Management portfolio to workers.

The result is a merging of IT provisioning and access functions with the support information and help functions when things get dicey.  It makes a lot of sense to me that these functions overlap and come through similar, user-friendly interfaces and processes.

"This is a game-changing way of presenting data and services to end-users," said Jason Frye, Director, Office of the CTO, at BMC Software.

Gaining productive value

“Today, in a powerful irony, an employee’s personal IT experience is much better than their IT experience at work, yet they’re forced to relinquish the productive value of their personal IT when they go to work,” said Kia Behnia, BMC’s CTO. “Employees want IT organizations that provide a modern 'store front' for IT services and information delivery and a 'genius bar' ability to manage and control the IT services and information they need to do their jobs. IT organizations must respond to this change, and MyIT is the bridge that connects their industrialized infrastructure with the needs and expectations of their fellow employees.”

Among the features and benefits of MyIT:
  • The combination of self-service, process automation and the right employee-facing UI slashes the IT costs associated with resolving trouble tickets – as much as 25 percent in large companies.

  • MyIT allows employees to focus on productivity and value creation, rather than fixing IT problems.  Employees can specify and manage their own personalized IT service and information delivery.  Services and information required by individual employees are immediately updated as new information comes online or an employee’s location changes.
  • MyIT takes an employee’s positive experience with IT in their personal lives and extends it into their work life with immediate access to the right services and context-aware content, unhampered by old-line IT processes.
Speaking about the new solution, Abraham Galan, CIO at energy giant PEMEX, said: “PEMEX will be among the first companies in the world to deliver BMC Software’s MyIT solution – in our case, that means more than 75,000 IT users. Employees are demanding a much better service experience than many IT organizations have been able to provide. PEMEX has been a leader in this area, and we believe that BMC’s MyIT will reduce our cost of service delivery and enable us to compete more effectively, both for markets and for talent.”

The implications of the service also involve the cloud. MyIT can easily be delivered as an on-premises solution or as a SaaS service. This sets the stage for IT to outsource the help desk functions for which it makes sense, while delivering them all through a single front end. MyIT will come with web as well as native mobile apps when the service goes to beta in January. General availability is expected in April.

The timing is great, given the uptick in BYOD interest and use, too. I can also see where a social environment meshes well with MyIT, so that the "wall" interface and community-based help and knowledge are shared to the benefit of all. That also takes the load off of IT while building a better knowledge base.

Lastly, the MyIT approach also fosters more of a two-way street, so that usage, problem, and remediation data are delivered back to the CMDB, the IT system of record, to build a continuous and integrated IT lifecycle capability. I can even imagine more automation and data-driven IT support from the IT systems themselves, an IT help cloud provider, or both, in the coming years.

For more information and to see a video of the live demo, go to http://www.bmc.com/products/myit/it-self-service.html.


Friday, October 26, 2012

It's happening: Hadoop and SQL worlds are converging


This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.

By Tony Baer

With Strata, IBM IOD, and Teradata Partners conferences all occurring this week, it’s not surprising that this is a big week for Hadoop-related announcements. The common thread of announcements is essentially, “We know that Hadoop is not known for performance, but we’re getting better at it, and we’re going to make it look more like SQL.” In essence, Hadoop and SQL worlds are converging, and you’re going to be able to perform interactive BI analytics on it.

The opportunity and challenge of Big Data from new platforms such as Hadoop is that it opens a new range of analytics. On one hand, Big Data analytics have updated and revived programmatic access to data, which happened to be the norm prior to the advent of SQL. There are plenty of scenarios where programmatic approaches are far more efficient, such as dealing with time series data or using graph analysis to map many-to-many relationships.

It also leverages in-memory data grids such as Oracle Coherence, IBM WebSphere eXtreme Scale, GigaSpaces, and others, where programmatic development (usually in Java) proved more efficient for accessing highly changeable data for web applications in which traditional paths to the database would have been I/O-constrained. Conversely, Advanced SQL platforms such as Greenplum and Teradata Aster have provided support for MapReduce-like programming because, even with structured data, a Java programmatic framework is sometimes a more efficient way to rapidly slice through volumes of data.

Until now, Hadoop has not been for the SQL-minded. The initial path was: find someone to do data exploration inside Hadoop, but once you're ready to do repeatable analysis, ETL (or ELT) it into a SQL data warehouse. That's been the pattern with Oracle Big Data Appliance (use Oracle loader and data integration tools) and most Advanced SQL platforms; most data integration tools provide Hadoop connectors that spawn their own MapReduce programs to ferry data out of Hadoop. Some integration tool providers, like Informatica, offer tools to automate parsing of Hadoop data. Teradata Aster and Hortonworks have been talking up the potential of HCatalog, in actuality an enhanced version of Hive with RESTful interfaces, cost optimizers, and so on, to provide a more SQL-friendly view of data residing inside Hadoop.

But when you talk analytics, you can’t simply write off the legions of SQL developers that populate enterprise IT shops. And beneath the veneer of chaos, there is an implicit order to most so-called “unstructured” data that is within the reach of programmatic transformation approaches that, in the long run, could likely be automated or packaged inside a tool.

At Ovum, we have long believed that for Big Data to cross over to the mainstream enterprise, it must become a first-class citizen with IT and the data center. The early pattern of skunk-works projects, led by elite, highly specialized teams of software engineers from Internet firms to solve Internet-style problems (e.g., ad placement, search optimization, customer online experience, etc.), does not reflect the problems of mainstream enterprises. Nor is the model of recruiting high-priced talent to work exclusively on Hadoop sustainable for most organizations. It means that Big Data must be consumable by the mainstream of SQL developers.

Making Hadoop more SQL-like is hardly new

Hive and Pig became Apache Hadoop projects because of the need for SQL-like metadata management and data transformation languages, respectively; HBase emerged because of the need for a table store to provide a more interactive face – although, as a very sparse, rudimentary column store, it does not provide the efficiency of an optimized SQL database (or the extreme performance of some columnar variants). Sqoop in turn provides a way to pipeline SQL data into Hadoop, a use case that will grow more common as organizations look to Hadoop to provide scalable and cheaper storage than commercial SQL. While these Hadoop subprojects did not exactly make Hadoop look like SQL, they provided the building blocks that many of this week’s announcements leverage.
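
To make the Hive building block concrete, here is a minimal HiveQL sketch -- the table, columns, and HDFS path are hypothetical -- of how a SQL-minded analyst can lay a table definition over raw files already sitting in Hadoop and then query them, with Hive compiling the query into MapReduce jobs behind the scenes:

    -- Hypothetical example: expose raw, tab-delimited log files in HDFS as a Hive table.
    CREATE EXTERNAL TABLE clickstream (
      event_time STRING,
      user_id    STRING,
      url        STRING
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION '/data/raw/clickstream';

    -- A familiar SQL aggregate; Hive translates it into MapReduce, so the analyst writes no Java.
    SELECT url, COUNT(*) AS hits
    FROM clickstream
    GROUP BY url
    ORDER BY hits DESC
    LIMIT 10;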

Progress marches on

One train of thought is that if Hadoop can look more like a SQL database, more operations could be performed inside Hadoop. That’s the theme behind Informatica’s long-awaited enhancement of its PowerCenter transformation tool to work natively inside Hadoop. Until now, PowerCenter could extract data from Hadoop, but the extracts would have to be moved to a staging server where the transformation would be performed for loading to the familiar SQL data warehouse target. The new offering, PowerCenter Big Data Edition, now supports an ELT pattern that uses the power of MapReduce processes inside Hadoop to perform transformations. The significance is that PowerCenter users now have a choice: load the transformed data to HBase, or continue loading to SQL.

There is growing support for packaging Hadoop inside a common hardware appliance with Advanced SQL. EMC Greenplum was the first out of the gate with DCA (Data Computing Appliance), which bundles its own distribution of Apache Hadoop (not to be confused with Greenplum MR, a software-only product that is accompanied by a MapR Hadoop distro).

Teradata Aster has just joined the fray with Big Analytics Appliance, bundling the Hortonworks Data Platform Hadoop; this move was hardly surprising given their growing partnership around HCatalog, an enhancement of the SQL-like Hive metadata layer of Hadoop that adds features such as a cost optimizer and RESTful interfaces that make the metadata accessible without the need to learn MapReduce or Java. With HCatalog, data inside Hadoop looks like another Aster data table.

Not coincidentally, there is a growing array of analytic tools that are designed to execute natively inside Hadoop. For now they come from emerging players like Datameer (providing a spreadsheet-like metaphor, and which just announced an app store-like marketplace for developers), Karmasphere (providing an application development tool for Hadoop analytic apps), and a more recent entry, Platfora (which caches subsets of Hadoop data in memory with an optimized, high-performance fractal index).

Yet, even with Hadoop analytic tooling, there will still be a desire to disguise Hadoop as a SQL data store, and not just for data mapping purposes. Hadapt has been promoting a variant where it squeezes SQL tables inside HDFS file structures – not exactly a no-brainer as it must shoehorn tables into a file system with arbitrary data block sizes. Hadapt’s approach sounds like the converse of object-relational stores, but in this case, it is dealing with a physical rather than a logical impedance mismatch.

Hadapt promotes the ability to query Hadoop directly using SQL. Now, so does Cloudera. It has just announced Impala, a SQL-based alternative to MapReduce for querying the SQL-like Hive metadata store, supporting most but not all forms of SQL processing (based on SQL 92; Impala lacks triggers, which Cloudera deems low priority). Both Impala and MapReduce rely on parallel processing, but that’s where the similarity ends. MapReduce is a blunt instrument, requiring Java or other programming languages; it splits a job into multiple, concurrent, pipelined tasks in which each step reads data, processes it, writes it back to disk, and then passes it to the next task.

Conversely, Impala takes a shared-nothing, MPP approach to processing SQL jobs against Hive. Using HDFS, Cloudera claims roughly 4x performance against MapReduce; if the data is in HBase, Cloudera claims performance multiples up to a factor of 30. For now, Impala only supports row-based views, but with columnar storage (on Cloudera’s roadmap), performance could double. Cloudera plans to release a real-time query (RTQ) offering that, in effect, is a commercially supported version of Impala.
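
The practical appeal is that an interactive BI query submitted through Impala reads like ordinary SQL against tables already registered in the Hive metastore. A hypothetical sketch (the table and columns are invented for illustration) of the kind of aggregate an analyst could run without touching MapReduce:

    -- Hypothetical sales table defined in the Hive metastore.
    SELECT region,
           product_line,
           SUM(revenue) AS total_revenue,
           COUNT(*)     AS transactions
    FROM sales
    WHERE sale_date BETWEEN '2012-01-01' AND '2012-09-30'
    GROUP BY region, product_line
    ORDER BY total_revenue DESC
    LIMIT 20;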

By contrast, Teradata Aster and Hortonworks promote a SQL MapReduce approach that leverages HCatalog, an incubating Apache project that is a superset of Hive that Cloudera does not currently include in its roadmap. For now, Cloudera claims bragging rights for performance with Impala; over time, Teradata Aster will promote the manageability of its single appliance, and with the appliance has the opportunity to counter with hardware optimization.

The road to SQL/programmatic convergence

Either way – and this is of interest only to purists – any SQL extension to Hadoop will be outside the Hadoop project. But again, that’s an argument for purists. What’s more important to enterprises is getting the right tool for the job – whether it is the flexibility of SQL or raw power of programmatic approaches.

SQL convergence is the next major battleground for Hadoop. Cloudera is for now shunning HCatalog, an approach backed by Hortonworks and partner Teradata Aster. The open question is whether Hortonworks can instigate a stampede of third parties to overcome Cloudera’s resistance. It appears that beyond Hive, the SQL face of Hadoop will become a vendor-differentiated layer.

Part of the convergence will involve a mix of cross-training and tooling automation. Savvy SQL developers will cross-train to pick up some of the Java or Java-like programmatic frameworks that will be emerging. Tooling will help lower the bar, reducing the degree of specialized skill necessary.

And for programming frameworks, in the long run, MapReduce won’t be the only game in town. It will always be useful for large-scale jobs requiring brute-force, parallel, sequential processing. The emerging YARN framework, which deconstructs MapReduce to generalize the resource management function, will provide the management umbrella for ensuring that different frameworks don’t crash into one another by trying to grab the same resources. But YARN is not yet ready for primetime – for now it only supports the batch-job pattern of MapReduce. And that means that YARN is not yet ready for Impala, or vice versa.

Of course, mainstreaming Hadoop – and Big Data platforms in general – is more than just a matter of making it all look like SQL. Big Data platforms must be manageable and operable by the people who are already in IT; those people will need some new skills and will have to grow accustomed to some new practices (like exploratory analytics), but the new platforms must also look and act familiar enough. Not all announcements this week were about SQL; for instance, MapR is throwing down a gauntlet to the Apache usual suspects by extending its management umbrella beyond the proprietary NFS-compatible file system that is its core IP to the MapReduce framework and HBase, making a similar promise of high performance.

On the horizon, EMC Isilon and NetApp are proposing alternatives promising a more efficient file system but at the “cost” of separating the storage from the analytic processing. And at some point, the Hadoop vendor community will have to come to grips with capacity utilization issues, because in the mainstream enterprise world, no CFO will approve the purchase of large clusters or grids that get only 10 – 15 percent utilization. Keep an eye on VMware’s Project Serengeti.

They must be good citizens in data centers that need to maximize resources (e.g., through virtualization and optimized storage); must comply with existing data stewardship policies and practices; and must fully support existing enterprise data and platform security practices. These are all topics for another day.

This guest post comes courtesy of Tony Baer's OnStrategies blog. Tony is senior analyst at Ovum.


Monday, October 22, 2012

Heartland CSO instills novel culture that promotes proactive and open responsiveness to IT security risks

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the next edition of the HP Discover Performance Podcast Series. Our latest discussion examines how the culture of security -- and the openness and therefore responsiveness about it -- can have a huge beneficial impact on organizations.

We'll be learning from the example of Heartland Payment Systems, and how they moved rapidly from a security breach to an overall improved security stance largely thanks to embedding a security-as-culture position across their operations and into their business strategy.

Join co-host Raf Los, Chief Security Evangelist at HP Software, and special guest John South, Chief Security Officer at Heartland Payment Systems, based in Princeton, New Jersey. The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You've been at Heartland Payment Systems for several years now, but you got there at a pretty tough time. Why don't you tell us a little about what was going on at Heartland when you arrived?

South: Certainly 2009, when I joined, was a year of turmoil and anxiety, because the company had just gone through a breach. The forensics had been completed. We understood how the breach had taken place, and we entered a period of working out not only how to remediate and contain that breach and future breaches, but also how to make that security consistent and reliable in the future.

Cultural problem

It was not only a technical problem, but it became very quickly a business and a cultural problem that we also had to solve. As we took the elements of the breach and broke them down, we were able to figure out technically the kinds of controls that we could put in place that would assist in shortening the gap between the time we would see a future breach and the time we were able to respond.

More importantly, as you pointed out, it was developing that culture of security. Certainly, the people who made it through the breach understood the impact of the breach, but we wanted to make sure that we had sustainability built into the process, so that people would continue to use security as the foundation.

Whether they were developing programs, or whatever their aspect in their business, security would be the core of what they looked at, before they got too far into their projects. So, it's been an interesting couple of years for Heartland.

Gardner: Just for background for our listeners, in early 2009, something on the order of 94 million credit card records were stolen due to a SQL injection inserted into your data-processing network. I’d also like to hear more about Heartland Payment Systems, again for those of our listeners who might not know. I believe you’re one of a handful of the largest credit card processors in the U.S., if not the world.
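
For readers unfamiliar with the attack class mentioned above, here is a generic, simplified illustration of SQL injection -- it is not the specifics of the Heartland attack, and the table and column names are invented. When an application builds a query by concatenating user input into the SQL text, attacker-supplied input can rewrite the query's logic:

    -- The query the application intends to run for a login check:
    SELECT account_id FROM users
    WHERE username = 'alice' AND password = 'secret';

    -- If the password field is submitted as:  ' OR '1'='1
    -- the concatenated text becomes a different query that matches every row:
    SELECT account_id FROM users
    WHERE username = 'alice' AND password = '' OR '1'='1';

    -- Parameterized (prepared) statements keep user input as bound data rather than
    -- executable SQL, which closes this particular hole.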

South: We are. Right now, we’re number six in the US, and with consolidation and other aspects, that number floats around a bit. We're basically the pipeline between merchants and the banking system. We bring in payments from credit cards and debit cards. We handle payroll, micropayments, and a number of other payment channels that we can then move from the source, the merchant, to the appropriate bank that needs to handle that payment.

It's a very engaging process for us, because we’re dealing with card brands on one side, banks on another, and the merchants and their customers. But the focus for Heartland has always been that our merchants are number one for our company.

That's the approach we took to the breach itself, as you may know. We’ve been very open with the way we work with our merchants. In fact, we established what we call The Merchants Bill of Rights. That was part of the culture, part of the way that our executive team thought all along. So, the way they handled the breach was just an extension of the way they always thought about our merchants and our customers themselves.

Gardner: Raf Los, we’ve seen a variety of different ways companies have reacted to breaches of this magnitude, and even for things smaller and everything in between. Most of the time, the reaction is to put up more barriers, walls, or a perimeter, not only around the systems, but around the discussion of what happens to their systems when security can become an issue. So, why is Heartland’s case different, and why do you think it's interesting and perhaps beneficial in how they’ve handled it?

Los: Dana, first, there are two ways that you can take a monumental impact like this to your business. You can either be negative about it, and in some cases, try to minimize it, keep the media from it, keep your customers from getting the full information, and try to sweep it under the rug.

In some cases, that even works. Maybe the world forgets about it, and you get a chance to move on. But, that's one of those karmic things that comes back to bite you. I fully believe that.

Phoenix transformation

What Heartland did is the poster child for the phoenix transformation. John touched on an interesting point earlier. For them, it was a focus on the merchants, or their customers. The most important thing wasn’t the fact that they had a data breach, but it was the fact that a lot of their merchants were impacted. The people they did business with were impacted. Their reputation was impacted.

Their executives took a stand and said, "Look, we can do this the easy way, try to get out of it and scoot, and pretend it didn’t happen. Or, we can take responsibility for it, step up, and take the big kick in the pants in the short run. But in the long term, we'll both earn the industry’s respect, the respect of our customers, and come out of it with a transformation of the business into a culture where, from the people that lead the company down to the technologist, security is pervasive." That's gutsy, and now we know that it works, because they did it.

Gardner: It's my understanding that it only took them a couple of months after this breach to issue a statement about being in compliance with the Payment Card Industry Data Security Standard (PCI DSS) and returning to Visa's list of validated service providers. So you had a fairly quick response to the major issues.

I'd like to hear more, John, about how the culture has changed since that time, so that others might learn from it, not only the openness benefits, but how the culture of security itself has changed?

South: Dana, you made a very good point that going back to becoming compliant under the eyes of PCI and the card brands took six weeks. I have to plug the guys in the company for this, because that was six weeks of some people working 20-22 hours a day to bring that about.

There was a huge effort, because it was important for us and important for our customers to be able to have the reliance that we could stem this thing quickly. So, there was a lot of work in that period of time to bring that together.

That also helped build that culture that we’re talking about. If you look at the two paths that Raf had put out there, one being we could have obfuscated, just hidden the fact, tried to run from the press, and been very evasive in our wording. That may have worked. And it may not have worked. But, for us, it wasn’t an option, and it wasn’t an option at all in the process.

For us, it was part of the executive culture to be very open and the people who participated in the breach understood that. They knew the risk and they knew that it was a time of great distress for them to be able to handle the breach and handle the pressure of having been breached.

What that did for our customers is build a strong reliance upon the fact that we took this very seriously. If we had taken this as “let's hide the fact, let's go ahead and fix the problem and see what we can get away with,” it would have been the wrong message to carry to our people to begin with. It would have said to our people that it's okay if we go ahead and fix the problem, but it's just a fix. Fix it and walk away from it.

For us, it became more that this is something we need to take responsibility for. We took that responsibility. As we say, we put on the big-boy pants, and even though we had the financial hit in the short run, the benefits have been wonderful from there. For instance, during the course of the breach, our attrition was very, very low. Our customers realized by our being that open that we were seriously involved in that process.

Honesty and openness

Los: John, that speaks perfectly to the fact that honesty and openness in the face of a failure like that, a big issue, is the thing to do. If I found that something like that happened and the first thing you told me was, "It's no big deal. Don’t worry about it," I'd get suspicious. But if you told me, "Look, we screwed up. This is our fault. We're working to make it better. Give us some time, and it will be better," as a customer, I'm absolutely more apt to give you that benefit of the doubt.

In fact, if you deliver on that promise long-term, now you’ve got a really good relationship. I hope by now we've realized, most people have realized, that security is never going to reach that magical utopian end state. There is no secure.

We provide the best effort to the alignment of the business and sometimes, yes, bad things happen. It's the response and recovery that’s absolutely critical. I don't want to beat a dead horse, but you guys did a fantastic job there.

South: Thank you, Raf, and you hit a really important point. Security is not that magic pill. We can't just wave a security wand and keep people out of our networks. If someone is motivated enough to get into your network, they're going to get into your network. They have the resources, the time, the money, and, in many cases, nation-state protection.

So they have the advantage in almost every case. This goes back into the concept of asymmetric warfare, where the enemy has a great deal more power to execute their mission than you may have to defend against it. For us, it's a message that we have to carry forward to our people and to our customers -- that our effort is to try to minimize the time from when we see an attempt at a compromise to the time we can react to it.

Los: I took that note earlier, because you said that a couple of times now, and I'm intrigued by "mean time to discovery" (MTTD). I think that’s very meaningful, and I don’t know how many organizations really and truly know what their MTTD is, whether it's in applications -- how long it takes to find a bug in the wild once it’s made it past your release cycle -- or how long it takes to discover an intrusion.
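
As an aside on quantifying MTTD: a minimal sketch, assuming a hypothetical incident-tracking table that records when each confirmed intrusion began and when it was detected (DATEDIFF here follows the MySQL convention of returning whole days; the exact function varies by database):

    -- Hypothetical table: one row per confirmed incident.
    SELECT AVG(DATEDIFF(detected_at, occurred_at)) AS mean_time_to_discovery_days
    FROM security_incidents
    WHERE detected_at IS NOT NULL;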

That's extremely important, because it speaks to the active defenses and the way we monitor and audit, because audit isn't just a dirty word that says somebody walks through, checks a couple of boxes, and walks out.

I mean audit in the true sense. Someone goes through and looks at systems, does some critical thinking, and does some deep analysis. Because, at the end of the day -- and John, I think, will probably be the first to say this -- systems have gotten so complex to maintain right now. Real control on this kind of sprawl is virtually impossible. Forget how much budget you can have. Forget how many staff you can hire. It's just not possible with the way the business moves and the way technology speeds along.

The rational way to look at that is to have a team that, every so often, takes a look at a system with the intent to fully audit it. Let's figure out what's going on -- what's really going on -- in this platform.

South: That's one of the cultural changes that we've made in the company. I have the internal IT audit function also, which is very nontraditional for a company. A lot of times, the audit function is buried in an internal audit group that is external to the operation. That makes it more difficult for them to do a truly effective audit of IT security.

Separate and independent

I have an audit group that stands separate and independent of IT, but yet is close enough with IT that we can go in and effectively conduct the audits. We do a large number of them a year.

What's important about that audit function and what positively influences the effectiveness of an audit is that you go into the meeting with, say, a technical group or a development group that you want to audit, with a positive, reinforcing attitude -- an attitude of not only finding the issues, but also of a willingness to help the group work out its solutions.  If you go into the audit with the attitude that “I am the auditor. I'm here to see what you are doing,” you're going to evoke a negative reaction. 

My auditors go in with a completely different attitude. "I'm here to help you understand where your risks are." That whole concept of both moving from an adversarial to a proactive response to auditing, as well as having a very proactive engagement with security, is what's really made a big cultural shift in our company.
Gardner: Help me better understand how we get companies, for those who are listening, to shift perceptions about security.

South: That’s always a strong question that has to be put to your executive team. How do we shift the understanding and the culture of security? In our case, our executive team realized that one of the fundamental things that was important for security of our company as a whole was that security had to be baked into everything that we did.

So we've taken that shift. The message that I take out to my people, and certainly to the people who are listening to this podcast, is that when you want to improve that security culture, make security the core of everything that takes place in a company. So whether you're developing an application or working in HR, whether you're the receptionist, it doesn't matter. Security has to be the central principle around which everything is built.

Core principle

If you make security the adjunct to your operation, like many companies do, where security is buried several layers down in the IT department, then you don't have the capability of making it the fundamental and core principle of your company. Again, it doesn't matter who you are in a company, you have some aspect of security that is important to the company itself.

For us, the message that we're trying to get out to people is to wrap everything you do around the security core. This is really big, particularly in the application world. If you look at many other traditional ways that people do application development, they'll develop a certain amount of the code and then they'll say, "Okay, security, go check it."

And of course, security runs their static and dynamic code analysis and they come back with a long list of things that need to be fixed, and then that little adversarial relationship starts to develop.

Los: John, as you're talking about this, I think back. Everybody's been there in their career and made mistakes. I'll readily admit that this is exactly what I was doing about 12 or 13 years ago in my software security role.

I was a security analyst. The application would be ready to go live. I'd run a scan, do a little bit of testing and some analysis on it, and generate a massive PDF report. Then I'd either walk it over to somebody’s cube, drop it off, walk away, and tell them to go fix their stuff, or email it -- virtually lob it over the wall.

There was no relationship. It's like, "I can't believe you're making these mistakes over and over. Now go fix these things." They'd give me that "I am so confused, I don’t know what you're talking about" look. Does it ever get fixed? Of course not.

South: And, Raf, the days of finishing a project on Thursday, turning it over to security, saying, "This is going live on Friday," are long gone. If you're still doing that, you're putting your company at risk.

Gardner: Perhaps, Raf, for those of us who are in the social media space, doing observations and being evangelists, there is a necessary shift, too, in how we react to these security breaches in the media.

Rather than have a scoreboard about who screwed up, perhaps it's a better approach to say who took what problems they had and found a quick fix and limited the damage best. Is there a need for a perception shift in terms of how security issues in IT and in business in general are reported on and exposed?

Los: I absolutely believe that, rather than shaming, it's always better to lead by example and hold those who do a good job in higher esteem, because then people will want to aspire to be better. I fundamentally believe that human beings want to be better. It's just that we don’t always have the right motivations. And if your motivation is, "I don’t want to be on that crap list," for lack of a better term, or "I don’t want to be on that worst list," then you'll do the bare minimum to not be on that worst list.

People will respond

If there's a list of top performing security companies or top performing companies that have the best security culture, whatever you want to call it, however you want to call that out, I firmly believe people will respond. By nature, people and companies are competitive.

What if we had an industry banquet and we invited everybody from all the heads of different industries and said, "Nominees for best security in an industry are, finance, health care, whatever?" It would be a show like that, or something.

It wouldn't have to be glitzy, but if we had some way of demonstrating to people that your customers genuinely care about you doing a good job -- here are the people who really do a good job; let's hold them up in high esteem rather than shame the bad ones -- I think people will aspire to be better. This is always going to work going forward. The other way just hasn’t worked. I don’t see anything changing.

South: I think that's the right direction, Raf. We still have some effort to go in that direction. I know of one very, very large company, and one of their competitors had been breached just recently. So I called a contact I had in their security group and passed on the malware. I said you might want to check to see if this is in your organization.

He said thanks, and I called him up a couple of days later and asked, "How did it go?" He said, "Upper management kind of panicked for a little bit, but I think everything has settled down now." This was code for "they didn't do much."

We have some progress still to make in that direction, but I think you're absolutely correct that the more these people see successful examples of how you can deal with security issues, the more it's going to drive that cultural change for them. Too often they see the reverse of that and they say, "Thank God that wasn’t us."

Gardner: Is there something that comes together between what's new and interesting about the technologies that are being deployed to improve posture around security and that might aid and abet this movement toward openness and the ability to be direct, and therefore more effective in security challenges?

Los: One of the keys is the pace of change in technology. For a number of years, the technology in our personal lives has tended to lead the technology in the business world.

So the laptop or desktop you had at home was usually an order of magnitude better than what was sitting on your desk at the office, and your corporate phone would be an ancient clamshell, while you had your smartphone in your pocket for home use.

Fewer devices

What's starting to happen is people are getting annoyed with that, and they want to carry fewer devices. They want to be able to interact more and organizations want maximum productivity.

So those worlds are colliding, and technology adoption is starting to become the big key in organizations to figure out what the direction is going to be like, what is the technology trend going to be. Then, how do we adapt to it and then how do we apply technology as a measure of control to make that workable? So understand technology, understand direction, apply policy, use technology to enforce that policy.

South: And it's finding what elements of technology are relevant to what you're doing. You see a large push today on bring your own device (BYOD), and the technologies that are making almost a commodity of the ability to handle information inside your company.

The biggest challenge that we are facing today is being able to make relevant technology decisions, as well as to effectively apply that new technology to our organizations. It's very simple, for instance, to put a product like an iPad onto your network and start using it, but is it effectively protected, and have you thought about all of the risks and how to manage those risks by putting that device out there?

Technology is advancing, as it always does, at a very high clip, and business has to take a more measured response to that, yet be able to effectively provide something for its employees, as well as its customers, to be able to take advantage of the new technologies in today's world.

That's what you're seeing a lot in our customer base and the payments space in mobile technologies, because that's the direction that a lot of the payment streams are going to go in the future, whether it be contact or contactless Europay, Mastercard, and Visa (EMV) cards or phones that have near field communication (NFC) on them. Whatever that direction might be, you need to be responsive enough to be able to be in that market.

As you said, it's technology that’s driving something of the business itself, as well as the business and the culture in the company being able to find ways to effectively use that technology.

Gardner: I'm afraid we will have to leave it there. Please join me in thanking our co-host, Raf Los, Chief Security Evangelist at HP Software, and our special guest, John South, Chief Security Officer at Heartland Payment Systems. You can gain more insights and information on the best of IT Performance Management at http://www.hp.com/go/discoverperformance.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Thursday, October 18, 2012

VMware-powered cloud adoption delivers slew of data and performance benefits for Revlon's ERP, says CIO

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

It's been a year since we first spoke to Revlon at the VMworld Conference in 2011. We heard then about their world-class private cloud, and how as an early adopter Revlon gained many benefits from aggressively embracing the cloud.

So we decided to go back and see how things have progressed at Revlon, how their comprehensive cloud has matured, and how the benefits from aggressively embracing the cloud have evolved into positive consequences for their data architecture.

This special BriefingsDirect podcast then comes to you from the recent 2012 VMworld Conference in San Francisco, part of a series on cloud computing and software-defined datacenter infrastructure developments.

To fill us in on Revlon's latest IT developments, we're joined by David Giambruno, Senior Vice President and CIO of Revlon. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Now that you've been doing private cloud as an early adopter and at an advanced state, what has it been doing for you?

Giambruno: We have a couple of fronts. The biggest, which you alluded to, was the unintended consequences, and we've had a couple of them. When you think of Revlon, we're global and we have a huge application portfolio. As we put everything on our cloud and are using our cloud, we realized that all of our data sits in one place now.

So when you think of big-data management, we've been able to solve the problem by classifying all the unstructured data in Revlon and we did that efficiently. We still joke that it's like chewing glass. You've got to go through this huge process.

But, we have the ability to look at all of our data, a couple of petabytes, in the same place. Because the cloud lets us look at it all, we can bring up all of Revlon in our disaster recovery (DR) test environments and have our developers work with it at no cost. We have disconnected that cost and effort.

Once we realized we had this opportunity to start working on our big data, the other unintended consequence was our master data model. On top of our big data, we were able to efficiently and effectively build a global master data model.

Chief directive

At Revlon, one of our chief directives from the executive team is to globalize. So we're collapsing 21 enterprise resource planning (ERP) systems into one. The synergies of having this big-data structure and this master data model are changing how we deploy a global ERP. Loading that data is now just a few clicks of a button. It's highly automated. We're not ETLing data and facing all the old challenges. We're not copying environments. Everything is available to us and it’s constantly updating.

At Revlon, we replicate all of our cloud activity every 15 minutes. You've seen on VMware, where we had disasters and we were able to recover a country quickly and effectively. That replication process and constantly updating allows us to update all these instances at no cost and with little effort.

You have to build the structure and you have to go through that process, but once it's done, it's now automated and you march that out. It's the ability to quickly and effectively manage all your big data coming in. For us, it's point of sale -- roughly 600 million-plus attributes.

For us to provide information to the business teams, to build good products, to sell good products, is a key differentiator in helping them.

Gardner: Why did you do the data the way you did that’s now led to this cloud architecture benefit?

Giambruno: We started the cloud architecture, and I always joke it's like having a Ferrari that you can take out for a spin. When we were building it, we didn't realize all the things we can do.

So it's really that je ne sais quoi, the little thing that, as you see it, you realize all these things you can do. You are always planning to do those things for the business, because that’s what we do, but it's how you do them.

I've always said the cloud is what the local area network (LAN) was 15 or 20 years ago. The LAN changed the way people dealt with information and applications, and the cloud is doing the same thing. Actually, it's on a bigger geometry, because it really eliminates geography and provides the ability to move data and information around.

We live in the information age, and to me, the most important thing is delivering information to the business teams. That's what we see as one of the big next evolutions in our cloud -- making information out of all this data and delivering that on whatever device they want to be on, wherever they are, securely and effectively, in a context that they can understand. Not in a way that we can understand, but in a way that they can consume.

Gardner: Understanding a bit about how you did this chronologically, for those that are still in the process of getting there with private cloud, did you focus on the data issues first and then application and workloads? Did you do them simultaneously? Is there some lesson to learn about how you did it in an orderly sense that others could benefit from?

First things first

Giambruno: I live in this simple world of crawl, walk, run. Whenever I say that my team starts cringing, because they think, "Oh, there he goes again." But it was literally fix the infrastructure first and then, from an application and data perspective, the low-hanging fruit, the file servers.

It's this progressive capability of learning how to do things -- low risk to high risk. What you end up doing is figuring out how to effectively do those things, because not only do you manage the technology, but you have to manage the people and the process changes, and all those things that have to happen.

But all ships have to rise at the same time. So it's the ability to run these concurrent streams. From a management perspective, it's how not to get overwhelmed and how to take advantage of the technology, the automation, and the capabilities that come along with that to free up work that you used to do and put it towards making the change.

I'm a big believer in not doing big bang. So it's not like, tomorrow we're going to have a private cloud. Throw the switch. It's the small incremental changes that help organizations adapt. It's a little bit every day. You look back, and at the end of six months or a year, you realize how much we've done.

It's been the same in Revlon. I constantly take my team and sit them down and say, "Look what we've done. You're in the forest. You're in the trees. It's time to look at the forest. Step back and look what you guys have done." Because it's a little bit every day, and you don't realize the magnitude or the mass, when you have a team of people doing something every day and going forward.

Gardner: For those of our listeners who may not be that familiar with Revlon, at least your IT operations, give us a sense of the scale -- the number of applications, size of data, just so we better appreciate the task that you've accomplished.

Giambruno: I usually quantify it by our cloud, because those are the simple metrics and we seem to be pretty steady, so the metrics are holding. Our cloud makes about 14,000 transactions a second. Our applications move around Revlon 15,000 times a month with no human intervention. Our change rate of data is between 17 and 30 terabytes a week.

We have roughly, depending on the ups and downs, between 97 and 98 percent of our total compute on our internal cloud. We have some AS/400s and I think one UNIX box left. But that's really the scale of what we do.

All of our geographies are around the world. We sit on every continent except Antarctica. We have a global manufacturing facility in Oxford, North Carolina, that produces 72 percent of everything we sell in the world. We have some other factories around the world. And we are delivering north of six nines uptime.

Gardner: An unintended consequence was a benefit for how data can be accessed and consumed, but a lot of people are hoping for consequences around cost. Is there something going on now a year later vis-à-vis your total cost, or maybe even the cost of data? Maybe you have been able to reduce the footprint of data, even while you have accessed more and more quickly. What's the cost equation?

Cost avoidance and savings

Giambruno: There is a history there, as we talked about. We have given back north of $70 million in cost avoidance and cost savings, and we're continuously figuring out how to use everything. My team is highly technical, so I call it turning screws. We are always turning screws on how to more effectively manage everything.

We're always looking at how to not spend money. It's simple. The more money we don't spend, the more that R&D, marketing, and advertising have to grow our company. That's the key to us.

We leverage capability, so one of the big things this year also was our mobile business intelligence (BI) capability. We've disconnected most of the costs for things in Revlon around IT. We only manage at a top line.

But if someone wants to try a new application, generally by the time the business team gets in a meeting with us, it's no cost. We have servers set up. We have the environment. We have the access control set up for the vendor to come in and set everything up. So that's still ongoing.

We have got this huge mobile BI initiative, which is delivering information to business teams and contacts. That's the new thing where we have disconnected the cost. We're not laying out money for it, and we're just now executing around that.

For me, the cost equation is more and more around cost avoidance and keeping on extending the capability of that cloud.

Gardner: And it seems as if those costs are more of an operational ongoing nature, predictable, recurring, easy to budget, rather than those big-bang types of cost?

Giambruno: Very, very predictable. For the past three years, we have had the same line items. While data keeps growing, we're still figuring out how to manage things better and better in the background, because the cloud generates a lot of data, which we want it to do. Data, information, and how we use them is the competitive weapon.

This cost avoidance, or cost containment, while extending capability, is the little magical thing that happens, that we do for the business. We're very level in our spend, but we keep delivering more and more and more.

Gardner: Because we are here at VMworld in San Francisco, tell me a little bit about the VMware impact for the cloud. How do you view the VMware suite and portfolio vis-à-vis the impact it has had on your maturation and benefits?

Very advanced

Giambruno: I want to use technology as a competitive weapon. My team masters it. We own it intellectually.

For us, it's where VMware is going. We're always pushing VMware. "What have you got next? What have you got next?" It's up to us to take capability and extend it. I don't mean to be flip or narcissistic or anything like that, but we've got that piece under control. It's about how do you do it better.

Every time there's an upgrade, what features and functionalities can we then take advantage and translate that into a business use? When I say business use, we tell the business teams, "Here's a new capability. You can do this and keep changing the structure of operating."

The new version of vSphere 5.1 came out, and we're in the process of exposing our internal cloud to our vendors and suppliers. We're eliminating all these virtual private networks (VPNs). It's about how we change and how IT operates, changing the model. For me, that's a competitive advantage, and it's the opportunity to reduce structural cost and take people away from managing firewalls.

We did that. We got that. Now we're going to do this a different way. We're going to securely expose to our vendors the information they need, so they can interact with it easily and effectively.

There's even the idea of taking a portion of our apps and presenting those to our suppliers on their iPads and their iPhones so they can update our data and our systems much more cleanly and effectively. We can get the synergies and effectiveness, have our partners enjoy working with us, and make it easy on them as well. It's always a quid pro quo: "It's Revlon. They're good to deal with. Let’s help them."

It's how you create those partnerships and effectiveness to get business done better. It makes it easier on the business teams, contracts go better, and it's cascading. I call it the spiral up effect, changing the way you operate to spiral up and take advantage of capabilities.

Gardner: Is that something we could classify as another unintended consequence -- a benefit that you have been able to enjoy these efficiencies around cloud internally for your enterprise, but now you are taking it to an extended enterprise benefit?

Giambruno: Absolutely. Look at the security complexity around VPNs and managing that and the audits. That's so much fun. Changing the model is really the opportunity, making it easier for the auditors to audit and making it easier for your supply chain, for all of those people to interact with you in a much more effective manner.

It's about enabling procurement to process their information and work with the vendors, because everything is about change. It's about speed of change. If we get a demand signal that changes and we need to buy more raw materials or whatever for our factories, we have the ability for not only our procurement teams, but also our vendors, to interact with us easily to make those changes. It ensures that we can deliver the right products, to the right stores, to the right people, so in the end the consumers are happy. It's about how you change the model of delivering that.

Technology enabler

VMware has done that for us, and we keep taking advantage of all the stuff. I joke that I'm like a technology enabler: "What have you got for me? What have you got for me?" So I can give it out to the business and my teams, because it keeps people interested. We can say, "We saw you guys, and it was hard for you. Now, you can do this." And it's done.

"What do you mean it's done?" "It's done. Just use it. It's okay. Let us know what you think, if you want us to change something." But it's always being on the front of the bow, saying, "Here's what we can do. Here's how we can help."

That’s the culture of IT in Revlon. I'm merciless about how we're just here to help. We run the technology and own the technology intellectually, so we can help. That’s my only concern.

Internal cloud

In the future, in my internal cloud, I should be able to take a vertical instance of functionality and push that. To me, that's next. If the vendors can figure out how to do that and have my internal cloud manage those transactions back, but push the pieces of functionality wherever it needs to be, so it sits in those Mini Me data centers and let it be close to people, so I don’t have to deal with latency and then manage those transactions back, that's the next big evolution.

Then there's mobile computing. What I mean by mobile computing is viewing applications remotely, so the data never leaves my data center.

I know the device. I know the person. When they hit the edge of my network -- essentially, hit my data center -- we know their device and we know who they are. They only get access to the information they're supposed to have, and they only view it.
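
As a rough sketch of the kind of edge check Giambruno describes -- know the device, know the person, return only a view -- here is a minimal, hypothetical Python example; the device IDs, users, and entitlements are invented for illustration:

KNOWN_DEVICES = {"ipad-1234": "jdoe"}          # device -> registered owner
ENTITLEMENTS = {"jdoe": {"sales-dashboard"}}   # user -> resources they may view

def authorize_view(device_id, user, resource):
    # The device must be known and bound to this user, and the user must be
    # entitled to the resource; the response is a view, not the data itself.
    return KNOWN_DEVICES.get(device_id) == user and resource in ENTITLEMENTS.get(user, set())

if __name__ == "__main__":
    if authorize_view("ipad-1234", "jdoe", "sales-dashboard"):
        print("render view-only stream")  # the underlying data stays in the data center
    else:
        print("access denied")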

I could encrypt my entire data center -- at the hypervisor level, encrypt everything -- because if you encrypt the VMDK file, the job is done. The compliance and security impact is huge: no more data leakage, audits become easier, all of those things.

Again, it's a completely different way to operate and think about things, but we need to slice applications up, move them out, and then view the applications. That’s a whole new geometry of operating IT in a much more efficient manner.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

You may also be interested in:

SOA provides needed support for enterprise architecture in cloud, mobile, big data, says Open Group panel

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

There's been a resurgent role for service-oriented architecture (SOA) as a practical and relevant ingredient for effective design and use of cloud, mobile, and big-data technologies.

To find out why, BriefingsDirect recently gathered an international panel of experts to explore the concept of "architecture is destiny," especially when it comes to hybrid services delivery and management. The panel shows how SOA is proving instrumental in allowing the needed advancements over highly distributed services and data, when it comes to scale, heterogeneity support, and governance.

The panel consists of Chris Harding, Director of Interoperability at The Open Group, based in the UK; Nikhil Kumar, President of Applied Technology Solutions and Co-Chair of the SOA Reference Architecture Projects within The Open Group, based in Michigan; and Mats Gejnevall, Enterprise Architect at Capgemini and Co-Chair of The Open Group SOA Work Group, based in Sweden. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why this resurgence in the interest around SOA?

Harding: My role in The Open Group is to support the work of our members on SOA, cloud computing, and other topics. We formed the SOA Work Group back in 2005, when SOA was a real emerging hot topic, and we set up a number of activities and projects. They're all completed.

I was thinking that the SOA Work Group would wind down, move into maintenance mode, and meet once every few months or so, but we still get a fair attendance at our regular web meetings.

In fact, we've started two new projects and we're about to start a third one. So, it's very clear that there is still an interest, and indeed a renewed interest, in SOA from the IT community within The Open Group.

Larger trends

Gardner: Nikhil, do you believe that this has to do with some of the larger trends we're seeing in the field, like cloud Software as a Service (SaaS)? What's driving this renewal?

Kumar: What I see driving it is three things. One is the advent of the cloud and mobile, which requires a lot of cross-platform delivery of consistent services. The second is emerging technologies -- mobile, big data -- and the need to be able to look at data across multiple contexts.


The third thing that's driving it is legacy modernization. A lot of organizations are now a lot more comfortable with SOA concepts. I see it in a number of our customers. I've just been running a large enterprise architecture initiative at a Fortune 500 customer.

At each stage, and at almost every point in that, they're now comfortable. They feel that SOA can provide the ability to rationalize multiple platforms. They're restructuring their organizations and delivery teams, as well as targeting their goals around a service-based platform capability.

So legacy modernization is a back-to-the-future kind of thing that has come back and is getting adoption. The way it's being implemented is with RESTful services as well as SOAP services, which is different from traditional SOA -- the last wave, say, which was mostly SOAP-driven.
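
To make that REST-style exposure concrete, here is a minimal, hypothetical Python sketch of wrapping a legacy capability as a lightweight RESTful service using only the standard library. The function name, URL path, and port are illustrative assumptions, not anything from the discussion:

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def get_order_status(order_id):
    # Stand-in for the legacy capability (database lookup, MQ call, etc.)
    return {"orderId": order_id, "status": "SHIPPED"}

class OrderServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose the legacy capability at GET /orders/<id>/status
        parts = self.path.strip("/").split("/")
        if len(parts) == 3 and parts[0] == "orders" and parts[2] == "status":
            body = json.dumps(get_order_status(parts[1])).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), OrderServiceHandler).serve_forever()

The same back-end function could equally be wrapped behind a SOAP contract; the point is that the legacy logic stays put while the service interface modernizes.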

Gardner: Mats, do you think that what's happened is that the marketplace and the requirements have changed and that’s made SOA more relevant? Or has SOA changed to better fit the market? Or perhaps some combination?

Gejnevall: I think the cloud is really a service-delivery platform. Companies discover that to be able to use cloud services -- the SaaS things -- they need to look at SOA as their internal way of developing things as well. They understand they need to do the architecture internally, and if they're going to use lots of external cloud services, they might as well use SOA to do that.

Also, if you look at the cloud suppliers, they also need to do their architecture in some way and SOA probably is a good vehicle for them. They can use that paradigm and also deliver what the customer wants in a well-designed SOA environment.

Gardner: Let's drill down on the requirements around the cloud and some of the key components of SOA. We're certainly seeing, as you mentioned, the need for cross support for legacy and cloud types of services, using a variety of protocols, transports, and integration types. We've already heard about REST for lightweight approaches and, of course, there will still be the need for object brokering and some of the more traditional enterprise integration approaches.

This really does sound like the job for an Enterprise Service Bus (ESB). So let's go around the panel and look at this notion of an ESB. Some people, a few years back, didn’t think it was necessary or a requirement for SOA, but it certainly sounds like it's the right type of functionality for the job.

Loosely coupled

Harding: I believe so, but maybe we ought to consider that in the cloud context, you're not just talking about within a single enterprise. You're talking about a much more loosely coupled, distributed environment, and the ESB concept needs to take account of that in the cloud context.

Gardner: Nikhil, any thoughts about how to manage this integration requirement around the modern SOA environment and whether ESBs are more or less relevant as a result?

Kumar: In the context of the cloud, we really see SOA and the concept of service contracts coming to the fore. In that scenario, ESBs play a role as a broker within the enterprise. When we talk about the interaction across cloud-service providers and cloud consumers, what we're seeing is that the service provider has its own concept of an ESB within its own internal context.

If you want your cloud services to be really reusable, the concept of the ESB then becomes more for the routing and the mediation of those services, once they're provided to the consumer. There's a kind of separation of concerns between the concept of a traditional ESB and a cloud ESB, if you want to call it that.

The cloud context involves more of the need to support, enforce, and apply governance and audit concepts -- the capabilities to ensure that the interaction meets quality-of-service guarantees. That's a little different from the concept that drove traditional ESBs.
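
As a toy illustration of the routing, mediation, and quality-of-service role Kumar assigns to a "cloud ESB," here is a hedged, in-process Python sketch; the contract names, providers, and latency threshold are all assumptions:

import time

# Registered providers, keyed by service contract name (assumed examples).
PROVIDERS = {
    "crm.getCustomer": lambda req: {"id": req["id"], "name": "Acme Corp"},
    "billing.getInvoice": lambda req: {"invoice": req["id"], "amount": 120.0},
}

def mediate(request):
    # Mediation step: normalize field names before handing off to a provider.
    return {key.lower(): value for key, value in request.items()}

def route(contract, request, max_latency_s=1.0):
    provider = PROVIDERS.get(contract)
    if provider is None:
        raise LookupError("No provider registered for contract " + contract)
    start = time.monotonic()
    response = provider(mediate(request))
    elapsed = time.monotonic() - start
    if elapsed > max_latency_s:
        # Quality-of-service breach: a real broker would alert or fail over here.
        print("QoS warning: %s took %.3fs" % (contract, elapsed))
    return response

if __name__ == "__main__":
    print(route("crm.getCustomer", {"ID": "42"}))

A production broker would also handle retries, payload transformation between SOAP and REST, and policy enforcement, but the separation of concerns is the same.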

That's why you're seeing API management platforms like Layer 7, Mashery, or Apigee, and other product lines, also coming into the picture, driven by the need to support the way cloud providers are provisioning their services. As Chris put it, you're looking beyond the enterprise. Who owns it? That's where the role of the ESB is different from the traditional concept.

Most cloud platforms have cost factors associated with locality. If you have truly global enterprises and services, you need to factor in the ability to deal with safe-harbor issues, and you need to factor in variations in law around security and governance.

The platforms that are evolving are starting to provide this out of the box. The service consumer or the service provider needs to be able to support those. That's going to become the role of their ESB in the future: to be able to consume a service, to assert its quality-of-service guarantee, and to manage constraints on data-in-flight and data-at-rest.

Gardner: Mats, are there other aspects of the concept of ESB that are now relevant to the cloud?

Entire stack

Gejnevall: One of the reasons SOA didn't really take off in many organizations three, four, or five years ago was the need to buy the entire stack of SOA products that the consultancies were asking companies to buy -- an ESB, governance tools, business process management tools -- a set of quite large investments just to get your foot in the door of doing SOA.

These days, you can buy that kind of stuff. You can buy the entire stack in the cloud and start playing with it. I did some searches on it today and found a company where you can play with the entire stack, including business tools and everything like that, for zero dollars. Then you can grow and use more and more of it in your business, but you can start by seeing whether this is something for you.

In the past, the suppliers or the consultants told you that you could do it. You couldn’t really try it out yourself. You needed both the software and the hardware in place. The money to get started is much lower today. That's another reason people might be thinking about it these days.

Gardner: It sounds as if there's a new type of on-ramp to SOA values, and the componentry that supports SOA is now being delivered as a service. On top of that, you're also able to consume it in a pay-as-you-go manner.

Harding: That's a very good point, but there are two contradictory trends we are seeing here. One is the kind of trend that Mats is describing, where the technology you need to handle a complex stack is becoming readily available in the cloud.

And the other is the trend that Nikhil mentioned: to go for a simpler style, which a lot of people term REST, for accessing services. It will be interesting to see how those two tendencies play out against each other.

Kumar: I'd like to make a comment on that. The approach for the on-ramp is really one of the key differentiators of the cloud, because you have the agility and the lack of capital investment (CAPEX) required to test things out.

But as we evolve with cloud platforms, I'm also seeing, in a lot of Platform-as-a-Service (PaaS) vendor scenarios, that they're including the ESB in the stack itself. They're providing it in their cloud fabric. A couple of large players have already done that.

For example, Azure provides that in its forward-looking vision, and I'm sure IBM and Oracle have already started down that path. A lot of the players are going to provide it as a core capability.

Pre-integrated environment


Gejnevall: Another interesting thing is that they could get a whole environment that's pre-integrated. Usually, when you buy these things from a vendor, a lot of times they don't fit together that well. Now, there’s an effort to make them work together.

Some people have also put these open-source tools together and put them out on the cloud, which gives them a pretty cheap platform for themselves. Then they can sell it at a reasonable price, because of the integration work they've already done on all these things.

Gardner: The cloud model may be evolving toward an all-inclusive offering. But SOA, by its definition, advances interoperability, to plug and play across existing, current, and future sets of service possibilities. Are we talking about SOA being an important element of keeping clouds dynamic and flexible -- even open?

Kumar: We can think about the OSI seven-layer model. We're evolving in terms of complexity, right? From an interoperability perspective, we may talk SOAP or REST, for example, but the interaction with AWS, Salesforce, SmartCloud, or Azure would involve using the APIs that each of these platforms provides.

Lock-in

So you could have an AMI, which is an image in the Amazon Web Services environment, for example, and that could support a LAMP stack or another open-source stack. How you interact with it, how you monitor it, how you cluster it -- all of those aspects now start factoring in specific APIs, and so that's the lock-in.

From an architect’s perspective, I look at it as we need to support proper separation of concerns, and that's part of [The Open Group] SOA Reference Architecture. That's what we tried to do, to be able to support implementation architectures that support that separation of concerns.

There's another factor that we need to understand from the context of the cloud, especially for mid-to-large sized organizations, and that is that the cloud service providers, especially the large ones -- Amazon, Microsoft, IBM -- encapsulate infrastructure.

If you were to go to Amazon, Microsoft, or IBM and use their IaaS networking capabilities, you'd have one of the largest WAN networks in the world, and you wouldn’t have to pay a dime to establish that infrastructure. Not in terms of the cost of the infrastructure, not in terms of the capabilities required, nothing. So that's an advantage that the cloud is bringing, which I think is going to be very compelling.

The other thing is that, from an SOA context, you're now able to look at it and say, "Well, I'm dealing with the cloud, and what all these providers are doing is making it seamless, whether you're dealing with the cloud or on-premise." That's an important concept.

Now, each of these providers, and different aspects of their stacks, are at significantly different levels of maturity. Many of these providers may find that their own stacks don't even interoperate internally, just because they're using different run times, different implementations, and so on. That's another factor to take into account.

From an SOA perspective, the cloud has become very compelling, because I'm dealing, let's say, with Salesforce.com and I want to use that same service within the enterprise -- let's say, an insurance capability -- for Microsoft Dynamics or for SugarCRM. If that capability is exposed as one source of truth in the enterprise, you've now reduced the complexity and have the ability to adopt different cloud platforms.

What we are going to start seeing is that the cloud is going to shift from being just one à-la-carte solution for everybody. It's going to become something similar to what we used to deal with in the enterprise context. You had multiple applications, which you service-enabled to reduce complexity and provide one service-based capability, instead of an application-centered approach.

You're now going to move that context to the cloud, to your multiple cloud solutions -- maybe many implementations, in a nontrivial environment, of the same business capability -- but they are now exposed as services in the enterprise SOA. You could have Salesforce. You could have Amazon. You could have an IBM implementation. And you could pick and choose the source of truth and share it.

So a lot of the core SOA concepts will still apply and are still applying.
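
A hedged sketch of that "one source of truth" idea: provider-specific lookups hidden behind a single enterprise-facing service, so consumers never care whether the designated implementation is Salesforce, Amazon-hosted, or on-premise. The class names and stubbed data below are invented for illustration, not real vendor APIs:

from abc import ABC, abstractmethod

class CustomerSource(ABC):
    @abstractmethod
    def get_customer(self, customer_id):
        ...

class SalesforceSource(CustomerSource):
    def get_customer(self, customer_id):
        # In practice this would call the Salesforce API; stubbed here.
        return {"id": customer_id, "name": "Acme Corp", "source": "salesforce"}

class OnPremiseSource(CustomerSource):
    def get_customer(self, customer_id):
        # In practice this would query an internal CRM database; stubbed here.
        return {"id": customer_id, "name": "Acme Corp", "source": "on-premise"}

class CustomerService:
    # The single enterprise-facing service; consumers never see which
    # implementation is the designated source of truth.
    def __init__(self, source_of_truth):
        self.source_of_truth = source_of_truth

    def get_customer(self, customer_id):
        return self.source_of_truth.get_customer(customer_id)

if __name__ == "__main__":
    service = CustomerService(SalesforceSource())  # pick the source of truth
    print(service.get_customer("42"))

Swapping the source of truth means changing one constructor argument, not every consumer -- which is the complexity reduction Kumar is describing.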

Another on-ramp

Gardner: Perhaps yet another on-ramp to the use of SOA is the app store, which allows for discovery and socialization of services, but at the same time provides governance and control?

Kumar: We're seeing that with a lot of our customers. Typically, the vendors who support PaaS solutions associate app store models with their platforms as a mechanism to gain market share.

The issue that you run into with that is that it's okay if it's on your cellphone, your iPad, your tablet PC, or whatever, but once you start having managed apps -- for example, Salesforce -- or applications that are being deployed in an Azure or a SmartCloud context, you have a high-risk scenario. You don't know how well-architected that application is. It's just like going out and buying an enterprise application.

When you deploy it in the cloud, you really need to understand that particular PaaS platform to understand the implications in terms of dependencies and cross-dependencies across the apps you have installed. These have real, practical implications in terms of maintainability and performance. We've seen that with at least two platforms in the last six months.

Governance becomes extremely important. Because of the low CAPEX implications to the business, the business is very comfortable with going and buying these applications and saying, "We can install X, Y, or Z and it will cost us two months and a few million dollars and we are all set." Or maybe it's a few hundred thousand dollars.

They don't realize the implications that can arise in terms of interoperability, performance, and standard architectural quality attributes. There is a governance aspect in the context of the cloud provisioning of these applications.

There is another aspect to it, which is governance in terms of the run time -- more classic SOA governance -- to measure, assert, and view the cost of these applications in terms of performance against your infrastructure resources and your security constraints. Also, are there scenarios where the application itself depends on a daisy chain of multiple external applications, and can you trace the data?
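
As a small, assumed illustration of that run-time governance, here is a Python sketch that wraps each service call so latency and policy breaches are measured; the service names and thresholds are hypothetical:

import functools
import time

METRICS = {}

def governed(service_name, max_latency_s):
    # Wrap a service call so every invocation is measured and checked
    # against a simple latency policy.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.monotonic()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed = time.monotonic() - start
                stats = METRICS.setdefault(service_name, {"calls": 0, "breaches": 0})
                stats["calls"] += 1
                if elapsed > max_latency_s:
                    stats["breaches"] += 1  # a breach to report or alert on
        return wrapper
    return decorator

@governed("inventory.lookup", max_latency_s=0.5)
def lookup_inventory(sku):
    return 17  # stand-in for the real downstream call

if __name__ == "__main__":
    lookup_inventory("SKU-001")
    print(METRICS)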

In the context of app stores, they're almost like SaaS with a particular platform in mind. They provide the buyer with certain commitments from the platform manager or platform provider, such as security. When you buy an app from Apple, there is at least a reputational expectation of security from the vendor.

What you do not always know is whether that security is really being provided. There's a risk there for organizations that are exposing mission-critical data to it.

The second thing is that there is still very much a place for the classic SOA registries and repositories in the cloud -- only now for a different purpose. Those registries and repositories are used either by service providers or by consumers to maintain the list of services they're using internally.
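
A bare-bones sketch of such a registry, in which providers register endpoints and metadata and consumers look them up at run time; the names, endpoints, and fields are illustrative assumptions:

class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def register(self, name, endpoint, version, owner):
        # Providers publish what they offer and where it lives.
        self._services[name] = {"endpoint": endpoint, "version": version, "owner": owner}

    def lookup(self, name):
        # Consumers resolve a contract name to a concrete endpoint at run time.
        return self._services[name]

    def list_services(self):
        return sorted(self._services)

if __name__ == "__main__":
    registry = ServiceRegistry()
    registry.register("crm.getCustomer", "https://internal.example/api/crm", "1.2", "CRM team")
    print(registry.lookup("crm.getCustomer"))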

Different paradigms

There are two different paradigms. The app store is a place where I can go and know that the gas I'm going to get is 85 percent ethanol, versus also having to maintain some basic set of goods at home to make sure I have my dinner on time. They're serving different kinds of roles and different kinds of purposes.

Above all, I think the thing that's going to become more and more important in the context of the cloud is that the functionality will be provided by the cloud platform or the app you buy, but the governance will be a major IT responsibility, right from the time of picking the app, to the time of delivering it, to the time of monitoring it.

Gardner: How is The Open Group allowing architects to better exercise SOA principles, as they're grappling with some of these issues around governance, hybrid services delivery and management, and the use and demand in their organizations to start consuming more cloud services?

Harding: The architect's primary concern, of course, has to be to meet the needs of the client, and to do so in a way that is most effective and cost-effective. The cloud gives the architect the ability to go out and get different components much more easily than hitherto.

There is a problem, of course, with integrating them and putting them together. SOA can provide part of the solution to that problem, in that it gives a principle of loosely coupled services. If you didn’t have that when you were trying to integrate different functionality from different places, you would be in a real mess.

What The Open Group contributes is a set of artifacts that enable the architect to think through how to meet the client’s needs in the best way when working with SOA and cloud.

For example, the SOA Reference Architecture helps the architect understand what components might be brought into the solution. We have the SOA TOGAF Practical Guide, which helps the architect understand how to use TOGAF in the SOA context.

We're working further on artifacts in the cloud space: the Cloud Computing Reference Architecture, a notational language for enabling people to describe cloud ecosystems, and recommendations for cloud interoperability and portability. We're also working on recommendations for cloud governance to complement the recommendations for SOA governance -- the SOA Governance Framework standard that we have already produced -- and a number of other artifacts.

The Open Group’s real role is to support the architect and help the architect to better meet the needs of the architect client.

From the very early days, SOA was seen as bringing a closer connection between the business and technology. A lot of the promises that were made about SOA seven or eight years ago are only now becoming possible to fulfill, and that business front is what one of our new projects is looking at.

We're also producing an update to the SOA Reference Architecture. We have submitted the SOA Reference Architecture for consideration by the ISO group that is developing an International Standard Reference Architecture for SOA, and also to the IEEE group that is developing an IEEE Standard Reference Architecture.

We hope that both of those groups will want to work along the principles of our SOA Reference Architecture and we intend to produce a new version that incorporates the kind of ideas that they want to bring into the picture.

We're also thinking of setting up an SOA project to look specifically at assistance to architects building SOA into enterprise solutions.

So those are three new initiatives that should result in new Open Group standards and guides to complement, as I've described already, the SOA Reference Architecture, the SOA Governance Framework, and the Practical Guide to using TOGAF for SOA.

We also have the Service Integration Maturity Model, which we use to assess SOA maturity. We have a standard on service orientation applied to cloud infrastructure, and we have a formal SOA Ontology.

Those are the things The Open Group has in place at present to assist the architect, and we are and will be working on three new things: version 2 of the Reference Architecture for SOA, SOA for business technology, and I believe shortly we'll start on assistance to architects in developing SOA solutions.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


You may also be interested in: