Tuesday, March 4, 2014

Case study: How Dell converts social media analytics into strategic business advantage

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

The next BriefingsDirect business innovation case study examines how Dell has recognized the value of social media for more than improved interactions and brand awareness. Dell has successfully learned from social media how to meaningfully increase business sales and revenue.

The data, digital relationships, and resulting analysis inherent in social media and social networks interactions provide a lasting resource for businesses and their customers, says Dell. And this resource has a growing and lasting impact on many aspects of business -- from research, to product management, to CRM, to helpdesk, and, yes, to sales.

To learn more about how Dell has been making the most of social media for the long haul, BriefingsDirect sat down with Shree Dandekar, Senior Director of Business Intelligence and Analytics at Dell Software. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Businesses seem to recognize that social media and social-media marketing are important, but they haven’t very easily connected the dots in how to use social media for actual business results. Why?

Dandekar: It’s not that businesses don’t realize the value of social media. In fact, many businesses are looking at simple social-media listening and monitoring tools to start their journey into social media.

The challenge is that when you make these investments in any kind of listening or monitoring capability, people tend to stop there. It takes them a while to start collecting all the data on LinkedIn, Facebook, or Twitter, and some time to make meaningful sense out of it. The dynamic is different when you talk to an enterprise business. They’ve really moved on.

So, there are several stages within a social media journey, the very first one being listening and monitoring, where you start capturing and aggregating data.

From there, you start doing some kind of sentiment analysis. You go into some kind of social-media engagement, which leads to customer care. Then you get into questions like social return on investment (ROI), and then try to bring in business data and mash it up together. This is what’s known as social CRM.

So, if you say that these are the six stages of a social-media maturity model or a social-media lifecycle, some of the enterprise businesses have really matured through the first three or four phases, where they have taken social media all the way to customer care. Where they are struggling now is in implementing technologies that derive actual ROI or business value from this data.

Listening and monitoring

Whereas, if you look at some of the small businesses or even mid-sized companies, they have just started getting into listening and monitoring, and the reason is that there are not many tools out there that appeal to them.

I won’t name any specifically, but you know all the big players in the social media listening space. They tend to be expensive and require a lot of reconfiguration and hands-on training. The adoption of social media in the small-sized business or even mid-sized businesses has been slow because these guys don't want to invest in these types of tools.

By the way, here is another big differentiator. If you look at enterprises, they don't shy away from investing in multiple tools, and Dell is a great example. We have a Radian6 deployment, social-media engagement tools, and our own analytic tools that we build on top of that. We tried each and every tool that's out there because we truly believe that we have to gain meaningful insights from social media, and we won't shy away from experimenting with different tools.

Mid-sized companies don't have the budget or resources to try out different tools. They want a single platform that can do multiple things for them – essentially a self-service-enabled social-media intelligence platform.

If I start with listening, I just want to understand who is talking about me, who my influencers are, who my detractors are, what my competitors are talking about, and whether or not I can do a quick sentiment analysis. That's where I want to start.

Gardner: Dell has been doing social media since 2006, so going quite a ways back. How important is this to Dell as a company and how important do you think other companies should view this? Is this sort of a one-trick pony, or is there a lasting and expanding value to doing social media interactions analysis?

Dandekar: In addition to leadership from the top, it took a perfect storm to propel us fully into social. In July 2006, pictures and a report surfaced online out of Osaka, Japan, of a Dell laptop spontaneously combusting due to a battery defect (one that affected not just Dell, but nearly every laptop manufacturer). It was a viral event of the sort you don’t want. But we posted a blog titled “Flaming Notebook” and included a link to a photo showing our product in flames – which caused some to raise an eyebrow.

I will pause there for a second. How many of you would do that if something similar happened to your business? But Michael Dell made it crystal clear: Dell was built on the value of going direct to consumers and the blog had to communicate and live by those same values.

This was 2006, when the internet and the true value of blogging were just becoming more relevant. That was a turning point in the way we did customer care and the way we engaged with our customers. We realized that people are not only going to call an 800 support number, but are going to be much more vocal through social media channels like blogs, Twitter, and Facebook.

That's how our journey in social media began and it’s been a multi-year, multi-investment journey. We started looking at simple listening and monitoring. We built a Social Media Command Center. And even before that, we built communities for both our employees and our customers to start interacting with Dell.

Idea Storm

One of the most popular communities that we built was called Idea Storm. This was a community in which we invited our customers to come in and share ideas for product improvements they want. It was formed around 2007, and to date, close to 550 different ideas from this community have been implemented in Dell products.

Similarly, we launched Employee Storm, which was for all the employees at Dell, and the idea was similar. If there are some things in terms of processes or products that can be changed, that was a community for people to come in and share those ideas.

Beyond that, as I said, we built a Social Media Command Center back in 2010. And we also stood up the Social Media and Communities University program. We started training our internal users, our employees, to take on social media.

Dell firmly believes that you need to train employees to make them advocates for your brand instead of shying away and saying, “You know what, I'm scared, because I don't know what this guy is going to be saying about me in the social media sphere.”

Instead, we’re trying to educate them on what is the right channel and how to engage with customers. That's something that Dell has developed over the last six years.

Gardner: You’ve taken a one-way interaction, made it two-way, and then expanded well beyond that. How far and wide do the benefits of social media go? Are you applying this to help desk, research, new products, service and support, or all the above? Is there any part of Dell that doesn't take advantage from social media?

Dandekar: No, social media has become a core part of our DNA, and it fits well because our DNA has always been built on directly interacting with our customers. If a customer is going to use social media as one of their primary communication channels, we really need to embrace that channel and make sure we can communicate and talk to our customers that way.

We have a big channel through Salesforce.com where we interact with all the leads that come in through Salesforce.
Taking that relationship to the next level, is there a way I can smartly link the Salesforce leads or opportunities to someone's social profile? Is there a way I can make those connections, and how smartly can I develop some sales analytics around that? That way, I can target the right people for the right opportunities.

Creating linkage

That's one step that Dell has taken compared to some of our industry competitors, to be very proactive in making that linkage. It’s not easy. It requires some investment on your part to take that next step. That's also very close to the sixth stage that I talked about, which is social CRM.

You’ve done a good job at making sure you’re taking all the social media data, massaging it, and deriving insight just from that. Now, how can you bring in business data, mash it up with social data, and then create even more powerful insights, where you can track leads properly or generate opportunities through Twitter, Facebook, or any other social media source?
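To make that linkage concrete, here is a minimal, hypothetical sketch of matching CRM leads to social profiles. The field names, data shapes, and matching heuristic are illustrative assumptions, not Dell's or Salesforce's actual schema:

```python
# Hypothetical sketch: link CRM leads to social profiles.
# Field names and the matching heuristic are assumptions for illustration.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Lead:
    name: str
    email: str

@dataclass
class SocialProfile:
    handle: str
    display_name: str
    email: Optional[str] = None

def normalize(name: str) -> str:
    return "".join(ch for ch in name.lower() if ch.isalnum())

def match_lead(lead: Lead, profiles: List[SocialProfile]) -> Optional[SocialProfile]:
    # An exact email match is the strongest signal.
    for p in profiles:
        if p.email and p.email.lower() == lead.email.lower():
            return p
    # Fall back to a normalized display-name comparison.
    for p in profiles:
        if normalize(p.display_name) == normalize(lead.name):
            return p
    return None

lead = Lead("Ada Lovelace", "ada@example.com")
print(match_lead(lead, [SocialProfile("@ada_l", "Ada Lovelace")]))  # matches on name
```

A real implementation would weigh many more signals, but the shape of the problem -- record linkage between business data and social data -- is the same.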

Gardner: Shree, it seems to me that what you’re doing is not only providing value to Dell, but there is a value to the buyer as well. I think that, as a personal consumer and a business consumer, I’d like the people I'm working with in the supply chain or in a procurement activity to know enough about me that they can tailor their services, gain insight into what my needs are, and therefore better serve me. Is there added value for the consumer in doing all this well, too?

Dandekar: The power of social media is real-time. Every time you get a product from Dell and tweet about it or say you like it on Facebook, there is a way that I can, in real time, get back to that customer and say, "I heard you, and thanks for giving us positive or negative feedback on this." For me to take that and quickly change a product decision or a process within Dell is the key.

There are several examples. One that comes to mind is “Project Sputnik,” an open-source notebook that we deployed on one of our consumer platforms, the XPS 13.

We heard a lot of developers saying they like Dell, but really wanted a cool, sexy notebook PC with all the right developer tools deployed on that platform. So, we started this project where we identified all the tools that would resonate with developers, packaged them together, and deployed it on the XPS 13 platform.

From the day when we announced the platform launch, we were tracking the social media channels to see if there was any excitement around this product.

The day we launched the product, within the first three or four hours, we started receiving negative feedback about the product. We were shocked and we didn’t know what was going on.

But then, through the analytics that we have developed on top of our social media infrastructure, we were able to pinpoint that one of the product managers had mistakenly priced the notebook higher than a comparable Windows notebook. It shouldn't have been priced higher, and that's why a lot of developers were angry. They thought that we were trying to price it higher than traditional notebooks.

We were able to pinpoint what the issue was and within 24 hours, we were able to go back to our product and branding managers and talk to them about the pricing issue. They changed the pricing on dell.com and we were able to post a blog on Engadget.

Brand metrics

Then, in real time, we were able to monitor the brand metrics around the product. After that, we saw an immediate uptick in product sentiment. So, the ability to monitor product launches in real time and fix launch-related issues in real time is pretty powerful.

One traditional way you would have done that is something called Net Promoter Score (NPS). We use NPS a lot within Dell. The issue with it is that it is survey-based. You have to send out the survey. You collect all the data. You mine through it and then you generate a score.

That entire process takes 90 to 120 days and, by the time you get it, you might have missed out on a lot of sales. If there was a simple tweak, like pricing, that I could have done overnight, I would have missed out on it by two months.

That’s just one example: if I had waited for NPS to tell me that the pricing was wrong, I would never have reacted in real time, and I would have lost my reputation on that particular product.
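The NPS arithmetic itself is standard and simple -- the percentage of promoters (scores of 9-10) minus the percentage of detractors (scores of 0-6). The cost is the survey cycle, not the math:

```python
# Net Promoter Score: percent promoters (9-10) minus percent detractors (0-6).
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# 3 promoters and 2 detractors out of 7 responses -> about +14.3
print(nps([10, 9, 8, 7, 6, 3, 10]))
```

The contrast in the story is latency: a score like this arrives only after the 90-to-120-day survey cycle, while the social stream surfaced the same pricing signal within hours.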

Gardner: How extensive is your listening and analysis from social media?

Dandekar: Just to cite some quick stats, Dell has more than 21 million social connections through fans on Facebook, followers on Twitter, Dell community members, and more across the social web.

We talked about customer care and the engagement centers, and I talked about those six stages of the social media journey. Based on the Social Media Command Center that we have deployed within Dell, we also have a social outreach services team that responds to an average of 3,500 posts a week in 14 languages, with an over-97-percent resolution rate.

We talked about Idea Storm, and I mentioned the number of ideas that have been generated out of that. Again, that’s close to 550 ideas to date.

Then, we talked about the Social Media and Communities University. That’s an education program that we have put in place, and to date, close to 17,000 team members have completed the social media training certification through that program.

Social-media education

By the way, that’s the same module that we have started deploying through our social media professional services offering, where we’ve gone in and instituted the Social Media and Communities University program for our customers as well.

We've had a high success rate: a number of customers have benefited through our social media professional services team and through deploying the Social Media Command Center.

The Red Cross is a great example, where we've deployed the Social Media Command Center for them to be much more proactive in responding to people during times of calamity.

Clemson University is another example, where we've gone and deployed a Social Media Command Center for them that’s used for alternate academic research methods and innovative learning environments.

Gardner: Tell me a little bit about Dell's SNAP.

Dandekar: SNAP stands for Social Net Advocacy Pulse. This was a product that we developed in-house. As I said, we have been early users of listening and monitoring platforms and we have deployed Social Media Command Centers within Dell.

The challenge, as we kept using some of these tools, was that we realized the sentiment accuracy was really bad. Most of the time, when you take a quote and run it through one of the sentiment analyzers, it comes back saying it's neutral, when there’s actually a lot of rich context hidden in the quote that was never even looked at.

The other thing was that we were tracking a lot of metrics through graphs, charts, and reports, which was important, but we kind of lost the ability to derive actual meaningful insights from that data. We were getting bogged down generating dashboards for senior execs without making a linkage to why something happened and what key insights could have been derived from a particular event.

None of these tools are easy to use. Every time I have to generate a report or do something from one of these listening platforms, it requires some amount of training. There is an expectation that the person doing it has been using the tool for some time. It takes a long time to reach the ease of use where anybody can go in, look at all these social conversations, and quickly pinpoint an issue.

Those are some of the pain points that we recognized. We asked, “Is there a way we can change this so we can start deriving meaningful insights? We don’t have to look at each and every quote and say it's a neutral sentiment. We can actually start deriving some meaningful context out of these quotes.”

Here is an example. A customer purchased a drive to upgrade a dead drive from a Dell Mini 9 system, which originally came with an 8 GB PCI solid state drive. He took the 16 GB drive and replaced the 8 GB drive that was dead. The BIOS on the system instantly recognized it and booted it just fine. That’s the quote we got from one customer’s feedback.

Distinct clauses

If I had run that quote through one of the regular sentiment-analyzing solutions, it would have pretty much said it's neutral, because there was really nothing much it could get from it. But if you stop for a second and read through that quote, you realize that there are a couple of important, distinct clauses that can be separated out.

One thing is that he’s talking about a hard drive in the first line. Then, he’s talking about the Dell Mini 9 platform, and then he’s talking about a good experience he had with swapping the hard drive and that the BIOS was able to quickly recognize the drive. That’s a positive sentiment.

Instead of looking at the entire statement and assigning a neutral rating to it, if I can chop it down into meaningful clauses, then I can go back to customer care or my product manager and say, “Out of this, I was able to assign an intensity to the sentiment analysis score.” That makes it even more meaningful to understand what the quote was.

It's not going to be just a neutral, positive, or negative every time you run it through a sentiment analysis engine. That’s just one flavor.
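As a rough illustration of that clause-level idea, here is a toy sketch. The lexicon, scores, and splitting rule are invented for illustration; a production engine would use a trained model rather than a word list:

```python
import re

# Toy lexicon with signed intensities; the values here are assumptions.
LEXICON = {"dead": -2, "fine": 2, "recognized": 1, "instantly": 1}

def clause_sentiments(quote):
    # Score clause by clause instead of averaging the whole quote to neutral.
    clauses = [c.strip() for c in re.split(r"[.;,]", quote) if c.strip()]
    return [(c, sum(LEXICON.get(w, 0) for w in c.lower().split())) for c in clauses]

quote = ("He took the 16 GB drive and replaced the 8 GB drive that was dead. "
         "The BIOS on the system instantly recognized it and booted it just fine.")
for clause, score in clause_sentiments(quote):
    print(f"{score:+d}  {clause}")
```

Scored as one blob, the negative and the positives roughly cancel toward neutral; scored per clause, the dead drive and the smooth BIOS experience each surface separately.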

You asked about sentiment gravity. That’s just one step in the right direction, where you take sentiment and assign a degree to it. Is it -2, -5, +5, or +10? The ability to add that extra color is something that we wanted to do on top of our sentiment analysis.

Beyond that, what if I could add where the conversation took place? Did it take place in The Wall Street Journal or Forbes, versus on someone’s personal blog? Then I can assign it an intensity based on where the conversation happened.

The fourth area that we wanted to add was author credibility. Who talked about it? Was it a well-known, reputable person in that area, or an angry customer who just had a bad experience? Based on that, I can rate and rank it by author credibility.

The fifth one we added was relevance. When did this event actually happen? If it happened a year or two back, or even six months back, and someone just wants to cite it as an example, then I really don’t want to give it that high a rating. I might change the sentiment to reflect that it's not that relevant to today’s conversations.

If I take these attributes -- sentiment, degree of sentiment, where the conversation happened, who talked about it, and when and why it happened -- and convert them into a sentiment score, that's a very powerful mechanism for calculating sentiment on all these conversations that are happening.

That gives me meaningful insight in terms of context, and I can really mine that data. That’s what SNAP does: it doesn't just score a particular quote by pure sentiment, but adds these other flavors on top to make it much more meaningful.
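Put together, a SNAP-style composite might look something like the following. The weights, decay rate, and value ranges are purely illustrative assumptions, not the actual SNAP model:

```python
# Hypothetical composite in the spirit of SNAP: weight raw sentiment by venue,
# author credibility, and recency. All weights and ranges are illustrative.
from datetime import date

VENUE_WEIGHT = {"major_outlet": 1.5, "personal_blog": 0.8}  # assumed values

def composite_score(sentiment, venue, author_credibility, posted, today):
    venue_w = VENUE_WEIGHT.get(venue, 1.0)
    age_days = (today - posted).days
    recency_w = 0.5 ** (age_days / 180)  # halve the weight every ~180 days
    return sentiment * venue_w * author_credibility * recency_w

# A -5 quote in a major outlet by a credible author, six months old:
print(composite_score(-5, "major_outlet", 0.9, date(2013, 1, 10), date(2013, 7, 10)))
```

The useful property is that the same raw sentiment can rank very differently depending on who said it, where, and how long ago.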

Make it usable

Gardner: Have you considered productizing this and perhaps creating a service for the smaller companies that want to do this sort of social analysis and help them along the way?

Dandekar: We’re still working through those details and figuring out, as we always do, the best ways to bring solutions to market. But for us, mid-market is our forte. That’s an area where Dell has really excelled. For us to be at the forefront of enterprise social media is great, but we also want to make sure we’re bringing tools to market to serve those mid-market companies as well.

By the way, we have stood up several solutions for our customers. One of them is the Social Media Command Center. We’ve also stood up social media professional services and we offer consulting services even to small- and mid-sized companies on how to mature in a social media maturity cycle. We are also looking at bringing SNAP to market. But if you’re talking about specific software solutions, that’s an area that we’re certainly looking into, and I would just say, “Stay tuned.”

Gardner: We’ll certainly look for more information along those lines. It's something that makes a lot of sense to me. Looking to the future, how will social become even more impactful?

People are increasing the types of activities they do on their mobile devices and that includes work and home or personal use and a combination of them, simultaneous perhaps. They look to more cloud models for how they access services, even hybrid clouds. It’s stretching across your company’s on-premises activities and more public cloud or managed service provider hosted services.

We expect more machine-to-machine data and activities to become relevant. Social becomes really more of a fire hose of data from devices, location, cloud, and an ever-broadening variety of devices. Maybe the word social is outdated. Maybe we’re just talking about data in general?

How do you see the future shaping up, and how do we consider managing the scale of what we should expect as this fire hose grows in size and in importance?

Embarking on the journey

Dandekar: This is a great question and I like the way you went on to say that we shouldn’t worry about the word social. We should worry about the plethora of sources that are generating data. It can be Facebook, LinkedIn, or a machine sensor, and this fits into the bigger picture of what's going to be your business analytics strategy going forward.

Since we’re talking about this in the context of social, a lot of companies that we talk to -- it can be an enterprise-size company or a mid-market-size company -- most of the time, what we end up seeing is that people want to do social media analytics or they want to invest in the social media space. Some of their competitors are doing that, and they really don’t know what to expect when they embark on this journey.

A lot of companies have already gone through that transformation, but many companies are still stuck in asking, “Why do I need to adopt social media data as part of my enterprise data management architecture?”

Once you cross that chasm, that’s where you actually start getting into some meaningful data analytics. It's going to take a couple of years for most of the businesses to realize that and start making their investments in the right direction.

But coming back to your question on the bigger picture, I think it's business analytics. The moment you bring in social media data, device data, the logs, and sources like Salesforce and NetSuite, all that data together presents a unified picture using all the datasets that are out there.

And these datasets can also be datasets like something from Dun and Bradstreet, which has a bunch of data on leads or sales, mixing that data with something like Salesforce data and then bringing in social media data. If I can take those three datasets and convert that into a powerful sales analytics dashboard, I think that’s the nirvana of business analytics. We’re not there yet, but I do feel a lot of industry momentum going in that direction.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.


Thursday, February 20, 2014

Istanbul-based Finansbank manages risk and security using HP ArcSight, Server Automation

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Governance, risk management, and compliance (GRC) form a top tier of requirements for banks anywhere in the world as they create and deploy applications. A close second nowadays is speed to market, and rapid responsiveness to changing customer expectations and demands.

So when Finansbank, an Istanbul-based bank, knew they had to better manage risk -- but not lose time-to-market advantages -- they did a thorough analysis of available IT products and services. The result was an impressive record of managed risk and deployments, with an eye to greater automation over time.

BriefingsDirect had an opportunity to learn first-hand at the recent HP Discover 2013 Conference in Barcelona how Finansbank extended its GRC prowess -- while smoothing operational integrity and automating speed to deployment -- using several HP solutions.

Learn how from a chat with Ugur Yayvak, Senior Designer of Infrastructure at Finansbank in Istanbul. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: Tell us a bit about your organization and how you're keeping compliance and risk issues in check?

Yayvak: Finansbank is one of the largest banks in Turkey, with more than 12,000 employees and 600 branches in the country. Banking is a competitive world in Turkey, and for compliance we have to be rapid. We have to do things faster. And security is a big deal for us.

Because we’re a bank, we need to comply with payment-card industry (PCI) and Sarbanes-Oxley (SOX) rules. To accomplish this, we had to create some scripts to check the data on our servers, and it takes lots of time to do compliance reporting. Security is a must for the servers, because of attacks. We need to be compliant and secure, and we need to move fast.
Gardner: And so, as you began to look for solutions to these problems, how did you arrive at one?

Compliance and integrity

Yayvak: First of all, we needed a compliance and integrity-check solution. We did a proof of concept (POC) with three different vendors and we checked for performance, compliance, tool support, ease of use, reporting tools, and the support that the vendor would give us. After all that, we chose HP Server Automation.

We’ve been using it for six months. Three months was for the implementation process, but during implementation, we created our first rules. We did some basic agent rollouts on the servers. Now, we have 90 percent coverage of all of our UNIX servers on the Server Automation side.
We’re also using Service Management and the ArcSight tool. We integrated Server Automation with Service Management, ArcSight, and also Operations Orchestration to do our jobs in less time.

Gardner: What have been some of the results? What have you been gaining in terms of better control?

Yayvak: We’re creating monthly reports for our audit teams, and it takes less time. With the help of Server Automation, we’ve scheduled our jobs and the audit rules and reports that we want to share with our audit teams.

It takes much less time than it did before. Also, with the help of the scripts, the daily system administration tasks are very easy. Previously, we were doing everything by hand. With the help of Server Automation, it’s very simple, and we can get the results in much less time.

Looking to the future

Gardner: What about the future? Do you have plans to move further, perhaps using ArcSight? Are there other security benefits that you have in mind?

Yayvak: One is to improve auditing with Server Automation, because there are some scripts that we’ve changed, and those changes that we’ve made on the servers must be audited. We also want to integrate Server Automation with ArcSight to track the changes that we’ve made. And if we’ve made an error, we will be alerted by the ArcSight server.

Right now, we’re using these solutions across our central data center, and also the disaster recovery site. But maybe later on, we can implement this for the branches to take care of the data servers there.
Gardner: What announcements or advances in the recent HP products capture your interest?

Yayvak: The new version of Server Automation came out this year, and we wanted to know what has changed. Also, Finansbank will use many HP products, like Service Manager, Orchestration Manager, and Operations Manager. This event was a good place to learn what has changed across these services.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Thursday, February 13, 2014

HP Access Catalog smooths the way for streamlined deployment of mobile apps

HP today announced HP Access Catalog, a SaaS-delivered mobile app and content store that allows corporations to quickly and securely deliver resources across mobile and desktop devices to their employees anywhere.

IT organizations are facing pressure to deliver a marketplace experience to employees who expect access to content and apps from their device of choice. But non-business controlled exchanges and app stores lack enterprise security and control. Companies must also protect their apps from access by outsiders.

So the new catalog from HP, which can be branded as the business's own store, offers organizations a secure, private “app store” for employees to browse, search, and download mobile applications and digital content onto their devices, including phones and tablets, as well as desktops. The catalog supports the Android and iOS platforms, which together made up close to 94 percent of mobile-device market share in the third quarter of 2013.

Earlier this week HP launched the HP Vertica Marketplace, a hub for developers, partners and customers to create and share extensions, enhancements, and solutions that integrate with the HP Vertica Analytics Platform. Both the Vertica Marketplace and HP Access Catalog are powered by technology developed by Palm, which HP acquired in 2010.

Delivered via native mobile clients and a web interface, the HP Access Catalog is a pure software-as-a-service (SaaS) offering that helps organizations reduce the cost and complexity of managing applications on company-issued and bring-your-own-device (BYOD) mobile devices, said Tim Rochte, Director of Product Management at HP Software Web Services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Streamlined deployment

Through the catalog’s native identity management system, or through seamless integration with enterprise identity systems, IT organizations ensure that users can find and download the right applications for their role, he said. Those organizations have 100 percent control over their content and apps.

In addition, the catalog allows IT organizations to drive updates to users to ensure they have the most current applications and data, increasing their mobile productivity and effectiveness without compromising security. Via a CDN, the delivery speed and global reach of the apps and content -- even large video objects and streams to remote branches -- are assured, something a home-grown app store may not be able to do, said Rochte.
As organizations embrace mobility, they need a simple, secure and reliable mechanism to manage the delivery of apps to their employees.

The Access Catalog uses HTML5 and single-sign-on authentication and authorization capabilities with SAML 2.0 integration. It coexists with "public" stores like iTunes and Google Play.

Hosted in HP’s PCI-compliant data center, the Access Catalog also is offered as an integrated component of the HP Anywhere enterprise mobility platform, enabling customers to manage all their mobile apps.

While the HP Access Catalog is currently used for free content, an e-commerce element that allows selling and/or chargebacks is in the offing, said Rochte. As application developers go mobile-first, the store may become a primary way to distribute, track, and manage all corporate applications. At the least, it will help manage the expected huge growth in mobile apps in businesses.

You could even say the Access Catalog marketplace model is the new intranet, for those of you that recall intranets.

HP Access Catalog will be available worldwide from HP and its channel partners in March. Pricing will be based on a simple per-user monthly or annual subscription. That means the more content and apps per employee, the better the cost ratio -- and productivity.

Additional information is available at go.pronq.com/HP-Access-Catalog.



Monday, February 10, 2014

HP adds new value to Vertica data analytics platform with community marketplace

HP today launched the HP Vertica Marketplace, a hub for developers, partners and customers to create and share extensions, enhancements, and solutions that integrate with the HP Vertica Analytics Platform.

These add-ons and solutions include connectors and third-party extensions, business intelligence (BI) tools, extract, transform, load (ETL) and data transformation products, connectors and tools for the HP HAVEn big-data analytics platform, as well as industry and other original equipment manufacturer (OEM) solutions.

In addition, the HP Vertica Marketplace includes the latest solutions from HP Vertica’s innovations incubation program, allowing users to create cutting-edge big-data applications. [HP is a sponsor of BriefingsDirect podcasts.]

“Our rapidly growing community of customers, partners and developers are building vertical and horizontal solutions on, and creating new add-on capabilities to, Vertica every day,” said Colin Mahony, vice president and general manager, Vertica, HP. “The HP Marketplace provides a place where our community can share and market their capabilities to help other organizations and developers fuel further innovation.”

New capabilities

With the HP Vertica Marketplace, developers and companies can:
  • Gain value from 100 percent of information -- spanning structured, semi-structured and unstructured data -- through connectors and extensions that store, manage and analyze big data. This includes integration with the Hadoop Distributed Filesystem (HDFS) and the HP Autonomy IDOL platform.
  • Get business insights from big data with flexible plug-ins and extensions to integrate and visualize users’ data, including BI and data-visualization tools and products.
  • Capitalize on shared intelligence by engaging developers via a social interface that allows users to pose questions, interact with subject matter experts, and review previous discussions, as all questions are cataloged and searchable.
HP Vertica Marketplace members also will gain access to the latest innovations from HP Vertica through its incubation program. These new technologies and solutions will be available for developers to evaluate and provide feedback, helping guide future development.

The current marketplace is geared toward free and open community sharing of extensions, connectors and tools, but I think this could easily blossom into a commerce hub for analytics apps and/or data enhancements. We'll have to keep an eye out for that. I also think some sort of vertical industry segmentation of analytics capabilities is in the offing. That would allow for ecosystem-defined solutions to emerge, either as open contributions or for-pay offerings. In any event, it's now quite a powerful destination for developers to showcase their big data analytics endeavors.


New innovations in the market include:
  • HP Vertica Distributed R, which helps data scientists overcome the scalability and performance limitations of the R programming language, accelerating the analysis of large data sets by running R computations on multiple nodes to tackle problems not previously solvable.
  • HP Vertica Pulse, which helps organizations leverage an in-database sentiment analysis tool that scores short data posts, including social data, such as Twitter feeds or product reviews, to gauge the most popular topics of interest, analyze how sentiment changes over time, and identify advocates and detractors.
  • HP Vertica Place, which stores and analyzes geospatial data in real time, including locations, networks, and regions. This analytics pack provides Open Geospatial Consortium (OGC) standards–based functionality and integrates with third-party applications.
The HP Vertica Analytics Platform is a key component of the HP HAVEn big-data analytics platform, which enables HP customers and partners to create next-generation applications and solutions that accelerate the monetization of big data. HP HAVEn combines proven technologies including HP Autonomy IDOL, HP Vertica Analytics Platform, HP ArcSight Enterprise Security Manager and HP ArcSight Logger, as well as key industry offerings such as Hadoop.

The HP Vertica Community Marketplace is currently available at www.vertica.com/marketplace and can be accessed through the "Community" tab on www.vertica.com.


Tuesday, February 4, 2014

Network virtualization eases developer and operations snafus in the mobile and cloud era

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

As developers are pressured to produce mobile and distributed cloud apps ever faster and with more network unknowns, the older methods of software quality control can lack sufficient predictability.

And as Agile development means faster iterations and a constant stream of updates, newer means of automated testing of the apps in near-production realism prove increasingly valuable.

Fortunately, a tag-team of service and network virtualization for testing has emerged just as the mobile and cloud era requires unprecedented focus on DevOps benefits and rapid quality assurance.

BriefingsDirect had an opportunity to learn first-hand how Shunra Software and HP have joined forces to extend the capabilities of service virtualization for testing at the recent HP Discover 2013 Conference in Barcelona.

Learn here how Shunra Software uses service virtualization to help its developer users improve the distribution, creation, and lifecycle of software applications from Todd DeCapua, Vice President of Channel Operations and Services at Shunra Software, based in Philadelphia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: There are a lot of trends affecting software developers. They have mobile on their minds. They have time constraints issues. They have to be faster, better, and cheaper along the apps lifecycle way. What among the trends is most important for developers?

DeCapua: One of the biggest ones -- especially around innovation and thinking about results, specifically business results -- is Agile. Agile development is something that, fortunately, we've had an opportunity to work with quite a bit. Our capabilities are all structured around not only what you talked about with cloud and mobile, but we look at things like the speed, the quality, and ultimately the value to the customers.

We’re really focusing on these business results, which sometimes get lost, but I try to always go back to them. We need to focus on what's important to the business, what's important to the customer, and then maybe what's important to IT. How does all that circle around to value?

Gardner: With mobile we have many more networks, and people are grasping at how to attain quality before actually getting into production. How does service virtualization come to bear on that?

Distributed devices

DeCapua: As you look at almost every organization today, something is distributed. Their customers might be on mobile devices out in the real world, and so are distributed. They might be working remotely from home. They might have a distribution center or a truck that has a mobile device on it.

There are all these different pieces. You’re right. Network is a significant part that unfortunately many organizations have failed to notice and failed to consider, as they do any type of testing.

Network virtualization gives you that capability. Where service virtualization comes into play is looking at things like speed and quality. What if the services are not available? Service virtualization allows you to then make them available to your developers.

Where Shunra has been able to make a huge difference in these organizations, early on, is by bringing network virtualization in alongside service virtualization. We’re able to recreate their production environments at 100 percent scale -- all prior to production.

When we think about the value to the business, now you’re able to deliver the product working. So, it is about the speed to market, quality of product, and ultimately value to your customer and to your business.

Gardner: And another constituency that we should keep in mind are those all-important operators. They’re also dealing with a lot of moving parts these days -- transformation, modernization, and picking and choosing different ways to host their data centers. How do they fit into this and how does service virtualization cut across that continuum to improve the lives of operators?

DeCapua: You’re right, because as delivery has sped up through things like Agile, it's your operations team that is sitting there and ultimately has to own these applications. Service virtualization and network virtualization can benefit them by being able to recreate these in-production scenarios.

Unfortunately, there are still some reactive actions required in production today, so you’re going to have a production incident. But, you can now understand the network in production, capture those conditions, and recreate that in the test environment. You can also do the same for the services.

We now have the ability to quickly and easily recreate a production incident in a prior-to-production environment. The operations team can be part of the team that's fixing it, because again, the ultimate question from CIOs is, “How can you make sure this never happens again?”

We now have a way to quickly and confidently recreate incidents and fix them the first time, without having to change code in production on the fly. Watching that happen is one of the scariest moments I've experienced, whether on site with a customer or as an employee.

Agile iterations

Gardner: As you mentioned earlier, with Agile we’re seeing many more iterations on applications as they need to be rapidly improved or changed. How does service and network virtualization aid in being able to produce many more iterations of an application, but still maintain that high quality?

DeCapua: One of our customers actually told us that, prior to leveraging network virtualization with service virtualization, he was doing 80 percent of his testing in production, simply because he knew the shortcomings and needed to test, but had no way of recreating those conditions. Now, let's think about Agile. Let's think about how we shift and get the proven enterprise tools into developers' hands sooner and more often, so that we can drive quality early in the process.

That's where these two components play a critical role. As you look at it more specifically and go just a hair deeper: in integrated environments, how can you provide continuous development and continuous deployment? And with all the automated testing that you’re already doing, how can you incorporate performance into it? Or, as I call it, how do you “build performance in” from the beginning?

As a business person, a developer, a business analyst, or a Scrum Master, how are you building performance into your user scenarios today? How are you setting them up to understand how that feature or function is going to perform? Let's think about it as we’re creating, not once we get two or three sprints in and have our hardening sprint, where we run our performance scenario. Let's do it early, and let's do it often.
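One lightweight way to build performance in is to give each user scenario a latency budget that the automated tests assert alongside the functional checks. A minimal sketch; the `checkout` function and the budget value are stand-ins for whatever scenario is under test:

```python
import time

LATENCY_BUDGET_S = 0.5  # assumed budget for this user scenario

def checkout(cart):  # placeholder for the real feature under test
    time.sleep(0.1)
    return {"status": "ok"}

def test_checkout_meets_latency_budget():
    start = time.perf_counter()
    result = checkout(cart=["sku-123"])
    elapsed = time.perf_counter() - start
    assert result["status"] == "ok"                            # functional check
    assert elapsed < LATENCY_BUDGET_S, f"took {elapsed:.3f}s"  # performance check

test_checkout_meets_latency_budget()
```

Because the budget lives next to the functional assertion, it runs on every sprint's build rather than waiting for a hardening sprint.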

Gardner: If we’re really lucky, we can control the world and the environment that we live in, but more often than not these days, we’re dealing with third-party application programming interfaces (APIs). We’re dealing with outside web services. We have organizational boundaries that are being crossed, but things are happening across that boundary that we can't control.

So, is there a benefit here, too, when we’re dealing with composite applications, where elements of that mixed service character are not available for your insight, but that you need to be able to anticipate and then react quickly should a change occur?

DeCapua: I can't agree with you more. It’s funny, I am kind of laughing here, Dana, because this morning I was riding the metro in Barcelona and before I got to the stop here, I looked down to my phone, because I was expecting a critical email to come in. Lo and behold, my phone pops up a message and says, “We’re sorry, service is unavailable.”

I could clearly see that I had one out of five bars on the Orange network, and I was on the EDGE network. So, it was about a 2.5G connection. I should still have been able to get data, but my phone simply popped up and said, “Sorry, cannot retrieve email because of a poor data connection.”

I started thinking about it some more, and as I was engaging with other folks today at the show, I asked: why did the developer of the application find it necessary to alert me three times in a row that it couldn't get my email because of a poor data connection? Why didn't it just wait 30 seconds, 60 seconds, 90 seconds, and then reach out, query again, and pull the data down?
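What DeCapua is describing is a simple retry-with-backoff policy. A minimal sketch, where `fetch` stands in for whatever mail call the app makes; the 30/60/90-second waits mirror the ones suggested above:

```python
import time

def fetch_mail_with_backoff(fetch, waits=(30, 60, 90)):
    """Retry a flaky fetch quietly instead of alerting the user each time."""
    for wait in waits:
        try:
            return fetch()
        except ConnectionError:
            time.sleep(wait)  # back off silently; no user-facing alert yet
    return fetch()            # final attempt; let the error surface now
```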

Changing conditions

This is just one very simple example from this morning. And you’re right, there are constantly changing conditions in the world. Bandwidth, latency, packet loss, and jitter are conditions that we’re all exposed to every day. If you’re in a BMW driving down the road at 100 miles per hour, that car is now a mobile device on wheels, constantly in communication. Or if you’re riding the metro or the tube with your mobile device in your hands, there are constantly changing conditions.

Network virtualization and service virtualization give you the ability to recreate those scenarios so that you can build that type of resiliency into your applications and, ultimately, the customers have the experience that you want them to have.
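In miniature, the idea is to wrap the client call under test in a layer that injects the latency, jitter, and loss you expect in the field. This toy wrapper only gestures at what a real network-virtualization product does, and the parameter values are illustrative:

```python
import random
import time

def with_network_conditions(call, latency_s=0.3, jitter_s=0.1, loss_rate=0.01):
    """Emulate latency, jitter, and packet loss around a client call."""
    if random.random() < loss_rate:
        raise ConnectionError("emulated packet loss")
    time.sleep(latency_s + random.uniform(0, jitter_s))
    return call()

# Usage in a test (the client object here is hypothetical):
# response = with_network_conditions(lambda: client.get("/orders"))
```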

Gardner: Todd, tell us about so-called application-performance engineering solutions?

DeCapua: So, application performance engineering (APE) is something that was created within the industry over a number of years. It's meant to be a methodology and an approach. Shunra plays a role in that.

A lot of people had thought about it as testing. Then people thought about it as performance testing. At the next level, many of us in the industry have defined it as application-performance engineering. It's a lot more than just testing, because you need to dive in behind the application and understand the ins and outs. How does everything tie together?

You’d mentioned some of the composite applications and the complexities there -- and I’m including the endpoints or the devices or mobile devices connecting through it. Now, you introduce cloud into the equation, and it gets 10 times worse.

Thinking about APE, it's more of an art and a skill. There is a science behind it. However, having that APE background knowledge and experience gives you the ability to go into these composite apps, go into these cloud deployments, and leverage the right tools and the right process to be able to quickly understand and optimize the solutions.

Gardner: Why aren’t the older scripting and test-bed approaches to quality control good enough? Why can't we keep doing what we've been doing?

DeCapua: In the United States recently, October 1 of 2013, there was a large healthcare system being rolled out across the country. Unfortunately, they used the old testing methodologies and have had some significant challenges. HP and Shunra were both engaged on October 2 to assist.

Understanding APE will help you to reduce those types of production incidents. About 50 percent of our customers come to us in crisis mode, all due to inaccurate results in the test environment using the current methodologies. They say, “We just had this issue. I know you told us this was going to happen, but we really need your help now.”

They’re also thinking about how to shift and how to build performance in all these components -- just have it built in, have it be automatic, and get the results that are accurate.

Coming together

Gardner: Of course HP has service virtualization, you have network virtualization. How are they coming together? Explain the relationship and how Shunra and HP work together?

DeCapua: To many people's surprise, this relationship is more than a decade old. Shunra’s network-virtualization capability has, for a long time, been built into HP LoadRunner, and is now also being built into HP Performance Center.

There are other capabilities of ours that are built into their Unified Functional Testing (UFT) products, and we’re now building network virtualization into HP Service Virtualization as well. When you think about anything that has some sort of distribution or network involved, network virtualization needs to come into play.

Some people have a hard time initially understanding the service virtualization need, but a very simple example I often use is an organization like a bank. They’ll have a credit check as you’re applying for a loan. That credit check is not going to be a service that the bank creates. They’re going to outsource it to one of the many credit-check services. There is a network involved there.

In your test environment, you need to recreate that and take that into consideration as a part of your end-to-end testing, whether it's functional, performance, or load. It doesn’t matter.
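In miniature, virtualizing that credit-check service can be as simple as standing up a local stub that answers the way the third party would, with the network delay built in. A hypothetical sketch; the endpoint, payload, and delay are invented for illustration:

```python
import json
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

RESPONSE_DELAY_S = 0.4  # assumed latency of the third-party hop

class FakeCreditCheck(BaseHTTPRequestHandler):
    """Virtualized stand-in for an outsourced credit-check service."""
    def do_POST(self):
        time.sleep(RESPONSE_DELAY_S)  # emulate the network round trip
        body = json.dumps({"applicant": "A-1001", "score": 712}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), FakeCreditCheck).serve_forever()
```

Pointing the application's credit-check URL at a stub like this lets an end-to-end test run with the external dependency's behavior, and its latency, under your control.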

As we think about Shunra network virtualization and the very tight partnership that we've had with HP on service virtualization, as well as their ability to virtualize the users, it's been an OEM relationship. Our R&D teams sit together as they do the development, so that this is a seamless product for the HP customer, who gets the benefit and value for their business and for their customers.

Gardner: Let's talk a little bit about what you get when you do this right. It seems to me the obvious point is getting to the problem sooner, before you’re in production, extending across network variables, across other composite application-type variables. But, I’m going to guess that there are some other benefits that we haven't yet hit on.

So, when you've set up you're testing, when you have virtualization as your tool, what happens in terms of paybacks?

DeCapua: There are many benefits, some of which we have already covered, and dozens more that we could get into. One that I would highlight, pulling together all the different pieces that we've been talking about, is shorter release times.

TechValidate did a survey in February of 2013. The findings were very compelling in that they found a global bank was able to speed up their deployment or application delivery by 30 to 40 percent. What does that mean for that organization as compared to their competitor? If you can get to market 30 to 40 percent faster, it means millions or billions of dollars over time. Talk about numbers of customers or brands, it's a significant play there.

Rapid deployment

There are other things like rapid deployment. As we think about Agile and mobile, it's all about how fast we get this feature function out, leveraging service virtualization in a greater way, and reducing associated costs.

In the example that I shared, the customer was able to virtualize the users, virtualize the network, and virtualize the services. Prior to that, he would never have been able to justify the cost of rebuilding a production environment for test. Through user virtualization, network virtualization, and service virtualization, he was able to get to 100 percent at a fraction of the cost.

Time and time again we mention automation. This is a key piece of how you can test early, test often, ultimately driving these accurate results and getting to the automated optimization recommendations.

Gardner: What comes next in terms of software productivity? What should organizations be thinking in terms of vision?

Slow down

DeCapua: I see Agile, mobile, and cloud. There are some significant risks out in the marketplace today. As organizations look to leverage these capabilities to benefit their business and the customers, maybe they need to just slow down for a moment and not create this huge strategy, but go after “How can I increase my revenue stream by 20 percent in the next 90 days?” Another one that I've had great success with is, “What is that highest visibility, highest risk project that you have in your organization today?”

As I look at The Wall Street Journal and read the headlines every day, it's scary. But what's coming in the future? We can all look into our crystal balls and guess. Why not focus on one or two small things we have now, and think about how we’re mitigating our risk? Look at larger organizations that are making commitments to migrate critical applications into the cloud.

You’re biting off a fairly significant risk, which is that there isn’t a lot there to catch you when you do it wrong, and, quite frankly, nearly everybody is doing it wrong. What if we start small and find a way to leverage some of these new capabilities? We can actually do it right, and then start to realize some of the benefits from cloud, mobile, and the other channels that your organization is looking to.

Gardner: The role of software keeps increasing in many organizations. It's becoming the business itself and, as a fundamental part of the business, requires lots of tender love and care.

DeCapua: You got it. The only other bit I would add is that the World Quality Report presented this morning by HP, Capgemini, and Sogeti highlighted an increased spend from the IT budget on testing, a rather significant increase over last year.

It’s exactly what you’re saying. Organizations didn’t enter the market thinking of themselves as software houses. But time and time again, we’re seeing how people who treat what they do as a software house ultimately improve life not only for their internal customers, but also for their external customers.

So I think you’re right. The more that we can think about that and tune ourselves and make ourselves lean and focused on delivering better quality software products, we’re going to be in the winning circle more often.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
Sponsor: HP.


Wednesday, January 29, 2014

Healthcare among most promising use cases for boundaryless information flow improvement

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

Healthcare, like no other sector of the economy, exemplifies the challenges and the opportunity for improving how the various participants in a complex ecosystem interact.

The Open Group, at its next North American conference on Feb. 3, has made improved information flow across so-called boundaryless organizations the theme of its gathering of IT leaders, enterprise architects, and standards developers and implementers.

And so the next BriefingsDirect discussion explores what it takes to bring rigorous interactions, process efficiency, and governance to data and workflows that must extend across many healthcare participants with speed and dependability.

Learn now how improved cross-organization collaboration plays a huge part in helping to make healthcare more responsive, effective, safe, and cost-efficient. And also become acquainted with what The Open Group’s new Healthcare Industry Forum is doing to improve the situation.

The panel of experts consists of Larry Schmidt, the Chief Technologist at HP for the Americas Health and Life Sciences Industries, as well as the Chairman of The Open Group Healthcare Industry Forum, and Eric Stephens, an Oracle Enterprise Architect. The moderator is Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts. The views of the panelists are theirs alone and not necessarily those of their employers.]

Here are some excerpts:
Gardner: Why is healthcare such a tough nut to crack when it comes to this information flow? Is there something unique about healthcare that we don't necessarily find in other vertical industries?

Schmidt: We’ve progressed in healthcare from a delivery model based on acute care -- that is, I get sick, I go to the doctor -- to more of a managed-care model of healthcare delivery, where a doctor at times is watching and trying to coach you. Now, we’ve gotten to the point where the individual is in charge of their own healthcare.

A lot of fragmentation

With that shift, the ecosystem around healthcare has not had the opportunity to center its interactions on the individual, so we see an awful lot of fragmentation. There are many great standards among the players that exist within the ecosystem, but if you take the individual and place that individual at the center of this universe, the whole information model changes.

Then, of course, there are other things, such as technology advances and personal biometric devices, that come into play and allow us to be much more effective with the information that can be captured for healthcare. As a result, it’s the shift in focus to the individual that is giving us the opportunity to redefine how information should flow across the healthcare ecosystem.

The scenario of the individual being more in charge of their healthcare -- care of their health would be a better way to think of this -- is a way to see improvements both in information flow and in the overall cost of healthcare going forward.
Schmidt
Because the ecosystem had pretty much been focused around the doctor's visit, or the doctor’s work with an individual, as opposed to the individual’s work with the doctor, we see tremendous opportunity in making advancements in the communications models that can occur across healthcare.

Gardner: Larry, is this specific to the United States or North America, is it global in nature, or is it very much a mixed bag, market to market, as to how the challenges have mounted?

Schmidt: I think the individual being the focus of the ecosystem applies in any country across the world; it crosses national boundaries. The Open Group, of course, is a worldwide standards body. As a result, it’s a great match for us to focus the healthcare ecosystem on the individual and use the capabilities of The Open Group to advance the communication models around healthcare across all countries.

Gardner: Eric, thinking about this from a technological point of view, as an enterprise architect, we’re now dealing with a hub-and-spoke model with the patient in the middle. A lot of this does have to do with information, data, and workflow, but we’ve dealt with these things before in many instances in the enterprise and in IT.

Is there anything in particular about the technology that is difficult for healthcare, or is this really more a function of the healthcare vertical itself, with the technology ready to step up to the plate?

Information transparency

Stephens: Well, Dana, the technology is there, and it is ready to step up to the plate. I’ll start with transparency of the information. Let’s pick a favorite poster child, Amazon, in terms of the detail that’s available on my account. I can look at past orders, I can look up and see the cost of services, and I can track activity that’s taking place, both from a purchase and a return standpoint. That level of visibility that you’re alluding to exists. The technology is there, and it’s a matter of applying it.

Stephens
As to why it’s not being applied in a rapid fashion in the healthcare industry, we could surmise a number of reasons. One of them is potentially the cacophony of standards that exist and the lack of a “Rosetta Stone” that links those standards together for maximum interoperability.

The other challenge is simply the focus in healthcare on the clinical technology being used: the surgical instruments, the diagnostic tools, and such. There is focus and great innovation there, but when it comes to the plumbing of IT, that oftentimes suffers.

Gardner: So we have some hurdles on a number of fronts, but not necessarily the technology itself. This is a perfect case study for this concept of the boundaryless information flow, which is really the main theme of The Open Group Conference coming up on February 3. [Register for the event here.]

Back to you, Larry, on this boundaryless issue. There are standards in place in other industries that help foster a supply-chain ecosystem or a community of partners that work together.

Is that what The Open Group is seeking? Are they going to take what they’ve done in other industries for standardization and apply it to healthcare, or do you perhaps need to start from scratch? Is this such a unique challenge that you can't simply retrofit other standardization activities? How do you approach something like healthcare from a standards perspective?
I think it's a great term to reflect the vast number of stakeholders that would exist across the healthcare ecosystem.

Schmidt: The first thing we have to do is gain an appreciation for the stakeholders that interact. We’re using the term “ecosystem” here, and I think it’s a great term to reflect the vast number of stakeholders across healthcare. From the patient, to the doctor, to the payer organizations that handle claims, to the life-sciences organizations that produce pharmaceuticals, there are so many places where stakeholders can interact seamlessly.

So it’s about using The Open Group’s assets to first understand what the ecosystem can be, and then using The Open Group’s capabilities around things like security, TOGAF as an architecture methodology, enablement, and so on. Those are assets we can leverage to make advances within the healthcare industry.

It’s an amazing challenge, but you have to take it one step at a time, and the first step is going to be that definition of the ecosystem.

Gardner: I suppose there’s no better place to go for teasing out what the issues are and what the right prioritization should be than to go to the actual participants. The Open Group did just that last summer in Philadelphia at their earlier North American conference. They had some 60 individuals representing primary stakeholders in healthcare in the same room and they conducted some surveys.

Larry, maybe you can provide us with an overview of what they found and how that has guided how to proceed?

Participant survey

Schmidt: What we wanted to do was present the concept of boundaryless information flow across the healthcare ecosystem. So we surveyed the participants at the conference itself. One of the questions we asked was about the quality of healthcare data, as well as the efficiency and effectiveness of its flow. Specifically, the polling questions were designed to gauge the state of healthcare data quality and effective information flow.

We found that 86 percent of those participants felt very uncomfortable with the quality of healthcare information flows, and 91 percent felt very uncomfortable with the efficiency of healthcare information flows.

In the discussion in Philadelphia, we talked about why information isn’t flowing much more easily and freely within this ecosystem. We discovered that a lot of the standards that currently exist within the ecosystem are very much tower-oriented. That is, they only handle a portion of the ecosystem, and the interoperability across those standards is an area that needs to be focused on.

But we do think that, because the individual should be placed into the center of the ecosystem, there's new ground that will come into play. Our Philadelphia participants actually confirmed that, as we were working through our workshop. That was one of the big, big findings that we had in the Philadelphia conference.
We understood that 86 percent of those participants felt very uncomfortable with the quality of healthcare information flows.

Gardner: Just so our audience understands, the resulting work that’s been going on for months now will culminate with the Healthcare Industry Forum being officially announced and open for business, beginning with the San Francisco conference. [Register for the event here.]

Tell us a little about how the mission statement for the Healthcare Industry Forum was influenced by your survey. Is there other information, perhaps a white paper or other collateral out there, that people can look to, to either learn more about this or maybe even take part in it?

Schmidt: We presented first a vision statement around boundaryless information flow. I’ll go ahead and offer that to the team here: boundaryless information flow of healthcare data is enabled throughout a complete healthcare ecosystem through standardization of both vocabulary and messaging that is understood by all participants within the system. This results in higher-quality outcomes, streamlined business processes, reduction of fraud, and enablement of innovation.

When we presented that at the conference, there was broad consensus among the participants around that statement, and buy-in to the idea that we want that as our vision for a Healthcare Forum.

Since then, of course, we’ve published a white paper with the findings of the Philadelphia conference. We’re working toward the production of a treatise, which is really a study of the problem domain where we believe we can be successful and make a major impact around this individual communication flow, enabling individuals to be in charge of more of their healthcare.

Our mission will be to provide the means to enable boundaryless information flow across the ecosystem. What we’re trying to do is make sure that we work in concert with other standards bodies to recognize the great work that’s happening around this tower concept that we believe is a boundary within the ecosystem.

Additional standards

Hopefully, we’ll get to a point where we’re able both to collaborate with those standards bodies and to work within our own means to come up with additional standards that allow us to make this communication flow seamless, or boundaryless.

Gardner: Eric Stephens, back to you with the enterprise architect questions. Of course, it’s important to solve the Tower of Babel issues around taxonomy, definitions, and vocabulary, but I suppose there is also a methodology issue.

Frameworks have worked quite well in enterprise architecture and in other verticals and in the IT organizations and enterprises. Is there something from your vantage point as an enterprise architect that needs to be included in this vision, perhaps looking to the next steps after you’ve gotten some of the taxonomy and definitions worked out?

Stephens: Dana, in terms of working through the taxonomies and such, as an enterprise architect, I view it as part of a larger activity of going through a process like TOGAF and its Architecture Development Method.
In the healthcare landscape, as in other industries, there are a lot of players coming to the table who need to interact.

By using a tailored version of that, we’ll get to the taxonomy definition and the alignment of standards and such. But there’s also the need to address alignment of business processes and other application components that come into play. That’s going to drive us toward reducing the viscosity of the information that’s moving both within the enterprise and outside of it.

In the healthcare landscape, as in other industries, there are a lot of players coming to the table who need to interact, especially if you’re talking about a complex episode of care. You may have two, three, or four different organizations in play. You have the labs, the doctors, the specialized centers, and such, and all of that requires information flow.

Coming back to the methodology, I think it’s about bringing to bear an architecture methodology like the one provided in TOGAF. It’s going to aid individuals in getting a broad picture, and also a detailed picture, of what needs to be done to achieve this goal of boundaryless information flow.

Drive standardization

One of the things that we can do in the Forum is start to drive standardization, so that we have data and devices working together easily, providing medical professionals the information they need to make more timely decisions. It’s getting the right information, to the right decision maker, at the right time. That, in turn, drives better health outcomes, and it’s going to, we hope, drive down the overall cost profile of healthcare, specifically here in the United States.

Gardner: Getting back to the conference, I understand that the Healthcare Industry Forum is going to be announced. There is going to be a charter, a steering committee, program definitions, and a treatise in the works. So there will be quite a bit kicking off. I would like to hear from you two, Larry and Eric, what you will specifically be presenting at the conference in San Francisco in just a matter of a week or two. Larry, what’s on the agenda for your presentations at the conference? [Register for the event here.]

Schmidt: Actually, Eric and I are doing a joint presentation, and we’re going to talk about some of the challenges that we see ahead of us as we try to enable our vision around boundaryless information flow, specifically in healthcare.
As an enterprise architect, I look at things in terms of the business, the application, information, technology, and architecture.

The culture of producing standards in an industry like this is going to be a major challenge for us. There is a lot of individualization across this industry. So we need to have people come together, recognize that there are going to be different points of view, and come to more of a consensus on how information should flow, specifically in healthcare. Although I think any of the forums goes through this kind of cultural change.

We’re going to talk about that at the beginning of the conference as part of how we’re planning to address those challenges within the Industry Forum itself. Then, other meetings will allow us to continue some of the work that we have been doing around a treatise and other actions that will help us get started down the path of understanding the ecosystem and so on.

Those are the things that we’ll be addressing at this specific conference.

Stephens: As an enterprise architect, I look at things in terms of the business, the application, information, technology, and architecture. When we talk about boundaryless information flow, my remarks and contributions are focused around the information architecture and specifically around an ecosystem of an information architecture at a generic level, but also the need and importance of integration. I will perhaps touch a little bit on the standards to integrate that with Larry’s thoughts.

Soliciting opinions

Schmidt: Dana, I just wanted to add the other work that we’ll be doing there at the conference. We’ve invited some of the healthcare organizations in that area of the country, San Francisco and so on, to come in on Tuesday. We plan to present the findings of the paper and the work that we did in the Philadelphia Conference, and get opinions in refining both the observations, as well as some of the direction that we plan to take with the Healthcare Forum.

Obviously we’ve shared here some of the thoughts of where we believe we’re moving with the Healthcare Forum, but as the Forum continues to form, some of the direction of it will morph based on the participants, and based on some of the things that we see happening with the industry.

So, it’s a really exciting time and I’m actually very much looking forward to presenting the findings of the Philadelphia Conference, getting, as I said, the next set of feedback, and starting the discussion as to how we can make change going toward that vision of boundaryless information flow.
We’re actually able to see a better profile of what the individual is doing throughout their life and throughout their days.

Gardner: I should also point out that it’s not too late for our listeners and readers to participate themselves in this conference. If you’re in the San Francisco area, you’re able to get there and partake, but there are also going to be online activities. There will be some of the presentations delivered online and there will be Twitter feeds.

So if you can't make it to San Francisco on February 3, be aware that The Open Group Conference will be available in several different ways online. Then, there will be materials available after the fact to access on-demand. Of course, if you’re interested in taking more activity under your wing with the Forum itself, there will be information on The Open Group website as to how to get involved.

Before we sign off, I want to get a sense of what the stakes are here. It seems to me that if you do this well and if you do this correctly, you get alignment across these different participants -- the patient being at the hub of the wheel of the ecosystem. There’s a tremendous opportunity here for improvement, not only in patient care and outcomes, but costs, efficiency, and process innovation.

So, first to you, Larry. If we do this right, what can we expect?

Schmidt: There are several things to expect. Number one, I believe that the overall health of the population will improve, because individuals will be more knowledgeable about their individualized healthcare, and doctors will have the necessary information based on observations already in place, as opposed to relying only on discussion and/or interviews with the patient.

We’re actually able to see a better profile of what the individual is doing throughout their life and throughout their days. That can give doctors the opportunity to make better diagnoses. A better diagnosis, with better information -- as Eric said earlier, the right information, at the right time, to the right person -- gives the whole ecosystem the opportunity to respond more efficiently and effectively, both at the individual level and across the population. That plays well with any healthcare system around the world. So these are very exciting times.

Metrics of success

Gardner: Eric, what’s your perspective on some of the paybacks or metrics of success, when some of the fruits of the standardization begin to impact the overall healthcare system?

Stephens: At the risk of oversimplifying and repeating some of the things that Larry said, it comes down to cost and outcomes as the two main things. That’s what’s on my mind right now. I look at these very scary graphs about the cost of healthcare in the United States, and it’s hovering at 17 to 18 percent of GDP. If I recall correctly, that’s at least five full percentage points higher than in other economically developed countries in the world.

The trend on individual premiums and such continues to tick upward. Anything we can do to drive that cost down is going to be very beneficial, and this goes right back to patient-centricity. It goes right back to their pocketbook.

And the outcomes are important as well. There is a myriad of diseases and such that we’re dealing with in this country. More information and more education are going to help drive a healthier population, which in turn drives down the cost. The expenditures can then shift toward innovation, leaving room for new advances in medical technology and such to treat diseases going forward. So again, it’s back to cost and outcomes.
Anything we can do to drive that cost down is going to be very beneficial, and this goes right back to patient centricity.

Gardner: Very good. I’m afraid we will have to leave it there. We’ve been talking with a panel of experts on how the healthcare industry can benefit from improved and more methodical information flow. And we have seen how the healthcare industry itself is seeking large-scale transformation, and how improved cross-organizational interactions and collaborations seem intrinsic to moving forward, capitalizing, and making that transformation possible.

And lastly, we have learned that The Open Group’s new Healthcare Industry Forum is doing a lot now and is getting up to full speed to improve the situation.

This special BriefingsDirect discussion comes to you in conjunction with The Open Group Conference on February 3 in San Francisco. It’s not too late to register at The Open Group website and you can also follow the proceedings during and after the conference online and via Twitter.

So a big thank you to our panel: Larry Schmidt, the Chief Technologist at HP for the Americas Health and Life Sciences Industries, as well as the Chairman of the new Open Group Healthcare Industry Forum, and Eric Stephens, an Oracle Enterprise Architect. We appreciate your time, Eric.

I'm Dana Gardner, Principal Analyst at Interarbor Solutions, your host and moderator for this look at the healthcare ecosystem process. Thanks for listening, and come back next time for more BriefingsDirect podcast discussions.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group. Register for the event here.

Transcript of a BriefingsDirect podcast on how The Open Group is addressing the information needs and challenges in the healthcare ecosystem. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2014. All rights reserved.
