Friday, August 9, 2013

Here's how healthcare businesses can more efficiently manage their suppliers, purchases and processes

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.

This latest BriefingsDirect podcast, from the recent 2013 Ariba LIVE Conference in Washington, D.C., examines how the healthcare sector has unique and daunting operational efficiency and regulatory challenges.

To help transition to new levels of productivity, we explore how the game is being changed by MedAssets, a healthcare industry procurement, spend, operations, and supply-chain services company, which currently manages some $50 billion of supply spend for its customers annually. We'll learn how MedAssets, in partnership with Ariba, an SAP company, has found impactful ways to improve health provider and supplier compliance, reduce costs, and develop better accuracy.

To hear how they do it, we sat down with Rick Grodin, Senior Vice President of Product Management at MedAssets, based in Alpharetta, Georgia. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What's going on in the business of healthcare, and why is this such an important area for focusing on innovation, productivity, and cost reduction?

Grodin: We manage spend on behalf of 3,000-plus providers, both on the non-acute care side, as well as in the for-profit and not-for-profit acute care hospital community. The challenges that they're facing are quite remarkable, both from an incremental-cost perspective, whether that be supply cost or labor cost, as well as continued pressure on what are already razor-thin operating margins -- typically between 0 and 3 percent.

Significant consequences

With the Affordable Care Act coming down the pike, officially passed and certainly soon to be implemented, reimbursement per unit is going to come down materially for hospitals, and that’s going to have significant consequences on provider operations and financial health.

As millions of new people come into the healthcare system, likely to be reimbursed through the state exchanges somewhere between Medicaid and Medicare rates, that’s going to have a significant impact on that operating margin, because hospitals are already losing money at Medicaid and Medicare rates.

You're going to have a significant influx of new patient volume at lower reimbursements. Therefore, the need for the healthcare community to take out substantial cost over the next couple of years is just going to continue to intensify significantly.

Anything that we can do, as a healthcare provider partner, to help them bring down those costs from a back-office operational efficiency perspective is going to be extremely important.

Gardner: When you look toward supply chains, the networked economy, and cloud providers, what's interesting for innovation?

Grodin: For us, specifically at MedAssets, the supplier community is extremely important to us. It’s not only about how we can improve the financial health of our hospital customers, but also our supplier partners. If we can continue to work with our supplier partners to bring down their cost, they can then pass along those efficiencies and offer lower price points to our provider customers. So it’s a win-win for everybody.

Today, through MedAssets eCommerce Exchange and transaction management services, we help create a more efficient operating environment, with respect to getting purchase orders to suppliers. But because it’s through an EDI-based system, it’s basically just getting paper there more quickly, as opposed to correcting and rejecting invoices that are wrong on the front-end, so that they don’t need to be worked on the back end.

Creating a more efficient operating environment with respect to that purchase order (PO) or invoice, and basically enabling a provider and a supplier to conduct that commerce through the cloud, our exchange, or a combination thereof, will create significant operating efficiencies on both sides of the house.

Now, all of a sudden, the accounts payable (AP) clerk that’s sitting in a hospital doesn’t have to manage an exception. Today, they're constantly struggling with whether the PO price is the same as the contract price and the same as the invoice price. In many instances, it’s not.

So they need to circle back with the supplier to say, "The invoice is wrong, and you need to fix it." Or they need to circle back internally and ask why they're cutting a PO that doesn’t match the contract price, whether it’s a locally negotiated contract or a contract through a group purchasing organization.
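The manual check described here is a three-way match between PO, contract, and invoice prices. As a minimal sketch of what catching that mismatch programmatically might look like (function name, tolerance parameter, and prices are all invented for illustration):

```python
# Hypothetical three-way match: PO price vs. contract price vs. invoice price.
# An empty result means the invoice can flow straight through to payment;
# anything else is the back-office exception work described above.

def three_way_match(po_price, contract_price, invoice_price, tolerance=0.0):
    exceptions = []
    if abs(po_price - contract_price) > tolerance:
        exceptions.append("PO price does not match contract price")
    if abs(invoice_price - po_price) > tolerance:
        exceptions.append("Invoice price does not match PO price")
    return exceptions

# A clean match flows through untouched.
print(three_way_match(12.50, 12.50, 12.50))   # []

# A mismatch becomes exception-management work for the AP clerk.
print(three_way_match(12.50, 11.80, 13.10))
```

Catching this at invoice submission, rather than in accounts payable, is what moves the work from the back end to the front end.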

Added value

So, it's the ability to catch those invoice exceptions upfront. All of that exception-management effort can then be repurposed internally, whether that's reinvesting completely in patient-care delivery or redirecting those back-end FTEs to value-added activities that aren't just managing exceptions.

Gardner: Tell us a bit about the history of MedAssets, what you do, and the size.

Grodin: We touch approximately 4,200 acute-care hospitals across the country, as well as over 120,000 non-acute care providers. We have two operating segments within the organization.

The one that I primarily focus on for product management is our Spend and Clinical Resource Management group. Within this segment, we deliver value to providers through our group purchasing organization, technology-enabled services, an analytics platform, and procure-to-pay solutions that are all aimed at reducing cost on behalf of our providers.

The other element that we bring to the table is through our Advisory Solutions group, which is a number of consulting practices that can address operational improvement opportunities or other areas of cost that are not impacted just through procurement or through a group purchasing organization.

As an example, we have a phenomenal group that focuses on clinical utilization and bringing down physician preference-item costs. We have a group that focuses on permanent labor and agency labor. As most people are aware, labor cost is approximately 50 to 60 percent of total cost for a hospital. It’s a significant area of opportunity.

Finally, we bring lean transformation and process-improvement capabilities to healthcare through another practice in our Advisory Solutions group. There have been tremendous benefits brought through Lean to other industries, and we're trying to bring that to the healthcare environment as well.

We have our Spend and Clinical Resource Management segment that manages over $50 billion in spend, but we also have another large operating segment where we provide revenue cycle management services.
So we have a whole suite of technologies that can impact everything -- the front, middle, and back portion of the revenue cycle -- as well as the Revenue Cycle Services group, which provides both consulting services and a shared-service environment for taking on revenue-cycle activities within a hospital environment.

Gardner: How about the relationship between MedAssets and Ariba? Do you utilize their services in their cloud activities, technologies, and processes, and then apply that?

Two fronts

Grodin: Our relationship with Ariba is on two fronts. We're currently in the process of implementing their Procure-to-Pay solution for our own internal use within MedAssets, and our team is extremely excited about how things are going so far.

I was mainly focused on working with the Ariba team on putting together the strategic partnership that we announced in early April and that we're extremely excited about. We wanted to partner with the leader in global e-commerce and there was no doubt that that was the Ariba team.

We’d like to bring the capabilities that are proven in other industries, where Ariba has basically gone to market and been extremely successful, and bring those similar cloud-based and network activities into healthcare.

As I alluded to before, we have our own eCommerce Exchange and Transaction Management services, as well as a partnership on the front-end for requisitioning through Prodigo.

Historically, we've done a very good job of working with the buyers in hospitals to requisition an item and get that purchase order out through our eCommerce Exchange and Transaction Management services to the vendor. Where we’ve fallen short is in helping our suppliers and providers get that invoice back most efficiently.

What's great about the Ariba Network is that we can link our eCommerce Exchange with the Ariba Network to enable a more efficient transaction process. We enable providers to get a PO out through our eCommerce Exchange or through the Ariba Network electronically, and then enable suppliers to send that invoice back electronically through their exchange or through invoice conversion services, which is basically taking the paper invoice and converting it into an electronic invoice.

Multiple benefits come out of that. It’s a perfect complement to what MedAssets has already been doing in the healthcare community with our provider clients, but taking it to the next level. The other thing that’s extremely exciting about what Ariba brings to the table is the fact that they have over one million vendors on their network.

Today, we do commerce through our exchange with about 350 traditional medical/surgical vendors, whereas Ariba has perfected the world that they call "indirect spend" and we call "purchased services." That's a huge unlock both for us and for the provider community.

We believe that purchased services spend is just as big as the spend that goes through the GPO, if not even bigger. Typically, that has been a very hard area for providers to get their arms around, because they haven’t had access to the data.

The main reason for that is that most of the purchased services spend is a non-PO transaction. So it’s very hard to get to that granular line-item level detail to break down that spend, whether it’s by contract category or specific vendor. You can’t manage anything if you can’t see it.

Significant value

So we're extremely excited about leveraging the Ariba Network and working with them to capture 100 percent of provider spend, not just med/surg and PO-backed spend, but all of the spend that’s coming out of the hospital. The value this can bring to the provider community is significant.

Gardner: How do MedAssets and Ariba come together to offer new capabilities into the market?

Grodin: This is where I get very excited about the potential of what Ariba and MedAssets can do together in the marketplace. As I mentioned before, we have our eCommerce Exchange, which is EDI-based, and we can get a certain portion of invoices back electronically through our exchange.

There are other offerings in the marketplace that are very similar, but really what they do is just get a paper invoice back into the provider’s hands more quickly. But you don’t know if that invoice is correct. If it’s not correct, there is a whole lot of inefficiency in managing that exception on the backend.

Ariba has created a smart invoicing capability, because it’s a network, as opposed to just an EDI pipe. Those invoices that are inaccurate can be rejected on the front-end, so they never even get to the provider until they are accurate.

The best part about it is that the rules engine -- and I believe you can customize up to 70 different rules -- is dictated by the providers themselves. It's not a built-in, one-size-fits-all type of solution. Depending on the unique needs of that provider, they can customize that rules engine to reject inaccurate invoices back to the supplier in real-time.

It’s the whole notion of garbage in, garbage out. We're preventing the garbage from coming through, which is then creating those efficiencies in accounts payable. That is absolutely something that’s going to be unique to healthcare and doesn’t exist today, and which again will create tremendous operational efficiencies on the back end.
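The provider-dictated rules engine described above can be sketched as a simple table of named checks. Everything below (field names, rule names, the two sample rules) is invented for illustration and is not Ariba's actual rule set or API:

```python
# Toy sketch of provider-configurable invoice rules: each provider registers
# its own checks, and an invoice failing any rule is rejected back to the
# supplier before it ever reaches accounts payable.

def require_po_number(inv):
    return bool(inv.get("po_number"))

def price_matches_po(inv):
    return inv.get("invoice_price") == inv.get("po_price")

def validate_invoice(invoice, rules):
    failed = [name for name, rule in rules.items() if not rule(invoice)]
    return ("accepted", []) if not failed else ("rejected", failed)

provider_rules = {  # customized per provider, not one-size-fits-all
    "has PO number": require_po_number,
    "invoice price equals PO price": price_matches_po,
}

status, reasons = validate_invoice(
    {"po_number": "PO-123", "po_price": 99.0, "invoice_price": 104.5},
    provider_rules,
)
print(status, reasons)   # rejected ['invoice price equals PO price']
```

Because the rules live in a plain mapping, each provider can add, drop, or reorder checks without touching the engine itself -- the one-size-fits-none problem goes away.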

Because of smart invoicing and the overall transaction efficiency that’s created through the exchange and the network, we're going to be able to enable providers to get invoices in a ready-to-pay status much more quickly. Industry best practice is five days. We've seen metrics, where it could take anywhere between 20 and 40 days to get that invoice approved for most healthcare providers today.

Dynamic discounting

Our relationship with Ariba will enable us to leverage Ariba’s working capital management solutions as well. They’ve got something that they refer to as Dynamic Discounting, which creates the ability to have an ad-hoc negotiation for further cost-of-goods-sold reductions between a provider and a supplier.

Because of the increased visibility into where an invoice is sitting and what the status of that invoice is between suppliers and providers -- something that doesn’t exist in healthcare today -- a supplier can go in and see that an invoice is sitting in a ready-to-pay status.

They can then offer an incremental discount to the provider, so that if the provider has additional cash on hand and it's better used to drive additional discounts as opposed to sitting and getting short-term interest, that can make a tremendous amount of sense.

So, there's also the ability to optimize prompt-pay discounts, where appropriate, because we're getting those invoices in a ready-to-pay status much more quickly. If there's a two percent discount for paying within 10 days, and the average invoice isn't being approved for 20 days, all of a sudden I've missed that window. Even if I have cash on hand, I can't leverage it.

Even better, if I've missed that prompt-pay window, but am willing to pay on day 20, instead of day 30 or day 40, all of a sudden there is value coming back to the provider as opposed to no incremental value for paying early. It’s just another lever or another tool in the toolkit that we can use to drive further cost reductions in our partnership with Ariba.
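The arithmetic behind those two levers is easy to make concrete. A hypothetical sketch assuming classic "2/10 net 30" terms and, for the dynamic case, a linear slide between the discount window and the due date (the actual Ariba dynamic-discounting mechanics aren't specified here):

```python
def prompt_pay_savings(amount, pay_day, discount=0.02, window=10):
    """Classic prompt-pay terms: full discount inside the window, none after."""
    return amount * discount if pay_day <= window else 0.0

def dynamic_discount(amount, pay_day, discount=0.02, window=10, due=30):
    """Slide the discount linearly from the window to the due date, so
    paying on day 20 still returns some value instead of none."""
    if pay_day <= window:
        return amount * discount
    if pay_day >= due:
        return 0.0
    remaining = (due - pay_day) / (due - window)
    return amount * discount * remaining

invoice = 100_000.0
print(prompt_pay_savings(invoice, 20))   # 0.0 -- the window was missed entirely
print(dynamic_discount(invoice, 20))     # 1000.0 -- half the discount remains
```

This is why approval speed matters: with invoices stuck in approval for 20 to 40 days, the static window yields nothing, while a dynamic arrangement still returns value for any payment ahead of the due date.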

The benefits are significant in a couple of areas, making that back-office function, specifically in AP, more efficient, more scalable, and being able to repurpose the work that was being done in that department and in other back-office administrative areas. Also, the ability to reinvest those resources in front-line patient care delivery.
As the reimbursement models are changing in healthcare, they're getting more and more focused on clinical quality, safety, etc. That's where a hospital's core focus needs to be, not in the back-office. It needs to be with the patient. Certainly there are significant FTE and operating efficiency benefits created by this partnership, but what we are particularly excited about is more from a contract-compliance perspective.

Through our eCommerce Exchange and our transaction management service, as well as what Ariba brings to the table through PO and invoice automation, invoice conversion services, and Invoice Professional, their workflow tool, we'll have the ability to ensure that folks are buying on contract where they should be, and that they're paying the right price. We do a good job today of ensuring that the PO price matches the contract price, but where we have been challenged in the past is the ability to bring that invoice price in.

Significant benefits

It’s going to bring significant benefits, because in some of the research that we're doing with very sophisticated health systems, they're finding that they may only be buying on contract 30-40 percent of the time. So a contract is only as good as its use. If it’s just sitting in a drawer and nobody is accessing it, all the great work that’s been done by their sourcing team or our sourcing team is for naught.

The ability to do all of that in real time, to take that PO price and match it up against our contract price and against the invoice price, is going to ensure not only that they're buying on contract, but that they're paying the right price.

Gardner: What's in store for further eking out productivity gains?

Grodin: As our relationship continues to blossom with Ariba, I'm sure we’ll be having conversations around their spend visibility and other analytic tools that they can bring to the table. Within MedAssets, we have our own analytics tools, including service-line analytics, spend analytics, and pharmacy analytics.

For us, the true unlock is the ability to get access to purchasing and spend data, which is where we are very excited. We capture a lot of financial and spend data today, but this purchasing and indirect-spend area is really an untapped horizon where the data and the technology that Ariba is going to bring, in combination with our analytics, people and process, will provide significant benefit.

We currently manage about $5 billion of spend through our National Procurement Center, which is the largest shared services operation of its kind in healthcare today. That combination of people, process, and technology is absolutely going to unlock new opportunities in healthcare from a spend-management and cost-reduction perspective.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP Company.


Thursday, August 8, 2013

T-Mobile swaps manual cloud provisioning for HP-powered services portal, gains lifecycle approach across multiple platforms

Listen to the podcast. Find it on iTunes. Read a transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series explores how wireless services provider T-Mobile US, Inc. improved how it delivers cloud- and data-access services to its enterprise customers.

It's a story about walking back manual cloud provisioning services and moving to a centralized service portal to manage and deploy infrastructure better. In doing so, they have improved their service offerings across multiple platforms and enabled a lifecycle approach to delivering advanced cloud services.

To learn how T-Mobile did it, we recently sat down with Daniel Spurling, Director of IT Infrastructure at T-Mobile US, Inc. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What are the major IT requirements driving your mobile carrier business right now?

Spurling: To answer that question, I'm going to frame up a little history and go into where T-Mobile has come from in the last few years and what has driven some of that business shift in our space.

As many know, in 2011 AT&T attempted to acquire T-Mobile. When that dissolved, there was a heavy recognition that we needed to drive greater innovation on our business side. We had received a generous donation, we’ll call it, of $4 billion and a lot of spectrum. We drove a lot of innovation on our network side, on the RF side, but the IT side also had to evolve.

We, as an IT group, were looking at where we needed to start evolving within the infrastructure space. We recognized that manual processes are a very rudimentary way of delivering servers or compute storage, etc. This was not going to meet the agility needs that our business was exhibiting. So we started on this path of driving a significant cultural shift, and mindset shift, as well as the actual technological shift in the infrastructure space -- with cloud as one of the core anchor points within that.

Gardner: When you decided that cloud was the right model to gain this agility, what were some of the problems that you faced?

Not a surprise

Spurling: We recognized that cloud is almost like a progression of where we've been going within IT. It is not like it is a surprise.

We've been trying to figure out how to enable more self-service. We've been trying to figure out how to drive greater automation. We've been trying to figure out how to utilize those ubiquitous network access points, the ubiquitous services, external or internal of the company, but in a more standardized and consolidated fashion.

It wasn't so much that we were surprised and said, "Oh, we need to go cloud." It was more along the lines of recognizing that we needed to double down our efforts on those key tenets within cloud. For T-Mobile, those key tenets really were how to drive greater standardization and consolidation, enable greater automation, and then provide self-service capabilities to our customers.

Gardner: Were there particular types or sets of applications that you identified as being the first and foremost to go into this new model?

Spurling: That's a great question. A lot of people look at cloud as either an application play or an infrastructure play, because of the ecosystem that existed when cloud was first emerging. We started more on the infrastructure side. So we looked at it and said, "How do we enable the application growth that you are talking about? How do we enable that from an infrastructure perspective?"

And we saw that we needed to focus more on the infrastructure side and enable our partners within our IT teams -- our development partners, our application support partners, etc. -- to be able to transform the application stacks to be more cloud-capable and cloud-aware.

We started giving them the self-service capability on the infrastructure side, started on that infrastructure-as-a-service (IaaS) type capability, and then expanded into the platform-as-a-service (PaaS) capability across our database, application, and presentation layers.

Gardner: The good news with cloud is that you do away with manual processes and you have self-service and automation. The bad news is that you have self-service and automation, and they can get very complex and unwieldy, and like with virtual machines (VMs), sometimes there is a sprawl issue. How did you go about this in such a way that you didn’t suffer in terms of these new automation capabilities?

Spurling: I'm going to break it into two parts. Look at the complexity of an IT organization today, especially for a company of T-Mobile's size. T-Mobile has 46,000 employees and around 43 million customers. It's not a small entity. The complexity that we have in the IT space mirrors that large complexity that we have in the business space.

Tough choices

We recognized on the infrastructure side, as well as in the application, test and support sides, that we cannot automate everything. We had to really drive heavy consolidation and standardization. We had to make some tough choices about the stuff that we were -- for lack of a better term -- going to pare off our infrastructure tree: different operating systems, different hardware platforms, and data centers that we were going to shut down.

We had to drive that heavy rationalization across all of the towers within our IT space, in order to enable the automation you talked about, without creating a significant amount of complexity.

On the sprawl question though, we made a conscious decision that we were going to allow or permit some level of sprawl, because of the business agility that was gained.

When you look at server sprawl, there are concerns around licensing, computer utilization, and stranding resources or assets. There are a lot of concerns around sprawl, but when you look at how much business benefit we got from enabling that agility -- or that speed to deliver, and speed to market -- the minimal amount of sprawl that was incurred was worth it from a business perspective.

We still try to manage it. We still make sure that we're utilizing our compute storage data centers, etc., as efficiently as possible, but we've almost back-burnered the sprawl issue in favor of enabling business.

Gardner: So with multiple platforms -- Windows, Linux, AIX, Unix -- and multiple data centers across large geographies, how can you do that without a larger staff? Do you find such centralization possible, or is it pie in the sky?

Spurling: It’s a bit of both. When you look at how much work there is to enable an automation solution, you almost have to be -- and my team hates it when I use the term -- ambidextrous. On one hand, you have to continue to deliver for your customers, but you need to prioritize what you are doing in that maintenance space and shave off a bit to invest in the innovation space.

You're going to have to make some capital investments, and maybe some resource investments as well, to drive that innovation the next step forward. But you almost have to do it within the space that you are coexisting in that maintains and innovates at the same time, because you can't drop one in favor of the other.

We did have to make some tradeoffs on the maintenance side, in order to take some qualified and some bright resources that we are excited about in our burgeoning cloud future, and then invest those resources to continue driving us forward in the technological and also cultural space. We made a significant cultural change too.

Gardner: That was going to be my next question. When it comes to making these transitions in technology, platform, and approach, I often hear companies say they have a lagging cultural shift. What did that involve in terms of your internal IT department making that shift to more of a service bureau supporting your business -- like a business within a business?

Buggy whips

Spurling: A lot of times when you talk about evolution in either business context or kind of an academic context, you hear the story about the buggy whip. The buggy whip, back in the day, was something that everybody knew. About 125 years ago, everybody probably knew someone who made buggy whips or who sold buggy whips. Today, no one knows anybody who makes or sells buggy whips.

The buggy whip industry went away, but a brand-new industry emerged in the automobile space. In the same context, the old IT way of manually building servers, provisioning storage, and loading applications may be going away, but there is a brand-new environment that's been created in a higher value space.

As to the cultural shift you talked about, we had to make significant investments in our leadership to be able to help set a vision, show our employees where that vision intersected with their personal careers and how they continue to move on.

Then, you lead and help them to do that kind of emotional change. I'm not a server builder anymore. I'm now a consultant with the business on delivering a value, I'm now an automation engineer, or I'm now delivering future value and looking at new products that we can drive further automation into. That cultural change is ongoing, and it’s certainly not done.

Gardner: And given that this transition and transformation is fairly broad in terms of its impact, you don’t just buy this out of a box. How did the combination of people, process, technology and outside your knowledge come together?

Spurling: We knew that, from a leadership perspective, we weren’t going to get the time-to-market that we wanted by training our resources, helping them learn and make mistakes. We had to rely on professional services. So we partnered with HP very heavily to drive greater, instant-on services in our cloud solution.

On the technology side, we have everybody under the sun from a tooling perspective, but we do have a significant investment in HP software. We made a decision to move forward with the HP Cloud Suite -- pieces like HP Operations Orchestration (HPOO) and Cloud Service Automation (CSA) -- building out those platforms to be the overarching cloud solution that, for lack of a better term, created that federation of loosely coupled systems that enabled cloud delivery.

With those tools, and with HP professional services, and with our own internal team members, we created a tactical team that went out there and "attacked cloud," delivered that, and continues to deliver that now.

Paybacks

Gardner: Can you look at results, either business, technological, or financial from going to a cloud model, provisioning with that automation, advancing the technology, making those cultural hurdles? What do you get for it?

Spurling: When we look at the cloud opportunity, and the agility that has been gained -- the ability to deliver things in an almost immediate fashion -- one of the byproducts that we may not exactly have intended was that our internal customers have demanded a lot of complexity, or a lot of significant specific systems.

When we said, you can get that significant system, whatever it is, in a couple of weeks -- or you can get this cloud solution that delivers 95 percent of what you asked for in a couple of hours, almost always those things that we thought were "hard" requirements melted away. The customer said, "You know what, I'm okay with this 95-percent deal because it gets me to my business objective faster."

We're realizing now that that complexity may not have been required all along, because we are able to deliver so quickly. The byproduct of that is that we're seeing massive amounts of standardization that we could never have thought would organically be possible.

From an agility perspective, there's time to market. We had a significant launch with the iPhone, a big event in T-Mobile’s history, probably one of the largest launches that we've had. That required a significant amount of investment in our back-end systems because of the load that was put on our activation and payment systems.

Because of the investments we made in standardization and automation, our cloud portfolio, we were able to build out that capacity in record time -- in days, versus the weeks or months it would have taken two years previously. We were able to support our business with very little lead time, and the results were very impressive for us as a business. So those two areas, that standardization and consolidation and that rapid ability to deliver on business objectives, are the two key ones that we take away.

Gardner: What does the future hold?

Spurling: Cloud is just one step in continuing to evolve IT to be more of a business partner.

That's really how we are looking at it. We're making great strides in that space. In every single area, we're setting ourselves up to be closer to the business, to move that self-service capability. I'm not just talking about a webpage. I am talking about being able to consume an IT service as a business leader in a simple way. We're moving that closer and closer to the business, and we're becoming less and less of a gatekeeper for technology, which is super-exciting for us.

We're recognizing that the investments we made in our PaaS plays, in test automation, and in some of the development platforms are starting to pay off. We're developing cloudware applications that are now scalable in a way that we've never seen before, without massive human intervention.

So we're able to tell our business, "Go ahead and have a great marketing idea, and let’s move it forward. Let’s try that thing out. If it doesn't work, it’s not going to hurt IT. It's not going to take 18 months to deliver that." We're seeing IT able to respond about as fast as the business wants to go.

We are not there yet today. It’s a continuing journey, but that’s our trajectory in the next six to 12 months, and then who knows what’s going to happen, but we are excited to see.
Listen to the podcast. Read a transcript or download a copy. Sponsor: HP.

You may also be interested in:

Tuesday, August 6, 2013

HP Vertica General Manager Colin Mahony on the next generation of anywhere analytics platforms

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

This next edition of the HP Discover Performance Discussion Series welcomes Colin Mahony, General Manager at HP Vertica, on this first day of the inaugural HP Vertica Big Data Conference in Boston.

It's been well over two years since HP acquired Vertica, and the analytics platform has become a pillar of HP's recently announced HAVEn Initiative. Now Vertica is poised to advance beyond its MPP column store database origins into a next generation anywhere analytics platform. New Vertica benefits include ease in cloud deployments and appliance delivery, as well as new features coming later this year for improved speed, lower-cost and greater ease in data input and access.

Learn how Mahony is guiding the future of the HP Vertica Analytics Platform, and how users are finding new ways to leverage its unique speed and attributes. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Follow Colin on Twitter.] [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: One of the things that strikes me about the market nowadays is that there seems to be a sense of tradeoffs going on when organizations are trying to pick their big-data engine or platform. They have value on one side, but it’s opposed by value on the other. They can’t have everything. One size does not fit all.

So how are you at Vertica able to help people deal with these tradeoffs that they're facing when it comes to a next-generation data platform?

Mahony: Vertica was founded on the premise that one size does not fit all. Using a single OLTP transactional database to do everything, including analytics, just doesn't make a lot of sense.

If you think about the areas where people have to trade off, usually it’s scale for performance, or analytics functionality for performance. One of the things that I've spent a lot of time looking at, especially over the last couple of years, is some of the alternative platforms, not just for analytics, but for all of the different data needs.

You can take something like Hadoop as an example. Hadoop really is a distributed file system, with capabilities to run rudimentary analytics and to transform and process data. But I think what people love about Hadoop is that it's really easy to load data into it. You don't have to define the schema or anything.

Instead of schema-on-write at load time, it’s schema-on-read at query time. People like that. They also like the scalability, and at least the perception that it is free. On the database side, what people love is that you're going to get really good performance, because the data is structured. If you're using a next-generation MPP platform like Vertica, you'll get the performance and the scalability.
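The distinction Mahony draws can be sketched in a few lines of Python. This is an illustrative sketch only, not Vertica or Hadoop code; the CSV data and function names are invented. Schema-on-write validates and types records at load time, so bad data fails fast; schema-on-read keeps the raw text and applies the schema only when a query runs:

```python
import csv
import io

RAW = "id,amount\n1,10.50\n2,oops\n3,4.25\n"

# Schema on write: parse and validate at load time; bad rows are caught here.
def load_typed(raw):
    rows = []
    for rec in csv.DictReader(io.StringIO(raw)):
        try:
            rows.append({"id": int(rec["id"]), "amount": float(rec["amount"])})
        except ValueError:
            pass  # a real database would reject the load at this point
    return rows

# Schema on read: store the raw lines untouched; apply types only at query time.
def query_raw(raw):
    total = 0.0
    for rec in csv.DictReader(io.StringIO(raw)):
        try:
            total += float(rec["amount"])  # schema applied now, per query
        except ValueError:
            continue  # malformed values surface at read time instead
    return total

typed = load_typed(RAW)
print(len(typed))      # 2 -- rows that survived load-time validation
print(query_raw(RAW))  # 14.75
```

The tradeoff is visible even at this scale: schema-on-read makes loading trivially cheap, but every query pays the parsing cost again and data-quality problems surface late, which is exactly why a structured store delivers better query performance.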

Hadoop-like

We've been doing a lot of work in areas like making it easier to get the data into the platform, doing more with it, making it seem much more like a Hadoop-like environment. You can look at our past releases and see that there's been a lot of work done on that and we continue to make those investments.

One thing has been consistent at Vertica since the beginning. What we focus on is to make it really easy for people to get information onto the platform. Then, we make sure we continue to deliver new capabilities, performance, and functionality within the platform.

We make sure we’re enabling our customers and partners to deploy Vertica anywhere and everywhere, whether it’s cloud, appliances, software, or the like. Those are the three tenets of the company. It’s all around this notion of making data matter and helping people make better decisions that lead to better outcomes with superior information.

There's so much that can be done in this space, but I think the key for us is to focus on the things that we know we do really well. The good news is that it's such a large space with so many demands that we know we can make a huge impact without trying to take on the world. We know we can make a huge impact in what we’re doing.

I think you'll continue to see some interesting developments along the lines of what I'm describing, and it's very much in line with where we've been.

Gardner: Do more and more IT functions and business functions begin and end with big data? It seems to be at the center of so many things.

Exponential growth

Mahony: It is. To go back to the founding of Vertica [in 2005], I remember when Mike Stonebraker was giving the early presentations on the need for it. He talked a lot about the exponential growth of data and how that was outpacing Moore’s law and other hardware laws. So much information was being created that there was no way that just using more parallelized hardware was going to be able to address the issue.

The state of the union back then was there was no such thing as "big data." But I think Mike, as a visionary, knew what was going to happen in the industry. And it has happened.

It wasn’t a long time ago, but I remember that I was trying to find our first sample dataset that was over a terabyte and we had a difficult time finding it. When we would talk to the early customers, they looked at us like we were crazy when we were asking about a terabyte.

We have an easy time now finding terabytes of data. The state of the union today is that what's driving so much around big data is obviously the volume, variety, and velocity that we talk about often, but what's really driving those three things is human information, whether it's social media, tweets, or expressive content that’s just so prevalent right now, as well as machine information.

If you look at the traditional structured database market by any number, it’s a small percentage of the amount of data that’s out there. The strength of Vertica, and really the strength of HP overall, is that we have the best assets for the unstructured human information in Autonomy, as well as the best assets when it comes to machine information and large data.

That has some structure. It’s semi-structured information, but it’s not your traditional transaction system. The power of all of that data comes together when you can have an engine that applies some structure to it and then is able to deliver the analytics that the organization needs. It's both IT as well as line of business, and even this new category we often talk about, which is the data scientist.

One of the great things about this show here is that we’ve got Billy Beane of Moneyball fame as our keynote speaker. The reason that we wanted Billy to come speak here is that Moneyball is exactly what’s happening right now in the world when it comes to big data.

You have the data scientist or the statistician, you have the line-of-business folks, and you have IT. They all have a part to play in the success of how information is used in companies. By bringing them together, and by making the software that much easier for them to come together and solve these problems, you can create very real and differentiated value within an organization.

So Moneyball is exactly what’s happening, certainly in corporate America, but also in government and in many other institutions that want to leverage information to be more efficient and create a competitive advantage.

Gardner: Colin, what about the notion of big data as an agent for business transformation? We've been hearing about this for 30 years. It's been a big part of the academic work in business schools. Process re-engineering has evolved into balanced scorecards. Getting more detailed information in real time about customers and the marketplace probably has as much or more of an opportunity to transform businesses than just about anything else that's happened over the past 20 years.

More than technology

Mahony: It's an enormous opportunity for business transformation, and definitely the whole is greater than the sum of the parts. What makes companies really successful with information is not trying to boil the ocean, not trying to do a traditional enterprise data warehouse project that's going to take 24 months, if you're lucky, 36 most likely.

They’ll end up with some monolithic inflexible platform that will probably be outdated by the time it gets deployed. What is making a lot of companies successful is they find a particular use, they find a problem area that they want to drill down on, and they mobilize to do it.

For that, they need a solution that is quickly deployed, but also has that capability to become something much larger. Whether it's Vertica, Talend, or any of the other portfolios that we offer, we strive to make sure that somebody can get up and running quickly, whether it's Autonomy and human information analytics, Vertica and machine data or other types of transactional structured data.

The most important thing is that you find that business case, you focus on it, and you prove it very quickly. There's something we refer to as “Time to Terabyte,” which for Vertica is typically less than a month. You get a return on investment (ROI) in less than a month on the investment you made. If you prove that out, then everybody in the organization is happy: the line of business, the technology folks in IT, even the statisticians and data scientists.

From there, you start expanding the project, and that's exactly how we win most of our customers. We very rarely go in and say, "Buy an enterprise license for our product across the company." We certainly do those, but more typically we get into a business unit, we find the acute pain, and we solve that problem.

What they're betting on is the ability for us to expand and for them to expand in this platform. That's why we are, on the one hand, all about the platform and the integration, but on the other hand, not about to lose the flexibility and the modularity of what we do, because that's also a huge differentiator for HP's portfolio.

I think that this is a wonderful time in the world of business transformation, and I think, unlike what has been talked about for the last 30 years, you now have the data that can back it up and prove it in real-time to the organization.

That's the big difference. You gave the balanced scorecard as an example. If you look at the balanced scorecard methodology, you can take that methodology, drill down into a thousand fields of detail, and get that information in real time. That's the opportunity here, and that's why I think this market is so huge.

It's not just about faster speeds and feeds. It's about fundamentally stepping back and asking how we're running this business. What assets, especially information assets, do we have that could dramatically boost productivity, to the same extent that computers boosted productivity when they were first introduced? That's the goal that everybody is looking for when it comes to information.

Gardner: Tell our listeners and readers a bit more about yourself and your background.

Mahony: I've been with Vertica since the beginning. In fact, long before Vertica, my background was always databases. I've always loved computer science, and I had a minor in computer science in my undergraduate degree. In my first job out of school, I was working with databases for civilian US Government clients, getting a lot of information published to the web in the earliest days of the web.

I had a couple of other roles, but they were always very technology focused. Then I got my MBA on the business side and went into venture capital for seven years. That's where I met Mike Stonebraker, the founder of Vertica.

I just loved the idea, everything I knew about databases and the challenges of traditional database and everything I knew about the new world order of information -- at the time we didn’t even talk about the term big data -- it just seemed to align really well.

So I decided to leave the dark side of venture capital, and I jumped into something that I have been incredibly passionate about. If you look at that lifecycle, even my own background with Vertica and where we’ve come, it’s just been great. The timing was great, and, as always, it takes a lot more than just great technology and great people.

Gardner: It's been well over two years since HP acquired Vertica and, as we begin the inaugural 2013 Big Data Conference, how would you best characterize how Vertica has evolved since its founding back in 2005?

Mahony: Yes, this is our first user conference. It’s ironic that we've never had one before, but I think this is also a testament to the scale that HP can bring. We have wanted a user conference since the beginning. Obviously, it takes some critical mass to get there, which we now have, but it also takes the support of an organization that knows how to run these conferences and understands the value of them.

And we’ve evolved quite a bit. It’s been a busy couple of years here, certainly post the HP acquisition. But I think at a high level, we’ve really shifted and expanded from being an MPP column store, very narrowly-focused database company, really into an analytic platform company.

With that comes several developments, obviously on the product side, but also as an organization, going through that maturation in terms of being able to operate at a global scale across the spectrum of what you would expect an analytics provider to offer.

Gardner: And how do you characterize the difference between a store and a platform? Are there many ecosystem players or is this an organic evolution of your capabilities or both?

Mahony: It’s both, the ecosystem and the tools that you interact with. Of course, we support a very rich and vibrant ecosystem of business-intelligence (BI) tools; extract, transform, and load (ETL) tools; and other types of management tools. But it's not just the ecosystem around it; we're also looking within our own products.

So it's adding a lot of capabilities like backup and recovery, additional analytics capabilities beyond standard SQL through the SDKs that Vertica supports, the ability to run procedural and other types of code within the product, and being able to express things like MapReduce beyond what a traditional database system would do.
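To make the MapReduce point concrete, here is a generic Python sketch -- not Vertica's actual SDK or syntax, and the data is invented -- showing how the same aggregation can be expressed as map, shuffle, and reduce steps, which is logically the same work a column store does for a SQL GROUP BY:

```python
from collections import defaultdict

events = [("click", 1), ("view", 1), ("click", 1), ("buy", 1)]

# Map: emit (key, value) pairs; here the input is already in that shape.
mapped = [(kind, n) for kind, n in events]

# Shuffle/partition: group the emitted values by key.
groups = defaultdict(list)
for key, value in mapped:
    groups[key].append(value)

# Reduce: fold each group down to a single result.
counts = {key: sum(values) for key, values in groups.items()}

print(counts)  # {'click': 2, 'view': 1, 'buy': 1}
# Logically equivalent SQL: SELECT kind, COUNT(*) FROM events GROUP BY kind;
```

The value of supporting both styles in one platform is that the declarative SQL form and the procedural MapReduce form can run against the same data, rather than forcing a copy into a separate system.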

Since the founding of the company, we've tried to take the best part of the database world and the best parts of the SQL world, but address the most challenging issues that traditional databases have had. So whether it is scalability or it’s being able to run things beyond SQL or it’s just the performance, those are all the things that we have taken into account while we built Vertica, and I think we have always been on the fast track to a platform.

We knew it would be a journey, and we knew that building a product and a platform from the bottom up is not an easy thing. But we also knew that once we got there, once we sort of crossed that chasm, if you will, then all those decisions we made in the beginning about this product and building an engine from the bottom up would pay off.

Platform modularity

For probably the last year, that's where we’ve been. Right now, we're seeing that it’s easy to add functionality to the platform because of the modularity of the platform, and we can add that functionality without giving up any of the performance.

For me, it’s probably the most exciting time. Being part of HP offers us so many things that make it a lot easier to become a platform, not only on the development side, but a much greater ecosystem, a global scale, being able to support customers globally 24/7.

Gardner: It’s only been a few months since the HP Discover 2013 Conference in Las Vegas, where the HAVEn Initiative was announced. This puts Vertica in a very prominent place among other HP properties, technologies, platforms, and approaches to solving this big-data issue. Recap for us, if you would, what HAVEn is and why Vertica forms such an important pillar of this larger HP initiative.

Big-data lake

Mahony: What companies are looking for is this notion of the big-data lake. To me, it can mean many different things, but at the end of the day, companies want to take all the information assets that they have and they want to put them into a safe place, but a place where access to that information can be used by many different constituencies, whether it's IT, line of business, or data scientist.

So the notion of having a safe place, a harbor, or a port is what we announced as HP HAVEn, which is HP’s big data platform. It is primarily for analytics, but it can be used for just about anything when it comes to information and data.

What's so important about information right now is that there are different constituencies in companies that want to use it. First of all, they want to capture all the information -- not just structured, not just unstructured, but 100 percent of their information.

They want to get it to a place where they can leverage it and use it for a lot of different use cases, but the first part is getting that information into the right place. For us, that's the first of HAVEn's three components: the connectors.

We have over 700 connectors as part of HAVEn, coming from Autonomy and from our Enterprise Security group and its ArcSight Logger. These can handle human information, extreme log information, or traditional structured database information.

Step one is the connectors, to get the data in. Step two is to put that data into the best engine for that data. Vertica obviously is one component, but you also have the Autonomy IDOL engine, the ArcSight Logger engine, and open-source technologies like Hadoop, which is actually the "H" in HAVEn. So we’ve got a place to put the information.

Step three is any N number of applications. What I'm seeing happening in the industry right now is just like we went from mainframe to client-server, and client-server to LAN, we're in a period now where applications are being developed. They're certainly web-based and distributed, but they're also analytical in nature.

They're driven by vast volumes of information, and they close the loop, meaning that information about the experiences happening within an application -- if you're driving a car, or whatever it might be -- is passed, closed loop, back to a system that can then optimize the experience. That is creating a new class of applications.

For that new class of applications, you need the platform to be able to drive them. What we're bringing together in HAVEn is Hadoop, Autonomy, Vertica, Enterprise Security core assets, and any number of applications.

At Discover, we announced some of our own internal applications, which are powered by the HAVEn platforms. We announced our HP Analytics offering, which is built using Hadoop, Vertica, Enterprise Security, and Autonomy assets.

About community

We're making some of our own applications, but this is about the community and enabling people to build a new set of applications that can use these components to really change how people interact with their data.

That’s HAVEn, and I am always careful to point out to people that HAVEn itself is not a product, but it's a platform and it’s a broader platform than the one that is just Vertica, Autonomy, or Enterprise Security. It’s a platform where 1+1+1+1+1, instead of equaling 5, should equal 8 or 10 or 12, and that's the goal. Of course, it's also a roadmap into areas that each of these components are working on to bring those closer together. So it’s exciting.

One thing I've certainly noticed over the years is that the shiny object for which a customer chooses Vertica may look very different across our customers. For some, it's the price. For some, it's the performance and the scale at massive volumes. For some, it's a particular analytic function or pattern-matching capability. And for others, it's something entirely different.

But what's so exciting, especially about this conference, is that no matter what on-ramp they take, they tend to find a lot of the other capabilities once they get on. Hopefully, here at the conference, we're going to accelerate some of that just by getting our customers and our partners together in an environment where they can share stories.

Cloud and hybrid

Gardner: For our last item today, I wonder if we could take out our crystal ball apparatus and try to do a little blue-sky thinking. One of the other big trends these days of course is cloud computing and hybrid models for the distribution of workloads for applications, but also for data. I'm wondering, as we go down this journey over the next year or two, how do big data and cloud computing come together?

Mahony: As I mentioned in terms of the three things that we are focused on, number one is to make it easy to get data into the platform. Number two is to do a lot more with the platform, so that there are better analytic capabilities, better pattern matching, and better analytics packs on top of it.

Number three is make sure you can deploy Vertica everywhere, and in the everywhere and anywhere categories, the cloud is certainly the first name that comes to mind. That is absolutely the future of computing. In some ways, I guess, it's the past, but it's interesting how the past repeats itself.

We do run Vertica in hosted environments like the Amazon cloud. We're in a private beta on the HP Cloud Service. So there are definitely offerings and developments that have been underway here at Vertica for a while.

We embrace that, and to us, it's not mutually exclusive. What you described is the hybrid environment, where you can run certain things locally and burst up to the cloud for other workloads, especially if you're looking to pull in some quick processing power and storage. That's going to be the future, and that's the way, just like with any other utility, that we're going to consume some of these capabilities.

This is one of the strengths of a company the size and scale of HP. We have these offerings, whether it's software only, appliance, or cloud. We have the ability to deliver however the customer wants it, and we can also provide not only the flexible technologies, but the flexible business capabilities to make that happen with a lot of ease.

It's an exciting time. If you look at the pillars of HP, we have cloud, mobility, big data, and security. All four of those pillars tie well into one another, because they're all related. Of course, all these activities that are happening up on the cloud are generating a lot of information, information that will be analyzed, I'm sure, in many different ways.

So it's something that kind of feeds on itself, the same way mobility does. All of that is a good thing for the analytics space, wherever it is. The final thing I would say is that the most important thing about analytics is that you do want it embedded into the various applications, just like when you are driving a car, you just want the GPS system to tell you where you are going.

Analytics is the same. You want it within the context of whatever it is that you are doing. Given that so many things are going to be served off the cloud, it's natural that that's the place that will host some of the analytics as well.

So it's an incredibly exciting time, and we're looking forward to having many more of these user conferences and are certainly going to enjoy the rest of the show this week. [Follow Colin on Twitter.]
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:


Wednesday, July 31, 2013

Businesses can remain dependable only if they get a full grip on risk and complexity, says The Open Group CEO Allen Brown

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

This latest BriefingsDirect discussion from The Open Group Conference earlier this month in Philadelphia explores the essential role of standards in an increasingly complex and unpredictable world.

From risks around cybersecurity to supply chain concerns to fast-changing trends around cloud computing, the pace of change and pressures on businesses to adjust well have never been higher. To gain a fuller grip on such risk and complexity, The Open Group is shepherding a series of standards and initiatives to provide better tools for understanding and managing true operational dependability.

BriefingsDirect sat down with the President and CEO of The Open Group, Allen Brown, at the July conference to gather an update on the efforts. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What are the environmental variables that many companies are facing now as they try to improve their businesses and assess the level of risk and difficulty?

Brown: There are a lot of moving targets. We're looking at a situation where organizations are having to put in increasingly complex systems. They're expected to make them highly available, highly safe, highly secure, and to do so faster and cheaper. That’s kind of tough.

Gardner: One of the ways that organizations have been working toward a solution is to have a standardized approach, perhaps some methodologies, because if all the different elements of their business approach this in a different way, we don’t get too far too quickly, and it can actually be more expensive.

Perhaps you could paint for us the vision of an organization like The Open Group in terms of helping organizations standardize and be a little bit more thoughtful and proactive toward these changed elements?

Brown: With the vision of The Open Group, the headline is "Boundaryless Information Flow." That was established back in 2002, at a time when organizations were breaking down the stovepipes or the silos within and between organizations and getting people to work together across functions. They found, having done that, or having made some progress toward it, that the applications and systems were built for those silos. So how can we provide integrated information for all those people?

As we have moved forward, those boundaryless systems have become bigger and much more complex. Now, boundarylessness and complexity are giving everyone different types of challenges. Many of the forums or consortia that make up The Open Group are all tackling it from their own perspective, and it’s all coming together very well.

We have got something like the Future Airborne Capability Environment (FACE) Consortium, which is a managed consortium of The Open Group focused on federal aviation. In the federal aviation world they're dealing with issues like weapons systems.

New weapons

Over time, building similar weapons is going to be more expensive; inflation happens. But the changing nature of warfare is such that you've got a situation where you have to produce new weapons. You have to produce them quickly, and you have to produce them inexpensively.

So how can we have standards that make for more plug-and-play? How can the avionics within the cockpit of whatever airborne vehicle be more interchangeable, so that they can be adapted more quickly and do things faster and at lower cost? After all, cost is a major pressure on government departments right now.

We've also got the challenges of the supply chain. Because of the pressure on costs, it’s critical that large, complex systems are developed using a global supply chain. It’s impossible to do it all domestically at an acceptable cost. Given that, countries around the world, including the US and China, are all concerned that what they're putting into their complex systems may contain tainted or malicious code or counterfeit products.

The Open Group Trusted Technology Forum (OTTF) provides a standard that ensures that, at each stage along the supply chain, we know that what’s going into the products is clean, the process is clean, and what goes to the next link in the chain is clean. And we're working on an accreditation program all along the way.

We're also in a world in which, when we mention security, everyone is concerned about being attacked, whether it’s cybersecurity or other areas of security, and we've got to concern ourselves with all of those as we go along the way.

Our Security Forum is looking at how we build those things out. The big thing about large, complex systems is that they're large and complex. If something goes wrong, how can you fix it in a prescribed time scale? How can you establish what went wrong quickly and how can you address it quickly?

If you've got large, complex systems that fail, it can cost human lives, as it did with the BP oil disaster at Deepwater Horizon or with the Space Shuttle Challenger. Or it could be financial. In many organizations, when something goes wrong, you end up giving away service.

An example that we might use is at a railway station where, if the barriers don’t work, the only solution may be to open them up and give free access. That could be expensive. And you can use that analogy for many other industries, but how can we avoid that human or financial cost in any of those things?

A couple of years after the Space Shuttle Challenger disaster, a number of criteria were laid down for making sure you had dependable systems, you could assess risk, and you could know that you would mitigate against it.

What The Open Group members are doing is looking at how you can get dependability and assuredness through different systems. Our Security Forum has done a couple of standards that have got a real bearing on this. One is called Dependency Modeling, and you can model out all of the dependencies that you have in any system.

Simple analogy

A very simple analogy is that if you are going on a road trip in a car, you’ve got to have a competent driver, have enough gas in the tank, know where you're going, have a map, all of those things.

What can go wrong? You can assess the risks. You may run out of gas or you may not know where you're going, but you can mitigate those risks, and you can also assign accountability. If the gas gauge is going down, it's the driver's accountability to check the gauge and make sure that more gas is put in.
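That road-trip reasoning can be sketched in code. This is an illustrative sketch only; the class names, fields, and example entries below are invented for the purpose and are not taken from the Dependency Modeling standard itself.

```python
# Illustrative sketch of dependency modeling for the road-trip example.
# Class and field names are hypothetical, not from the standard.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    mitigation: str

@dataclass
class Dependency:
    goal: str    # what the trip depends on
    owner: str   # who is accountable (accountability must be accepted)
    risks: list = field(default_factory=list)

trip = [
    Dependency("competent driver", owner="driver"),
    Dependency("enough gas in the tank", owner="driver",
               risks=[Risk("may run out of gas",
                           "driver checks the gauge and refuels")]),
    Dependency("know the route", owner="navigator",
               risks=[Risk("may not know where you're going",
                           "carry a map or plan the route in advance")]),
]

# Walk the model: every risk has a mitigation and an accountable owner.
for dep in trip:
    for risk in dep.risks:
        print(f"{dep.owner} is accountable for '{dep.goal}': "
              f"{risk.description} -> {risk.mitigation}")
```

The point of the exercise is that nothing in the model is left without an accountable owner and a mitigation.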

We're trying to get that same sort of thinking through to these large, complex systems. As you develop or evolve large, complex systems, you want to build in this accountability, build in an understanding of the dependencies and of the assurance cases that you need, and have ways of identifying anomalies early so you can prevent failures. If a failure does occur, you want to minimize the stoppage and, at the same time, minimize the cost and the impact, and, more importantly, make sure that that failure never happens again in that system.

The Security Forum has done the Dependency Modeling standard. They have also provided us with the Risk Taxonomy. That's a separate standard that helps us analyze risk and go through all of the different areas of risk.
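The Risk Taxonomy standard factors risk into components such as loss event frequency and loss magnitude. A back-of-the-envelope sketch in that style might look like the following; the scenario and all of the numbers are invented for illustration.

```python
# Back-of-the-envelope risk estimate in the style of a risk taxonomy
# that factors risk into loss event frequency and loss magnitude.
# The scenario and numbers are hypothetical.

def annualized_loss(loss_event_frequency, loss_magnitude):
    """Expected annual loss = events per year * loss per event."""
    return loss_event_frequency * loss_magnitude

# Hypothetical: station barriers fail twice a year, and each failure
# means a period of free access costing about $40,000 in fares.
risk = annualized_loss(loss_event_frequency=2, loss_magnitude=40_000)
print(f"annualized loss exposure: ${risk:,}")
```

Decomposing risk this way lets you compare mitigations by how much they reduce either the frequency or the magnitude of a loss event.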
You can't just dictate that someone is accountable. You have to have a negotiation.

Now, the Real-time and Embedded Systems Forum has produced the Dependability through Assuredness standard of The Open Group, which brings all of these things together. This has been a wonderful international endeavor, bringing a lot of work from Japan together with the work of folks in the US and other parts of the world. It's been a unique activity.

Dependability through Assuredness depends upon having two interlocked cycles. The first is a Change Accommodation Cycle, which says that, as you look at requirements, you build out the dependencies, you build out the assurance cases for those dependencies, and you update the architecture. Everything has to start with architecture now.

You build in accountability, and accountability, importantly, has to be accepted. You can't just dictate that someone is accountable. You have to have a negotiation. Then, through ordinary operation, you assess whether there are anomalies that can be detected and fix those anomalies by new requirements that lead to new dependabilities, new assurance cases, new architecture and so on.

The other cycle that’s critical in this, though, is the Failure Response Cycle. If there is a perceived failure or an actual failure, there is understanding of the cause, prevention of it ever happening again, and repair. That goes through the Change Accommodation Cycle as well, to make sure that we update the requirements, the assurance cases, the dependability, the architecture, and the accountability.

So the plan is that with a dependable system through that assuredness, we can manage these large, complex systems much more easily.
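The interplay of the two cycles might be sketched as follows. The function names, the minimal system state, and the example events are assumptions for illustration only, not part of the standard; the sketch just shows how a failure feeds back into change accommodation so the same failure cannot recur unaddressed.

```python
# Minimal sketch of the two interlocked cycles described above.
# Function names and data shapes are hypothetical illustrations.

def change_accommodation(system, new_requirement):
    """A new requirement flows into dependencies, assurance cases,
    the architecture, and (accepted) accountability."""
    system["requirements"].append(new_requirement)
    system["dependencies"].append(f"dependency for {new_requirement}")
    system["assurance_cases"].append(f"assurance case for {new_requirement}")
    system["architecture_version"] += 1  # everything starts with architecture
    return system

def failure_response(system, failure):
    """Understand the cause, repair, and prevent recurrence by feeding
    a new requirement back through the change-accommodation cycle."""
    cause = f"root cause of {failure}"
    system["repaired"].append(failure)
    # Prevention becomes a new requirement, closing the loop.
    return change_accommodation(system, f"prevent {cause}")

system = {"requirements": [], "dependencies": [], "assurance_cases": [],
          "architecture_version": 0, "repaired": []}

system = change_accommodation(system, "detect gate-barrier anomalies")
system = failure_response(system, "barrier failure at station")

print(system["requirements"])
print("architecture revisions:", system["architecture_version"])
```

Note how the failure-response path ends by invoking change accommodation: every repair also updates the requirements, assurance cases, architecture, and accountability, which is the interlocking the standard describes.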

Gardner: Many of The Open Group activities have been focused at the enterprise architect or business architect levels, and with these risk and security issues, you're focusing on chief information security officers or governance, risk, and compliance (GRC) officials or administrators. It sounds as if the Dependability through Assuredness standard shoots a little higher. Is this something that board-level leadership should be thinking about, and is this something that reports to them?

Board-level issue

Brown: In an organization, risk is a board-level issue, security has become a board-level issue, and so has organization design and architecture. They're all up at that level. It's a matter of the fiscal responsibility of the board to make sure that the organization is sustainable, and to make sure that they've taken the right actions to protect their organization in the future, in the event of an attack or a failure in their activities.

The risks to an organization are financial and reputational, and those risks can be very real. So, yes, they should be up there. Interestingly, when we're looking at areas like business architecture, sometimes that might be part of the IT function, but very often now we're seeing it reporting through the business lines. Even in governments around the world, the business architects are very often reporting up to business heads.

Gardner: Here in Philadelphia, you're focused on some industry verticals: finance, government, and health. We had a very interesting presentation this morning by Dr. David Nash, who is the Dean of the Jefferson School of Population Health, and he had some very interesting insights about what's going on in the United States vis-à-vis public policy and healthcare.

One of the things that jumped out at me was, at the end of his presentation, he was saying how important it was to have behavior modification as an element of not only individuals taking better care of themselves, but also how hospitals, providers, and even payers relate across those boundaries of their organization.
One of the things about The Open Group standards is that they're pragmatic and practical standards.

That brings me back to this notion that these standards are very powerful and useful, but without getting people to change, they don't have the impact that they should. So is there an element that you've learned and that perhaps we can borrow from Dr. Nash in terms of applying methods that actually provoke change, rather than react to change?

Brown: Yes, change is a challenge for many people. Getting people to change is like leading a horse to water: will it drink? We've got to find methods of doing that.

One of the things about The Open Group standards is that they're pragmatic and practical standards. We've seen, in many of our standards, that where they apply to a product or service, there is a procurement pull-through. The FACE Consortium, for example, is tied to a $30 billion procurement, which means that this is real and true.

In the case of healthcare, Dr. Nash was talking about the need for boundaryless information sharing across the organizations. This is a major change and it's a change to the culture of the organizations that are involved. It's also a change to the consumer, the patient, and the patient advocates.

All of those will change over time. Some of that will be social change, where the change is expected and becomes a social norm. Some of it will come as generations develop: the younger generations are more comfortable questioning the authority they perceive in healthcare professionals, and with modifying the behavior of those professionals.

The great thing about the healthcare service very often is that we have professionals who want to do a number of things. They want to improve the lives of their patients, and they also want to be able to do more with less.

Already a need

There's already a need. If you want to make any change, you have to create a need, but in healthcare there is already a pent-up need; people see that they want to change. We can provide them with the tools and the standards that enable them to do that, and standards are critically important, because everyone is using the same language.

It's much easier for people to apply the same standards if they are using the same language, and you get a multiplier effect on the rate of change that you can achieve by using those standards. But I believe that there is this pent-up demand. The need for change is there. If we can provide them with the appropriate usable standards, they will benefit more rapidly.

Good folks

The focus of The Open Group for the last couple of decades or so has always been on horizontal standards, standards that are applicable to any industry. Our focus is always about pragmatic standards that can be implemented and touched and felt by end-user consumer organizations.

Now, we're seeing how we can make those even more pragmatic and relevant by addressing the verticals, but we're not going to lose the horizontal focus. We'll be looking at what lessons can be learned and what we can build on. Big data is a great example of the fact that the same kind of approach of gathering the data from different sources, whatever that is, and for mixing it up and being able to analyze it, can be applied anywhere.

The challenge with that, of course, is being able to capture it, store it, analyze it, and make some sense of it. You need the resources, the storage, and the capability of actually doing that. It's not just a case of, "I'll go and get some big data today."

I do believe that there are lessons learned that we can move from one industry to another. I also believe that, since some geographic areas and some countries are ahead of others, there's also a cascading of knowledge and capability around the world in a given time scale as well.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.
