Friday, May 31, 2013

Why should your business care about Platform 3.0? A Tweet Jam

On Thursday, June 6, The Open Group will host a "tweet jam" examining Platform 3.0 and why the concept has great implications for businesses.

Over recent years a number of technologies -- cloud, mobile, big data, social -- have emerged and converged to disrupt the way we engage with each other in both our personal and business lives. Most of us are familiar with the buzzwords, including "the Internet of things," "machine-to-machine (M2M)," and "consumerization of IT," but what do they mean when they act in concert? Can we still treat them as separate? And how can we best react?

I was early to recognize this confluence as more than the sum of its parts, back in 2010. Gartner, too, was early to recognize this convergence of trends as representing a number of architectural shifts, which it called a "Nexus of Forces." This nexus was presented both as an opportunity, in terms of innovation in new IT products and services, and as a threat to those who do not keep pace with the evolution, which could render current business architectures obsolete.

Understanding opportunities

Rather than tackle this challenge solo, The Open Group is working with a number of IT experts, analysts and thought leaders to better understand the opportunities available to businesses and the steps they need to take to benefit and prosper from Platform 3.0, rather than fall behind. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

So please join the burgeoning Platform 3.0 community on Twitter on Thursday, June 6 at 9 a.m. PT/12 p.m. ET/5 p.m. GMT for a tweet jam, moderated by me, Dana Gardner (@Dana_Gardner) of BriefingsDirect, to discuss and debate the issues and implications around Platform 3.0.

Key areas that will be addressed during the discussion include: the specific technical trends (big data, cloud, consumerization of IT, etc.), and ways businesses can use them – and are already using them – to increase their business opportunities.

All are welcome, including The Open Group members and interested participants from all backgrounds, to join the one-hour online chat session and interact with our panel's thought leaders. To access the discussion, please follow the #ogp3 and #ogChat hashtags during the discussion time.


Friday, May 24, 2013

User-centric tools go a long way toward reaping the most benefit from big data projects, says IDG survey

Big data is proving to be like the proverbial 800-pound gorilla -- big and powerful, but difficult to tame and control.

While nearly 90 percent of business and IT leaders agree that big data can be useful in making intelligent business decisions, only one-third of companies have implemented big-data initiatives. That's the finding from a recent International Data Group (IDG) survey, sponsored by Kapow Software.

Furthermore, more than 50 percent of survey respondents said that they had only lukewarm success with getting big data to deliver value in terms of competitive advantage, differentiation, top-line growth, strategic insights, employee productivity and effectiveness, among other business metrics.

Respondents reported that big-data projects take too long, cost too much, and aren't delivering a sufficient return on investment (ROI). Part of this is because these projects require expensive consultants or hard-to-find data scientists. Yet, while this lag in adoption continues, the mass of data from a variety of sources is growing.

Among the barriers to drawing value out of big data, according to survey respondents, are:
  • High cost and complexity. Many business leaders believe such projects require a prohibitively expensive infrastructure. Sixty percent said projects take 18 months or more to complete.

  • Employee workarounds. Respondents said employees often take matters into their own hands but, without effective solutions, resort to manual aggregation. This is putting pressure on IT to automate these efforts.
  • Poor data accessibility. Nearly half of IT leaders said they find it difficult to find, access, and integrate the right information, which is often unstructured and spread among a wide variety of sources.
  • Lacking skills and tools. Big data is proving to be inaccessible to employees without special training, again putting pressure on IT to pave the way.
Despite the current low reliance on big data, adoption is expected to increase over the next 12 months, as business and IT leaders turn to user-centric tools -- such as those provided by Kapow Software. With such tools, IT leaders anticipate improved productivity and a better relationship with business leaders.

Business leaders surveyed are looking for a variety of benefits from an increased use of big data. They say the following are either "critical" or "very important":
  • More informed business decisions - 80 percent
  • Increased competitive advantage - 71 percent
  • Improved customer satisfaction - 68 percent
  • Increased end-user productivity - 62 percent
  • Improved security or compliance - 60 percent
  • New products and services - 55 percent
  • Monitoring and responding to social media in real time - 33 percent.
For more information on the survey results, go to http://www.slideshare.net/Kapowmarketing/kapow-idg-bigdataidg051513 or http://www.kapowsoftware.com/. [Disclosure: Kapow Software is a sponsor of BriefingsDirect podcasts.]


Thursday, May 23, 2013

Ariba LIVE roadmap debrief: Solutions manager Chris Haydon on cloud data analytics, AribaPay, mobile support, and managed services procurement

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.

This latest BriefingsDirect podcast, from the 2013 Ariba LIVE Conference in Washington, D.C., explores Ariba's product and services roadmap and future strategy insights unveiled by Ariba, an SAP company, at the recent user event.

Our guest is Chris Haydon, Vice President of Solutions Management for Procurement, Finance, and Network at Ariba, here to explain the latest conference news, and to offer insights into how Ariba will be broadening its services procurement management value, mobile push and AribaPay roll-out.

The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba, an SAP company, is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Where are we now with Ariba in terms of some of the big news at LIVE?

Haydon: We have some really exciting innovation coming in the near-term to Ariba in a couple of areas. First, let's talk about Network RFQ or the Spot Buy. We think this is part of the undiscovered country, where, according to The Hackett Group, 40-plus percent of spend is not sourced.

By linking this non-sourced spend to the Ariba Network, we think we're going to be able to address a large pain-point for our buyers and our sellers. Network RFQ or Spot Buy is a near-term solution that we announced at LIVE, and we're bringing that forward over the next six months.

The next exciting innovation is at the other end of the process. That's a solution we call AribaPay. AribaPay is what we think is a game-changing solution that delivers the rich remittance and invoice information that's only available from the Ariba Network through a secure, global payment infrastructure.

Down market

Gardner: It seems to me, Chris, that you're going to the mid-market. You're creating some services with Spot Buy that help people in their ad-hoc, low-volume purchasing.

You're providing more services types of purchasing capabilities, maybe for those mid-market organizations or different kinds of companies like services-oriented companies. And, you're also connecting via Dell Boomi to QuickBooks, which is an important asset for how people run small businesses. Are we expanding the addressable market here?

Haydon: We are, and that's an excellent point. We look at it two ways. We're looking to address all commerce. Things like Spot Buy, AribaPay, services procurement, and estimate-based services are really addressing the breadth of spend, and that applies at the upper end and the lower end.

There are important pieces that you touched on, especially with our Dell Boomi partnership and the announcement here for QuickBooks. We want to make it accessible to grow the ecosystem and to make the collaboration across the network as frictionless as possible.

With Dell Boomi announcing QuickBooks support, suppliers with that back-end system specifically can comply with all the collaborative business processes on the Ariba Network, and we're really only just getting started.

There is a massive ecosystem out there with QuickBooks, but when we have a look around, there are more than 120 prominent backend systems. So it's not just the SAPs, the Oracles, the JD Edwards, and Lawsons. It's the QuickBooks and the Intuits. It's the Great Plains of the world.

Think about it as back-end agnostic. We want our customers and their partners, on both the buy side and the sell side, to make their own choices. It's really their own choice of deployment.
If they want to take an integrated business-to-business (B2B) channel, they can. If they want to come to a portal, they can. If they want to have an extract that goes into their own customized system, they can do that as well, or all of the above at the same time, and really just taking that process forward.

Gardner: How does AribaPay work? Is this a credit card, a debit card? Is this a transactional banking interface?

Brand new

Haydon: Number one, it's brand-new. First, let's talk about the problems that we had, and how we think we're going to address them. More than 40 percent of payments in corporate America are still check based. Check-based payments present their own problems, not just for the buyers, but also for the sellers. They don't know when they're going to get paid. And when they are getting paid, how do they reconcile what they're actually getting paid for?

AribaPay is a new service. It's not a P-Card. It's leveraging a new type of electronic payment through an ACH-styled channel. It enables buyers to take 100 percent of their payments through the Ariba Network. It lets suppliers opt in and move from the paper-based payment channel, the check, to an electronic channel that is married with their rich remittance information. That marriage is the interesting value proposition for the network.
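To make the reconciliation point concrete, here is a minimal sketch of matching electronic payments back to open invoices by a shared identifier. The data structures and field names are hypothetical, for illustration only; this is not AribaPay's actual design.

```python
# Hypothetical sketch of remittance-to-invoice reconciliation.
# Records and field names are illustrative, not AribaPay's actual design.
from dataclasses import dataclass

@dataclass
class Invoice:
    invoice_id: str
    amount: float

@dataclass
class Payment:
    remittance_id: str   # the rich remittance detail that travels with the payment
    invoice_id: str
    amount: float

def reconcile(invoices, payments):
    """Pair each electronic payment with its invoice and flag exceptions."""
    open_invoices = {inv.invoice_id: inv for inv in invoices}
    matched, exceptions = [], []
    for pay in payments:
        inv = open_invoices.pop(pay.invoice_id, None)
        if inv and abs(inv.amount - pay.amount) < 0.01:
            matched.append((inv, pay))
        else:
            exceptions.append(pay)   # short-pays, unknown invoices, and so on
    return matched, exceptions, list(open_invoices.values())  # still-unpaid invoices

matched, exceptions, unpaid = reconcile(
    [Invoice("INV-100", 2500.00), Invoice("INV-101", 740.50)],
    [Payment("RMT-9001", "INV-100", 2500.00)])
print(len(matched), "matched;", len(exceptions), "exceptions;", len(unpaid), "unpaid")
```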

So that’s the value. We think it's very differentiated. We're going to be leveraging a large financial institution provider who has great breadth and penetration, not just here in the United States, but globally as well, and that's via Discover Financial Services.

We announced this at LIVE this month, and I know they're as excited as we are. Discover has the wherewithal to bring the credibility and the scale to the payments channel, while Ariba has the credibility in the scale of the supply base and the commercial B2B traffic. We think that that one plus one equals three and is a game changer in electronic payment.

Gardner: Moving on to the future or vision that you're painting, what should we expect in the roadmap of the next two or three years for the Ariba Network?

Haydon: We're really excited about the Ariba Network and we have four or five themes. One piece of big news is that we're getting into and supporting supply chain and logistics processes, and adding that level of collaboration. Today, we have 10 or 11 types of collaborations that you can do on the Ariba Network, like an order, an invoice, and so on.

Over the next several releases, we're going to be more than doubling that amount of collaboration that you can do between trading partners on the network. That’s exciting, and there are things like forecasting and goods receipt notices.

I won't go into the specifics of every single transaction, but think about doubling the amount of collaboration that you can do and the visibility into that. The ability to apply your own business rules and logic to those collaborations is massive.

The second thing we're doing on the network is adding a new spend category, which we call services invoicing. This is estimate-based spend and this is another up market, down market, broad approach, in which there are a whole heap of services.

This is more of an estimate-based style spend where you don’t necessarily know the full cost of an item until you finish it. Whether you're drilling an oil well or constructing a building, there are variations there. So we're adding that capability into the network.

User interface

Another area is what we call Network 2.0, and this is extending and changing not just the user interface, but extending and adding more intrinsic core capabilities to the network. Ariba has a number of network assets and we think it's important to have a single network platform globally. It's the commerce internet, the network.

So our Network 2.0 program is a phased delivery of extending the core capabilities of the Ariba Network over the next couple of years, in terms of order status requests and results, goods receipt notices, advanced shipping notices, and more invoice capability, and just growing that out globally.

Last but not least is just more and more supply collaboration, focusing on the ability for suppliers to more easily respond, comply, and manage their profiles on the Ariba Network.

Gardner: The Ariba applications themselves, what should we expect there?

Haydon: We have a whole raft of capability coming across that whole application suite. We can break that into two or three areas. In our sourcing, contract management, supplier information management, and supplier performance management suite, we're doing functionality enhancements, and one of the exciting pieces is spend visibility.

In the spend visibility area, we're going to be leveraging the SAP In-Memory technology HANA. What we are doing there is early for us, but there are some very exciting, encouraging results in terms of the speed and the performance we've heard about from SAP. Running our own technology on that and seeing the results is exciting for us and will be exciting for our customers.

As we move more into our procurement suite, we're introducing a new, consumer-like look and feel to our catalog and our search engine. The Amazon-style search touches more users than anything else, because, as you can imagine, that's how they want their requisitioning tools to work. So making that a friendly UI, and taking that UI or user experience through to the other products, is fantastic.

One of the other most exciting areas for us is services procurement, a very large investment for us. Services procurement is our application to be able to support temporary or contingent labor, statement of work or consulting labor, print, marketing and also light industrial. This really is one of the underpinning differences for Ariba, and this is where we're bringing it together.

We're not just building applications any more. We're building network-centric applications, or network-aware applications. It means that when we launch our new services procurement solution, not only are we going to have a brand-new, refreshed, modern user interface, which is very important.

Differential insights

We're also going to be able to leverage the power of the Ariba Network to provide differential insights into standard, day-to-day services procurement on-boarding. That means looking at average labor rates in the area for the type of service you're buying, and using the network intelligence to give you advice, to give you instruction, and to help you manage exceptions on the network.
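As a rough illustration of the rate-benchmarking advice Haydon describes, the sketch below compares a proposed services rate against a network-wide average for the same category and region. All names, rates, and thresholds are hypothetical; this is not Ariba's implementation.

```python
# Hypothetical rate-benchmark check; names and figures are illustrative only.
from statistics import mean

# Imagined "network intelligence": observed hourly rates by category and region.
NETWORK_RATES = {
    ("java-developer", "washington-dc"): [82.0, 95.0, 88.0, 101.0, 90.0],
}

def rate_advice(category, region, proposed_rate, tolerance=0.15):
    """Advise on a proposed rate relative to the network average for that service."""
    benchmark = mean(NETWORK_RATES[(category, region)])
    deviation = (proposed_rate - benchmark) / benchmark
    if deviation > tolerance:
        return (f"Proposed ${proposed_rate:.2f}/hr is {deviation:.0%} above the "
                f"network average of ${benchmark:.2f}/hr; consider renegotiating.")
    return (f"Proposed ${proposed_rate:.2f}/hr is within {tolerance:.0%} of the "
            f"network average of ${benchmark:.2f}/hr.")

print(rate_advice("java-developer", "washington-dc", 115.0))
```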

Gardner: What's really interesting to me is that all of your vision so tightly aligns with the mega trends of today, from cloud to mobile to big data. Tell me a little bit about the potential.

Haydon: Absolutely. When we think about the networked economy, the networked apps, the network-centric apps, the network itself, one should be able to connect any demand generating or receiving system. We touched on that with Dell Boomi, but it's seamless integration across the piece. We want to be comprehensive, which is adding more collaboration.

Critical mass

The interesting thing about this collaboration is that it starts driving, at some level, a critical mass of data. The trend is that the network is intelligent. It's actually able to piece together not just the transaction itself, but who you are. We're quite excited, because this is the massive differentiator of the network. You talked about apps. We have not just the transactional data, but also the master data, and we can take in other sources of information.

That could be weather, location, stock reports, SEC filings, Dun & Bradstreet ratings, whatever you like, to intersect.
So this data plus knowledge gives you information. With SAP, there's a very exciting technology, SAP Supplier InfoNet, that is able to leverage network data. Today, it has over 160 feeds. It's smart, meaning it can automatically take those feeds and contextualize them.

And that's the real thing we're trying to do -- knowing who the user is, knowing the business process they are trying to execute, and also knowing what they are trying to achieve. And it's bringing that information to the point of demand to help them make actionable, intelligent, and sometimes predictive decisions.

Where we would like to go is this. Heaven forbid there is another tsunami, but let's just work through that use case. You get a news alert that there is a tsunami in Japan again, a terrible event. What if you knew that, and what if 80 percent of your core raw-material inputs came from there? Just that alert notifying you that you might well have a supply problem. What are you going to do?

And by the way, here are three or four other suppliers who can supply this material to you, and they're available on the network. What is that worth? Immeasurable.
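A rough sketch of that disruption-alert logic might look like the following. The supplier records, spend shares, and commodity names are entirely hypothetical, and this is not the Ariba Network's actual implementation.

```python
# Hypothetical sketch of intersecting a disruption alert with supplier spend data.
# All supplier records and thresholds are illustrative only.
SUPPLIERS = [
    {"name": "Nagoya Materials KK", "region": "Japan",   "commodity": "resin", "spend_share": 0.80},
    {"name": "Gulf Polymers LLC",   "region": "USA",     "commodity": "resin", "spend_share": 0.05},
    {"name": "Baltic Resins OU",    "region": "Estonia", "commodity": "resin", "spend_share": 0.15},
]

def supply_risk_alert(event_region, commodity, threshold=0.25):
    """Flag exposure when affected suppliers carry too much of a commodity's spend,
    and suggest alternative suppliers outside the affected region."""
    exposed = [s for s in SUPPLIERS
               if s["region"] == event_region and s["commodity"] == commodity]
    exposure = sum(s["spend_share"] for s in exposed)
    if exposure < threshold:
        return None   # not enough spend at risk to raise an alert
    alternatives = [s["name"] for s in SUPPLIERS
                    if s["region"] != event_region and s["commodity"] == commodity]
    return {"message": f"{exposure:.0%} of {commodity} spend sits with suppliers in {event_region}.",
            "alternative_suppliers": alternatives}

print(supply_risk_alert("Japan", "resin"))
```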
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.


Monday, May 13, 2013

HP Software delivers integrated management for apps deployment, banking on simpler approach to navigating cloud choice

Often lost amid the talk of cloud deployment models and hybrid hosting efficiencies is the actual task of properly deploying enterprise applications. Deploying applications touches so many aspects of IT systems and business processes, and requires so much ongoing updating and management, that only enterprise IT staffs can really do the job.

So if cloud is a way of doing an end-run around IT -- yet IT is integral to proper applications deployment and care -- how exactly do these disparate propositions co-exist?

Not too well, it turns out, especially as the pace of apps development and deployment -- and the skyrocketing need to bring more mobile apps into production -- complicates the already tough task of overall applications management.

HP Software today announced four products that aim to tackle this thorny reality -- that traditional apps deployment was already broken, and that the new requirements make automation and comprehensive management an inescapable necessity. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP is also banking on the role it can play as a neutral party to better orchestrate the apps lifecycle because -- unlike most other large enterprise software vendors -- it doesn't have a heritage of legacy applications, operating systems, hypervisors, databases and/or middleware (and cash cows) to favor and protect. That means supporting heterogeneity in total is the imperative, not the exception, for HP.

The next generation of HP's data-center automation, orchestration, and cloud management software scales in terms of volume, supports all the installed enterprise kit, and allows for unprecedented simplicity, so that IT can get control before it's too late, said Manoj Raisinghani, Senior Director of Worldwide Product Marketing for Cloud Automation Software and SaaS at HP Software.

It's not enough to solve parts of the enterprise IT complexity problem, said Raisinghani. Server deployment and management have an impact on database and middleware management, which then needs to be orchestrated as a whole, which in turn needs to apply to the cloud services deployment options. So server, data, middleware, cloud, and orchestration all need to be part of the management solution for the scale, simplicity, and automation to be impactful and practical, he said.

And that's why HP has bundled these four major products under a common release, with a common version number: 10.

Key to cloud

"Server automation is key to the cloud path," said Raisinghani. He said the announcements were a "10" on a scale of 1 to 10 for HP Software.

Managing complex distributed systems and heterogeneous environments is so time-consuming and complex -- hindering business agility and innovation -- that IT has relied on systems integrators, and is now being tempted to hand over more process orchestration to the cloud providers. But the trends around mobility, big data and software-as-a-service (SaaS) services mean that IT needs to be more in control, not less. And IT needs the means to deploy the answer itself, and to rely on the software orchestration it controls to move the workloads and data to where the model works best, said Raisinghani.

Therefore, whether it's routine data-center maintenance or the delivery of extended enterprise business processes, automation and cloud management software reduces repetitive, manual, and time-consuming operations through automation, and makes the entire approach more secure and more easily tracked for intrusions, according to HP.

Even deploying the HP Server Automation (SA) 10 product itself is being streamlined via a virtual appliance, said Raisinghani. IT users can do it themselves, he said. Thanks to the virtual appliance model, the suite is "customer installable," said Raisinghani.

HP Database and Middleware Automation (DMA) 10 further automates manual database management tasks. HP Cloud Service Automation 3.2 provides service life cycle automation and IT assets management capabilities to scale to cloud services safely. HP Operations Orchestration (OO) 10 automates up to 15,000 simultaneous operations to track all of the above products, processes, and services.

HP SA 10, the life cycle management platform, enables IT to manage more than 100,000 physical and virtual servers from a single pane of glass, as well as improves operational economics by reducing the administrator-to-server ratio by up to 60 percent, said Raisinghani.

This HP Software approach has been long in the making -- from the acquisition of Mercury and Opsware, to the business service management emphasis to the early recognition that hybrid cloud was the long-term IT model.

And while the total management approach -- supporting all the major OSes, hypervisors, RDBs, apps, and clouds -- makes HP a services management Switzerland, there are some advantages for HP too. By focusing on automation and orchestration, it is building a default capability to the HP public cloud for those organizations seeing an integrated advantage over the more manual efforts required for other public clouds, such as Amazon Web Services, said Raisinghani.

"You can go agile, to where the applications can be best deployed," said Raisinghani. "But this is seamless to the user. It just gets deployed. IT can automate how the services are prepackaged and cloud-burst."

Up and running

And HP is determined to make the HP public cloud the best way to get those services up and running, although the customer will have choice on which cloud or clouds to target, said Raisinghani. "The user gets choice -- but the default is the HP Cloud," he said. "HP on HP is going to work better. We'll be making them an offer that's very attractive."

So think about it. Would you as a vendor rather be in a race to the bottom on hypervisor price? On public cloud price? On database price? On storage price? Or would you rather be building a market by being best at enabling the automation, speed, and security of the workloads and processes that IT needs to navigate the new IT landscape?

Management, orchestration and automation may well be the killer apps of the cloud era. Management, orchestration and automation from apps and data cradle to grave is the sticky value that locks in based on productivity, not technology. HP has clearly got its eyes on this prize, and the latest releases this week are a major salvo in cloud enablement as a function of IT -- not outside of IT. Because, like it or not, enterprise IT is the ultimate cloud broker to win over.

In other cloud applications automation news, ServiceNow on Monday announced its ServiceNow App Creator, designed to enable "citizen developers" to rapidly create enterprise and mobile applications on the ServiceNow Service Automation Platform.

Originally targeting the ITSM function, ServiceNow is broadening the use of its tools and platform for apps outside the IT management domain, but with IT as the driver as to what platforms the developers will use. The App Creator technology itself is now included in the platform.

"This arms IT to provide developers with a rich RAD platform and puts those apps on a single platform in a single place," said Arne Josefsberg, CTO at ServiceNow.

Leveraging a forms-based workflow for making and deploying apps and process flows, App Creator ensures "best practice" development of custom applications without requiring coding or technology expertise, said Josefsberg.

Applications that the enterprise builds on the platform are then separately licensed on a per user basis. The ServiceNow App Creator is available today to all current ServiceNow customers.


Thursday, May 9, 2013

Thomas Duryea Consulting provides insights into how leading adopters successfully solve cloud risks

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next BriefingsDirect IT leadership discussion focuses on how leading Australian IT services provider Thomas Duryea Consulting made a successful journey to cloud computing as a business.

We'll learn why a cloud-of-clouds approach is providing new types of IT services to Thomas Duryea’s many Asia-Pacific region customers. The first part of our series addressed the rationale and business opportunity for TD's cloud-services portfolio, which is built on VMware software.

The latest discussion continues a three-part series on how Thomas Duryea, or TD, designed, built and commercialized an adaptive cloud infrastructure. This second installment focuses on how a variety of risks associated with cloud adoption and cloud use have been identified and managed by actual users of cloud services.

Learn more about how adopters of cloud computing have effectively reduced the risks of implementing cloud models from Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]


Here are some excerpts:
Gardner: Adam, we've been talking about cloud computing for years now, and I think it's pretty well established that we can do cloud computing quite well technically. The question that many organizations keep coming back with is whether they should do cloud computing. If there are certain risks, how do they know what risks are important? How do they get through that? What are you learning so far at TD about risk and how your customers face it?

Beavis: People are becoming more comfortable with the cloud concept as we see cloud becoming more mainstream, but we're seeing two sides to the risks. One is the technical risks, how the applications actually run in the cloud.

Moving off-site

What we're also seeing -- more at a business level -- are concerns like privacy, security, and maintaining service levels. We're seeing that pop up more and more, where the technical validation of the solution gets signed off from the technical team, but then the concerns begin to move up to board level.

We're seeing intense interest in the availability of the data. How do they control that, now that it's been handed off to a service provider? We're starting to see some of those risks coming more and more from the business side.

Gardner: I've categorized some of these risks over the past few years, and I've put them into four basic buckets. One is the legal side, where there are licenses and service-level agreements (SLAs), issues of ownership, and permissions.

The second would be longevity. That is to say, will the service provider be there for the long term? Will they be a fly-by-the-seat-of-the-pants organization? Are they going to get bought and maybe merged into something else? Those concerns.

The third bucket I put them in is complexity, and that has to do with the actual software, the technology, and the infrastructure. Is it mature? If it's open source, is there a risk for forking? Is there a risk about who owns that software and is that stable?

And then last, the long-term concern, which always comes back, is portability. You mentioned that about the data and the applications. We're thinking now, as we move toward more software-defined data centers, that portability would become less of an issue, but it's still top of mind for many of the people I speak with.

So let's go through these, Adam. Let's start with that legal concern. Do you have any organizations that you can reflect on and say, here is how they did it, here is how they have figured out how to manage these license and control of the IP risks?

Beavis: The legal one is interesting. As a case study, there's a not-for-profit organization for which we were doing some initial assessment work, where we validated the technical risk and evaluated how we were going to access the data once the information was in a cloud. We went through that process, and that went fine, but obviously it then went up to the legal team.

One of the big things that the legal team was concerned about was what the service-level agreement was going to be, and how they could capture that in a contract. Obviously, we have standard SLAs, and being a smaller provider, we're flexible with some of those service levels to meet their needs.

But the one that they really started to get concerned about was data availability ... if something were to go wrong with the organization. It probably jumps into longevity a little bit there. What if something went wrong and the organization vanished overnight? What would happen with their data?

Escrow clause

That's where we see legal teams getting involved and starting to put in things like the escrow clause, similar to what we had with software as a service (SaaS) for a long time. We're starting to see organizations' legal firms focus on doing these, and not just for SaaS -- but infrastructure as a service (IaaS) as well. It provides a way for user organizations to access their data if provider organizations like TD were to go down.

So that's one that we're seeing at the legal level. Around the terms and conditions, once again being a small service provider, we have a little more flexibility in what we can provide to the organizations on those.

Once our legal team sits down and agrees on what they're looking for and what we can do for them, we're able to make changes. With larger organizations, where SLAs are often set in stone, there's no flexibility about making modifications to those contracts to suit the customer.

Gardner: Tell us about your organization, how big you are, and who your customers are, and then we'll get back into some of these risks issues and how they have been managed.

Beavis: Traditionally, we came from a system-integrator background, based on the east coast of Australia -- Melbourne and Sydney. The organization has been around for 12 years and had a huge amount of success in that infrastructure services arena, initially with VMware.
Being a small service provider, we have a little more flexibility in what we can provide to the organizations.

Other companies heavily expanded into the enterprise information systems area. We still have a large focus on infrastructure, and more recently, cloud. We've had a lot of success with the cloud, mainly because we can combine that with a managed services.

We go to market with cloud. It's not just a platform where people come and dump data or an application. A lot of the customers that come into our cloud have some sort of managed service on top of that, and that's where we're starting to have a lot of success.

As we spoke about in part one, our customers drove us to start building a cloud platform. They can see the benefits of cloud, but they also wanted to ensure that for the cloud they were moving to, they had an organization that could support them beyond the infrastructure.

That might be looking after their operating systems, looking after some of their applications such as Citrix, etc. that we specialize in, looking after their Microsoft Exchange servers, once they move it to the cloud and then attaching those applications. That's where we are. That's the cloud at the moment.

Gardner: Is there something about the platform and industry-standard decisions that you've made that helps your customers feel more comfortable? Do they see less risk because, even though your organization is one organization, the infrastructure, is broader, and there's some stability about that that comes to the table?

Beavis: Definitely. Partnering with VMware was one of our core decisions, because our platform everywhere is end-to-end standard VMware. It really gives us an advantage when addressing that risk, if organizations ask what happens if our company doesn't continue to run, or if they're not happy with the service.

The great thing is that within our environment -- and it's one part of VMware's vision -- you can pick up those applications and move them to another VMware cloud provider. Thank heaven, we haven't had that happen, and we intend for it not to happen. But organizations understand that, if something were to go wrong, they can move to another service provider without having to re-architect those applications or make any major changes. This is one area where we're getting around that longevity-risk discussion.

Gardner: Is there a confluence between portability and what organizations are doing with disaster recovery (DR)? Maybe they're mirroring data and/or infrastructure and applications for purposes of business continuity and then are able to say, "This reduces our risk, because not only do we have better DR and business continuity benefits, but we’re also setting the stage for us to be able to move this where we want, when we want."

They can create a hybrid model, where they can pick and choose on-premises, versus a variety of other cloud providers, and even decide on those geographic or compliance issues as to where they actually physically place the data. That's a big question, but the issue is business continuity, as part of this movement toward a lower risk, how does that pan out?

Beavis: That's actually one of the biggest movements that we're seeing at the moment. Organizations, when they refresh their infrastructure, don't see the value in refreshing DR on-premises. They let the first step to the cloud be, "Let's move DR out to the cloud, and replicate from on-premises out into our cloud."

Then, as you said, we have the advantage to start to do things like IaaS testing, understanding how those applications are going to work in the cloud, tweak them, get the performance right, and do that with little risk to the business. Obviously, the production machine will continue to run on-premises, while we're testing snapshots.

It's a good way to get a live snapshot of that environment and see how it's going to perform in the cloud, how your users are going to access it, the bandwidth, and all that type of stuff that you need to work out before starting to ramp up. DR is still the number one use case we're seeing for people moving to the cloud.

Gardner: As we go through each of these risks, and I hear you relating how your customers and TD, your own organization, have reacted to them, it seems to me that, as we move toward this software-defined data center, where we can move from the physical hardware and the physical facilities, and move things around in functional blocks, this really solves a lot of these risk issues.

You can manage your legal, your SLAs, and your licenses better when you know that you can pick and choose the location. That longevity issue is solved, when you know you can move the entire block, even if it's under escrow, or whatever. Complexity and fear about forking or immaturity of the infrastructure itself can be mitigated, when you know that you can pick and choose, and that it's highly portable.

It's a round-about way of getting to the point of this whole notion of software-defined data center. Is that really at heart a risk reduction, a future direction, that will mitigate a lot of these issues that are holding people back from adopting cloud more aggressively?

Beavis: From a service provider's perspective it certainly does. The single-pane management window that you can do now, where you can control everything from your network -- the compute and the storage -- certainly reduces risk, rather than needing several tools to do that.

Backup integration

And the other area where the vendors are starting to work together is the integration of things like backup and, as we spoke about earlier, DR. Tools now sit natively within that VMware stack around the software-defined data center, written to the vSphere API, rather than products being retrofitted to achieve file-level backups within a virtual data center, within vCloud. Pretty much every day you wake up, there's a new tool that's supported within that.
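As a concrete, if simplified, illustration of what "written to the vSphere API" can look like, here is a minimal pyVmomi sketch of the quiesced-snapshot step that many such backup tools perform before reading a VM's disks. The host, credentials, and VM name are hypothetical, and this is not any particular vendor's integration.

```python
# Minimal pyVmomi sketch of a quiesced snapshot, the typical first step of a
# vSphere-API-based backup. Host, credentials, and VM name are hypothetical.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab shortcut; verify certificates in production
si = SmartConnect(host="vcenter.example.com", user="backup-svc",
                  pwd="secret", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == "crm-app-01")   # hypothetical VM

    # A quiesced snapshot gives the backup tool a consistent point-in-time image
    # to read from; the snapshot is removed once the copy completes.
    task = vm.CreateSnapshot_Task(name="pre-backup",
                                  description="Nightly file-level backup",
                                  memory=False, quiesce=True)
    # A real tool would wait on the task, read the snapshot disks, then call
    # RemoveSnapshot_Task on the snapshot it created.
finally:
    Disconnect(si)
```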

From a service provider's perspective, it's really reducing the risk and time to market for new offerings, but from a customer's perspective, it's really getting the experience that they're used to. On-premises or in a TD cloud, from their perspective, that makes it a lot easier for them to start to adopt and consume the cloud.

Gardner: I suppose this is a good segue into this notion of how to make your data, applications, and the configuration metadata portable across different organizations, based on some kind of a standard or definition. How does that work? What are the ways in which organizations are asking for and getting risk reduction around this concept of portability?

Beavis: Once again, it's about having a common way that the data can move across. The basics come into that hybrid-cloud model initially, like how people are getting things out. One of the things that we see more and more is that it's not as simple as people moving legacy applications and things up to the cloud.

To reduce that risk, we're doing a cloud-readiness assessment, where we come in and assess what the organization has, what their environment looks like, and what's happening within the environment, running things like the vCenter Operations tools from VMware to right-size those environments to be ready for the cloud.

Gardner: Now the flip-side of that would be that some of your customers who have been dabbling in cloud infrastructure, perhaps open-source frameworks of some kind, or maybe they have been integrating their own components of open-source available software, licensed software. What have you found when it comes to their sense of risk, and how does that compare to what we just described in terms of having stability and longevity?

More comfortable

Beavis: Especially in Australia, we probably have 85 percent to 90 percent of organizations with some sort of VMware in their data center. They no doubt seem to be more comfortable gravitating to some providers that are running familiar platforms, with teams familiar with VMware. They're more comfortable that we, as a service provider, are running a platform that they're used to.

We'll probably talk about the hybrid cloud a bit later on, but that ability for them to still maintain control in a familiar environment, while running some applications across in the TD cloud, is something that is becoming quite welcome within organizations. So there's no doubt that choosing a common platform that they're used to working on is giving them confidence to start to move to the cloud.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.
