Thursday, May 9, 2013

Thomas Duryea Consulting provides insights into how leading adopters successfully manage cloud risks

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next BriefingsDirect IT leadership discussion focuses on how leading Australian IT services provider Thomas Duryea Consulting made a successful journey to cloud computing as a business.

We'll learn why a cloud-of-clouds approach is providing new types of IT services to Thomas Duryea’s many Asia-Pacific region customers. The first part of our series addressed the rationale and business opportunity for TD's cloud-services portfolio, which is built on VMware software.

The latest discussion continues a three-part series on how Thomas Duryea, or TD, designed, built and commercialized an adaptive cloud infrastructure. This second installment focuses on how a variety of risks associated with cloud adoption and cloud use have been identified and managed by actual users of cloud services.

Learn more about how adopters of cloud computing have effectively reduced the risks of implementing cloud models from Adam Beavis, General Manager of Cloud Services at Thomas Duryea in Melbourne, Australia. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]


Here are some excerpts:
Gardner: Adam, we've been talking about cloud computing for years now, and I think it's pretty well established that we can do cloud computing quite well technically. The question that many organizations keep coming back with is whether they should do cloud computing. If there are certain risks, how do they know which risks are important? How do they get through that? What are you learning so far at TD about risk and how your customers face it?

Beavis: People are becoming more comfortable with the cloud concept as cloud becomes more mainstream, but we're seeing two sides to the risks. One is the technical risk: how the applications actually run in the cloud.

Moving off-site

What we're also seeing -- more at a business level -- are concerns like privacy, security, and maintaining service levels. We're seeing that pop up more and more, where the technical validation of the solution gets signed off by the technical team, but then the concerns begin to move up to board level.

We're seeing intense interest in the availability of the data. How do they control that, now that it's been handed off to a service provider? We're starting to see some of those risks coming more and more from the business side.

Gardner: I've categorized some of these risks over the past few years, and I've put them into four basic buckets. One is the legal side, where there are licenses and service-level agreements (SLAs), issues of ownership, and permissions.

The second would be longevity. That is to say, will the service provider be there for the long term? Will they be a fly-by-the-seat-of-the-pants organization? Are they going to get bought and maybe merged into something else? Those concerns.

The third bucket I put them in is complexity, and that has to do with the actual software, the technology, and the infrastructure. Is it mature? If it's open source, is there a risk of forking? Is there a risk around who owns that software, and is that ownership stable?

And then last, the long-term concern, which always comes back, is portability. You mentioned that about the data and the applications. We're thinking now, as we move toward more software-defined data centers, that portability would become less of an issue, but it's still top of mind for many of the people I speak with.

So let's go through these, Adam. Let's start with that legal concern. Do you have any organizations that you can reflect on and say, here is how they did it, here is how they have figured out how to manage these license and control of the IP risks?

Beavis: The legal one is interesting. As a case study, there's a not-for-profit organization for which we were doing some initial assessment work, where we validated the technical risk and evaluated how we were going to access the data once the information was in the cloud. We went through that process, and that went fine, but obviously it then went up to the legal team.

One of the big things that the legal team was concerned about was what the service-level agreement was going to be, and how they could capture that in a contract. Obviously, we have standard SLAs, and being a smaller provider, we're flexible with some of those service levels to meet their needs.

But the one that they really started to get concerned about was data availability ... if something were to go wrong with the organization. It probably jumps into longevity a little bit there. What if something went wrong and the organization vanished overnight? What would happen with their data?

Escrow clause

That's where we see legal teams getting involved and starting to put in things like the escrow clause, similar to what we've had with software as a service (SaaS) for a long time. We're starting to see organizations' legal firms focus on these, not just for SaaS but for infrastructure as a service (IaaS) as well. It provides a way for user organizations to access their data if a provider organization like TD were to go down.

So that's one that we're seeing at the legal level. Around the terms and conditions, once again, being a small service provider, we have a little more flexibility in what we can provide to organizations on those.

Once our legal team sits down and agrees on what they're looking for and what we can do for them, we're able to make changes. With larger providers, where SLAs are often set in stone, there's no flexibility to modify those contracts to suit the customer.

Gardner: Tell us about your organization, how big you are, and who your customers are, and then we'll get back into some of these risks issues and how they have been managed.

Beavis: Traditionally, we came from a system-integrator background, based on the east coast of Australia -- Melbourne and Sydney. The organization has been around for 12 years and had a huge amount of success in that infrastructure services arena, initially with VMware.

We then expanded heavily into the enterprise information systems area. We still have a large focus on infrastructure and, more recently, cloud. We've had a lot of success with the cloud, mainly because we can combine it with managed services.

We go to market with cloud. It's not just a platform where people come and dump data or an application. A lot of the customers that come into our cloud have some sort of managed service on top of that, and that's where we're starting to have a lot of success.

As we spoke about in part one, our customers drove us to start building a cloud platform. They could see the benefits of cloud, but they also wanted to ensure that, with the cloud they were moving to, they had an organization that could support them beyond the infrastructure.

That might be looking after their operating systems, looking after some of the applications we specialize in, such as Citrix, or looking after their Microsoft Exchange servers once they move to the cloud, and then attaching those applications. That's where we are with the cloud at the moment.

Gardner: Is there something about the platform and the industry-standard decisions that you've made that helps your customers feel more comfortable? Do they see less risk because, even though your organization is one organization, the infrastructure is broader, and there's a stability in that that comes to the table?

Beavis: Definitely. Partnering with VMware was one of our core decisions, because our platform is end-to-end standard VMware. It really gives us an advantage in addressing that risk when organizations ask what happens if our company goes away or they're not happy with the service.

The great thing is that within our environment -- and it's one part of VMware's vision -- you can then pick up those applications and move them to another VMware cloud provider. Thank heaven, we haven't had that happen, and we intend it not to happen. But organizations understand that, if something were to go wrong, they could move to another service provider without having to re-architect those applications or make any major changes. This is one area where we're getting around that longevity-risk discussion.
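
The mechanics behind that kind of provider-to-provider portability typically come down to the Open Virtualization Format (OVF). As a rough illustration -- not TD's actual process -- here is a minimal sketch that exports a VM to OVF with VMware's ovftool CLI; the vCenter address, credentials, inventory path, and output path are all hypothetical placeholders.

```python
import subprocess

# Hypothetical source VM in a vCenter inventory, and a local export target.
SOURCE = "vi://admin@vcenter.example.com/DC1/vm/crm-app-01"
TARGET = "/exports/crm-app-01.ovf"

# ovftool packages a vSphere VM as portable OVF, which another VMware-based
# cloud provider can import without re-architecting the application.
subprocess.run(["ovftool", SOURCE, TARGET], check=True)
```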

Gardner: Is there a confluence between portability and what organizations are doing with disaster recovery (DR)? Maybe they're mirroring data and/or infrastructure and applications for purposes of business continuity and then are able to say, "This reduces our risk, because not only do we have better DR and business continuity benefits, but we’re also setting the stage for us to be able to move this where we want, when we want."

They can create a hybrid model, where they can pick and choose on-premises versus a variety of other cloud providers, and even decide on geographic or compliance issues as to where they actually physically place the data. That's a big question, but the issue is business continuity as part of this movement toward lower risk. How does that pan out?

Beavis: That's actually one of the biggest movements that we're seeing at the moment. Organizations, when they refresh their infrastructure, don't see the value in refreshing DR on-premises. The first step into the cloud becomes, "Let's move the DR out to the cloud, and replicate from on-premises out into our cloud."

Then, as you said, we have the ability to start doing things like IaaS testing -- understanding how those applications are going to work in the cloud, tweaking them, and getting the performance right -- and to do that with little risk to the business. Obviously, the production machine will continue to run on-premises while we're testing snapshots.

It's a good way to get a live snapshot of that environment and see how it's going to perform in the cloud -- how your users are going to access it, bandwidth, and all the things you need to check before going live. DR is still the number one use case that we're seeing people move to the cloud.

Gardner: As we go through each of these risks, and I hear you relating how your customers and TD, your own organization, have reacted to them, it seems to me that, as we move toward this software-defined data center, where we can move from the physical hardware and the physical facilities, and move things around in functional blocks, this really solves a lot of these risk issues.

You can manage your legal, your SLAs, and your licenses better when you know that you can pick and choose the location. That longevity issue is solved, when you know you can move the entire block, even if it's under escrow, or whatever. Complexity and fear about forking or immaturity of the infrastructure itself can be mitigated, when you know that you can pick and choose, and that it's highly portable.

It's a round-about way of getting to the point of this whole notion of software-defined data center. Is that really at heart a risk reduction, a future direction, that will mitigate a lot of these issues that are holding people back from adopting cloud more aggressively?

Beavis: From a service provider's perspective, it certainly does. The single-pane management view that you have now, where you can control everything -- the network, the compute, and the storage -- certainly reduces risk, rather than needing several tools to do that.

Backup integration

And the other area where the vendors are starting to work together is the integration of things like backup and, as we spoke about earlier, DR. Tools now sit natively within that VMware software-defined data center stack, written to the vSphere API, rather than us having to retrofit products to achieve things like file-level backups within a virtual data center, within vCloud. Pretty much every day you wake up, there's a new tool that's supported within that.
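
To make "written to the vSphere API" concrete, here is a minimal, hypothetical Python sketch using the open-source pyVmomi bindings to take the kind of quiesced snapshot a backup tool reads from. The host, credentials, and VM name are placeholders, and a real backup product does far more (change-block tracking, indexing, retention).

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

VCENTER_HOST = "vcenter.example.com"  # hypothetical
USERNAME = "backup-svc"               # hypothetical
PASSWORD = "secret"                   # hypothetical
VM_NAME = "exchange-01"               # hypothetical

# Lab-only shortcut: skip certificate checks; verify certs in production.
context = ssl._create_unverified_context()
si = SmartConnect(host=VCENTER_HOST, user=USERNAME, pwd=PASSWORD,
                  sslContext=context)
try:
    content = si.RetrieveContent()
    # Walk the inventory for the target virtual machine.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    vm = next(v for v in view.view if v.name == VM_NAME)
    # Take a quiesced snapshot -- the consistent point-in-time image a
    # backup tool reads from, then releases when the copy is done.
    vm.CreateSnapshot_Task(name="backup-temp",
                           description="pre-backup snapshot",
                           memory=False, quiesce=True)
finally:
    Disconnect(si)
```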

From a service provider's perspective, it's really reducing the risk and time to market for new offerings. From a customer's perspective, it's getting the experience they're used to. On-premises or in the TD cloud, it looks the same to them, which makes it a lot easier for them to start to adopt and consume the cloud.

Gardner: I suppose this is a good segue into this notion of how to make your data, applications, and the configuration metadata portable across different organizations, based on some kind of a standard or definition. How does that work? What are the ways in which organizations are asking for and getting risk reduction around this concept of portability?

Beavis: Once again, it's about having a common way for the data to move across. The basics come back to that hybrid-cloud model initially -- how people get things in and out. One of the things that we see more and more is that it's not as simple as moving legacy applications up to the cloud.

To reduce that risk, we're doing a cloud-readiness assessment, where we come in and assess what the organization has, what their environment looks like, and what's happening within the environment, running things like the vCenter Operations tools from VMware to right-size those environments to be ready for the cloud.
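
The right-sizing step can be pictured with a toy calculation -- an illustrative sketch, not vCenter Operations' actual model -- in which observed peak demand plus headroom, rather than the original on-premises allocation, determines the cloud VM size.

```python
def rightsize(allocated_vcpus, peak_util_pct, headroom_pct=20):
    """Recommend a vCPU count from observed peak utilization plus headroom."""
    needed = allocated_vcpus * peak_util_pct / 100
    return max(1, round(needed * (1 + headroom_pct / 100)))

# A VM allocated 16 vCPUs that peaks at 30 percent utilization only needs
# about 6 vCPUs in the cloud -- a typical readiness-assessment finding.
print(rightsize(allocated_vcpus=16, peak_util_pct=30))  # -> 6
```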

Gardner: Now, the flip side of that would be some of your customers who have been dabbling in cloud infrastructure -- perhaps open-source frameworks of some kind, or integrating their own mix of open-source and licensed software components. What have you found when it comes to their sense of risk, and how does that compare to what we just described in terms of stability and longevity?

More comfortable

Beavis: Especially in Australia, probably 85 percent to 90 percent of organizations have some sort of VMware in their data center. They gravitate to providers that are running familiar platforms, with teams familiar with VMware. They're more comfortable that we, as a service provider, are running a platform that they're used to.

We'll probably talk about the hybrid cloud a bit later on, but that ability for them to maintain control in a familiar environment, while running some applications across in the TD cloud, is something that is becoming quite welcome within organizations. So there's no doubt that choosing a common platform that they're used to working on is giving them the confidence to start to move to the cloud.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.



Wednesday, May 8, 2013

Ariba and Discover to transform B2B payments with cloud-based AribaPay

Ariba, an SAP company, and Discover Financial Services today unveiled AribaPay. The new service, to be offered by Ariba, is expected to transform B2B payments by eliminating paper transactions, providing better visibility into cash flow, and producing rich remittance information that improves reconciliation processes for buyers and sellers.

The cloud-based service, announced at the Ariba LIVE conference, will combine the applications and insights embedded in the Ariba Network and deliver them through a trusted and secure global-payments infrastructure to streamline and enhance settlement and reconciliation of business commerce. The service is expected to be generally available in 2014. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

“It’s the classic joke: The check is in the mail. But few companies find it funny,” said Kevin Costello, president, Ariba. “Buyers are drowning in paper, and sellers have no idea when -- or how much -- they will be paid. AribaPay will effectively eliminate these issues.”

AribaPay will provide a way for buyers to create purchase orders, receive invoices, and send payments, while sellers receive more-detailed remittance information in a fast, secure, electronic environment.

Improving commerce

"Ariba and Discover are seizing the opportunity to digitize a share of the estimated $30 trillion in B2B payments that are still mostly made with paper checks,” said Roger Hochschild, president and chief operating officer for Discover. “Discover is broadening its network capabilities and infrastructure and choosing diverse business partners like Ariba to move beyond facilitating payments to enabling and improving business commerce.”

For buyers and sellers connected to the Ariba Network, AribaPay will deliver data that shows what payments represent at the invoice and line-item level, fueling faster, more accurate reconciliation on both sides.
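
To see why line-item detail matters, consider a minimal sketch of automated reconciliation. The structure and field names below are illustrative assumptions, not AribaPay's actual schema; the point is that when a payment is described down to the invoice line, mismatches can be isolated programmatically rather than by phone calls and paper.

```python
# Illustrative remittance payload; field names are hypothetical.
remittance = {
    "payment_id": "PAY-1001",
    "amount": 1150.00,
    "invoices": [
        {"invoice": "INV-77", "lines": [
            {"line": 1, "paid": 500.00},
            {"line": 2, "paid": 650.00},
        ]},
    ],
}

def reconcile(remit, open_invoices):
    """Match each paid line to an open invoice line and flag mismatches."""
    issues = []
    for inv in remit["invoices"]:
        for line in inv["lines"]:
            billed = open_invoices.get((inv["invoice"], line["line"]))
            if billed is None:
                issues.append(("unknown line", inv["invoice"], line["line"]))
            elif billed != line["paid"]:
                issues.append(("amount mismatch", inv["invoice"], line["line"]))
    return issues

# The seller's open receivables, keyed by (invoice, line).
open_invoices = {("INV-77", 1): 500.00, ("INV-77", 2): 700.00}
print(reconcile(remittance, open_invoices))  # flags INV-77 line 2 as short paid
```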

Other benefits include:
  • Lower processing costs
  • Richer remittance advice
  • Reduced fraud risk
  • Elimination of paper checks and invoices
  • Fewer payments lost to escheatment
  • Ability to track and trace transactions
  • Faster reconciliation and dispute resolution
To learn more about AribaPay and the benefits it is expected to deliver, visit: www.aribapay.com.



Thursday, May 2, 2013

Ariba, Dell Boomi to unveil collaboration enhancements for networked economy at Ariba LIVE conference

Collaboration will take center stage next week when Ariba, an SAP company, holds its Ariba LIVE conference in Washington, DC. In an effort to fuel greater collaboration between companies through new capabilities and network-derived intelligence, Ariba will announce an enhanced set of tools, as well as a joint offering with Dell Boomi.

Leading the list of enhanced Ariba tools are:
  • Ariba Spot Buy. With the integration of Ariba Procure-to-Pay and Ariba Discovery, buyers can quickly discover and qualify new sources of supply for one-off, time-sensitive, or hard-to-find purchases.
  • Ariba Recommendations. Through new services that push network-derived intelligence and community-generated content directly into the context of specific business processes and use cases, companies can make more informed decisions at the point of transaction or activity. “Suppliers You May Like,” for example, helps guide buyers to qualified suppliers based on a host of inputs, including buyer requirements, supplier capabilities and performance ratings, and how often other buyers on the network have awarded business to them.
“Just as consumers tap into personal networks like Facebook, Twitter and Amazon.com to connect with friends and family, share and shop, companies are leveraging digital networks to more efficiently engage with their trading partners and collaborate across the entire commerce process,” said Sanish Mondkar, Ariba Chief Product Officer. “This new, more social and connected way of operating is redefining the way business is done. But it demands a new set of tools and processes that are only possible at scale in a truly networked environment. Ariba is delivering these tools today.”

Spot buys -- or unplanned purchases of unique items -- account for more than 40 percent of a company’s total spend. Spot buys are challenging because they require quick turnaround, and buyers generally lack efficient or effective methods to source them. [Disclosure: Ariba and Dell are sponsors of BriefingsDirect podcasts.]

Selective leveraging

According to independent research firm The Hackett Group, “by selectively leveraging software tools in areas like supplier discovery and online bidding, organizations can reduce the time it takes to find the right suppliers from weeks to days or even hours and drive cost reductions of between two percent and five percent on average.”

Nearly one million selling organizations across more than 20,000 product categories are connected to the Ariba Network. And they have access to the more than 13 million leads worth over $5 billion that are posted each year by more than half of the Global 2000 who are connected to the network as well.

New features added to Ariba Discovery allow selling organizations to get the right messages to the right audience and convert these leads into sales.
  • Profile Pitch. Sellers can create highly targeted profiles and messaging based on industry, commodity, territory and other factors to promote themselves to active buyers. 
  • Badges and Social Sharing. Selling organizations can further raise their visibility by adding Ariba badges to their company websites and/or email signatures, defining vanity URLs for their company profiles and sharing their public URLs and postings on social sites such as Facebook, Twitter, and LinkedIn.
Pre-packaged integration

Ariba and Dell Boomi will announce that they are teaming to deliver pre-packaged integration-as-a-service offerings to help selling organizations drive new levels of efficiency and effectiveness across their operations.

Designed to simplify and speed integration to the Ariba Network, the Ariba Integration Connector, powered by Dell Boomi Integration Packs, enables companies to collaborate more efficiently and drive game-changing improvements in productivity and performance. The first connector integrates with Intuit QuickBooks. Additional connectors, to enable sellers who own Microsoft Dynamics AX, NetSuite, and Sage Peachtree solutions to quickly and easily integrate with the Ariba Network, are planned for release later this year.

“From the beginning, the Ariba Network has been built to be an open platform to connect all companies using any system to foster more efficient business-to-business collaboration,” said Tim Minahan, senior vice president, network strategy, Ariba. “With these new connectors, we are making it even easier for sales organizations of all sizes to fully automate their customer transactions and collaborations over the Ariba Network -- directly from their preferred CRM, ERP and accounting systems.”

The Ariba Integration Connector removes the barriers to system-to-network integration by eliminating complexity.  An out-of-the-box solution delivered as a service, the connector provides a fast, easy and affordable way for companies to connect to the Ariba Network -- regardless of the back-end systems they use. The connector currently supports integration with Intuit QuickBooks Desktop 2009-2013, Premier and Enterprise for US, UK, and CA Enterprise and Enterprise Plus.

The connector is available and in use today. To learn more about Ariba’s Connection solutions and the benefits they can deliver to your organization, visit http://www.ariba.com/services/connection-solutions.



Dell's Foglight for Virtualization update extends visibility and management control across more infrastructure

Dell Software this week delivered Foglight for Virtualization, Enterprise Edition to extend the depth and breadth of managing and optimizing server virtualization as well as virtual desktop infrastructure (VDI) and their joint impact on such IT resources as storage.

Building on the formerly named Quest vFoglight Pro virtualization management solution, Dell re-branded vFoglight as Foglight for Virtualization to make it the core platform of the Foglight family. Foglight is not sitting still, either. Improvements this year move beyond monitoring support for VMware View VDI to later support for VMware vCloud Director, OpenStack, and Citrix Xen VDI. [Disclosure: Dell Software and VMware are sponsors of BriefingsDirect podcasts.]

The higher value from such ecosystem and heterogeneous management support is the ability for virtualization server and system administrators to comprehensively optimize various flavors of data-center server virtualization, as well as the major VDI types, with added capabilities to track and analyze performance from the application level all the way down to the server and storage hardware level. This week's announcements also shone a spotlight on the recently updated Foglight for Storage Management 2.5.

“With Foglight for Virtualization, Enterprise Edition, Dell is showing its commitment to offering a solution that encompasses all aspects of virtual infrastructure performance monitoring and management, built on a platform that can scale as the infrastructure grows,” said Steve Rosenberg, general manager for Performance Monitoring, Dell. “This new release expands Foglight’s ability not only to monitor the additional infrastructure area of VDI, but also to correlate metrics from VDI with performance for applications, the virtual layer, the network, and underlying servers and storage.”

Dell Software also last week released a series of BYOD-targeted products and services, which relate to the improved VDI management capabilities. That's because many enterprises and mid-market firms tasked with moving quickly to BYOD are using VDI to do it.

With the increasing adoption of VMware View in virtualized data centers (including for MSPs), VDI support is fast becoming a mainstay for today’s IT departments and managed service providers. VDI and server virtual machines (VMs) often utilize the same hardware components. Yet both of these virtualized infrastructures serve different users and have separate requirements and resource demands, explained John Maxwell, vice president of product management for performance monitoring for virtualization, networking, storage, and hardware at Dell Software.

Single-source solution 

As a result, VDI and server VMs require dedicated performance monitoring systems. However, these systems must also be connected, because so many underlying resources are shared. Agent-based Foglight for Virtualization, Enterprise Edition offers virtualization administrators a single-source solution that not only identifies and fixes performance issues within VMware View, but continues to run all features available in vOPS Server Enterprise with no effect on overall vCenter performance.

Foglight for Storage Management 2.5 has been released as an optional "cartridge" to Foglight for Virtualization. Foglight for Storage Management now offers physical storage performance reporting in addition to virtual reporting, providing customers with complete "VM to physical LUN" visibility. 

Additional enhancements in this release include LUN latency reporting, NPIV support, and the ability for customers to purchase the product either as a stand-alone cartridge, or as an optional cartridge to Foglight for Virtualization.

Additionally, Foglight is a unified performance monitoring platform that allows individual product solutions, delivered as sets of pluggable “cartridges,” to run stand-alone or to interoperate. Each individual product delivers best-of-breed functionality to the admin for that area, while simultaneously integrating with other cartridges to deliver true end-to-end monitoring from end-user experience to the underlying storage and server hardware layers, and everything in between, said Maxwell.
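
The cartridge model can be pictured with a short sketch -- a generic plugin pattern, not Dell's actual Foglight SDK -- in which each cartridge monitors one domain and the platform aggregates their metrics into a single end-to-end view. All names and metric values below are illustrative.

```python
from abc import ABC, abstractmethod

class Cartridge(ABC):
    """One pluggable monitoring domain (VDI, storage, network, ...)."""
    name: str

    @abstractmethod
    def collect(self) -> dict:
        """Return this domain's current metrics."""

class StorageCartridge(Cartridge):
    name = "storage"
    def collect(self):
        return {"lun_latency_ms": 4.2}   # illustrative value

class VdiCartridge(Cartridge):
    name = "vdi"
    def collect(self):
        return {"active_sessions": 310}  # illustrative value

class MonitoringPlatform:
    """Runs any mix of cartridges, stand-alone or side by side."""
    def __init__(self):
        self.cartridges = []

    def register(self, cartridge: Cartridge):
        self.cartridges.append(cartridge)

    def snapshot(self) -> dict:
        # One correlated view across every registered domain.
        return {c.name: c.collect() for c in self.cartridges}

platform = MonitoringPlatform()
platform.register(StorageCartridge())
platform.register(VdiCartridge())
print(platform.snapshot())  # {'storage': {...}, 'vdi': {...}}
```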

Foglight for Virtualization Enterprise Edition 6.8 is available now for a 45-day trial from www.quest.com. Pricing starts at $799 per socket. Foglight for Storage Management 2.5 is also available now for a 45-day trial from www.quest.com.  Pricing starts at $499 per socket.

Because Foglight is built on a common architecture to support the cartridges, it seems likely that it will move from an on-premises only offering to a SaaS-based version too, especially to support cloud- and MSP-based VDI offerings, and also to manage hybrid VDI implementations.


Monday, April 22, 2013

Service Virtualization brings speed benefit and lower costs to TTNET applications testing

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how TTNET, the largest internet service provider in Turkey, with six million subscribers, significantly improved applications deployment while cutting costs and time to delivery.

We'll hear how TTNET deployed advanced Service Virtualization (SV) solutions to automate end-to-end test cases, gaining a path to integrated Unified Functional Testing (UFT).

To learn how, we're joined by Hasan Yükselten, Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom, based in Istanbul. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What was the situation there before you became more automated, before you started to use more software tools?

Yükselten: We're the leading ISP in Turkey. We deploy more than 200 applications per year, and we have to provide better and faster services to our customers every week, every month. Before HP SV, we had to use other parties' test infrastructures in our test cases.

We mostly had problems with issues such as accessibility, authorization, downtime, and private data when reaching third-party infrastructures. So, we needed virtualization in our test systems, and we needed automation to get fast deployment and make release times shorter. And of course, we needed to reduce our costs. So, we decided to solve these problems by implementing SV.

Gardner: How did you move from where you were to where you wanted to be?

Yükselten: Before SV, we couldn't do automation, since the other parties' systems are in separate locations and were difficult to reach. We could automate functional test cases, but for end-to-end test cases, automation was impossible.

First, we implemented SV to virtualize the other systems, and we put SV between our infrastructure and the third-party infrastructure. SV learned the requests and responses, and we could then use SV instead of the other party's infrastructure.
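
In spirit, that learn-and-replay approach looks something like the following minimal Python sketch. This is a generic stand-in for the idea, not HP SV's actual implementation; the backend URL, port, and JSON handling are illustrative assumptions.

```python
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

THIRD_PARTY_URL = "http://identity.example.gov"  # hypothetical backend
RECORD_MODE = True                               # flip to False to replay
recordings = {}                                  # (method, path) -> body

class VirtualService(BaseHTTPRequestHandler):
    def do_GET(self):
        key = ("GET", self.path)
        if RECORD_MODE:
            # Pass the call through to the real system and learn the answer.
            with urllib.request.urlopen(THIRD_PARTY_URL + self.path) as resp:
                body = resp.read()
            recordings[key] = body
        else:
            # Serve the learned answer; the third party is never touched.
            body = recordings.get(key, b'{"error": "no recording"}')
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Tests point at localhost:8080 instead of the third party.
    HTTPServer(("localhost", 8080), VirtualService).serve_forever()
```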

Automation tools

After this, we could also use automation tools. We managed to use them by integrating Unified Functional Testing (UFT) with SV, and now we can run automated test cases and end-to-end test cases on SV.

We started to use SV in our test systems first. When we saw the success, we decided to implement SV for the development systems also.
Gardner: Give me a sense of the type of applications we’re talking about.

Yükselten: We are mostly working on customer relationship management (CRM) applications. We deploy more than 200 applications per year and we have more than six million customers. We have to offer new campaigns and make some transformations for new customers, etc.

We have to save all that information, and while saving it, we also interact with other systems -- for example, the National Identity System, telecom systems, and public switched telephone network (PSTN) systems.

We have to request information and make requests to those other systems, so we need to use all the other systems within our CRM systems. We also have internet protocol television (IPTV) products, value-added services products, and company products. But basically, we're using CRM systems for our development and for our systems.

Gardner: So clearly, these are mission-critical applications essential to your business, your growth, and your ability to compete in your market.

Yükselten: If there is a mistake, a big error in our system, the next day, we cannot sell anything. We cannot do anything all over Turkey.

Gardner: Let's talk a bit about the adoption of SV. What do you actually have in place so far?

Yükselten: Actually, it was very easy to adopt these products into our system, because, including the proof of concept (PoC), we could use this tool within six weeks. We spent the first two weeks on the PoC, and after four more weeks, we managed to use the tool.

Easy to implement

For the first six weeks, we could use SV for 45 percent of end-to-end test cases. In 10 weeks, 95 percent of our test cases could be run on SV. It was very easy to implement. After that, we also implemented two other SVs in our other systems. So, we're now using three SV systems. One is for development, one is just for the campaigns, and one is for the E2E tests.

HP Software helped us so much, especially R&D. HP Turkey helped us, because we were also using application lifecycle management (ALM) tools before SV. We were using QTP, LoadRunner, Quality Center, etc., so we had a good relationship with HP Software.
Since SV is a new tool, we needed a lot of customization for our needs, and HP Software was always with us. They were very quick to answer our questions and respond to our development needs. We managed to use the tool in six weeks because of HP's rapid solutions.

Gardner: My understanding is that you have something on the order of 150 services. You use 50 regularly, but you're able to then spin up and use others on a more ad-hoc basis. Why is it important for you to have that kind of flexibility and agility?

Yükselten: We virtualized more than 150 services, but we use 48 of them actively. We use those portions of the services because we virtualized our third-party infrastructures for our needs. For example, we virtualized all the other CRM systems, but we don't need all of them. In gateway mode, you can simulate all the other web services totally. So, we virtualized all the web services, but we use just what we need in our test cases.

We got the investment back in three months actually -- maybe less than three months. It could have been two and a half months. For example, for the campaign test cases, we gained 100 percent efficiency. Before HP, we could run just seven campaigns in a month, but after HP, we managed to run 14 campaigns in a month.
We gained 100 percent efficiency and three man-months in this way, because three test engineers were working on campaigns like this. For another example, last month we got the metrics, and we saw that we had a total blockage for seven days out of the 21 working days in March. We saved 33 percent of our manpower with SV, and there are 20 test engineers working on it. We gained 140 man-days last month.
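
As a quick check, the arithmetic behind those figures works out from the numbers as stated:

```python
working_days = 21   # working days in March
blocked_days = 7    # days of total blockage before SV
engineers = 20      # test engineers affected

print(round(blocked_days / working_days * 100))  # -> 33 (percent of capacity)
print(blocked_days * engineers)                  # -> 140 man-days recovered
```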

For our basic test scenarios, we could run all test cases in 112 hours. After SV, we managed to run them in 54 hours. So we gained 100 percent efficiency in that area, and we also managed to automate the campaign test cases. We automated 52 percent of our campaign test cases, and this meant a very big efficiency gain for us. In total, we saved more than $50,000 per month.

Broader applications

Gardner: Do you expect now to be able to take this to a larger set of applications across Türk Telekom?

Yükselten: Yes. Türk Telekom has licensed these tools and started to use them in its test services to get this efficiency in those systems. We have a sister company called AVEA, and they also want to use this tool. Since we got this efficiency, many companies have wanted this virtualization. Eight companies have visited us in Turkey to hear about our experiences with this tool. Many companies want to use this tool in their test systems.

Gardner: Do you have any advice for other organizations like those you've been describing, now that you have done this? Any recommendations on what you would advise others that might help them improve on how they do it?

Yükselten: Companies must know their needs first. For example, in our company, we have three points of blockage with third parties, and those systems don't change every day. So it was easy to implement SV in our systems and virtualize the other systems. We don't need to redo the virtualization day by day, because the other systems don't change every day.

Once a month, we check and update our systems and our web services on SV, and this is enough for us. But if the other party's systems change day by day, or frequently, it may be difficult to do virtualization every day.

This is an important point. Companies should think about automation besides virtualization. It is also a very efficient aspect, so it must be considered while doing virtualization.

We started to use UFT with integrating SV. As I told you, we managed to automate 52 percent of our campaign test cases so far. So we would like to go on and try to automate more test cases, our end-to-end test cases, the basic scenarios, and other systems.
Our first goal is doing more automation with SV and UFT, and the other is using SV at development sites. We plan to find defects early at development sites and get higher-quality products into test.

Rapid deployment

Of course, in this way, we get rapid deployment and shorter release times, because the product will have higher quality. Using performance testing with SV also helps us on performance. We use HP LoadRunner for our performance test cases. We have three goals now, and the last one is using SV integrated with LoadRunner.

Gardner: Well, it's really impressive. It sounds as if you put in place the technologies that will allow you to move very rapidly, to even a larger payback. So congratulations on that. Gain more insights and information on the best of IT Performance Management at www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover performance podcast series on iTunes under BriefingsDirect.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.
