Wednesday, September 22, 2010

Data center transformation requires more than new systems: there's also secure data removal, recycling, and server disposal

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

An often-overlooked aspect of data center transformation (DCT) is what to do with older assets as newer systems come online. Much of the retiring IT equipment may hold sensitive data, may be a source of significant economic return, or at least needs to be recycled according to various regulations.

Improperly disposing of data and other IT assets can cause embarrassing security breaches, increase costs, and pose the risk of regulatory penalties. Indeed, many IT organizations are largely unaware of the hazards of selling older systems through auction sites, secondary markets, or untested suppliers.

Compliance and recycling issues, as well as data security concerns and proper software disposition, should therefore be top of mind early in the DCT process, not an afterthought.

In a recent podcast discussion, I tapped two HP executives on how best to manage productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of transitional systems during a managed upgrade process. I spoke with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services.

Here are some excerpts:
Helen Tang: Today there are the new things coming about that everybody is really excited about, such as virtualization, and private cloud. ... This time around, enterprises don’t want to repeat past mistakes, in terms of buying just piles of stuff that are disconnected. Instead, they want a bigger strategy that is able to modernize their assets and tie into a strategic growth enablement asset for the entire business.

Yet throughout the entire DCT process, there's a lot to think about when you look at existing hardware and software assets that are probably aged, and won’t really meet today’s demands for supporting modern applications.

How to dispose of those assets? Most people don’t really think about it nor understand all of the risks involved. ... Even experienced IT professionals, who have been in the business for maybe 10, 20 years, don’t quite have the skills and understanding to grasp all of this.

We're starting to see sort of this IT hybrid role called the IT controller, that typically reports to the CIO, but also dot-lines into the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology, as well as the financial aspects.

Jim O'Grady: We see that a lot of companies try to manage this themselves, and they don’t have the internal expertise to do it. Often, it’s done in a very disconnected way in the company. Because it’s disconnected and done in many different ways, it leads to more risks than people think.

You are putting your company’s brand at stake through improper environmental recycling compliance, or by exposing your clients’, customers’, or patients’ data to a security breach. This is definitely one of those areas you don’t want to read about in a newspaper to figure out what went wrong.

One of the most common areas where our clients are caught unaware is the complexity of data security and of the e-waste legislation requirements that are out there, and especially the pace at which they change.

We suggest that they have a well thought-out plan for destroying or clearing data prior to asset decommissioning and/or prior to the asset leaving the physical premises of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on site, as well as off site.
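
O'Grady's point about validating data destruction both on site and off site lends itself to a simple spot check before a drive ever leaves the building. The sketch below is a minimal, hypothetical illustration, not HP's service or any compliance standard: it assumes a zero-fill wipe and reads random blocks from a decommissioned drive, flagging any block that still holds non-zero data. The device path and sample count are assumptions for illustration only.

```python
# Minimal, hypothetical spot check for a zero-fill wipe. The device path and
# sample count are illustrative assumptions; a real program would follow your
# documented, auditable wipe policy and chain-of-custody process.
import os
import random

def sample_wipe_check(device_path, samples=64, block_size=4096):
    """Read random blocks from a wiped drive and flag any that still hold non-zero data."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        size = os.lseek(fd, 0, os.SEEK_END)
        suspect_offsets = []
        for _ in range(samples):
            offset = random.randrange(0, max(size - block_size, 1))
            os.lseek(fd, offset, os.SEEK_SET)
            block = os.read(fd, block_size)
            if any(block):  # any non-zero byte suggests residual data
                suspect_offsets.append(offset)
        return suspect_offsets
    finally:
        os.close(fd)

if __name__ == "__main__":
    leftovers = sample_wipe_check("/dev/sdX")  # hypothetical device path
    print("Blocks with residual data at offsets:", leftovers or "none found")
```

A passing spot check is evidence, not proof; the final off-site validation O'Grady describes still matters.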

Have a well-established plan and budget up-front, one that’s sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.

Reams of regulations

E-waste legislation resides at the state, local, national, and regional levels, and they all differ. Some conflict, and some are in line with each other. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to work to the highest standard and pick someone who knows and has experience in meeting these legislative requirements.

There's a tremendous amount of global complexity that customers are trying to overcome, especially when they undertake data center consolidation and transformation throughout their enterprise, across different geographies and country borders.

You're talking about a variety of regulatory practices and directives, especially in the EU, that are emerging and restrict how you move used and non-working product across borders. There are a variety of different data-security practices and environmental waste laws that you need to be aware of.

Partner beware

A lot of our clients choose to outsource this work to a partner. But they need to keep in mind that they are sharing risk with whomever they partner with. So they have to be very cautious and be extremely picky about who they select as a partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. ... If you don’t kick the tires with your partner, and it turns out that the partner consists of a man, a dog, and a pickup truck, you just may have a hard time defending why you selected that partner.

Also, develop a very strong vendor audit qualification and ongoing inspection process. Visit that vendor prior to the selection and know where your waste stream is going to end up. Whatever they do with the waste stream, it’s your waste stream. You are a part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what that vendor does with it.

You need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Total asset management

Enterprises should carefully consider how they retire and recover value from all of their end-of-use IT equipment, whether it's a PDA or a supercomputer, HP or non-HP product. Most data center transformations and consolidations typically end with a lot of excess or end-of-use product.

We can help educate customers on the hidden risks of dispositioning that end-of-use equipment into the secondary market. This is a strength of HP Financial Services (HPFS).

Typically, what we find with companies trying to recover value for product is that they give it to their facilities guys or the local business units. These guys love to put it on eBay and try to advertise for the best price. But, that’s not always the best way to recover the best value for your data center equipment.

We're now seeing it migrate into the procurement arm. These guys typically put it out for bid and select the highest bid from a lot of the open-market brokers. That's a better strategy for recovering value, but not the best.

Your best bet is to work with a disposition provider that has a very, very strong re-marketing reach into the global markets, and especially a strong, demonstrable recovery process.

From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand their asset management strategy, and help them to personalize the financial asset ownership model that makes sense for them.

For example, if you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, there’s some level of refurbishment done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

We have skilled product traders within our product families who know how to hold product, and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and all of the recovery rates for the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product in small lot sizes to maximize that recovery value for the client.

Legacy support as core competency

We're seeing a big uptick in the need to support legacy product, especially in DCT. We're able to provide highly customized, pre-owned, authentic legacy HP product solutions, sometimes going back 20 years or more. The need for temporary equipment to scale out data center hardware capacity that's locked to a legacy platform is an increasing need that we see from our clients.

Clients also need to ensure their product is legally licensed and that they do not encounter intellectual property infringements. Lastly, they want to trust that the vendor has the right technical skills to deal with the legacy configuration and compatibility issues.

Our short-term rental program covers new or legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test some software application on compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and how to put in place a good data-security plan.

We help them understand whether data security should be done on-site versus off-site, or whether it's worth the cost to do it both on-site and off-site. We also help them understand the complexities of wiping data from enterprise products, versus just a plain PC.

Most of the local vendors and providers out there are skilled in wiping data for PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand, and it’s the real hidden complexity, is how to set up an effective reverse logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Tang: We reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we actually bring together your financial side, your operations people, and your CIOs, so that all the key stakeholders are in the same room, and walk through these common issues that you may or may not have thought about to begin with. You can walk out of that room with consensus, with a shared vision, as well as a roadmap that’s customized for your success.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Tuesday, September 21, 2010

IBM acquires Netezza as big data market continues to consolidate around appliances, middle market, new architecture

IBM is snapping up yet another business analytics player. After purchasing OpenPages last week, Big Blue is now laying down $1.7 billion in an all-cash deal to acquire Netezza.

Netezza provides high-performance analytics in a data warehousing appliance that it claims handles complex analytic queries 10 to 100 times faster than traditional systems. Netezza appliances put analytics into the hands of business users in sales, marketing, product development, human resources, and other departments that need actionable insights to drive decision-making.

With its latest business analytics acquisition, Steve Mills, senior vice president and group executive of IBM Software and Systems, says the company is bringing analytics to the masses.

“We continue to evolve our capabilities for systems integration, bringing together optimized hardware and software, in response to increasing demand for technology that delivers true business value,” Mills says. “Netezza is a perfect example of this approach.”

Big Blue’s long haul

Netezza fits in with IBM’s maturing business analytics strategy. Big Blue has long put an emphasis on data analysis and business intelligence (BI) as key drivers of IT infrastructure needs. The company has demonstrated a clear understanding that data analysis and BI can also be easily applied to business issues.

IBM’s relational database, DB2, also fits into the big picture. Over the years, IBM has built a strong family of database-driven products around DB2. Essentially, IBM has successfully worked to tie the data equation together with the needs of enterprises and the strengths of their IT departments.

While DB2 reaches into the past and supports the data needs of legacy and distributed systems and applications, new architectures around in-memory and optimized platforms for persistence-driven tasks are in vogue. While Netezza's strengths are in analytics, this architecture has other uses, ones we'll be seeing more of.

Fast-forward to the Netezza acquisition. The $1.7 billion grab shows that IBM is well aware that big data sets don’t lend themselves to traditional architectures for crunching data. IBM, along with its competitors, has been developing or acquiring new architectures that focus more on in-memory solutions.

Rather than moving the entire database or large caches around on disk or tape, then, new architectures have emerged where the data and logic reside closer together -- and the data is accessed from high-performing persistence.

For example, with Netezza appliances, NYSE Euronext has slashed the time it takes to load and extract massive amounts of historical data so it can run analytic queries more securely and efficiently, reducing run times from hours to seconds. Virgin Media, a UK provider of TV, broadband, phone, and mobile services with millions of subscribers, uses Netezza across its product marketing, revenue assurance, and credit services departments to proactively plan, forecast, and respond to the effect of pricing and tariff changes, enabling it to respond quickly with competitive offerings.

Business analytics consolidation

With the Netezza acquisition, the business analytics market is seeing consolidation as major players begin preparing to tap into a growing big data opportunity. Much the same as the BI market saw consolidation a few years ago -- IBM acquired Cognos, Oracle bought Hyperion, and SAP snapped up Business Objects -- vendors are now seeing big data analytics as an area that should be embedded into the total infrastructure of solutions. That requires a different architecture.

The competition is heating up. EMC purchased Greenplum, an enabler of big data clouds and self-service analytics, in July. Both companies are planning to sell the hardware and software together in appliances. The vendors tune and optimize the hardware and software to offer the benefits of big data crunching, taking advantage of in-memory architecture and high-performance hardware.

Expect to see more consolidation, although there aren’t too many players left in the Netezza space. Acquisition candidates include data management and analysis software company Aster Data Systems and Teradata with its enterprise analytics technologies, among others. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Meanwhile, Oracle this week at OpenWorld is pushing against the market with its new Exadata product. The battle is on. My take is that these purchases are for more than the engines that drive analytics -- they are for the engines that drive SaaS, cloud, mobile, web and what we might call the more modern workloads ... data-intensive, high-scaling, fast-changing and services-oriented.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, September 20, 2010

Morphlabs eases way for companies to build private cloud infrastructures, partners with Zend

Morphlabs, a provider of enterprise cloud architecture platforms, has simplified the process of building and managing an internal cloud for enterprise environments -- enabling companies to create their own private cloud infrastructure.

The Manhattan Beach, Calif. company today announced a significant upgrade to its flagship product, mCloud Controller. The enhanced version introduces Enterprise Cloud Architecture (ECA), a new approach that provides enterprises with immediate access to the building blocks and binding components of a fault tolerant, elastic, and highly automated platform.

Morphlabs also announced a partnership with Zend Technologies Ltd., whose Zend Server will be shipped as part of the mCloud Enterprise, said Winston Damarillo, CEO at Morphlabs.

mCloud Controller is a comprehensive cloud computing platform, delivered as an appliance or virtual appliance, that also provides open mCloud APIs (you can manage the ECA cloud from an iPad, for example). To support the leading platforms, mCloud Controller will have built-in, ECA-compliant support for Java, Ruby on Rails, and PHP.

Fittingly for enterprise private clouds, the Morph offering also provides direct integration with mainstream middleware via standards-based connectors. It also supports a range of hypervisors, from KVM to Xen and VMware, and allows other cluster managers to be used as well.

Look for Morphlabs to seek to sell to both service providers and enterprises for the benefits of compatible hybrids. Of course, we're hearing the same from Citrix, VMware, Novell, HP, etc. It's a horse race out there for a de facto hybrid cloud standard, all right.

Productivity gains

“PHP has been broadly adopted for the productivity gains it brings to Web application development, and because it can provide the massive scalability that e-commerce, social networking and media sites require,” said Matt Elson, vice president of business development at Zend. “Integrating Zend Server into Morphlabs’ mCloud Controller enables IT organizations to leverage the elasticity of cloud computing and automate the process of deploying highly reliable PHP applications in the cloud.”

Key features of the mCloud Controller with ECA include:
  • Uniform environments from development to production to help users simplify system configuration. Applications can grow as needed, while maintaining a standardized infrastructure for ease of growth and replacement.

  • Simplified system administration with automated monitoring and self-healing out of the box to avoid complicated system tuning. mCloud Controller also comes with graphical tools for viewing system-wide performance.

  • Self-service resource provisioning, which frees the IT department from numerous application provisioning requests. Without any system administration skills, authorized users can start and stop compute instances and provision applications as needed. Billing is also included within the system.

  • Streamlined application management automates the process of deploying, monitoring and backing-up applications. Users do not have to deal with configuration files and server settings.
The mCloud Controller v2.5 is available now in the United States, Japan and South East Asia. For more information contact Morphlabs at info@mor.ph.


Wednesday, September 15, 2010

HP Business Service Automation portfolio gives IT the tools it needs to compete with clouds

HP is pushing the automation card again with new tools for hybrid IT environments. The company today announced “enhanced automation solutions” that set the stage for lower-cost business application deployment -- whether those apps are deployed traditionally, virtually or via a cloud.

HP’s latest Business Service Automation (BSA) enhancements beef up its solutions for hybrid IT environments, which the company defines as any combination of on-premise, off-premise, physical and virtual scenarios, including cloud computing. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP has identified a strong need in the enterprise, which is why it’s moving so fast on the BSA front. Although hybrid IT environments can increase a business’s agility and speed time to market, they also increase complexity, risk and costs by creating IT silos -- if the environment isn’t holistically managed. HP’s new BSA software enhancements work to take the “if” out of the equation.

A 360-degree hybrid solution

Today’s BSA announcement builds on HP’s recent cloud announcements for hybrid IT environments. The just-announced software enhances HP’s BSA portfolio to offer unified server, network, storage, and application management. The goal is to break down IT silos to simplify application development and hybrid IT management.

HP is promising financial returns for companies that adopt its solutions. According to a June 2010 ExpertROI Spotlight conducted by IDC on behalf of HP, organizations that deploy HP BSA solutions can realize up to $4.82 in benefits for every IT dollar invested, reduce annual IT costs by up to $24,000 per 100 end users, and reduce outsourcing costs by 40 percent to 80 percent.
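
Those IDC figures are "up to" numbers, but the arithmetic behind a rough estimate is straightforward. The sketch below is an illustrative back-of-the-envelope calculation only; the organization size and annual automation spend are hypothetical inputs of mine, not part of the IDC study or HP's model.

```python
# Rough, illustrative estimate using the "up to" figures cited above.
# end_users and annual_automation_spend are hypothetical inputs.
end_users = 2500                      # hypothetical headcount
annual_automation_spend = 500_000     # hypothetical dollars invested in BSA tooling

benefit_per_dollar = 4.82             # "up to $4.82 in benefits for every IT dollar invested"
savings_per_100_users = 24_000        # "up to $24,000 per 100 end users" in annual IT cost

gross_benefit = annual_automation_spend * benefit_per_dollar
annual_cost_reduction = (end_users / 100) * savings_per_100_users

print(f"Gross benefit (upper bound): ${gross_benefit:,.0f}")
print(f"Annual IT cost reduction (upper bound): ${annual_cost_reduction:,.0f}")
```

Actual results will vary widely with environment and baseline costs; the point is simply that the claimed ceilings are easy to translate into a planning number.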

“Organizations are seeking solutions that deliver business applications and services with greater agility, speed and at the lowest cost to the enterprise, regardless of their IT environment,” says Erik Frieberg, vice president of Marketing, Software and Solutions at HP. “Clients can achieve up to 382 percent ROI by deploying HP’s leading automation software and leverage the benefits of new hybrid delivery models.”

HP’s acquisition of Stratavia has strengthened its automation portfolio by adding deployment, configuration, and management solutions for enterprise databases, middleware, and packaged applications. These solutions aim to bridge the gap between application development and operations teams. With Stratavia’s technology in its portfolio, HP said it can now provision all of the components, rapidly deploy changes, and handle ongoing configuration and compliance management.

Under the BSA hood

HP’s BSA portfolio now offers new capabilities in application deployment and risk mitigation, as well as better efficiency and productivity. For example, HP Server Automation 9.0 helps clients automate the entire server life cycle, control virtualization sprawl, and provide more flexible provisioning and deployment of applications. New Application Deployment Manager (ADM) functionality lets IT organizations automate the release process to bridge the gap between development, quality assurance and operations teams. HP said these enhancements can accelerate application deployment by up to 86 percent.

What’s more, HP Network Automation 9.0 now helps clients contain costs, mitigate risk and improve efficiency of the network by automating error-prone tasks, reducing outages and enforcing policies in real-time regardless of the environment. And HP Operations Orchestration 9.0 helps clients faced with constant alerts and siloed teams improve service quality across hybrid environments. It gives clients the ability to automate the IT processes required to support cloud computing initiatives.

HP Operations Orchestration software can help manage a hybrid infrastructure through a single view while HP Client Automation 7.8 helps clients reduce administration costs for managing physical and virtual machines through a single tool. And HP Storage Essentials 6.3 helps clients reduce complexity in hybrid environments, while improving storage utilization and controlling capacity growth.

IT needs to play at productivity better

The BSA offerings come at a crossroads for enterprise IT. The fact is that IT can no longer just compete against its own past practices and cost structures. There's a looming gulf between what it costs the IT department to provide services and what a small army of outside hosts is coming to market with. IT now needs to compete against the cost structures of pure-play cloud and SaaS providers and hosts.

The solution for IT to remain competitive, and to pick and choose what to retain and what to outsource, is to make all of its systems and apps perform better and more efficiently. And it also needs the governance and management to automate those apps and systems to keep complexity and costs in line.

Visibility, automation and management are essential for IT to stay in the game against hosts, MSPs, clouds, SaaS providers, etc. And the same management allows IT to function as the best broker of services, regardless of where the servers reside. This is clearly the target HP's BSA portfolio has in its sights.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Aster Data's newest offering provides row and column functionality for big data MPP analytics

Aster Data has taken big data management and analytics to the next level with the announcement today of its Aster Data nCluster 4.6, which includes a column data store and provides a universal SQL-MapReduce analytic framework on a hybrid row and column massively parallel processing (MPP) database management system (DBMS).

The San Carlos, Calif. company's new offering will allow users to choose the data format best suited to their needs and benefit from the power of Aster Data’s SQL-MapReduce analytic capabilities, as well as Aster Data’s suite of 1000+ MapReduce-ready analytic functions. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Row stores traditionally have been optimized for look-up style queries, while column stores are traditionally optimized for scan-style queries. Providing both a row store and a column store within nCluster and delivering a unified SQL-MapReduce framework across both stores enables both query types.

Universal query framework

For example, a retailer using historical customer purchases to derive customer behavior indicators may store each customer purchase in a row store to ease retrieval of any individual customer order. This is a look-up style query. This same retailer can see a 5-15x performance improvement by using a column store to provide access to the data for a scan-style query, such as the number of purchases completed per brand or category of product. The Aster Data platform now supports both query types with natively optimized stores and a universal query framework.
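
The difference between the two query styles is easy to see in generic SQL. The sketch below is illustrative only: the table and column names are hypothetical, and the syntax is plain SQL, not Aster Data's specific DDL, per-partition storage clause, or SQL-MapReduce API.

```python
# Illustrative only: generic SQL for the two query styles described above.
# Table and column names are hypothetical; this is not Aster Data's syntax.

# Look-up style: fetch one customer's orders. A row store suits this well,
# because each matching row is read in one piece.
lookup_query = """
SELECT order_id, order_date, total_amount
FROM   orders
WHERE  customer_id = 42;
"""

# Scan style: aggregate purchases per brand across the whole history. A column
# store suits this well, because only the referenced columns are scanned.
scan_query = """
SELECT brand, COUNT(*) AS purchases
FROM   orders
GROUP  BY brand;
"""

print(lookup_query)
print(scan_query)
```

The appeal of a hybrid store is that both shapes of query can run against natively optimized storage without maintaining two separate systems.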

Other features include:
  • Choice of storage, implemented per-table partition, which provides customers flexible performance optimization based on analytical workloads.

  • Such services as dynamic workload management, fault tolerance, Online Precision Scaling on commodity hardware, compression, indexing, automatic partitioning, SQL-MapReduce, SQL constructs, and cross-storage queries, among others.

  • New statistical functions popular in decision analysis, operations research, and quality management including decision trees and histograms.

Pulse surges for Eclipse with more than one million developers on board

Getting developers on board. That’s the challenge technologies from Linux to Android face every day. Genuitec has helped Eclipse overcome this challenge with Pulse. Indeed, more than one million developers around the world have now installed Pulse.

Pulse works to give software developers an efficient way to locate, install and manage their Eclipse-based tool suite, among other tools. The software essentially empowers developers to customize their installs while avoiding plug-in management issues -- even when crossing operating systems. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]

“When we envisioned Pulse in 2007, we knew the developer community badly needed an easy technology to help manage their Eclipse tools,” says Maher Masri, president and CEO of Genuitec, a founding and strategic member of the Eclipse Foundation. “Now with one million users, we can happily say Pulse is a great success story.”

The Pulse advantage

One of the advantages Pulse is pushing out to its one million developers is the ability to manage four years of Eclipse platform technologies from a single dashboard, including Eclipse 3.6, also known as Helios.

That’s no small feat, seeing how many enterprises standardize on older Eclipse versions, yet still demand an easy migration path to upgrade their projects, technical artifacts, and other mission-critical subsystems. Developers can even access Eclipse 3.7, also known as Indigo, as the milestones are rolled out in coming months.

This multi-year tool stack feature is part of the reason why Pulse has attracted so many Eclipse developers. Pulse is the only product on the market that supports this type of lifecycle-based stack management.

Getting to know Pulse

Pulse also provides a product family of offerings. There’s a Community Edition that’s free, a Managed Team Edition that aims at the needs of development teams, and a Private Label software delivery version designed for corporate use. Pulse Community Edition is free for individual developers, while Pulse Managed Team Edition is $60 annually. Pricing for Pulse Private Label, a software delivery and management platform, is based on individual requirements.

“Pulse, like many other powerful Eclipse-based technologies, continues to attract world-class developers to the Eclipse platform,” says Mike Milinkovich, executive director of the Eclipse Foundation. “As we continuously enhance our code base and march toward Eclipse 3.7 next summer, we’re pleased that Genuitec will continue to support developers using Eclipse with its Pulse management software.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, September 14, 2010

Delphix Server launches at DEMO to slash relational database redundant copies, storage waste and cost

Delphix has brought virtualization techniques to database infrastructure with general availability of Delphix Server, which reduces structured and relational data redundancy while maintaining full functionality and performance -- and operating in a fraction of the space at lower cost.

The Palo Alto, Calif. company, just launching this week at DEMO, says that Delphix Server solves two major IT challenges: the operational complexity and the redundant infrastructure required to support application lifecycles via multiple database copies. Delphix software installs on standard x86 servers or in virtual machines, allowing customers to virtualize database infrastructure into a "single virtual authority" and do for relational data what storage innovations and "de-dupe" have done to reduce myriad standing copies of data caches.

The interface for managing the data is very clean and time-line based down to seconds. It reminds me of an enterprise-level version of Apple's Mac OS X Time Machine, but far more granular. This allows all those with access to the data to manage it intelligently but sparingly.

While Delphix consolidates storage and reduces database provisioning and refresh times, it has little or no impact on production systems, thanks to its innovative synchronization technology, says Jed Yueh, CEO at Delphix. Other benefits include:

  • Agile application development: Delphix automates the provisioning and refresh process, enabling developers to instantly create personal sandboxes or virtual databases (VDBs) that are up-to-date and isolated from other VDBs. Developers can cut months out of project schedules and perform destructive or parallel testing to improve overall application quality and performance.
  • Improved data resiliency: Patent-pending TimeFlow technology enables customers to create a running record of database changes; VDBs can be instantly provisioned from multiple points-in-time, with granularity down to the second. This time-shifting capability enables businesses to dramatically reduce the time required to recover from logical data loss. (A conceptual sketch of this point-in-time selection follows this list.)
  • Storage consolidation: The average customer creates seven copies of each production database for development, testing, QA, staging, operational reporting, pilots, and training, with each copy typically having its own dedicated and largely redundant storage. Delphix creates a single virtual environment, where multiple VDBs can be instantly provisioned or refreshed from a shared footprint -- coordinating changes and differences in the background without compromising functionality or performance.
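
Conceptually, point-in-time provisioning amounts to keeping an ordered record of change points and cloning a writable virtual database from the latest one at or before the requested timestamp. The sketch below is my own simplified illustration of that idea, not Delphix's TimeFlow implementation or API; the class, method, and field names are hypothetical.

```python
# Simplified illustration of point-in-time virtual database (VDB) provisioning.
# This is not Delphix's TimeFlow API; names and structures are hypothetical.
from bisect import bisect_right
from datetime import datetime

class ChangeTimeline:
    def __init__(self):
        self.points = []  # (timestamp, change_record_id), kept sorted by timestamp

    def record(self, ts, change_id):
        self.points.append((ts, change_id))
        self.points.sort(key=lambda p: p[0])

    def provision_vdb(self, requested_ts):
        """Pick the latest change record at or before requested_ts and clone from it."""
        timestamps = [ts for ts, _ in self.points]
        idx = bisect_right(timestamps, requested_ts) - 1
        if idx < 0:
            raise ValueError("no change record exists at or before the requested time")
        ts, change_id = self.points[idx]
        return {"cloned_from": change_id, "as_of": ts, "writable": True}

timeline = ChangeTimeline()
timeline.record(datetime(2010, 9, 14, 9, 0, 0), "chg-001")
timeline.record(datetime(2010, 9, 14, 9, 0, 30), "chg-002")
print(timeline.provision_vdb(datetime(2010, 9, 14, 9, 0, 15)))  # resolves to chg-001
```

The real system layers copy-on-write storage and synchronization underneath, so the "clone" is thin and shares unchanged blocks, which is where the storage consolidation comes from.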
Both enterprises and service providers for SaaS and cloud will benefit from reducing the vast data redundancy across the app dev and ops lifecycle. By shrinking the hardware requirements, those hosts seeking to improve their margins gain, while enterprises and ISVs can devote the server and storage resources to more productive uses.

I should think that the app dev and test folks would grok the benefits too. Why not cut the hardware and storage costs for bringing applications to maturity by virtualizing the databases? What works for the OS and runtime works for the data.


Want client virtualization? Time then to get your back-end infrastructure act together

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

We've all heard about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.

But today’s business and economic drivers demand more than just good technology. There also needs to be a clear rationale for change, both business and economic. Second, there need to be proven methods for properly moving to client virtualization at low risk and in ways that lead to both high productivity and lower total costs over time.

Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper and more flexible client platform support from back-end servers will become more the norm and less the exception over time.

Client devices and application types will also be dynamically shifting both in numbers and types, and crossing the chasm between the consumer and business spaces. The new requirements for business mobile use point to the need for planning and proper support of the infrastructures that can accommodate these edge, wireless clients.

To help guide business on client virtualization infrastructure requirements, learn more about client virtualization strategies and best practices that support multiple future client directions, and see why such virtualization makes sense economically, we went to Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization. The interview is conducted by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Nordhues: In desktop virtualization, what really comes out to the user device is just pixel information. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.

When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment -- and make sure that you're doing a proper analysis and are actually ready.

On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We're on a path where the hardware pieces are there to deliver on that.

But you have to look at the data center and its capacity to house the increased number of servers, storage, and networking that has to go there to support the user.

So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.

This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the businesses by moving those folks to do more innovation, and to free up cycles to do that, instead of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

Where we're headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you're an office worker and you're just getting applications virtualized out to you. You're going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.

Maybe, you're more of a power user, and you need that whole desktop environment provided by VDI. We'll provide reference architectures with just wire it once type of infrastructure with storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks and months, instead of minutes.

Also, a hybrid solution could in the future deliver VDI plus server-based computing together and cover your whole gamut of users, from the very lowest task-oriented user all the way up to the highest-end power users that you have.

And, we're going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.

Why VDI now?

It’s a digital generation of millions of new folks entering the workforce, and they've grown up expecting to be mobile and increasingly global. So, we need computing environments that don’t require us to report to a post in an office building in order to get work done.

We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees in organizations don’t work where their headquarters are for their company, and they work differently.

When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.

Delivering packaged services out to the end user is something that’s still being worked out by software providers, and you're going to see some more elements of that come out as we go through the next year.



And, of course, there's the impact of security, which is always the highest on customer lists. We have customers out there, large enterprise accounts, who are spending north of $100 million a year just to protect themselves from internal fraud.

With client virtualization, the security is built in. You have everything in the data center. You can’t have users on the user endpoint side, which may be a thin client access device, taking files away on USB keys or sticks.

It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don't want ... on top of your IT infrastructure.

And there's really a catalyst coming as well in the Windows 7 launch and its availability since late last year. Many organizations are looking at their transition plans there. It’s a natural time to look at a way to do the desktop differently than it has been done in the past.

Reference architectures support all clients

We've launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.

For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.

We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400-500 range.

We're looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it come completely packaged.

Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-it-once infrastructure that can deliver whatever the needs are for your broad user community.

What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all this for you. Here is the server and storage and all the way out to the thin client solution. We've tested it. We've engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment." So, you take that system integration task away from the customer, because HP has done it for them.

We have a number of customer references. I won’t call them out specifically, but we do have some of these posted out on HP.com/go/clientvirtualization, and we continue to post more of our customer case studies out there. They are across the whole desktop virtualization space. Some are on server-based computing or sharing applications, some are based on VDI environments, and we continue to add to those.

HP also has an ROI or TCO calculator that we put together specifically for this space. You show a customer a case study and they say, "Well, that doesn’t really match my pain points. That doesn’t really match my problem. We don’t have that IT issue," or "We don’t have that energy, power issue."

We created this calculator so that customers can put in their own data. It’s a fairly robust tool. You can put in information about what your desktop environment is costing you today and what it would cost to put in a client virtualization environment, and see what you can expect as far as your return on investment. So, it’s a compelling part of the discussion.
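
The kind of comparison such a calculator makes can be sketched in a few lines. The sketch below is a hypothetical simplification with made-up inputs, not HP's actual ROI/TCO model or its cost categories; substitute your own environment's numbers.

```python
# Hypothetical cost-per-seat comparison -- a simplification, not HP's TCO calculator.
# All inputs are illustrative assumptions.
seats = 1000

# Current desktop environment, annual cost per seat (hypothetical figures)
current = {
    "hardware_refresh": 250,      # amortized PC refresh
    "support_and_admin": 400,     # desk-side support, imaging, patching
    "power_and_misc": 80,
}

# Client virtualization environment, annual cost per seat (hypothetical figures)
virtualized = {
    "thin_client_and_refresh": 120,
    "datacenter_infrastructure": 300,   # servers, storage, networking, licenses
    "support_and_admin": 180,           # centralized management
    "power_and_misc": 40,
}

current_total = seats * sum(current.values())
virtual_total = seats * sum(virtualized.values())
annual_savings = current_total - virtual_total

print(f"Current annual cost:      ${current_total:,}")
print(f"Virtualized annual cost:  ${virtual_total:,}")
print(f"Estimated annual savings: ${annual_savings:,}")
```

A real model would also fold in migration costs, licensing detail, and productivity effects, which is why the consulting around the tool matters as much as the arithmetic.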

Obviously, with any new computing technology, the underlying consideration is always cost, or in this case cost per seat, and this is no different, which is why we have provided the tool and the consulting around that.

On that same website that I mentioned, HP.com/go/clientvirtualization, we have our technical white papers that we've published, along with each of these reference architectures.

For example, if you pick the VDI reference architecture that will support 1,000-plus users in general, there is a 100-page white paper that talks about exactly how we tested it, how we engineered it, and how it scales with VMware View, or with Microsoft Hyper-V plus Citrix XenDesktop.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.
