Wednesday, September 22, 2010

Data center transformation requires more than new systems: there's also secure data removal, recycling, and server disposal

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

An often-overlooked aspect of data center transformation (DCT) is what to do with the older assets as newer systems come online. Much of the retiring IT equipment may hold sensitive data, may be a source of significant economic return, or at the least needs to be recycled according to various regulations.

Improperly disposing of data and other IT assets can cause embarrassing security breaches, increase costs, and pose the risk of regulatory penalties. Indeed, many IT organizations are largely unaware of the hazards of selling older systems on auction sites, into secondary markets, or through untested suppliers.

Compliance and recycling issues, as well as data security concerns and proper software disposition, should therefore be top of mind early in the DCT process, not an afterthought.

In a recent podcast discussion, I tapped two HP executives on how best to manage productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of transitional systems during a managed upgrade process. I spoke with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services.

Here are some excerpts:
Helen Tang: Today there are the new things coming about that everybody is really excited about, such as virtualization, and private cloud. ... This time around, enterprises don’t want to repeat past mistakes, in terms of buying just piles of stuff that are disconnected. Instead, they want a bigger strategy that is able to modernize their assets and tie into a strategic growth enablement asset for the entire business.

Yet throughout the entire DCT process, there's a lot to think about when you look at existing hardware and software assets that are probably aged, and won’t really meet today’s demands for supporting modern applications.

How to dispose of those assets? Most people don’t really think about it nor understand all of the risks involved. ... Even experienced IT professionals, who have been in the business for maybe 10, 20 years, don’t quite have the skills and understanding to grasp all of this.

We're starting to see sort of this IT hybrid role called the IT controller, that typically reports to the CIO, but also dot-lines into the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology, as well as the financial aspects.

Jim O'Grady: We see that a lot of companies try to manage this themselves, and they don’t have the internal expertise to do it. Often, it’s done in a very disconnected way in the company. Because it’s disconnected and done in many different ways, it leads to more risks than people think.

You are putting your company's brand at stake through improper environmental recycling compliance, or by exposing your clients', customers', or patients' data to a security breach. This is definitely one of those areas where you don't want to find out what went wrong by reading about it in a newspaper.

One of the most common problems is that our clients are caught unaware of the complexity of data security and of the e-waste legislation requirements that are out there, and especially the pace at which they change.

We suggest that they have a well-thought-out plan for destroying or clearing data prior to asset decommissioning and/or prior to the asset leaving the physical premises of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on site, as well as off site.
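As a concrete illustration of that "clear, then validate" advice, here is a minimal Python sketch of an on-site wipe followed by an independent verification pass. This is conceptual only: production sanitization follows standards such as NIST SP 800-88 and must account for SSDs, remapped sectors, and hidden areas that a simple overwrite cannot reach.

```python
import os

CHUNK = 1024 * 1024  # 1 MiB

def wipe(path: str, passes: int = 1) -> None:
    """Overwrite every byte of a file (or raw device) with zeros.
    Illustrative only -- real tools handle SSDs, remapped sectors, etc."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining:
                n = min(CHUNK, remaining)
                f.write(b"\x00" * n)
                remaining -= n
            f.flush()
            os.fsync(f.fileno())

def verify_wiped(path: str) -> bool:
    """Separate validation pass: confirm no nonzero byte remains."""
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            if chunk.count(0) != len(chunk):
                return False
    return True
```

The separation of `wipe` and `verify_wiped` mirrors O'Grady's point: the destruction step and the final validation should be independent checks, ideally performed by different parties.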

Have a well-established plan and budget up-front, one that’s sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.

Reams of regulations

E-waste legislation resides at the state, local, national, and regional levels, and the laws all differ. Some conflict, while others align. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to hold yourself to the highest standard and pick someone who knows and has experience in meeting these legislative requirements.




There are tremendous amounts of global complexities that customers are trying to overcome, especially when they try to do data center consolidation and transformation, throughout their enterprise across different geographies and country borders.

You're talking about a variety of regulatory practices and directives, especially in the EU, that are emerging and restrict how you move used and non-working product across borders. There are a variety of different data-security practices and environmental waste laws that you need to be aware of.

Partner beware

A lot of our clients choose to outsource this work to a partner. But they need to keep in mind that they are sharing risk with whomever they partner with. So they have to be very cautious and be extremely picky about who they select as a partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. ... If you don't kick the tires with your partner and later find out that the partner consists of a man, a dog, and a pickup truck, you may have a hard time defending why you selected that partner.

Also, develop a very strong vendor audit qualification and ongoing inspection process. Visit that vendor prior to the selection and know where your waste stream is going to end up. Whatever they do with the waste stream, it’s your waste stream. You are a part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what that vendor does with it.

You need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Total asset management

Enterprises should carefully consider how they retire and recover value from their entire end-of-use IT equipment, whether it's a PDA or a supercomputer, an HP or non-HP product. Most data center transformations and consolidations typically end with a lot of excess or end-of-use product.

We can help educate customers on the hidden risks of dispositioning end-of-use equipment into the secondary market. This is a strength of HP Financial Services (HPFS).

Typically, what we find with companies trying to recover value for product is that they give it to their facilities guys or the local business units. These guys love to put it on eBay and advertise for the best price. But that's not always the best way to recover the best value for your data center equipment.




We're now seeing this role migrate into the procurement arm. These teams typically put the equipment out for bid and select the highest bid from the open-market brokers. That's a better strategy for recovering value, but not the best.

Your best bet is to work with a disposition provider that has a very strong re-marketing reach into the global markets, and especially a strong, demonstrable recovery process.

From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand their asset management strategy, and help them to personalize the financial asset ownership model that makes sense for them.

For example, if you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, there’s some level of refurbishment done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

We have skilled product traders within our product families who know how to hold product, and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and all of the recovery rates for the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product in small lot sizes to maximize that recovery value for the client.

Legacy support as core competency

We're seeing a big uptake in the need to support legacy product, especially in DCT. We're able to provide highly customized, pre-owned, authentic legacy HP product solutions, sometimes going back 20 years or more. The need for temporary equipment to scale out capacity on legacy-locked data center hardware platforms is one we increasingly see from our clients.

Clients also need to ensure their product is legally licensed and that they don't encounter intellectual property infringement. Lastly, they want to trust that the vendor has the right technical skills to deal with the legacy configuration and compatibility issues.

Our short-term rental program covers new or legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test some software application on compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and how to put in place a good data-security plan.

We help them understand whether data security should be done on-site versus off-site, or whether it's worth the cost to do it both on-site and off-site. We also help them understand the complexities of wiping data on enterprise products versus a plain PC.




Most of the local vendors and providers out there are skilled in wiping data for PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand, and it's the real hidden complexity, is how to set up an effective reverse logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Tang: We reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we actually bring together your financial side, your operations people, and your CIOs, so all the key stakeholders are in the same room, and walk through these common issues that you may or may not have thought about to begin with. You can walk out of that room with consensus, with a shared vision, as well as a roadmap that's customized for your success.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Tuesday, September 21, 2010

IBM acquires Netezza as big data market continues to consolidate around appliances, middle market, new architecture

IBM is snapping up yet another business analytics player. After purchasing OpenPages last week, Big Blue is now laying down $1.7 billion in an all-cash deal to acquire Netezza.

Netezza provides high-performance analytics in a data warehousing appliance that claims to handle complex analytic queries 10 to 100 times faster than traditional systems. Netezza appliances put analytics into the hands of business users in sales, marketing, product development, human resources and other departments that need actionable insights to drive decision-making.

With its latest business analytics acquisition, Steve Mills, senior vice president and group executive of IBM Software and Systems, says the company is bringing analytics to the masses.

“We continue to evolve our capabilities for systems integration, bringing together optimized hardware and software, in response to increasing demand for technology that delivers true business value,” Mills says. “Netezza is a perfect example of this approach.”

Big Blue’s long haul

Netezza fits in with IBM’s maturing business analytics strategy. Big Blue has long put an emphasis on data analysis and business intelligence (BI) as key drivers of IT infrastructure needs. The company has demonstrated a clear understanding that data analysis and BI can also be easily applied to business issues.

IBM’s relational database, DB2, also fits into the big picture. Over the years, IBM has built a strong family of database-driven products around DB2. Essentially, IBM has successfully worked to tie the data equation together with the needs of enterprises and the strength of their IT departments.




While DB2 reaches into the past and supports the data needs of legacy and distributed systems and applications, new architectures around in-memory and optimized platforms for persistence-driven tasks are in vogue. While Netezza's strengths are in analytics, this architecture has other uses, ones we'll be seeing more of.

Fast-forward to the Netezza acquisition. The $1.7 billion grab shows that IBM is well aware that big data sets don’t lend themselves to traditional architecture for crunching data. IBM, along with its competitors, has been developing or acquiring new architectures that focus more on in-memory solutions.

Rather than moving the entire database or large caches around on disk or tape, new architectures have emerged where the data and logic reside closer together -- and the data is accessed from high-performing persistence.

For example, with Netezza appliances, NYSE Euronext has slashed the time it takes to load and extract massive amounts of historical data, so it can run analytic queries more securely and efficiently while reducing run times from hours to seconds. Virgin Media, a UK provider of TV, broadband, phone and mobile services with millions of subscribers, uses Netezza across its product marketing, revenue assurance and credit services departments to proactively plan, forecast, and respond to the effects of pricing and tariff changes, enabling it to respond quickly with competitive offerings.

Business analytics consolidation

With the Netezza acquisition, the business analytics market is seeing consolidation as major players begin preparing to tap into a growing big data opportunity. Much the same as the BI market saw consolidation a few years ago -- IBM acquired Cognos, Oracle bought Hyperion, and SAP snapped up Business Objects -- vendors are now seeing big data analytics as an area that should be embedded into the total infrastructure of solutions. That requires a different architecture.

The competition is heating up. EMC purchased Greenplum, an enabler of big data clouds and self-service analytics, in July. Both companies are planning to sell the hardware and software together in appliances. The vendors tune and optimize the hardware and software to offer the benefits of big data crunching, taking advantage of in-memory architecture and high-performance hardware.

Expect to see more consolidation, although there aren’t too many players left in the Netezza space. Acquisition candidates include data management and analysis software company Aster Data Systems and Teradata with its enterprise analytics technologies, among others. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Meanwhile, Oracle this week at OpenWorld is pushing against the market with its new Exadata product. The battle is on. My take is that these purchases are for more than the engines that drive analytics -- they are for the engines that drive SaaS, cloud, mobile, web and what we might call the more modern workloads ... data-intensive, high-scaling, fast-changing and services-oriented.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Monday, September 20, 2010

Morphlabs eases way for companies to build private cloud infrastructures, partners with Zend

Morphlabs, a provider of enterprise cloud architecture platforms, has simplified the process of building and managing an internal cloud for enterprise environments -- enabling companies to create their own private cloud infrastructure.

The Manhattan Beach, Calif. company today announced a significant upgrade to its flagship product, mCloud Controller. The enhanced version introduces Enterprise Cloud Architecture (ECA), a new approach that provides enterprises with immediate access to the building blocks and binding components of a fault tolerant, elastic, and highly automated platform.

Morphlabs also announced a partnership with Zend Technologies Ltd., whose Zend Server will be shipped as part of the mCloud Enterprise, said Winston Damarillo, CEO at Morphlabs.

mCloud Controller is a comprehensive cloud computing platform, delivered as an appliance or virtual appliance, and it provides open mCloud APIs (you can manage the ECA cloud from an iPad, for example). To support the leading platforms, mCloud Controller will have built-in, ECA-compliant support for Java, Ruby on Rails, and PHP.

Fittingly for enterprise private clouds, the Morph offering also provides direct integration with mainstream middleware via standards-based connectors. It also supports a plethora of hypervisors, from KVM and Xen to VMware, and allows other cluster managers to be used as well.

Look for Morphlabs to seek to sell to both service providers and enterprises for the compatible-hybrid benefits. Of course, we're hearing the same from Citrix, VMware, Novell, HP, etc. It's a horse race out there for a de facto hybrid cloud standard, all right.

Productivity gains

“PHP has been broadly adopted for the productivity gains it brings to Web application development, and because it can provide the massive scalability that e-commerce, social networking and media sites require,” said Matt Elson, vice president of business development at Zend. “Integrating Zend Server into Morphlabs’ mCloud Controller enables IT organizations to leverage the elasticity of cloud computing and automate the process of deploying highly reliable PHP applications in the cloud.”

Key features of the mCloud Controller with ECA include:
  • Uniform environments from development to production to help users simplify system configuration. Applications can grow as needed, while maintaining a standardized infrastructure for ease of growth and replacement.

  • Simplified system administration with automated monitoring and self-healing out of the box to avoid complicated system tuning. mCloud Controller also comes with graphical tools for viewing system-wide performance.

  • Self-service resource provisioning, which frees the IT department from numerous application provisioning requests. Without any system administration skills, authorized users can start and stop computes and provision applications as needed. Billing is also included within the system.

  • Streamlined application management automates the process of deploying, monitoring and backing-up applications. Users do not have to deal with configuration files and server settings.
The mCloud Controller v2.5 is available now in the United States, Japan and South East Asia. For more information contact Morphlabs at info@mor.ph.

You may also be interested in:

Wednesday, September 15, 2010

HP Business Service Automation portfolio gives IT the tools it needs to compete with clouds

HP is pushing the automation card again with new tools for hybrid IT environments. The company today announced “enhanced automation solutions” that set the stage for lower-cost business application deployment -- whether those apps are deployed traditionally, virtually or via a cloud.

HP’s latest Business Service Automation (BSA) enhancements beef up its solutions for hybrid IT environments, which the company defines as any combination of on-premise, off-premise, physical and virtual scenarios, including cloud computing. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP has identified a strong need in the enterprise, which is why it’s moving so fast on the BSA front. Although hybrid IT environments can increase a business’s agility and speed time to market, they also increase complexity, risk and costs by creating IT silos -- if the environment isn’t holistically managed. HP’s new BSA software enhancements work to take the “if” out of the equation.

A 360-degree hybrid solution

Today’s BSA announcement builds on HP’s recent cloud announcements for hybrid IT environments. The just-announced software enhances HP’s BSA portfolio to offer unified server, network, storage, and application management. The goal is to break down IT silos to simplify application development and hybrid IT management.

HP is promising financial returns for companies that adopt its solutions. According to a June 2010 ExpertROI Spotlight conducted by IDC on behalf of HP, organizations that deploy HP BSA solutions can realize up to $4.82 in benefits for every IT dollar invested, reduce annual IT costs by up to $24,000 per 100 end users, and reduce outsourcing costs by 40 percent to 80 percent.




“Organizations are seeking solutions that deliver business applications and services with greater agility, speed and at the lowest cost to the enterprise, regardless of their IT environment,” says Erik Frieberg, vice president of Marketing, Software and Solutions at HP. “Clients can achieve up to 382 percent ROI by deploying HP’s leading automation software and leverage the benefits of new hybrid delivery models.”

HP’s acquisition of Stratavia has strengthened its automation portfolio by adding deployment, configuration and management solutions for enterprise databases, middleware and packaged applications. These solutions aim to bridge the gap between application development and operational teams. With Stratavia’s technology in its portfolio, HP said it can now provision all of the components, rapidly deploy changes and manage the ongoing configuration and compliance management.

Under the BSA hood

HP’s BSA portfolio now offers new capabilities in application deployment and risk mitigation, as well as better efficiency and productivity. For example, HP Server Automation 9.0 helps clients automate the entire server life cycle, control virtualization sprawl, and provide more flexible provisioning and deployment of applications. New Application Deployment Manager (ADM) functionality lets IT organizations automate the release process to bridge the gap between development, quality assurance and operations teams. HP said these enhancements can accelerate application deployment by up to 86 percent.

What’s more, HP Network Automation 9.0 now helps clients contain costs, mitigate risk and improve efficiency of the network by automating error-prone tasks, reducing outages and enforcing policies in real-time regardless of the environment. And HP Operations Orchestration 9.0 helps clients faced with constant alerts and siloed teams improve service quality across hybrid environments. It gives clients the ability to automate the IT processes required to support cloud computing initiatives.

HP Operations Orchestration software can help manage a hybrid infrastructure through a single view while HP Client Automation 7.8 helps clients reduce administration costs for managing physical and virtual machines through a single tool. And HP Storage Essentials 6.3 helps clients reduce complexity in hybrid environments, while improving storage utilization and controlling capacity growth.

IT needs to play the productivity game better

The BSA offerings come at a crossroads for enterprise IT. The fact is that IT can no longer just compete against its own past practices and cost structures. There's a looming gulf between what it costs the IT department to provide services and what a small army of outside hosts is coming to market with. IT now needs to compete against the cost structures of pure-play cloud and SaaS providers and hosts.

The solution for IT to remain competitive, and to pick and choose what to retain and what to outsource, is to make all of its systems and apps perform better and more efficiently. And it also needs the governance and management to automate those apps and systems to keep complexity and costs in line.

Visibility, automation and management are essential for IT to stay in the game against hosts, MSPs, clouds, SaaS providers, etc. And the same management allows IT to function as the best broker of services, regardless of where the servers reside. This is clearly the target HP's BSA portfolio has in its sights.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Aster Data's newest offering provides row and column functionality for big data MPP analytics

Aster Data has taken big data management and analytics to the next level with the announcement today of its Aster Data nCluster 4.6, which includes a column data store and provides a universal SQL-MapReduce analytic framework on a hybrid row and column massively parallel processing (MPP) database management system (DBMS).

The San Carlos, Calif. company's new offering will allow users to choose the data format best suited to their needs and benefit from the power of Aster Data’s SQL-MapReduce analytic capabilities, as well as Aster Data’s suite of 1000+ MapReduce-ready analytic functions. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Row stores traditionally have been optimized for look-up style queries, while column stores are traditionally optimized for scan-style queries. Providing both a row store and a column store within nCluster and delivering a unified SQL-MapReduce framework across both stores enables both query types.

Universal query framework

For example, a retailer using historical customer purchases to derive customer behavior indicators may store each customer purchase in a row store to ease retrieval of any individual customer order. This is a look-up style query. This same retailer can see a 5-15x performance improvement by using a column store to provide access to the data for a scan-style query, such as the number of purchases completed per brand or category of product. The Aster Data platform now supports both query types with natively optimized stores and a universal query framework.
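The distinction can be sketched in a few lines of Python (toy data, invented names; real stores are optimized on-disk layouts, not dictionaries). The point is which bytes each query type has to touch: a look-up wants one whole record, while a scan wants one or two attributes across all records.

```python
# Row layout: each record stored together -- cheap to fetch one order.
rows = [
    {"order_id": 1, "brand": "Acme", "amount": 25.0},
    {"order_id": 2, "brand": "Zenith", "amount": 40.0},
    {"order_id": 3, "brand": "Acme", "amount": 15.0},
]

# Column layout: each attribute stored contiguously -- a scan touches
# only the columns it actually needs.
columns = {
    "order_id": [1, 2, 3],
    "brand": ["Acme", "Zenith", "Acme"],
    "amount": [25.0, 40.0, 15.0],
}

def lookup_order(order_id):
    """Look-up style query: the row store returns the whole record."""
    return next(r for r in rows if r["order_id"] == order_id)

def purchases_per_brand():
    """Scan-style query: the column store reads just the brand column."""
    counts = {}
    for brand in columns["brand"]:
        counts[brand] = counts.get(brand, 0) + 1
    return counts
```

A hybrid system like the one described here keeps both layouts available and lets the planner route each query to the store whose access pattern matches it.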

Other features include:
  • Choice of storage, implemented per-table partition, which provides customers flexible performance optimization based on analytical workloads.

  • Such services as dynamic workload management, fault tolerance, Online Precision Scaling on commodity hardware, compression, indexing, automatic partitioning, SQL-MapReduce, SQL constructs, and cross-storage queries, among others.

  • New statistical functions popular in decision analysis, operations research, and quality management including decision trees and histograms.
You may also be interested in:

Pulse surges for Eclipse with more than one million developers on board

Getting developers on board. That’s the challenge technologies from Linux to Android face every day. Genuitec has helped Eclipse overcome this challenge with Pulse. Indeed, more than one million developers around the world have now installed Pulse.

Pulse works to give software developers an efficient way to locate, install and manage their Eclipse-based tool suite, among other tools. The software essentially empowers developers to customize their installs while avoiding plug-in management issues -- even when crossing operating systems. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]

“When we envisioned Pulse in 2007, we knew the developer community badly needed an easy technology to help manage their Eclipse tools,” says Maher Masri, president and CEO of Genuitec, a founding and strategic member of the Eclipse Foundation. “Now with one million users, we can happily say Pulse is a great success story.”

The Pulse advantage

One of the advantages Pulse is pushing out to its one million developers is the ability to manage four years of Eclipse platform technologies from a single dashboard, including Eclipse 3.6, also known as Helios.




That’s no small feat, seeing how many enterprises standardize on older Eclipse versions, yet still demand an easy migration path to upgrade their projects, technical artifacts, and other mission-critical subsystems. Developers can even access Eclipse 3.7, also known as Indigo, as the milestones are rolled out in coming months.

This multi-year tool stack feature is part of the reason why Pulse has attracted so many Eclipse developers. Pulse is the only product on the market that supports this type of lifecycle-based stack management.

Getting to know Pulse

Pulse also provides a product family of offerings. There’s a Community Edition that’s free, a Managed Team Edition aimed at the needs of development teams, and a Private Label software delivery version designed for corporate use. Pulse Community Edition is free for individual developers, while Pulse Managed Team Edition is $60 annually. Pricing for Pulse Private Label, a software delivery and management platform, is based on individual requirements.

“Pulse, like many other powerful Eclipse-based technologies, continues to attract world-class developers to the Eclipse platform,” says Mike Milinkovich, executive director of the Eclipse Foundation. “As we continuously enhance our code base and march toward Eclipse 3.7 next summer, we’re pleased that Genuitec will continue to support developers using Eclipse with its Pulse management software.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.
You may also be interested in:

Tuesday, September 14, 2010

Delphix Server launches at DEMO to slash relational database redundant copies, storage waste and cost

Delphix has brought virtualization techniques to database infrastructure with general availability of Delphix Server, which reduces structured and relational data redundancy while maintaining full functionality and performance -- and operating in a fraction of the space at lower cost.

The Palo Alto, Calif. company, just launching this week at DEMO, says that Delphix Server solves two major IT challenges: the operational complexity and redundant infrastructure required to support application lifecycles via multiple database caches. Delphix software installs on standard x86 servers or in virtual machines, allowing customers to virtualize database infrastructure into a "single virtual authority" and do for relational data what storage innovations and "de-dupe" have done to reduce myriad standing copies of data caches.

The interface for managing the data is very clean and time-line based down to seconds. It reminds me of an enterprise-level version of Apple's Mac OS X Time Machine, but far more granular. This allows all those with access to the data to manage it intelligently but sparingly.

While Delphix consolidates storage and reduces database provisioning and refresh times, it has little or no impact on production systems, thanks to its innovative synchronization technology, says Jed Yueh, CEO at Delphix. Other benefits include:

  • Agile application development: Delphix automates the provisioning and refresh process, enabling developers to instantly create personal sandboxes or virtual databases (VDBs) that are up-to-date and isolated from other VDBs. Developers can cut months out of project schedules and perform destructive or parallel testing to improve overall application quality and performance.
  • Improved data resiliency: Patent-pending TimeFlow technology enables customers to create a running record of database changes; VDBs can be instantly provisioned from multiple points-in-time, with granularity down to the second. This time-shifting capability enables businesses to dramatically reduce the time required to recover from logical data loss.
  • Storage consolidation: The average customer creates seven copies of each production database for development, testing, QA, staging, operational reporting, pilots, and training, with each copy typically having its own dedicated and largely redundant storage. Delphix creates a single virtual environment, where multiple VDBs can be instantly provisioned or refreshed from a shared footprint -- coordinating changes and differences in the background without compromising functionality or performance.
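The point-in-time provisioning idea behind TimeFlow can be sketched in a few lines: keep a timeline of snapshots sorted by timestamp, and resolve any requested second to the latest snapshot at or before it, which a new VDB then shares copy-on-write. This is a conceptual toy under stated assumptions, not Delphix's actual implementation; all names here are hypothetical.

```python
import bisect

class VirtualTimeline:
    """Toy model of point-in-time database provisioning.

    Conceptual sketch only -- not Delphix's actual TimeFlow implementation.
    A timeline of snapshots is kept sorted by timestamp; provisioning a
    virtual database (VDB) resolves a requested second to the latest
    snapshot at or before it.
    """

    def __init__(self):
        self._times = []   # snapshot timestamps (epoch seconds), kept sorted
        self._blocks = []  # block maps, parallel to _times

    def record_snapshot(self, epoch_seconds, block_map):
        i = bisect.bisect(self._times, epoch_seconds)
        self._times.insert(i, epoch_seconds)
        self._blocks.insert(i, block_map)

    def provision_vdb(self, epoch_seconds):
        """Pick the latest snapshot at or before the requested second; a
        real VDB would share those blocks copy-on-write."""
        i = bisect.bisect_right(self._times, epoch_seconds) - 1
        if i < 0:
            raise ValueError("no snapshot at or before requested time")
        return dict(self._blocks[i])  # shallow copy stands in for CoW sharing
```

Provisioning at second 150 from snapshots taken at seconds 100 and 200 yields the 100-second state, which is why refreshes can be near-instant: no bulk copy is needed, only a pointer into the shared footprint.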
Both enterprises and service providers for SaaS and cloud will benefit from reducing the vast data redundancy across the app dev and ops lifecycle. By shrinking the hardware requirements, those hosts seeking to improve their margins gain, while enterprises and ISVs can devote the server and storage resources to more productive uses.

I should think that the app dev and test folks would grok the benefits too. Why not cut the hardware and storage costs for bringing applications to maturity by virtualizing the databases? What works for the OS and runtime works for the data.

You may also be interested in:

Want client virtualization? Time then to get your back-end infrastructure act together

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

We've all heard about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.

But today’s business and economic drivers need to go beyond just good technology. First, there needs to be a clear rationale for change -- both business and economic. Second, there need to be proven methods for properly moving to client virtualization at low risk and in ways that lead to both high productivity and lower total costs over time.

Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper and more flexible client platform support from back-end servers will become more the norm and less the exception over time.

Client devices and application types will also be dynamically shifting both in numbers and types, and crossing the chasm between the consumer and business spaces. The new requirements for business mobile use point to the need for planning and proper support of the infrastructures that can accommodate these edge, wireless clients.

To help guide business on client virtualization infrastructure requirements, learn more about client virtualization strategies and best practices that support multiple future client directions, and see why such virtualization makes sense economically, we went to Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization. The interview is conducted by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Nordhues: In desktop virtualization, what really comes out to the user device is just pixel information. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.
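That pixels-down, input-up split can be illustrated with a toy diff routine: only the screen content that changed since the last frame travels to the client, while everything else stays in the data center. This sketch (rows as strings, no tiling or compression) is purely illustrative of the idea, not any real remoting protocol.

```python
def screen_updates(previous, current):
    """Compute the minimal per-row updates a display protocol would send.

    Illustrative only: real protocols work on tiled pixel regions and
    compress them, but the principle is the same -- only changed screen
    content travels to the client device.
    """
    return [(row, current[row])
            for row in range(len(current))
            if row >= len(previous) or previous[row] != current[row]]

# Only the second row changed, so only it would cross the wire.
updates = screen_updates(["aaaa", "bbbb", "cccc"],
                         ["aaaa", "bxbb", "cccc"])  # -> [(1, "bxbb")]
```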

When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment -- and make sure that you're doing a proper analysis and are actually ready.

On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We're on a path where the hardware pieces are there to deliver on that.

But you have to look at the data center and its capacity to house the increased number of servers, storage, and networking that has to go there to support the user.

So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.

This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the businesses by moving those folks to do more innovation, and to free up cycles to do that, instead of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

Where we're headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you're an office worker and you're just getting applications virtualized out to you. You're going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.

Maybe, you're more of a power user, and you need that whole desktop environment provided by VDI. We'll provide reference architectures with just wire it once type of infrastructure with storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks and months, instead of minutes.

Also, a hybrid solution could in the future deliver VDI plus server-based computing together and cover your whole gamut of users, from the very lowest task-oriented user all the way up to the highest-end power users that you have.

And, we're going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.

Why VDI now?

It’s a digital generation of millions of new folks entering the workforce, and they've grown up expecting to be mobile and increasingly global. So, we need computing environments that don’t require us to report to a particular desk in an office building in order to get work done.

We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees don’t work at their company's headquarters, and they work differently.

When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.

Delivering packaged services out to the end user is something that’s still being worked out by software providers, and you're going to see some more elements of that come out as we go through the next year.



And, of course, there's the impact of security, which is always at the top of customers' lists. We have customers out there, large enterprise accounts, who are spending north of $100 million a year just to protect themselves from internal fraud.

With client virtualization, the security is built in. You have everything in the data center, so users at the endpoint -- which may be a thin-client access device -- can't take files away on USB keys or sticks.

It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don't want ... on top of your IT infrastructure.

And there is really a catalyst coming as well in the Windows 7 availability and launch since late last year. Many organizations are looking at their transition plans there. It’s a natural time to look at a way to do the desktop differently than it has been done in the past.

Reference architectures support all clients

We've launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.

For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.

We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400-500 range.

We're looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it come completely packaged.

Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-it-once infrastructure that can deliver whatever the needs are for your broad user community.

What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all this for you. Here is the server and storage and all the way out to the thin client solution. We've tested it. We've engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment." So, you take that system integration task away from the customer, because HP has done it for them.

We have a number of customer references. I won’t call them out specifically, but we do have some of these posted out on HP.com/go/clientvirtualization, and we continue to post more of our customer case studies out there. They are across the whole desktop virtualization space. Some are on server-based computing or sharing applications, some are based on VDI environments, and we continue to add to those.

HP also has an ROI or TCO calculator that we put together specifically for this space. You show a customer a case study and they say, "Well, that doesn’t really match my pain points. That doesn’t really match my problem. We don’t have that IT issue," or "We don’t have that energy, power issue."

We created this calculator, so that customers can put in their own data. It’s a fairly robust tool, but we can put in information about what’s your desktop environment costing you today, what would it cost to put in a client virtualization environment, and what you can expect as far as your return on investment. So, it’s a compelling part of the discussion.
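As a rough illustration of what such a calculator computes, here is a stripped-down cost-per-seat comparison. The formula and every input figure below are hypothetical placeholders, not HP's model or data.

```python
def cost_per_seat(capex, annual_opex, seats, years):
    """Total cost of ownership per seat over the period (toy model;
    a real TCO tool weighs many more factors, such as energy and support)."""
    return (capex + annual_opex * years) / seats

# Hypothetical three-year comparison for a 1,000-seat environment.
pc_seat = cost_per_seat(capex=800_000, annual_opex=450_000, seats=1000, years=3)
vdi_seat = cost_per_seat(capex=1_200_000, annual_opex=250_000, seats=1000, years=3)
savings_per_seat = pc_seat - vdi_seat  # positive means VDI wins over the period
```

The placeholder numbers reflect the usual shape of the trade-off: VDI tends to carry higher upfront capital cost (servers, storage) against lower ongoing operational cost, so the comparison only pays off over a multi-year horizon.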

Obviously, with any new computing technology, the underlying consideration is always cost or, in this case, a lot of customers look at it at a cost-per-seat perspective, and this is no different, which is why we have provided the tool and the consulting around that.

On that same website that I mentioned, HP.com/go/clientvirtualization, we have our technical white papers that we've published, along with each of these reference architectures.

For example, if you pick the VDI reference architecture that will support 1,000-plus users, there is a 100-page white paper that talks about exactly how we tested it, how we engineered it, and how it scales with VMware View or with Microsoft Hyper-V plus Citrix XenDesktop.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, September 13, 2010

HP gets more than security benefits from ArcSight acquisition, it gets closer to comprehensive BI for IT

The build, buy or partner equation has favored "buy" once again as HP moves aggressively to dominate IT operations management and governance software and services.

HP on Monday announced its intention to buy 10-year-old ArcSight for $1.5 billion, rapidly filling out its software products portfolio again under Bill Veghte, Executive Vice President of the HP Software & Solutions group. HP has been on a tear, having recently acquired Fortify and 3Par. We should expect even more buying by HP while the economy and stock market make these companies attractive before their values increase. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

ArcSight -- with a $200 million revenue run rate and 35 percent annual top line growth -- might be best known for providing the means to snuff out cyber crime and user access and data management risks. And the systems log capture and management portfolio at ArcSight is also adept at helping with regulatory oversight requirements and compliance issues. To solve these problems, the company sells to the largest enterprises, including the US government and military, and financial, telco and retail giants.

But for me the real value for HP is in gaining a comprehensive platform and portfolio via ArcSight for total systems log management. Being able to manage and exploit the reams of ongoing log data across all data center devices offers huge benefits, even the ability to correlate business events and IT events for what I call BI for IT.

We're right on the cusp of reliable and penetrating predictive IT analysis, and HP needs to be in the vanguard on this. VMware just last month bought privately held Integrien for the same reason. The market is looking for de facto standard governance systems of record, and HP's other governance products plus ArcSight make this a market opportunity that is HP's to lose.

This predictive approach to IT failures -- of identifying and ameliorating system snafus before they impact applications and data performance -- stands as the progeny of better IT operations continuity. The structured and unstructured systems data and analysis from ArcSight will help HP develop a constant feedback loop between build, manage and monitoring processes, to help ensure that enterprises remain secure and reliable in operations, says HP.

Consider too that managing security and dependability at the edge takes on a whole new meaning as enterprises dive more deeply into smartphones, mobile apps, netbooks, thin clients and desktop virtualization, and the need to not just manage each of them -- but all of them in an orchestra of coordinated data and applications access, provisioning and compliance.

Virtualization drives need for governance


Oh, and then there's the virtualization revolution, which has only partly played out in enterprise IT and is growing fast. How, then, to manage and govern fleeting virtual instances of servers, networking equipment, and storage? The logs. The log data. It's a sure way to gain a complete view of IT operations, even as that picture changes moment by moment.

Another complement to the ArcSight-HP match-up: All that log data needs to be crunched and reported, a function of BI-adept hardware and optimized systems, which, of course, HP has in spades.

So all this deep and wide governance capability from ArcSight is a strong complement to HP's Business Service Automation and Cloud Service Automation solutions, among several others. Given that HP already resells ArcSight's appliances (and soon, we're told all-software products, too), we should expect the combined solutions to be moving down-market to the SMBs pretty quickly. This global and massive market has also been a recent priority for HP across other products and services.

Don't just view the ArcSight purchase today through the lens of cyber security and compliance solutions. This is a synergistic acquisition for HP on many levels. The common denominator is comprehensive governance, and the next goal for the combined HP and ArcSight products and services is predictive BI for IT ... and correlating that all to the real-time business events and processes. That's the total business insight capability that companies so desperately need -- and only IT can provide -- to effectively manage complexity and risk.

You may also be interested in:

Thursday, September 9, 2010

SAS joins crowded vendor landscape moving to bring affordable BI to the masses

We're only in the first years of the data-driven decade. More companies will be making more of their business decisions -- and also added revenue -- on their own data services.

Investing in good data analytics infrastructure now allows companies to know themselves and their markets far better. It eliminates guessing and brings more of a real-time picture of their operations, challenges and opportunities.

Good data organizers can also then share or sell that data and analytics to partners and/or customers, and acquire meaningful additional outside data themselves from other data services purveyors.

The trick for IT is to allow their companies to extract business intelligence (BI) from these vast data sets at an affordable price. And more companies, that is small and medium businesses, will want in on the data and analytics revolution. Competition will drive them to.

So what's needed now is a change in the economics of business intelligence via value-oriented offerings for the mid-market. Traditional entry points for large data warehouses are often $500,000 and up, not to mention the ongoing operations costs and need to acquire data and systems management skills.

BI comes to wider audience

SAS at the A2010 conference last week launched Rapid Predictive Modeler (RPM), a service targeting non-analytical business users to help them create more BI reports. SAS RPM joins the latest release of SAS Enterprise Miner 6.2, which includes an add-in for Microsoft Excel.

These steps toward making BI and reports available to more users and uses at a lower price will no doubt be welcome to SMBs and enterprises dripping in data, but struggling to make sense of it all.

We're only now seeing massively parallel data warehousing appliances priced at the $50,000 mark. And these appliances tend to be cheaper to administer and operate. Aster Data Systems, for example, recently came out with a lower-cost competitive solution dubbed MapReduce Data Warehouse Appliance – Express Edition. Aster also has a new CEO, Quentin Gallivan, announced today.

Aster, Netezza and Teradata are all focusing on the mid-market. Greenplum was recently bought by EMC. A recent Forrester report put Teradata, Oracle, IBM and Microsoft at the head of the data warehouse market, with Netezza, Sybase and SAP noted for niche deployments.

Oracle and HP teamed up two years ago on the Exadata appliance for Oracle warehouse workloads. And now Oracle is putting its Sun Microsystems acquisition to use for its own Exadata appliances line-up.

Expect a vendor slugfest on the lower end of the data warehousing and BI market in the next few years. It will be fascinating to see how these vendors will both enter the entry-level markets, while also seeking to maintain the high-end pricing for the largest users. There could be a value sweet spot in the middle.

We should therefore expect to see prices come down on these systems across the board, making the systems more attainable for even more types of uses and users.

Wednesday, September 8, 2010

HP product barrage uses integration, low-cost, simplicity to bring latest IT advances to price-sensitive SMBs

Figuring that small- and medium-sized businesses (SMBs) want the best in IT advances too, HP on Wednesday unleashed a barrage of products and services that use integration, low-cost, and simplicity to bring cutting edge enterprise IT capabilities to the global mid-market.

The new products and services -- ranging from the $329 HP ProLiant MicroServer to $424 minitower PCs to simplified virtualization, networking and storage bundles -- come from multiple organizations across HP, but with a singular Goldilocks target of “Just Right IT” for SMBs. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

The slew of value-oriented offerings is also designed to give HP's various global channel partners a new horse to ride into town on as SMBs look beyond the recession for ways to grow their operations while becoming more productive. The products and services are also available from HP directly.

HP is also putting financial muscle behind the channel partners and users by providing aggressive financing options: leasing, life-cycle asset management, and upgrade services. HP Financial Services is the second-largest captive IT leasing company in the world, said HP. Leasing provides SMBs with flexibility (with no or low upfront payments) and a path to migrate to newer technology.

While the value and utilization benefits of virtualization have been quickly adopted by larger companies and IT departments, the use of hypervisors has been slower in SMBs. To help solve that, HP has developed more complete virtualization environments using Virtualization Smart Bundles with Microsoft Hyper-V Server 2008 R2. The bundles target storage, servers and networking virtualization technology uses.

The SMB-targeted worker productivity releases include:
  • HP ProLiant MicroServer, an energy-efficient file server designed for businesses with up to 10 employees to centralize information and securely access files faster (at about half the size and 50 percent quieter than most entry-level servers)
The SMB-targeted storage management releases include:
The SMB-targeted networking and communications releases include:
  • HP VCX 9.5 IP Telephony system and 350x IP Phones (starting at $119), which enable the convergence of voice and data onto a single network infrastructure.
SMBs are where economists look for growth to emerge from recessions, especially in developing countries. For years, though, large IT vendors have focused on the top end of the IT market. It makes a lot of sense for HP to scale the technology and offerings down to the SMBs -- a huge total market, poised for unprecedented growth in the world's most populous regions.

Fact is, too, that due to proliferating mobile devices and wireless networks, nearly all companies of any size need to deeply embrace technology and networking to remain competitive. Data explosion also makes it unavoidable to bring in managed storage and backup, not to mention the burgeoning requirements of security and managed access.

While many of us analysts harp on about the virtues and inevitability of cloud computing, for many small companies and in many regions, the promise of cloud cannot be considered until the basics of IT are modernized and managed.

Mobile devices alone cannot take the place of a LAN and managed storage. In many ways, these new HP products and bundles -- with their pricing and simplicity -- can be seen as stepping stones for SMBs to soon be able to exploit the value and potential of cloud-based services, too.

And then we actually might see these SMBs leap-frog their larger corporate brethren, rather than be seen as a lagging market category, in regards to IT productivity and enablement. And wouldn't that be exciting?

You may also be interested in:

Friday, September 3, 2010

ZapThink defines IT transformation crisis points in 2020 vision framework

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


By Jason Bloomberg

In our last ZapFlash, The Five Supertrends of Enterprise IT, ZapThink announced our new ZapThink 2020 conceptual framework that helps organizations understand the complex interrelationships among the various forces of change impacting IT shops over the next 10 years, and how to leverage those forces of change to achieve the broader goals of the organization.

In many ways, however, ZapThink 2020 is as much about risk mitigation as it is about strategic benefit. Every element of ZapThink 2020 is a problem, as well as an opportunity. Nowhere is this focus on risk mitigation greater than with ZapThink 2020’s seven Crisis Points.

Defining a crisis point

Of course, life in general -- and business in particular -- is filled with risks, and a large part of any executive’s job description is dealing with everyday crises. A Crisis Point, however, goes beyond everyday, garden-variety firefighting. To be a Crisis Point, the underlying issue must be both potentially game-changing and largely unexpected. The element of surprise is what makes each Crisis Point especially dangerous – not that the crisis itself is necessarily a surprise, but rather, just how transformative the event promises to be.

Here then are ZapThink 2020’s seven Crisis Points, why they’re surprising, and why they’re game-changing. Over the next several months we’ll dive deeper into each one, but for now, here’s a high-level overview.
Collapse of enterprise IT – Enterprises who aren’t in the IT business stop doing their own IT, and furthermore, move their outsourced IT off-premise.

Why is it that so many enterprises today handle their own IT, and in particular, write their own software? They use office furniture, but nobody would think of manufacturing their own, except of course if you’re in the office furniture manufacturing business.

The game-changing nature of this Crisis Point is obvious, but what’s surprising will be just how fast enterprises rush to offload their entire IT organizations, once it becomes clear that the first to do so have achieved substantial benefits from this move.

IPv4 exhaustion – Every techie knows that we’re running out of IP addresses, because the IPv4 address space only provides for about 4.3 billion IP addresses, and they’ve almost all been assigned.

IPv6 is around the corner, but very little of our Internet infrastructure supports IPv6 at this time. The surprise here is what will happen when we run out of addresses: the secondary market for IP addresses will explode.

As it turns out, a long time ago IANA assigned most IP addresses to a select group of Class A holders, who each got a block of about 16.8 million addresses.

Companies like Ford, Eli Lilly, and Halliburton all ended up with one of these blocks. How much money do you think they can make selling them once the unassigned ones are all gone?
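The figures in this section fall straight out of the address arithmetic: IPv4 is a 32-bit space, and a legacy Class A (/8) assignment leaves 24 bits for hosts. A quick check:

```python
# IPv4 addresses are 32 bits wide; a Class A (/8) block leaves 24 host bits.
total_ipv4 = 2 ** 32           # 4,294,967,296 -- the "about 4.3 billion"
class_a_block = 2 ** 24        # 16,777,216 -- the "about 16.8 million"
class_a_share = class_a_block / total_ipv4  # each /8 holder owns 1/256 of the space
```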

Fall of frameworks – Is your chief Enterprise Architect your CEO’s most trusted, important advisor? No? Well, why not?

After all, EA is all about organizing the business to achieve its strategic goals in the best way we know how, and the EA is supposed to know how. The problem is, most EAs are bogged down in the details, spending time with various frameworks and other artifacts, to the point where the value they provide to their organizations is unclear.

In large part the frameworks are to blame – Zachman Framework, TOGAF, DoDAF, to name a few. For many organizations, these frameworks are little more than pointless exercises in organizing terminology that leads to checklist architectures.

At this Crisis Point, executives get fed up, scrap their current EA efforts, and bring in an entirely new way of thinking about Enterprise Architecture. Does ZapThink have ideas about this new approach to EA? You bet we do. Stay tuned – or better yet, sign up for our newly revised Licensed ZapThink Architect SOA & Cloud Architecture Boot Camp.

Cyberwar – Yes, most risks facing IT shops today are security related. Not a day goes by without another virus or Windows vulnerability coming to light.

But what happens when there is a concerted, professional, widespread, expert attack on some key part of our global IT infrastructure? It’s not a matter of if, it’s a matter of when.

The surprise here will be just how effective such an attack can be, and perhaps how poor the response is, depending on who the target is. Will terrorists take down the Internet? Maybe just the DNS infrastructure? Or will this battle be between corporations? Regardless, the world post-Cyberwar will never be the same.

Arrival of Generation Y – These are the kids who are currently in college, more or less. Not only is this generation the “post-email” generation, they have grown up with social media.

When they hit the workforce they will hardly tolerate the archaic approach to IT we have today. Sure, some will change to fit the current system, but enterprises who capitalize on this generation’s new perspective on IT will obtain a strategic advantage.

We saw this generational effect when Generation X hit the workforce around the turn of the century – a cadre of young adults who weren’t familiar with a world without the Web. That generation was instrumental in shifting the Web from a fad into an integral part of how we do business today. Expect the same from Generation Y and social media.

Data explosion – As the quantity and complexity of available information exceeds our ability to deal with such information, we’ll need to take a new approach to governance.

ZapThink discussed this Crisis Point in our ZapFlash The Christmas Day Bomber, Moore’s Law, and Enterprise IT. But while an essential part of dealing with the data explosion Crisis Point is a move to governance-driven Complex Systems, we place this Crisis Point in the Democratization of Technology Supertrend.

The shift in thinking will be away from the more-is-better, store-and-analyze school of data management toward a much greater focus on filtering and curating information. We’ll place ever greater emphasis on smaller quantities of information, ensuring that the information we do keep is optimally valuable.
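
As a toy illustration of the filter-and-curate idea (the scoring heuristic and threshold below are hypothetical, not anything ZapThink prescribes), a curation pipeline might score each incoming record for estimated value and retain only those above a cutoff, rather than storing everything for later analysis:

```python
# Toy sketch: curate a stream of records instead of storing everything.
# The value_score heuristic and retention threshold are illustrative only.

def value_score(record: dict) -> float:
    """Estimate a record's usefulness; here, tagged and longer records score higher."""
    score = 0.0
    if record.get("tags"):
        score += 1.0
    score += min(len(record.get("body", "")) / 100.0, 1.0)
    return score

def curate(stream, threshold=1.0):
    """Keep only records whose estimated value meets the threshold."""
    return [r for r in stream if value_score(r) >= threshold]

incoming = [
    {"body": "x" * 200, "tags": ["ops"]},   # long and tagged -> kept
    {"body": "short note", "tags": []},     # low value -> filtered out
]
print(len(curate(incoming)))  # → 1
```

The design point is simply that value judgment moves to ingest time: storage holds a small, curated set rather than an ever-growing archive to be sifted later.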

Enterprise application crash
The days of “Big ERP” are numbered – as well as those of “Big CRM” and “Big SCM” and … well, all the big enterprise apps. These lumbering monstrosities are cumbersome, expensive, inflexible, and filled at their core with ancient spaghetti code.

There’s got to be a better way to run an enterprise. Fortunately, there is. And once enterprises figure this out, one or more of the big enterprise app vendors will be caught by surprise and go out of business. Will it be one of your vendors?

The ZapThink take
We can’t tell you specifically when each of these Crisis Points will come to pass, or precisely how they will manifest. What we can say with a good amount of certainty, however, is that you should be prepared for them. If one or another proves to be less problematic or urgent than feared, then we can all breathe a sigh of relief. But should one come to pass as feared, then the organizations that have suitably prepared for it will not only survive, but will be positioned to take advantage of the fact that their competition was not so well equipped.

The real challenge with preparing for such Crisis Points is in understanding their context. None of them happens in isolation; rather, they are all interrelated with other issues and the broader Supertrends that they are a part of.

That’s where ZapThink comes in. We are currently putting together a poster that will help people at a variety of organizations understand the context for change in their IT shops over the next ten years, and how those changes will impact business. We’re looking for sponsors, so drop us a line if you’d like more information.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.