Wednesday, September 15, 2010

Aster Data's newest offering provides row and column functionality for big data MPP analytics

Aster Data has taken big data management and analytics to the next level with the announcement today of its Aster Data nCluster 4.6, which includes a column data store and provides a universal SQL-MapReduce analytic framework on a hybrid row and column massively parallel processing (MPP) database management system (DBMS).

The San Carlos, Calif. company's new offering will allow users to choose the data format best suited to their needs and benefit from the power of Aster Data’s SQL-MapReduce analytic capabilities, as well as Aster Data’s suite of 1000+ MapReduce-ready analytic functions. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Row stores traditionally have been optimized for look-up style queries, while column stores are traditionally optimized for scan-style queries. Providing both a row store and a column store within nCluster and delivering a unified SQL-MapReduce framework across both stores enables both query types.

Universal query framework

For example, a retailer using historical customer purchases to derive customer behavior indicators may store each customer purchase in a row store to ease retrieval of any individual customer order. This is a look-up style query. This same retailer can see a 5-15x performance improvement by using a column store to provide access to the data for a scan-style query, such as the number of purchases completed per brand or category of product. The Aster Data platform now supports both query types with natively optimized stores and a universal query framework.
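The trade-off above can be sketched in a few lines of Python. This is purely illustrative (not Aster Data's implementation): the same order table held in a row-oriented and a column-oriented layout, showing why each layout favors a different query style.

```python
# The same small order table in two layouts. All table and column names
# here are made up for illustration.

orders = [
    {"order_id": 1, "customer": "alice", "brand": "acme",   "amount": 20.0},
    {"order_id": 2, "customer": "bob",   "brand": "zenith", "amount": 35.0},
    {"order_id": 3, "customer": "alice", "brand": "acme",   "amount": 12.5},
]

# Row store: each record is stored contiguously, so fetching one order
# touches exactly one entry -- ideal for look-up style queries.
row_store = {o["order_id"]: o for o in orders}

def lookup(order_id):
    """Look-up style query: retrieve a single customer order."""
    return row_store[order_id]

# Column store: each attribute is stored contiguously, so an aggregate
# scans only the column it needs -- ideal for scan-style queries.
column_store = {
    col: [o[col] for o in orders]
    for col in ("order_id", "customer", "brand", "amount")
}

def purchases_per_brand():
    """Scan-style query: count purchases per brand, reading one column."""
    counts = {}
    for brand in column_store["brand"]:
        counts[brand] = counts.get(brand, 0) + 1
    return counts

print(lookup(2))               # one record, fetched whole
print(purchases_per_brand())   # {'acme': 2, 'zenith': 1}
```

A real column store gains further from compression and vectorized scans, but the access-pattern difference is the heart of the 5-15x scan speedup claimed above.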

Other features include:
  • Choice of storage, implemented per-table partition, which provides customers flexible performance optimization based on analytical workloads.

  • Such services as dynamic workload management, fault tolerance, Online Precision Scaling on commodity hardware, compression, indexing, automatic partitioning, SQL-MapReduce, SQL constructs, and cross-storage queries, among others.

  • New statistical functions popular in decision analysis, operations research, and quality management, including decision trees and histograms.

Pulse surges for Eclipse with more than one million developers on board

Getting developers on board. That’s the challenge technologies from Linux to Android face every day. Genuitec has helped Eclipse overcome this challenge with Pulse. Indeed, more than one million developers around the world have now installed Pulse.

Pulse works to give software developers an efficient way to locate, install and manage their Eclipse-based tool suite, among other tools. The software essentially empowers developers to customize their installs while avoiding plug-in management issues -- even when crossing operating systems. [Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.]

“When we envisioned Pulse in 2007, we knew the developer community badly needed an easy technology to help manage their Eclipse tools,” says Maher Masri, president and CEO of Genuitec, a founding and strategic member of the Eclipse Foundation. “Now with one million users, we can happily say Pulse is a great success story.”

The Pulse advantage

One of the advantages Pulse is pushing out to its one million developers is the ability to manage four years of Eclipse platform technologies from a single dashboard, including Eclipse 3.6, also known as Helios.


That’s no small feat, seeing how many enterprises standardize on older Eclipse versions, yet still demand an easy migration path to upgrade their projects, technical artifacts, and other mission-critical subsystems. Developers can even access Eclipse 3.7, also known as Indigo, as the milestones are rolled out in coming months.

This multi-year tool stack feature is part of the reason why Pulse has attracted so many Eclipse developers. Pulse is the only product on the market that supports this type of lifecycle-based stack management.

Getting to know Pulse

Pulse also provides a product family of offerings. There’s a Community Edition that’s free, a Managed Team Edition that aims at the needs of development teams, and a Private Label software delivery version designed for corporate use. Pulse Community Edition is free for individual developers, while Pulse Managed Team Edition is $60 annually. Pricing for Pulse Private Label, a software delivery and management platform, is based on individual requirements.

“Pulse, like many other powerful Eclipse-based technologies, continues to attract world-class developers to the Eclipse platform,” says Mike Milinkovich, executive director of the Eclipse Foundation. “As we continuously enhance our code base and march toward Eclipse 3.7 next summer, we’re pleased that Genuitec will continue to support developers using Eclipse with its Pulse management software.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, September 14, 2010

Delphix Server launches at DEMO to slash relational database redundant copies, storage waste and cost

Delphix has brought virtualization techniques to database infrastructure with general availability of Delphix Server, which reduces structured and relational data redundancy while maintaining full functionality and performance -- and operating in a fraction of the space at lower cost.

The Palo Alto, Calif. company, just launching this week at DEMO, says that Delphix Server solves two major IT challenges: the operational complexity and redundant infrastructure required to support application lifecycles via multiple database copies. Delphix software installs on standard x86 servers or in virtual machines, allowing customers to virtualize database infrastructure into a "single virtual authority" and do for relational data what storage innovations and "de-dupe" have done to reduce myriad standing copies of data caches.

The interface for managing the data is very clean and time-line based down to seconds. It reminds me of an enterprise-level version of Apple's Mac OS X Time Machine, but far more granular. This allows all those with access to the data to manage it intelligently but sparingly.

While Delphix consolidates storage and reduces database provisioning and refresh times, it adds little or no impact to production systems through its innovative synchronization technology, says Jed Yueh, CEO at Delphix. Other benefits include:

  • Agile application development: Delphix automates the provisioning and refresh process, enabling developers to instantly create personal sandboxes or virtual databases (VDBs) that are up-to-date and isolated from other VDBs. Developers can cut months out of project schedules and perform destructive or parallel testing to improve overall application quality and performance.
  • Improved data resiliency: Patent-pending TimeFlow technology enables customers to create a running record of database changes; VDBs can be instantly provisioned from multiple points-in-time, with granularity down to the second. This time-shifting capability enables businesses to dramatically reduce the time required to recover from logical data loss.
  • Storage consolidation: The average customer creates seven copies of each production database for development, testing, QA, staging, operational reporting, pilots, and training, with each copy typically having its own dedicated and largely redundant storage. Delphix creates a single virtual environment, where multiple VDBs can be instantly provisioned or refreshed from a shared footprint -- coordinating changes and differences in the background without compromising functionality or performance.
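The shared-footprint idea behind those benefits can be sketched with a toy copy-on-write model. This is a hedged illustration of the general technique, not Delphix's actual TimeFlow implementation; the class and block names are invented.

```python
# Toy copy-on-write model: multiple virtual databases (VDBs) share the
# unchanged blocks of one source snapshot and store only their own deltas.

class Snapshot:
    """Read-only blocks captured from the production database."""
    def __init__(self, blocks):
        self.blocks = blocks  # block_id -> data, shared by all VDBs

class VirtualDatabase:
    """A provisioned VDB: reads fall through to the snapshot unless
    this VDB has overwritten the block locally."""
    def __init__(self, snapshot):
        self.snapshot = snapshot
        self.delta = {}  # only blocks this VDB has changed

    def read(self, block_id):
        return self.delta.get(block_id, self.snapshot.blocks[block_id])

    def write(self, block_id, data):
        self.delta[block_id] = data  # copy-on-write: snapshot is untouched

base = Snapshot({0: "schema", 1: "rows-v1", 2: "rows-v2"})
dev, qa = VirtualDatabase(base), VirtualDatabase(base)

dev.write(1, "rows-dev")

print(dev.read(1))      # "rows-dev" -- dev's private change
print(qa.read(1))       # "rows-v1"  -- qa still sees the shared snapshot
print(len(dev.delta))   # 1 -- only changed blocks consume new storage
```

Seven full copies of a production database collapse into one snapshot plus seven small deltas, which is why the storage consolidation numbers above are plausible.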
Both enterprises and service providers for SaaS and cloud will benefit from reducing the vast data redundancy across the app dev and ops lifecycle. By shrinking the hardware requirements, those hosts seeking to improve their margins gain, while enterprises and ISVs can devote the server and storage resources to more productive uses.

I should think that the app dev and test folks would grok the benefits too. Why not cut the hardware and storage costs for bringing applications to maturity by virtualizing the databases? What works for the OS and runtime works for the data.


Want client virtualization? Time then to get your back-end infrastructure act together

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

We've all heard about client virtualization or virtual desktop infrastructure (VDI) over the past few years, and there are some really great technologies for delivering a PC client experience as a service.

But today’s business and economic drivers need to go beyond just good technology. There also needs to be a clear rationale for change -- both business and economic. Second, there needs to be proven methods for properly moving to client virtualization at low risk and in ways that lead to both high productivity and lower total costs over time.

Cloud computing, mobile device proliferation, and highly efficient data centers are all aligning to make it clear that deeper, more flexible client platform support from back-end servers will become more the norm and less the exception over time.

Client devices and application types will also be dynamically shifting both in numbers and types, and crossing the chasm between the consumer and business spaces. The new requirements for business mobile use point to the need for planning and proper support of the infrastructures that can accommodate these edge, wireless clients.

To help guide business on client virtualization infrastructure requirements, learn more about client virtualization strategies and best practices that support multiple future client directions, and see why such virtualization makes sense economically, we went to Dan Nordhues, Marketing and Business Manager for Client Virtualization Solutions in HP's Industry Standard Servers Organization. The interview is conducted by BriefingsDirect's Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Nordhues: In desktop virtualization, what really comes out to the user device is just pixel information. These protocols just give you the screen information, collect your user inputs from the keyboard and mouse, and take those back to the application or the desktop in the data center.

When you look at desktop virtualization, whether it’s a server-based computing environment, where you are delivering applications, or if you are delivering the whole desktop, as in VDI, to get started you really have to take a look at your whole environment -- and make sure that you're doing a proper analysis and are actually ready.

On the data center side, as we start talking about cloud, the solution is really progressing. HP is moving very strongly toward what we call converged infrastructure, which is wire it once and then have it provisioned and be ready to provide the services that you need. We're on a path where the hardware pieces are there to deliver on that.

But you have to look at the data center and its capacity to house the increased number of servers, storage, and networking that has to go there to support the user.

So now you get the storage folks in IT, the networking folks, and the server support folks all involved in the support of the desk-side environment. It definitely brings a new dynamic.

This is not a prescription for getting rid of those IT people. In fact, there is a lot of benefit to the businesses by moving those folks to do more innovation, and to free up cycles to do that, instead of spending all those cycles managing a desktop environment that may be fairly difficult to manage.

Where we're headed with this, even more broadly than VDI, is back to the converged infrastructure, where we talked about wire it once and have it be a solution. Say you're an office worker and you're just getting applications virtualized out to you. You're going to use Microsoft Office-type applications. You don’t need a whole desktop. Maybe you just need some applications streamed to you.

Maybe, you're more of a power user, and you need that whole desktop environment provided by VDI. We'll provide reference architectures with just wire it once type of infrastructure with storage. Depending on what type of user you are, it can deliver both the services and the experience without having to go back and re-provision or start over, which can take weeks and months, instead of minutes.

Also, a hybrid solution could in the future deliver VDI plus server-based computing together and cover your whole gamut of users, from the lowest task-oriented user all the way up to the highest-end power users that you have.

And, we're going to see services wrapped around all of this, just to make it that much simpler for the customers to take this, deploy it, and know that it’s going to be successful.

Why VDI now?

It’s a digital generation of millions of new folks entering the workforce, and they've grown up expecting to be mobile and increasingly global. So, we need to have computing environments that don’t have us having to report to a post number in an office building in order to get work done.

We have an increasingly global and mobile workforce out there. Roughly 60 percent of employees in organizations don’t work where their headquarters are for their company, and they work differently.

When you go mobile, you give up some things. However, the major selling point is that you can get access. You can check in on a running process, if you need to see how things are progressing. You can do some simple things like go in and monitor processes, call logs, or things like that. Having that access is increasingly important.

Delivering packaged services out to the end user is something that’s still being worked out by software providers, and you're going to see some more elements of that come out as we go through the next year.



And, of course, there's the impact of security, which is always the highest on customer lists. We have customers out there, large enterprise accounts, who are spending north of $100 million a year just to protect themselves from internal fraud.

With client virtualization, the security is built in. You have everything in the data center. You can’t have users on the user endpoint side, which may be a thin client access device, taking files away on USB keys or sticks.

It’s all something that can be protected by IT, and they can give access only to users as they see fit. In most cases, they want to strictly control that. Also, you don’t have users putting applications that you don't want ... on top of your IT infrastructure.

And there is really a catalyst coming as well in the Windows 7 availability and launch since late last year. Many organizations are looking at their transition plans there. It’s a natural time to look at a way to do the desktop differently than it has been done in the past.

Reference architectures support all clients

We've launched several reference architectures and we are going to continue to head down this path. A reference architecture is a prescribed solution for a given set of problems.


For example, in June, we just launched a reference architecture for VDI that uses some iSCSI SAN storage technology, and storage has traditionally been one of the cost factors in deploying client virtualization. It has been very costly to deploy Fibre Channel SAN, for example. So, moving to this iSCSI SAN technology is helping to reduce the cost and provide fantastic performance.

In this reference architecture, we've done the system integration for the customer. A lot of the deployment issue, and what makes this difficult, is that there are so many choices. You have to choose which server to use and from which vendor: HP, Dell, IBM, or Cisco? Which storage to choose: HP, EMC, or NetApp? Then, you have got the software piece of it. Which hypervisor to use: Microsoft, VMware, or Citrix? Once you chase all these down and do your testing and your proof of concept, it can take quite a substantial length of time.

We targeted the enterprise first. Some of our reference architectures that are out there today exist for 1,000-plus users in a VDI environment. If you go to some of the lower-end offerings we have, they are still in the 400-500 range.

We're looking at bringing that down even further with some new storage technologies, which will get us down to a couple of hundred users, the small and medium business (SMB) market, certainly the mid-market, and making it just very easy for those folks to deploy. They'll have it come completely packaged.

Today, we have reference architectures based on VDI or based on server-based computing and delivering just the applications. As I mentioned before, we're looking at marrying those, so you truly have a wire-it-once infrastructure that can deliver whatever the needs are for your broad user community.

What HP has done with these reference architectures is say, "Look, Mr. Customer, we've done all this for you. Here is the server and storage and all the way out to the thin client solution. We've tested it. We've engineered it with our partners and with the software stack, and we can tell you that this VDI solution will support exactly this many knowledge workers or that many productivity users in your PC environment." So, you take that system integration task away from the customer, because HP has done it for them.

We have a number of customer references. I won’t call them out specifically, but we do have some of these posted out on HP.com/go/clientvirtualization, and we continue to post more of our customer case studies out there. They are across the whole desktop virtualization space. Some are on server-based computing or sharing applications, some are based on VDI environments, and we continue to add to those.


HP also has an ROI or TCO calculator that we put together specifically for this space. You show a customer a case study and they say, "Well, that doesn’t really match my pain points. That doesn’t really match my problem. We don’t have that IT issue," or "We don’t have that energy, power issue."

We created this calculator, so that customers can put in their own data. It’s a fairly robust tool, but we can put in information about what’s your desktop environment costing you today, what would it cost to put in a client virtualization environment, and what you can expect as far as your return on investment. So, it’s a compelling part of the discussion.
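The calculator's core comparison can be sketched as a simple cost-per-seat model. All figures, category names, and the formula itself are hypothetical illustrations of the idea, not HP's actual TCO model.

```python
# Hypothetical cost-per-seat comparison over a planning horizon.

def cost_per_seat(capex, annual_opex, seats, years):
    """Total cost of ownership per seat over the planning horizon."""
    return (capex + annual_opex * years) / seats

# Traditional desktops: cheap up front, costly desk-side management.
desktop = cost_per_seat(capex=600_000, annual_opex=450_000,
                        seats=1_000, years=3)

# VDI: heavier data-center capex, lower per-desk management cost.
vdi = cost_per_seat(capex=900_000, annual_opex=250_000,
                    seats=1_000, years=3)

print(f"desktop: ${desktop:,.0f}/seat, VDI: ${vdi:,.0f}/seat over 3 years")
# desktop: $1,950/seat, VDI: $1,650/seat over 3 years
```

A real calculator would also fold in energy, imaging, helpdesk, and downtime costs, which is where customers plug in their own data.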

Obviously, with any new computing technology the underlying consideration is always cost; in this case, many customers look at it from a cost-per-seat perspective, and this is no different, which is why we have provided the tool and the consulting around that.

On that same website that I mentioned, HP.com/go/clientvirtualization, we have our technical white papers that we've published, along with each of these reference architectures.

For example, if you pick the VDI reference architecture that will support 1,000-plus users in general, there is a 100-page white paper that talks about exactly how we tested it, how we engineered it, and how it scales with VMware View or with Microsoft Hyper-V plus Citrix XenDesktop.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.


Monday, September 13, 2010

HP gets more than security benefits from ArcSight acquisition, it gets closer to comprehensive BI for IT

The build, buy or partner equation has favored "buy" once again as HP moves aggressively to dominate IT operations management and governance software and services.

HP on Monday announced the intention to buy 10-year-old ArcSight for $1.5 billion, rapidly filling out its software products portfolio again under Bill Veghte, Executive Vice President of the HP Software & Solutions group. HP has been on a tear after recently acquiring Fortify and 3Par. I guess we should expect even more buying by HP as the economy and stock market make these companies attractive before their valuations increase. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

ArcSight -- with a $200 million revenue run rate and 35 percent annual top line growth -- might be best known for providing the means to snuff out cyber crime and user access and data management risks. And the systems log capture and management portfolio at ArcSight is also adept at helping with regulatory oversight requirements and compliance issues. To solve these problems, the company sells to the largest enterprises, including the US government and military, and financial, telco and retail giants.

But for me the real value for HP is in gaining a comprehensive platform and portfolio via ArcSight for total systems log management. Being able to manage and exploit the reams of ongoing log data across all data center devices offers huge benefits, even the ability to correlate business events and IT events for what I call BI for IT.

We're right on the cusp of reliable, penetrating predictive IT analysis, and HP needs to be in the vanguard on this. VMware just last month bought privately held Integrien for the same reason. The market is looking for de facto standard governance systems of record, and HP's other governance products plus ArcSight make that a market opportunity that is HP's to lose.

This predictive approach to IT failures -- identifying and ameliorating system snafus before they impact application and data performance -- paves the way to better IT operations continuity. The structured and unstructured systems data and analysis from ArcSight will help HP develop a constant feedback loop between build, manage, and monitor processes, to help ensure that enterprises remain secure and reliable in operations, says HP.

Consider too that managing security and dependability at the edge takes on a whole new meaning as enterprises dive more deeply into smartphones, mobile apps, netbooks, thin clients, and desktop virtualization -- and as the need grows to manage not just each of them, but all of them, in an orchestra of coordinated data and application access, provisioning, and compliance.

Virtualization drives need for governance


Oh, and then there's the virtualization revolution, which is only partly played out in enterprise IT and growing fast. So how to manage and govern fleeting virtual instances of servers, networking equipment, and storage? The logs. The log data. It's a sure way to gain a complete view of IT operations, even as that picture changes moment by moment.
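The "BI for IT" idea can be made concrete with a minimal sketch: correlate timestamped IT log events with business events by time proximity to surface likely cause-and-effect pairs. The event names, timestamps, and five-minute window here are all hypothetical, and real log-management products apply far richer correlation rules than this.

```python
# Toy time-window correlation of IT log events with business events.

from datetime import datetime, timedelta

it_events = [
    (datetime(2010, 9, 13, 10, 2), "db-node-3 latency spike"),
    (datetime(2010, 9, 13, 14, 30), "vm-17 live-migrated"),
]
business_events = [
    (datetime(2010, 9, 13, 10, 4), "checkout conversion dropped 12%"),
]

def correlate(it_evts, biz_evts, window=timedelta(minutes=5)):
    """Pair each business event with IT events inside the time window."""
    pairs = []
    for b_time, b_desc in biz_evts:
        for i_time, i_desc in it_evts:
            if abs(b_time - i_time) <= window:
                pairs.append((i_desc, b_desc))
    return pairs

for it_desc, biz_desc in correlate(it_events, business_events):
    print(f"possible cause: {it_desc!r} -> {biz_desc!r}")
```

Here only the latency spike lands inside the window of the conversion drop; the afternoon VM migration is ignored. Scaling this from three events to the reams of log data across a data center is exactly the BI-adept crunching problem the next paragraph describes.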

Another complement to the ArcSight-HP match-up: All that log data needs to be crunched and reported, a function of BI-adept hardware and optimized systems, which, of course, HP has in spades.

So all this deep and wide governance capability from ArcSight is a strong complement to HP's Business Service Automation and Cloud Service Automation solutions, among several others. Given that HP already resells ArcSight's appliances (and soon, we're told all-software products, too), we should expect the combined solutions to be moving down-market to the SMBs pretty quickly. This global and massive market has also been a recent priority for HP across other products and services.

Don't just view the ArcSight purchase today through the lens of cyber security and compliance solutions. This is a synergistic acquisition for HP on many levels. The common denominator is comprehensive governance, and the next goal for the combined HP and ArcSight products and services is predictive BI for IT ... and correlating that all to the real-time business events and processes. That's the total business insight capability that companies so desperately need -- and only IT can provide -- to effectively manage complexity and risk.
