Thursday, September 11, 2008

Systems log analytics offers operators performance insights that set the stage for IT transformation

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: LogLogic.

Read a full transcript of the discussion.

Despite growing complexity, IT organizations need to reduce operations costs, increase security and provide more insight, clarity, and transparency across multiple IT systems -- even virtualized systems. A number of new tools and approaches are available for gaining contextual information and visibility into what goes on within IT infrastructure.

IT systems information gushes forth from an increasing variety of devices, as well as networks, databases, and lots of physical and virtual servers and blades. Putting all of this information in one place, where it can be analyzed and exploited, far outweighs manual, often paper-based examination. Automated log forensics solutions that capture, aggregate, and centralize all the available systems information are becoming essential to efficient IT management.

To learn more about systems log analytics, I recently moderated a sponsored BriefingsDirect panel discussion podcast with Pat Sueltz, the CEO at LogLogic; Jian Zhen, senior director of product management at LogLogic; and Pete Boergermann, technical support manager at Citizens & Northern Bank.

Here are some excerpts:
When I think of the state of the art in terms of reducing IT costs, I look for solutions that can solve multiple problems at one time. One of the reasons I find this interesting is that, first of all, you've got to be focused not just on IT operations, but also on adjunct operations the firm offers.

For example, security operations and controls, because of their focus areas, frequently look like they are in different organizations, but in fact, they draw from the same data. The same goes as you start looking at things like compliance or regulatory pieces.

When technologies get started, they tend to start in a disaggregated way, but as technology -- and certainly data centers -- have matured, you see that you have to be able to not only address the decentralization, but you have to be able to bring it all together in one point ... [This] undergirds the need for a product or solution to be able to work in both environments, in the standalone environment, and also in the consolidated environment.

There are a lot of logs and server systems sitting out in the various locations. One of the biggest issues is being able to have a solution to capture all that information and aggregate and centralize all that information. ... Approximately 30 percent of the data in the data centers is just log data, information that's being spewed out by our devices, applications, and servers.

We have the Log Data Warehouse that basically can suck information from networks, databases, systems, users, or applications, you name it. Anything that can produce a log, we can get started with, and then store it forever, if a customer desires, either because of regulatory issues or because of compliance issues with industry mandates and such.

[But then] how do you bring operational intelligence out and give the CIOs the picture that they need to see in order to make the right business decisions? ... People have been doing a lot of integration, taking essentially LogLogic's information, and integrating it into their portals to show a more holistic view of what's happening, combining information from system monitoring, as well as log management, and putting it into a single view, which allows them to troubleshoot things a lot faster.

We have so many pieces of network gear out there, and a lot of that gear doesn't get touched for months on end. We have no idea what's going on at the port level with some of that equipment. Are the ports acting up? Are there PCs that are not configured correctly? The time it takes to log into each one of those devices and gather that information is simply overwhelming.

Reviewing those logs is an enormous task, because there's so much data there. Looking at that information is not fun to begin with, and you really want to get to the root of the problem as quickly as possible. ... Weeding out some of the frivolous and extra information and then alerting on the information that you do want to know about is -- I just can't explain in enough words how important that is to helping us get our jobs done a lot quicker.
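To make that concrete, here is a minimal sketch of the kind of weeding and alerting described above, assuming syslog-style text lines; the noise and alert patterns are my own illustrative choices, not LogLogic's.

```python
# A minimal sketch of log filtering and alerting, assuming syslog-style
# lines such as "Sep 11 09:15:02 switch01 port 12: link flapping".
# The patterns and severity keywords here are illustrative only.
import re

NOISE = re.compile(r"link up|session opened|heartbeat", re.IGNORECASE)
ALERT = re.compile(r"link flapping|auth failure|port err-disabled", re.IGNORECASE)

def triage(lines):
    """Yield (level, line) pairs, dropping routine noise."""
    for line in lines:
        if NOISE.search(line):
            continue                      # weed out frivolous entries
        level = "ALERT" if ALERT.search(line) else "REVIEW"
        yield level, line.rstrip()

if __name__ == "__main__":
    sample = [
        "Sep 11 09:14:58 switch01 port 7: link up",
        "Sep 11 09:15:02 switch01 port 12: link flapping",
        "Sep 11 09:15:10 core-rtr auth failure for user admin",
    ]
    for level, line in triage(sample):
        print(f"{level}: {line}")
```

A real log management platform does this at far larger scale, with collection, indexing, and retention handled centrally; the point of the sketch is only the filter-then-alert flow.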

I think of taking control of the information lifecycle. And, not just gathering pieces, but looking at it in terms of the flow of the way we do business and when we are running IT systems. ... You've got to know what’s known and unknown, and then be able to assess that analysis -- what's happening in real-time, what's happening historically. Then, of course, you've got to be able to apply that with what's going on and retain it. ... We've also got to be able to work with [analytics] just as the systems administrators and the IT and the CSOs want to see it.

I like to use the term "operational intelligence," because that's really intelligence for the IT operations. Bringing that front and center, and allowing CIOs to make the right decisions is extremely critical for us.

It's all about getting that improved service delivery, so that we can eliminate downtime due to, for example, mis-configured infrastructure. That's what I think of in terms of the value.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: LogLogic.

Read a full transcript of the discussion.

Tuesday, September 9, 2008

ActiveVOS 6.0 helps extend SOA investments to the level of business-process outcomes

Active Endpoints has propelled business-process services to the forefront with the general availability release today of ActiveVOS 6.0, an integrated solution designed to free companies and developers from complexity and fragmentation in assembling business processes.

ActiveVOS, a standards-based orchestration and business process management system, permits developers, business analysts, and architects to collaborate across IT and business boundaries through an integrated visual tool.

The latest product from the Waltham, Mass., company includes integrated process modeling; a design, testing, debugging, and deployment environment; reporting and a console; and a tightly integrated complex event processing (CEP) engine.

CEP helps extend services-based applications, but until now, it has required users to integrate yet another server into their applications and to manage the complexity of integrating the application with the CEP engine. ActiveVOS eliminates this challenge by providing a fully capable CEP engine.

Users select which events generated by the execution engine should trigger CEP events. In addition, these selections are made at deployment time, meaning that developers can easily add CEP capabilities to running applications, or modify them, at will.
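As an illustration of that pattern (not Active Endpoints' API), here is a minimal sketch in which a deployment-time selection decides which engine events reach a simple CEP rule; the event names and the rule itself are hypothetical.

```python
# Illustrative sketch only: it mimics the pattern of selecting, at deployment
# time, which execution-engine events should feed a CEP rule. Names and the
# rule are hypothetical; ActiveVOS exposes this through its own tooling.
from collections import deque

SELECTED_EVENTS = {"invoke.faulted", "process.completed"}   # chosen at deploy time

class FaultSpikeRule:
    """Raise an alert if 3 or more faults arrive within a 60-second window."""
    def __init__(self, threshold=3, window=60):
        self.threshold, self.window = threshold, window
        self.faults = deque()

    def on_event(self, name, timestamp):
        if name not in SELECTED_EVENTS:
            return None                       # unselected events never reach CEP
        if name == "invoke.faulted":
            self.faults.append(timestamp)
            while self.faults and timestamp - self.faults[0] > self.window:
                self.faults.popleft()
            if len(self.faults) >= self.threshold:
                return "ALERT: fault spike detected"
        return None

rule = FaultSpikeRule()
for t, evt in [(0, "invoke.faulted"), (10, "invoke.faulted"), (20, "invoke.faulted")]:
    result = rule.on_event(evt, t)
    if result:
        print(result)
```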

Standards implemented in ActiveVOS 6.0 include Business Process Modeling Notation (BPMN), Business Process Execution Language (BPEL) and human task management via the BPEL4People and WS-Human Task specifications.

Analysts can import models and documentation of existing applications, including Microsoft Visio drawings, directly into the graphical BPMN designer to document existing business processes and create new processes using a standards-based designer.

BPMN models are automatically transformed to executable BPEL, allowing developers to provide the implementation details necessary to turn the logical model into a running application. BPEL processes can also be transformed into BPMN, allowing the developer to document existing processes for analysts.

ActiveVOS permits developers to reuse plain old Java objects (POJOs) as native web services, and processes can be thoroughly tested and simulated, even when there are no actual services available during the testing phase. Because ActiveVOS is standards-based, it can go from design to execution without the need for custom code at execution time.

Dashboards, integrated reporting, and a universal console support the needs of operations staff and management.

Active Endpoints' latest packaging and integration, along with the emphasis on the business analyst-level process and visualization tools, strikes me as what the market is looking for at this stage of SOA and BPM.

The way they package their tools helps reduce complexity in a unique way. I'd say that they have a fuller package as a solution than what I've seen elsewhere. And the depth of ActiveVOS OEM use testifies to the technical capabilities and adherence to standards.

ActiveVOS 6.0 is available for download, and has a free, 30-day trial. Pricing is set at $12,000 per CPU socket for deployment licenses. Development licenses are priced at $5,000 per CPU socket.

Friday, September 5, 2008

Red Hat buys Qumranet, adds gasoline to the spreading VDI bonfire

Open-source giant Red Hat has upped the ante in the PC desktop virtualization market with its acquisition of Qumranet, Inc. in a $107-million deal announced this week.

This acquisition clearly raises the stakes in the race for virtual desktop infrastructure (VDI) solutions. I used to call VDI "desktop as a service (DaaS)," and still think that works pretty well. Anyway, the Red Hat purchase comes on the heels of HP's major virtualization push announced this week, which includes a large VDI component. [See a sponsored podcast on HP's virtualization solutions.]

The Red Hat purchase of Sunnyvale, Calif.-based Qumranet's kernel-based virtual machine (KVM) platform and SolidICE VDI solution is targeted at enterprise customers seeking to cut the total cost of providing applications, web access and runtime features to the client edge.

The acquisition of Qumranet gives the Raleigh, N.C.-based Red Hat a more comprehensive portfolio of virtualization offerings, including:
  • An open-source operating system with built-in virtualization.

  • An embedded hypervisor that supports major operating systems.

  • A consistent management platform for both virtual and physical systems.

  • A cloud and grid management solution.

  • Advanced, high-speed inter-application messaging.

  • An integrated security infrastructure.
SolidICE debuted in April, just weeks before Citrix unveiled its updated XenDesktop, putting Qumranet -- and now Red Hat -- head-to-head with Citrix and VMware in the desktop virtualization arena. Microsoft may well take its forthcoming Hyper-V in a VDI direction, but for now seems content to partner with Citrix on VDI. Sun Microsystems should own this market, but opted to hand over Java to the world and buy a tape drive company instead.

SolidICE is a high-performance, scalable virtualization solution built specifically for desktops, and not, Red Hat says, as a retrofit from server virtualization (slap!). It is based on the Simple Protocol for Independent Computing Environments (SPICE) and enables a Windows or Linux desktop to run in a virtual machine hosted on a central server or datacenter.

Virtualization has been around for decades, mostly on mainframes. Its foray into the desktop market was originally hampered by reliability and security issues. However, recent technological advances have ramped up interest and given virtualization a new head of steam. Such vendors as HP are seemingly confident that the performance issues are no longer an inhibitor, just as the economic drivers for virtualization (like energy conservation) are mounting fast.

Red Hat says that it doesn't expect the acquisition to contribute anything substantial to its bottom line in the fiscal year that ends February 28, 2009, but the company is looking at $20 million in added revenue the following year.

In a nutshell, Qumranet and VDI fit Red Hat to a "T" -- with the service and maintenance of centralized server-based clients just gravy on the already robust Red Hat infrastructure support business. VDI allows Red Hat to take its model to the PC, without leaving the datacenter. And it allows the promulgation of Linux as the client OS in a much more expedient fashion than taking on Redmond on the desktop.

As I told NewsFactor Network, the market for VDI could be in store for a large growth spurt. VDI simply solves too many problems, with too little disruption for end users, to be ignored.

VDI, somewhat ironically, may also work well for market mover Microsoft as it seeks to slow the momentum to outright web-based and OSS/LAMP-supported applications and services for large businesses. Microsoft must realize that enterprises have had it with the high cost of maintaining and managing the traditional Windows OS in all its client-side permutations.

Not even a $300 million ad campaign for Vista can stop the addition and subtraction that spells this fact out. The math simply does not lie. Help desk costs to fix user config-type and malware issues are killing IT budgets.

Yet (just in time!) VDI allows Microsoft to keep the apps as Windows apps, retains the desktop OS license fees -- even if they are virtualized and server-based -- and VDI on Windows keeps developers and ISVs writing new and updating old apps to run on ... Windows. VDI allows converting client-server apps into Windows Server apps, without turning them into web apps.

Essentially, at the same time, virtualized and server-based VDI delivery of Windows apps and Windows desktop functionality allows enterprises to cut total costs, reuse aging desktop hardware, streamline updates and migrations, and slash security and privacy/control concerns (by maintaining management at the datacenter).

Help desks can actually be pared back, folks. Sorry, Ashish. Data can be kept safe on servers, not out in the strange world of lost hard drives and corporate espionage. Indeed, the U.S. Dept. of Defense (DoD) and other three-letter spy agencies use VDI extensively. Nothing on the client but chips and dips. If you can do it there, you can do it anywhere.

Now, as Red Hat (and its partner IBM?) seek to enter the VDI space aggressively and perhaps add Linux as the spoiler runtime, Microsoft will need to accelerate its VDI initiatives. I expect MSFT to become the leader in VDI (perhaps via major acquisitions), as a hedge against Google, Red Hat, FOSS, the web, compute clouds, Amazon, IBM, and the far too high cost of traditional Windows clients.

Speaking of IBM, VDI offers Big Blue a way to play to all its global strengths -- infrastructure and services (green IT) -- while moving back into the client solutions (and end-to-end) value business in a potentially Big, Big, way. There's no reason why HP and IBM won't be huge beneficiaries of VDI, even as Microsoft makes it easier for them based on its own need to move quickly in this direction.

Here's a dark horse thought: If you can inject search- and web-based ads into web/SaaS apps, why could you not inject them into VDI-delivered apps? There could well be an additional business model of VDI-delivered desktops and apps supported by targeted ads. Telcos, cable providers, and service providers might (if they were smart) give away the PC/MID hardware, include the VDI/DaaS as part of triple-play connection or premium service fees, and monetize it all through relevant ads embedded intelligently in virtualized apps delivery. Nawwww!

Trust me, keep an eye on VDI; it has the potential to rock the IT market every bit as much as Google/Yahoo/Amazon/SalesForce.com/SaaS -- only this trend hits the enterprise directly and fully. Incidentally, cloud computing as a private enterprise endeavor hugely supports the viability and economic rationale for VDI.

It's nice when IT megatrends align so well.

Tuesday, September 2, 2008

Interview: HP's virtualization services honcho John Bennett on 'rethinking virtualization'

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Hewlett-Packard announced a series of wide-ranging virtualization products, services and initiatives on Sept. 2. The drive indicates a global and long-term surge by HP on managing solutions for virtualization, but in the context of business outcomes and in a management framework that includes larger IT transformation strategies.

I conducted an earlier panel discussion on the HP announcements and vision, but also decided to go to the top and interview the visionary behind the virtualization strategy, John Bennett, the worldwide director of HP's data center transformation solutions and also the HP Technology Solutions Group (TSG) lead for virtualization.

Here are some excerpts from our chat:
We see large numbers of customers, certainly well over half, who have actively deployed virtualization projects. We seem to be at a tipping point in terms of everyone doing it, wanting to do it, or wanting to do even more. ... We see virtualization being driven more as tactical or specific types of IT projects.

It's not uncommon to see customers starting out, either to just reduce costs, to improve the efficiency in utilization of the assets they have, or using virtualization to address the issues they might have with energy cost, energy capacity or sometimes even space capacity in the data center. But, it's very much focused around IT projects and IT benefits.

The interesting thing is that as customers get engaged in these projects, their eyes start to open up in terms of what else they can do with virtualization. Customers who've already done some virtualization work realize there are interesting manageability and flexibility options for IT. "I can provision servers or server assets more quickly. I can be a little more responsive to the needs of the business. I can do things a little more quickly than I could before." And, those clearly have benefits to IT with the right value to the business.

Then, they start to see that there are interesting benefits around availability, being able to reduce or eliminate planned downtime, and also to respond much more quickly and expeditiously to unplanned downtime. That then lends itself to the conversation around disaster recovery, and into business continuity, if not continuous computing and disaster tolerance.

It's a very interesting evolution of things with increasing value to the business, but it's very much stepwise, and today tends to be focused around IT benefits. We think that's kind of missing the opportunity. ... The real business value to virtualization comes in many other areas that are much more critically important to the business.

One of the first is having an IT organization that is able to respond to dynamically changing needs in real-time, increasing demands for particular applications or business services, being able to throw additional capacity very quickly where it's needed, whether that's driven by seasonal factors or whether it's driven by just systemic growth in the business.

We see people looking at virtualization to improve the organization's ability to roll out new applications in business services much more quickly. We also see that they're gaining some real value in terms of agility and flexibility in having an IT organization that can be highly responsive to whatever is going on in the business, short term and long term.

Yes, we see both pitfalls, i.e., problems that arise from not taking a comprehensive approach, and we see missed opportunities, which is probably the bigger loss for an organization. They could see what the potential of virtualization was, but they weren't able to realize it, because their implementation path didn't take into account everything they had to in order to be successful.

This is where we introduce the idea of rethinking virtualization, and we describe it as rethinking virtualization in business terms. It means looking at maximizing your business impact first by taking a business view of virtualization. Then, it maximizes the IT impact by taking a comprehensive view of virtualization in the data center. Then, it maximizes the value to the organization by leveraging virtualization for client implementations, where it makes sense.

But, it's always driven from a business perspective -- what is the benefit to the business, both quantitative and qualitative -- and then drilling down. ... I want to be able to drill down from the business processes and the business service management and automation tools into the infrastructure management, which in turn drills down into the infrastructure itself.

Is the infrastructure designed to be run and operated in a virtualized environment? Is it designed to be managed from an energy control point of view for example? Is it designed to be able to move virtual resources from one physical server to another, without requiring an army of people?

Part of the onus is on HP in this case to make sure that we're integrating and implementing support for virtualization into all of the components in the data center, so that it works and we can take advantage of it. But, it's up to the customer also to take this business and data center view of virtualization and look at it from an integration point of view.

If you do virtualization as point projects, what we've seen is that you end up with management tools and processes that are outside of the domain of the historical investments you've made. ... We see virtual environments that are disconnected from the insight and controls and governance and policy procedures put in for IT. This means that if something happens at a business-services level, you don't quite know how to go about fixing it, because you can't locate it.

That's why you really want to take this integrated view from a business-services point of view, from an infrastructure and infrastructure management point of view, and also in terms of your client architectures.

Enterprises can lower the cost of IT operations implicitly by reducing the complexity of it and explicitly by having very standardized and simple procedures covering virtual and physical resources, which, in conjunction with the other cost savings, frees up people to work on other projects and activities. Those all also contribute to reduced costs for the business, although they are secondary effects in many cases.

We see customers being able to improve the quality of service. They're able to virtually eliminate unplanned downtime, especially where it's associated with the base hardware or with the operating environments themselves. They're able to reduce unplanned downtime, because if you have an incident, you are not stuck to a particular server and trying to get it back up and running. You can restart the image on another device, on another virtual machine, restore those services, and then you have the time to deal with the diagnosis and repair at your convenience. It's a much saner environment for IT.

We see a large number of customers spending less than 30 percent of their IT budget on business priorities and growth initiatives, and 70 percent or more on management and maintenance. With virtualization and with these broader transformational initiatives, you can really flip the ratio around.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

HP experts portray IT transformation vision, explain new wave of virtualization products and services

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Virtualization has been gaining attention and adherents faster than ever, but the context of virtualization to business outcomes has been sketchy. Hewlett-Packard on Sept. 2 announced a series of products and services designed to place virtualization into a business and economic framework, one that helps enterprises flexibly embrace virtualization in the context of IT transformation.

The goal is to use many virtualization products and attain the needed professional services in a way that optimizes the outcomes and paybacks from virtualization activities, so that they are related and managed holistically over an IT transformation lifecycle. While much of virtualization has focused on hardware reduction through higher utilization and more efficient platforms, virtualization is at a larger tipping point. The technologies and techniques are extending deeper into storage, applications, and desktops, and even allowing enterprises to experiment with cloud computing benefits and efficiencies.

Virtualization needs to play well with itself at its various stages of use, across a variety of platforms and vendors, and it needs to play well too with the physical IT assets and resources. In a sense, virtualization is taking the need for managing and exploiting heterogeneity to an even higher level, with more dynamic and complex elements. It's a clarion call for management in total -- but with huge potential paybacks in terms of lower costs, greater agility, and improved security and control.

HP is well suited to pounce on the opportunity for bringing the variety of virtualization advances into cohesion, with management and risk reduction as the fore-thought, not the after-thought. So it's not surprising that HP on Sept. 2 unleashed a substantial set of announcements around virtualization -- spanning hardware, management, software, alliances, professional services ... and above all, vision.

HP is proposing that virtualization be re-thought in business terms and within a context of larger IT transformation undertakings, such as data center consolidation, application modernization, IT shared services, cloud computing, SOA, and next generation data center architectures. Virtualization is becoming ingrained across IT.

To better understand the new role and return for virtualization, how HP approaches the issues, and to gain more details on the Sept. 2 news, BriefingsDirect conducted a panel discussion with Greg Banfield, consulting manager for the HP Consulting and Integration (C&I) Group infrastructure practice; Dionne Morgan, worldwide marketing manager for HP’s Technology Services Group (TSG); and Tom Norton, worldwide practice lead for Microsoft Services at HP. Our podcast is moderated by me, Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
What’s interesting about virtualization is that, as companies have started to work with virtualization, the easy assumption is that you are really reducing the numbers of servers. But, as you expand your knowledge and your experience with virtualization, you start looking at comprehensive components in your environment or your infrastructure.

You start understanding what storage has to do with virtualization. You look at the impact of networks, when you start doing consolidation in virtualization. You start understanding a little bit more about security, for example.

Also, virtualization, in and of itself, is really allowing you to consolidate the sheer number of servers, but you still have the idea that each of those virtual servers needs to be managed. So, you get a better view about the overall impact of device management, as well as virtual machine management.

What’s interesting about this is that when you get into a virtualized environment, there's a need to understand the heartbeat of the virtualized environment and understand what’s going on at the hardware level. As you grow up from there with the virtualized machines, you have to understand how the virtual machines themselves are performing, and then there's got to be an understanding of how the actual applications are performing within that virtual machine.

So, management and virtual machine management, overall a comprehensive approach to management, is critical to being successful.

One of the key [benefits] areas is cost reduction. Virtualization can help with major cost savings, and that can include savings in terms of the amount of hardware they purchase, the amount of floor space that’s utilized, the cost of power and cooling. So, it improves the energy efficiency of the environment, as well as just the cost of managing the overall environment.

A lot of customers look to virtualization to help reduce their cost and optimize how they manage the environment. Also, when you optimize the management of the environment, that can also help you accelerate business growth. In addition to cost reduction, customers are also beginning to see value in having more flexibility and agility to address the business demand.

You have this increased agility or ability to accelerate growth. It also helps to mitigate risk, so it’s helping improve the up-time of the environment. It helps address disaster recovery, and business continuity. In general, you can summarize the business outcomes in three areas: cost reduction, risk mitigation, and accelerated business growth.

Strategy is becoming even more important. Our customers are very aware, as everyone else is now, that they have many options available to them as far as virtualization, not only from a perspective of what to virtualize in their environment, but also from a number of partners and technology suppliers who have different views or different technologies to support virtualization.

Our customers, from a strategy and design perspective, have looked to us to provide some guidance that says "How can I get an idea of the net effect that virtualization can have in my environment? How can I present that and gain that experience, but at the same time understand my long-term view of where I want to go with virtualization, because there is so much available and there are so many different options? How do I make a logical and sensible first attempt at virtualization, where I can derive some business value quickly, but also match that up against strategy for a long-term vision?"

What we are trying to look for is taking the complexity out of an introduction to virtualization. We're trying to take the complexity out of the long-term vision and planning and give the customers an idea of what their journey looks like, rapidly introduce it, but in the right direction, so they are following their overall vision in gaining their overall business value.

... We assess what’s happening in the organization from a people, a process, and a technology perspective. We benchmark against what’s happening in the industry, making recommendations on where a customer can actually improve, on some of those processes to improve efficiency, and to improve on the service level they are providing to the business. We also assist with the implementation of some of those process improvements. If you look at this from a full lifecycle perspective, HP provides services to assist with everything from strategy, to design, to transition, to the ongoing operations and continual improvement.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Hewlett-Packard.

Read a full transcript of the discussion.

Wednesday, August 27, 2008

Databases leverage MapReduce technology to radically juice data scale, performance, analytics

In what could best be termed a photo finish, Greenplum and Aster Data Systems have both announced that they have integrated MapReduce into their massively parallel processing (MPP) database engines.

MapReduce, pioneered by Google for analyzing the Web, now becomes available to enterprises and service providers, giving them more access and visibility into more data from more origins. Originally created to analyze massive amounts of unstructured data, the approach has been updated to analyze structured data as well.

Greenplum, San Mateo, Calif., says that MapReduce will be part of its Greenplum Database beginning in September. Aster Data, Redwood Shores, Calif., says that MapReduce will be included in its Aster nCluster. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

Curt Monash, president of Monash Research, editor of DBMS2, and a leading authority on MapReduce, sees this as a major leap forward. He reports that both companies had completed adding MapReduce to their existing products and had been racing to the finish line to get their news out first. As it turned out, both made their announcements within hours of each other.

Curt lists some points on his blog about what this new technology marriage means.
  • Google’s internal use of MapReduce is impressive. So is Hadoop’s success. Now commercial implementations of MapReduce are getting their shots too.

  • The hardest part of data analysis is often the recognition of entities or semantic equivalences. The rest is arithmetic, Boolean logic, sorting, and so forth. MapReduce is already proven in use cases encompassing all of those areas.

  • MapReduce isn’t needed for tabular data management. That’s been efficiently parallelized in other ways. But, if you want to build non-tabular structures such as text indexes or graphs, MapReduce turns out to be a big help.

  • In principle, any alphanumeric data at all can be stuffed into tables. But in high-dimensional scenarios, those tables are super-sparse. That’s when MapReduce can offer big advantages by bypassing relational databases. Examples of such scenarios are found in CRM and relationship analytics.
Greenplum customers have been involved in an early-access program using Greenplum MapReduce for advanced analytics. For example, LinkedIn is using Greenplum Database for new, innovative social networking features such as “People You May Know” and sees it as a way to develop compelling analytics products faster. A primary benefit of the new capability is that customers can combine SQL queries and MapReduce programs into unified tasks that are executed in parallel across hundreds or thousands of cores.
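For readers new to the model, here is a minimal, in-memory sketch of the map/reduce pattern itself, assuming nothing about Greenplum's or Aster's actual interfaces; in those products the same map and reduce functions would run in parallel across hundreds or thousands of database segments, alongside SQL.

```python
# A minimal in-memory sketch of the map/reduce pattern, independent of
# Greenplum's or Aster's actual APIs. In those products the same pattern
# runs in parallel across many database segments alongside SQL queries.
from collections import defaultdict

def map_phase(record):
    """Emit (key, 1) pairs -- here, one pair per word in a text record."""
    for word in record.lower().split():
        yield word, 1

def reduce_phase(key, values):
    """Combine all values emitted for one key."""
    return key, sum(values)

def mapreduce(records):
    groups = defaultdict(list)
    for record in records:
        for key, value in map_phase(record):
            groups[key].append(value)          # shuffle: group by key
    return dict(reduce_phase(k, v) for k, v in groups.items())

print(mapreduce(["people you may know", "people you know"]))
# {'people': 2, 'you': 2, 'may': 1, 'know': 2}
```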

Part of the appeal of business intelligence and its huge ramp-up over the past five years is that IT assets play an ever larger role in providing unprecedented strategic guidance and insights to leaders of enterprises, governments, telcos, and cloud providers. IT has gone from automating business functions to providing an essential crystal-ball service of the highest order. By gaining access to larger data sets that, more than ever before, can be mined and analyzed for higher levels of process and business refinement, IT has become a member of the board.

With better data reach and inclusion come better results. BI allows leaders to spot early the trends that will determine their future success or failure. In a fast-paced, global, hypercompetitive business landscape, these insights are the currency of future success. The better you do BI, the better you do business ... current, near-term, and long-term. There's no better way to know your customers, competitors, employees, and the variables that buffet and stir markets than effective BI.

Now, expanding the role and reach of MapReduce technologies and methods adds a powerful new tool to the BI arsenal. More data, more data types, more data sources -- all rolled into an analytical framework that can be directly targeted by developers, scripters, business analysts, executives, and investors.

These new MapReduce announcements mark a significant advancement that moves IT another notch higher in its utility and indispensability to business. And they come at a time when more data, metadata, complex events, transactions, and Internet-scale inferences demand tools that can do for enterprise BI what Google has done for Web search and indexing.

Being comprehensive and deep with massive data set analytics offers a new mantra: the database is dead, long live the data. Structured data and the containers that hold it are simply not enough to organize and access the intelligence lurking on modern networks, at Internet scale and Internet time.

Tuesday, August 26, 2008

Citrix makes virtualization splash with new version of XenApp to speed desktop applications delivery

Citrix Systems has overhauled its flagship presentation server product, promising IT operators higher performance and lower costs, while improving the end-user experience. The company this week announced Citrix XenApp 5, the next generation of its application virtualization solution.

The new version of XenApp, formerly the Citrix Presentation Server, combines with Citrix XenServer to create an "end-to-end" solution that spans servers, applications, and desktops. Companies using the new combined product can centralize applications in their datacenter and deliver them as on-demand services to both physical and virtual desktops.

Virtualization, while not a new technology, has lately been gaining a huge head of steam, as companies realize the deployment, maintenance, and security benefits of central control across nearly all applications, along with the agility and flexibility it gives the business.

In my thinking, virtualization is allowing the best of the old (central command and control) with the new (user flexibility and ease of innovation). Virtualizing broadly places more emphasis on the datacenter and less on the client, without the end user even knowing it.

What's more, from a productivity standpoint, the end users gain by having app and OS updates and fixes done more easily and quickly (fewer help desk calls and waits), while operators can exercise the security constraints they need (data stays on the server), and developers need only target the server deployments (local processing is over-rated).

And, of course, virtualization far better aligns IT resource supply with demand, removing wasted utilization capacity while allowing for more flexibility in ramping up or down on specific applications or data demands. Works for me.

Currently, most IT operations are faced with managing myriad Windows-based applications, and are hampered by the demands of installing, patching, updating, and removing those applications. Many users have simplified the task and lowered cost by using server-based deployment. We'll see a lot more of this, and that includes more uptake in the use of desktop virtualization, but that's another topic for another day.

According to Fort Lauderdale, Fla.-based Citrix, version 5 of XenApp, which includes more than 50 major enhancements, can improve application start-up time by a factor of 10 and reduces applications preparation and maintenance by 25 percent.

Of the major new features, I like the support for more Windows apps and compatibility with Microsoft App-V (formerly SoftGrid), the HTTP streaming support, the IPv6 support, as well as the improved performance monitoring and load balancing. Also very nice is the "inter-isolation communication," which allows each app to be isolated yet aggregated as if installed locally. Add to that the ability of the apps to communicate locally, such as cut and paste. Think of it as OLE for the virtualized app set (finally).

I've been watching Citrix since it took the bold step of acquiring XenSource just a little over a year ago. At that time, I saw the potential for its move to gobble a piece of the virtualization pie:
The acquisition also sets the stage for Citrix to move boldly into the desktop as a service business, from the applications serving side of things. We’ve already seen the provider space for desktops as a service heat up with the recent arrival of venture-backed Desktone. One has to wonder whether Citrix will protect Windows by virtualizing the desktop competition, or threaten Windows by the reverse.
The new XenApp 5 release is being featured on Sept. 9 as part of a global, online launch event called Citrix Delivery Center Live! This virtual event is the first in a series that will take place in the second half of 2008 highlighting the entire Citrix Delivery Center product family. This debut event features presentations, chat sessions, and online demos from Citrix, as well as participation from key partners such as Microsoft and Intel. I'm also looking forward to attending Citrix's annual analyst conference in Phoenix on Sept. 9.

XenApp 5, which runs on the Microsoft Windows Server platform, leverages all the enhancements in Windows Server 2008 and fully supports Windows Server 2003. This enables existing Windows Server 2003 customers to immediately deploy Windows Server 2008 into their existing XenApp environments in any mix.

XenApp 5 will be available Sept. 10. For North America, suggested retail pricing is per concurrent user (CCU) and includes one year of Subscription Advantage, the Citrix program that provides updates during the term of the contract:
  • Advanced Edition – $350

  • Enterprise Edition – $450

  • Platinum Edition – $600
Standalone pricing for client-side application streaming and virtualization begins as low as $60 per CCU. TCO for virtualized apps will over time continue to fall, a nice effect for all concerned.

Thursday, August 21, 2008

Pulse provides novel training and tools configuration resource to aid in developer education, preparedness

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.

Java training and education has never been easy. Not only are the language and its third-party and community offerings constantly moving targets, but each developer also has his or her own preferences, plug-in inventory, and habits. What's more, the "book knowledge" gained in many course settings can vary wildly from what happens in the "real world" of communities and teams.

MyEclipse maker Genuitec developed Pulse last year to monitor and update the most popular Eclipse plug-ins, but Pulse also has a powerful role in making Java training and tools-preference configuration management more streamlined, automated, and extensible. Unlike commercial software, in open-source, community-driven environments like Eclipse there is no central vendor to manage plug-ins and updates. For the Eclipse community, Pulse does that, monitoring for updates while managing individual developers' configuration data -- and at the same time gathering metadata about how to better serve Eclipse and Java developers.

I recently moderated a sponsored podcast to explore how Pulse, and best practices around its use, helps organize and automate tools configuration profiles for better ongoing Java training and education. I spoke with Michael Cote, an analyst with RedMonk; Ken Kousen, an independent technical trainer, president of Kousen IT, Inc., and adjunct professor at Rensselaer Polytechnic Institute; and Todd Williams, vice president of technology at Genuitec.

Here are some excerpts:
The gap between what's taught in academia and what's taught in the real world is very large, actually. ... Academia will talk about abstractions of data structures, algorithms, and different techniques for doing things. Then, when people get into the real world, they have no idea what Spring, Hibernate, or any of the other issues really are.

It's also interesting that a lot of developments in this field tend to flow from the working professionals toward academia, rather than the other way around, which is what you would find in engineering.

Part of what I see as being difficult, especially in the Java and Enterprise Java market, is the huge number of technologies that are being employed at different levels. Each company picks its own type of stack. ... Finding employees that fit with what you are trying to do today, with an eye toward being able to mature them into where you are going tomorrow, is probably going to always be the concern.

You look at the employment patterns that most developers find themselves in, and they are not really working at some place three, five, 10, even 20 years. It's not realistic. So, specializing in some technology that essentially binds you to a job isn't really an effective way to make sure you can pay your bills for the rest of your life.

You have to be able to pick up quickly any given technology or any stack, whether it’s new or old. Every company has their own stack that they are developing. You also have to remember that there is plenty of old existing software out there that no one really talks about anymore. People need to maintain and take care of it.

So, whether you are learning a new technology or an old technology, the role of the developer now, much more so in the past, is to be more of a generalist who can quickly learn anything without support from their employer.

Obviously, in open source, whether it’s something like the Eclipse Foundation, Apache, or what have you, they make a very explicit effort to communicate what they are doing through either bug reports, mail lists, and discussion groups. So, it's an easy way to get involved as just a monitor of what's going on. I think you could learn quite a bit from just seeing how the interactions play out.

That's not exactly the same type of environment they would see inside closed-wall corporate development, simply because the goals are different. Less emphasis is put on external communications and more emphasis is put on getting quality software out the door extremely quickly. But, there are a lot of very good techniques and communication patterns to be learned in the open-source communities.

[With Pulse] we built a general-purpose software provisioning system that right now we are targeting at the Eclipse market, specifically Eclipse developers. For our initial release last November, we focused on providing a simple, intuitive way that you could install, update, and share custom configurations with Eclipse-based tools.

In Pulse 2, which is our current release, we have extended those capabilities to address what we like to call team-synchronization problems. That includes not only customized tool stacks, but also things like workspace project configurations and common preference settings. Now you can have a team that stays effectively in lock step with both their tools and their workspaces and preferences.

With Pulse, we put these very popular, well-researched plug-ins into a catalog, so that you can configure these types of tool stacks with drag-and-drop. So, it's very easy to try new things. We also bring in some of the social aspects; pulling in the rankings and descriptions from other sources like Eclipse Plug-in Central and those types of things.

So, within Pulse, you have a very easy way to start out with some base technology stacks for certain kinds of development and you can easily augment them over time and then share them with others.
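To illustrate the team-synchronization idea described above (only as a hedged sketch, not how Pulse is actually implemented or invoked), consider comparing a developer's installed plug-ins against a shared team profile; the plug-in IDs and profile format below are hypothetical.

```python
# A hedged sketch of the team-synchronization idea: compare a developer's
# installed plug-ins against a shared team profile and report what to add,
# update, or flag. The profile format and plug-in IDs are hypothetical;
# Pulse manages this through its own catalog and UI rather than a script.
TEAM_PROFILE = {
    "org.eclipse.jdt": "3.4.0",
    "org.maven.ide.eclipse": "0.9.4",
    "com.example.team.codestyle": "1.2",      # hypothetical in-house plug-in
}

def sync_plan(installed):
    """Return (to_install, to_update, extras) relative to the team profile."""
    to_install = {p: v for p, v in TEAM_PROFILE.items() if p not in installed}
    to_update = {p: v for p, v in TEAM_PROFILE.items()
                 if p in installed and installed[p] != v}
    extras = sorted(set(installed) - set(TEAM_PROFILE))
    return to_install, to_update, extras

installed = {"org.eclipse.jdt": "3.3.2", "org.personal.favorite": "0.1"}
install, update, extras = sync_plan(installed)
print("install:", install)
print("update:", update)
print("not in team profile:", extras)
```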

The Pulse website is www.poweredbypulse.com. There is a little 5 MB installer that you download and start running. If anyone is out in academia, and they want to use Pulse in a setting for a course, please fill out the contact page on the Website. Let us know, and we will be glad to help you with that. We really want to see usage in academia grow. We think it’s very useful. It's a free service, so please let us know, and we will be glad to help.

I did try it in a classroom, and it's rather interesting, because one of the students that I had recently this year was coming from the Microsoft environment. I get a very common experience with Microsoft people, in that they are always overwhelmed by the fact, as Todd said, there are so many choices for everything. For Microsoft, there is always exactly one choice, and that choice costs $400.

I tried to tell them that here we have many, many choices, and the correct choice, or the most popular choice changes all the time. It can be very time consuming and overwhelming for them to try to decide which ones to use in which circumstances.

So, I set up a couple of configurations that I was able to share with the students. Once they were able to register and download them, they were able to get everything in a self-contained environment. We found that pretty helpful. ...

It was pretty straightforward for everybody to use. ... whenever you get students downloading configurations, they have this inevitable urge to start experimenting, trying to add in plug-ins, and replacing things. I did have one case where the configuration got pretty corrupted, not due to anything that they did in Pulse, but because of plug-ins they added externally. We just basically scrapped that one and started over and it came out very nicely. So, that was very helpful in that case.

We have a very large product plan for Pulse. We've only had it out since November, but you're right. We do have a lot of profile information, so if we chose to mine that data, we could find some correlations between the tools that people use, like some of the buying websites do.

People who buy this product also like this one, and we could make ad hoc recommendations, for example. It seems like most people that use Subversion also use Ruby or something, and you just point them to new things in the catalog. It's kind of a low-level way to add some value. So there are certainly some things under consideration.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Sponsor: Genuitec.

Read a full transcript of the discussion.