Friday, October 17, 2008

BriefingsDirect Insights Analysts identify IT winners and losers in global economic downturn

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter sponsor: Active Endpoints.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 31, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events, with a panel of IT analysts and guests.

In this episode, recorded Oct. 10, 2008, our experts examine the worldwide economic maelstrom, with an eye to the IT sector and how enterprises and vendors will be impacted. While the times will remain challenging for the foreseeable future, there are opportunities and counter-intuitive effects -- like beginning a start-up company -- when economics drive more of the rationalization around IT decisions.

Please join noted IT industry analysts and experts Tony Baer, senior analyst at Ovum; Jim Kobielus, senior analyst at Forrester Research; and Dave Linthicum, independent SOA consultant at Linthicum Group. Our discussion is hosted and moderated by yours truly.

Here are some excerpts:
On the IT Winners

Baer: ... The winners are those who are likely to be more diversified into services, services that can help companies harvest more of what they already have. ... The fact is that in an economic situation like this, especially where there are a lot of known unknowns, having a services business is a good way of helping clients to discover new economies. And it's also potentially a much more flexible arrangement than having to put in an upgrade of a new version of SAP software.

Gardner: My first take on this is that the government vertical is actually going to explode and might even start going down this road towards transformation in a much more significant way.

Linthicum: People are going to look to government to solve some of these issues, bureaucratic changes are going to be built into different divisions, and there is going to be oversight of the financial industry.

If the Democratic administration comes in, there is going to be more civilian spending, and there is going to be probably a little shift from the spending in the Department of Defense on the military side.

So, this area is going to be explosive yet again, based on some things that are occurring and based on the government taking power in particular industries.

... I think healthcare is going to remain fairly static, and I think some of their costs may be reduced. As they start moving into more of a socialized medicine, if the Democrats take it there, there are going to be some big shifts there.

Believe it or not, even though you are moving into a healthcare-for-everyone kind of an environment, you are going to see that actually [IT] cost probably will go up, as a bureaucracy is put in place to maintain and administer that.

Baer: I hate to use the 'D' word but back in the depression, and I hope we are not heading into one, what area boomed during that era? Hollywood, the film industry. People were going out to the movies for cheap thrills. In today's environment, the equivalent of that is, if you already have an Xbox 360 out there, you are going to be buying more games. Those are cheap thrills.

Gardner: We haven't talked about one sector, and that is Entertainment/Web 2.0/Internet. We’ve seen some downturn in advertising, including Internet advertising, but is there an opportunity in downloading a $3 movie, a $2 song, a $3 game?


On the IT Losers


Kobielus: ... Those who will get hurt are those vendors who rely on new-product sales, especially new-product sales that are very much hardware-centric. ... In any economic downturn, the things that get cut from corporate budgets, for example, are large capital expenditure (CAPEX) projects. That's going to hurt a number of IT vendors in particular niches, for example the hardware vendors, and also those selling discretionary software upgrades. Those are also going to feel the crunch.

In any downturn, users, large corporate IT, look to rationalize and streamline their vendor commitments. In other words, they consolidate to a few very large, very strategic vendors. So, the big guys will get bigger and the small, pure-play data-warehousing appliance vendors will be acquired or will vanish.

Gardner: On the other hand, some verticals that don't look good include retail and manufacturing. The auto industry is getting whacked.

Linthicum: The retail space is going to suffer tremendously. They already have very narrow razor-thin margins. I think we are going to see a lot of the larger retailers suffer and perhaps go away. ... Finance is obviously going to be killed for a long time, especially the banking industry. That's going to be an area that isn't going to recover very quickly from what's going on right now, but I think that manufacturing ultimately will recover and we are going to see some good growth in the year 2009-2010.

Kobielus: ... There is going to be a decline in the journalist population, essentially a migration toward the extremes.

What I’m very happy to see is that, as the financial base and the business model for journalism businesses evaporate, you are seeing more and more citizen journalists taking up more of the load. People are reading more blogs. They are not buying newspapers.


On the Consequences

Linthicum: [IT buyers] are looking to morph the way in which they consume IT. ... They plan to implement strategic technology into their enterprise [in a way that] increases in interest but decreases in cost. In other words, people are going to move into more efficient technologies. They are going to look at a little bit more at cloud computing and other ways to save money and start moving aggressively in those directions.

... Instead of having a huge Microsoft infrastructure just for e-mail and calendar-sharing in groupware, and those sorts of things, moving to things that are in the cloud. This is obviously Google, but there is also a ton of other guys that are offering some pretty good technology -- information-sharing using similar infrastructure. They’ll start outsourcing that, versus maintaining all these data centers that are just dealing with e-mail and communication between people within the company.

Gardner: Where does this put Microsoft?

Baer: ... They are in a transition. ... In the short term, I think it's going to hurt their business, because clearly take-up of Vista has pretty well flagged, especially on the corporate side. Obviously they are trying to cultivate the Software Plus Services side, but that business is still very early in its cycle.

Linthicum: You can go off-premises with lots of stuff and the cost is always cheaper, and also it allows you to upgrade and innovate into new technological areas you haven’t driven before.

Next would be tactical software-as-a-service (SaaS) applications. Take some of the HR processing, which is driven by some kind of in-house system in the data center, and outsource that to the dozen or so SaaS vendors who are offering HR processing. That's kind of a lightweight business process.

Then, the next generation is even more risky, and I don’t see a ton of guys doing that initially. It involves some of the core business processes, and getting into an SOA kind of an initiative. Re-automating those, but also outsourcing a tremendous number that haven’t been done before for the primary reason of cost saving.

Kobielus: I definitely see the economic downturn expanding the footprint, as it were, for the cloud in data warehousing, where data warehouses are becoming ever larger, into the hundreds of terabytes and now into the petabyte range.

I’m seeing an upsurge in the number of start-ups and data warehousing vendors that now have cloud-based offerings. ... In other words, where there is a capital expenditure crunch or a budget crunch, and users can’t afford to pay the millions of dollars to bring one of these petabyte-scale data warehouses in house, they are going to go outside to the likes of a 1010data, or use Amazon EC2 to aggregate and persist these huge datasets.

They can do very complex analyses and also run a greater degree of their data mining and predictive analytics algorithms in that very cloud. It just saves them money, and it's not a huge capital expenditure. It's a pay-as-you-go kind of thing. I think that's going to be the trend and those vendors who are already out there could be the major beneficiaries of this current economic crunch.
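The pay-as-you-go argument above can be made concrete with a back-of-the-envelope comparison. The figures and function below are invented purely for illustration; they are not vendor pricing.

```python
def breakeven_months(capex, monthly_opex_onprem, monthly_cloud_fee):
    """Months until an up-front warehouse purchase beats renting cloud capacity.

    Returns None if the cloud fee never exceeds the ongoing on-premises
    operating cost (in which case the cloud is always the cheaper path).
    """
    if monthly_cloud_fee <= monthly_opex_onprem:
        return None
    # Each month, the cloud costs this much more than running the owned gear.
    extra_per_month = monthly_cloud_fee - monthly_opex_onprem
    # The up-front outlay is recovered after this many months.
    return capex / extra_per_month

# Illustrative figures only: a $2M warehouse costing $20K/month to operate,
# versus a $70K/month hosted offering with no capital outlay.
months = breakeven_months(2_000_000, 20_000, 70_000)
print(round(months, 1))  # 40.0
```

Under these made-up numbers the up-front purchase only pays off after more than three years, which is the crux of the argument: in a capital crunch, deferring that outlay is worth a higher monthly fee.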

Baer: ... In times like these, obviously you have changing economic conditions, changing in a very unpredictable manner. On the other hand, the financial crunch and the credit crunch is going to restrict the amount of resources you have at your disposal. So, you’re basically going to look very opportunistically. You are going to look at, let's say, the low-hanging fruit that will give you the greatest gain in savings or a way to respond to the market in a more agile manner.

... You won’t necessarily do a global, top-down, enterprise-architectural SOA transformation, if you haven't done SOA already. But, opportunistically, if you are trying to take advantage of some of these cloud-based services to start doing mining on a more massive scale, while at the same time trying to lower your risk, it will involve certain applications or data sources that you may have. You may need to conduct a transformation there, where you will implement more flexible architectures, a data SOA architecture.

But you will do it opportunistically in these tactical areas, where you can take advantage of services in the cloud that give you the advantages of the transformation to solve the problem you need to deal with, and at the same time, minimizing your risk.

Kobielus: ... The financial vertical and the government vertical are becoming overlapped. There is a degree of nationalization already taking place. The government is taking back Fannie Mae and Freddie Mac. I think they have taken over AIG, but all around the world, you hear governments, especially in Europe, saying, “Hey, we need to re-nationalize or, to some degree, exert tighter control over the financial vertical.” I think this is happening everywhere in the world.

What we’re already seeing is that the government vertical, as they have indicated, will continue to grow, because it's going to exercise much greater oversight and equity positions within the financial vertical. I think the early part of this decade is a prelude to what we’re going to see in even greater abundance in the next 10 years.

After the whole Enron fiasco, with Sarbanes-Oxley and so forth, we saw the growth of this market and this technology called governance, risk management, and compliance (GRC) to exert tighter control over the financials of private enterprise, and bring greater transparency.

I think we are going to see the government now exert ever tighter GRC reins over the financial sector, to an unprecedented degree, because we now have government actually owning or controlling a number of the key firms in that space. So, the whole GRC sector is in an embryonic stage. There are a number of vendors like SAP and Oracle who have taken sort of a leading-edge position in that area. That will expand greatly, and we are going to see more of these risk dashboards and controls being implemented in the context of BI and the data warehousing investments that enterprises have already made.

In terms of the horizontals, the GRC sector will come into its own, and it will primarily be the driver. It will start with the financials, and then spread around the world. All governments will enforce the use of this kind of technology.

Baer: ... In the case of governance, I don't know if I would call it “opportunistic,” but it is an area in which you do not have an option as to whether you comply or not. Therefore, the only economic way to provide all the information and to do all the audits without having to rip apart all of your existing back-end infrastructure is through a services layer on top of all that.

Maybe I can come up with a cheap buzzword here, a buzz-line or a tag-line, such as “Son of SOX,” for what's going to become a changing regulatory environment. You’ll need a governance layer that can contend with changes in this moving target. Obviously, the only feasible way, from an architectural standpoint, to deal with that is a flexible architecture, and that's essentially what SOA is.

Linthicum: I think this is a great time to do a startup. Number one, VCs be damned at this point. You don't need their money at all, just some angel investors to invest in some very minute infrastructure. With cloud computing out there and the number of things you can do from a marketing, application developer's, and outsourcing perspective, you can basically get a technology company up and running -- and profitable -- probably for the least real cost we've seen in years. It's a great time for people who are innovative, able, and resourceful to get out there and start technology companies.

... Now is a great time for small innovative new startups to get out there and help create new spaces, such as Web 2.0, and I think there are a number of SOA problems that need solving as well. I'd love to see some startups get out there and take those problems on.
Read a full transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Charter sponsor: Active Endpoints.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Tuesday, October 14, 2008

Cast Iron takes 'integration as a service' to cloud-based or on-premises deployment

Cast Iron Systems is striking a new pose by putting template-driven integration services in the cloud, offering customers a choice between cloud-based and on-premises "integration as a service."

The new cloud-based solution from the Mountain View, Calif. company will allow enterprises to rapidly connect software-as-a-service (SaaS) solutions with other on-demand and on-premises applications.

This ability to integrate applications across hybrid deployment types is critical as enterprises increasingly move to cloud-based and SaaS models. More often than not, applications will come from several sources, and so integration becomes the enabler of the model's ongoing use and success.

At the heart of the Cast Iron integration service is a cloud-based library of template integration processes (TIPs) for the most common SaaS business processes, based on Cast Iron’s experience with thousands of customer integrations.

For example, if customers need to integrate two SaaS applications, they can search the TIPs library, choose the TIP that matches their scenario, and deploy it to the cloud. Their SaaS-to-SaaS integration project can go live quickly, rather than taking weeks, or even months, to develop with custom code. Also, SaaS integrations can now be monitored from anywhere, anytime, using the Cast Iron Cloud. Wide sharing of templates is encouraged.

In addition, for companies that want to customize TIPs based on their specific requirements, Cast Iron provides a self-guided wizard similar to the simple wizard-based experience in popular products like Intuit TurboTax. Users answer a few questions based on the specific situation, and the integration process is automatically customized to expedite SaaS integration and adoption.
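The template-plus-wizard idea can be reduced to a small sketch. Everything below is hypothetical, a TIP boiled down to a reusable field mapping that wizard answers customize; Cast Iron's actual templates are far richer than this.

```python
# A template integration process (TIP) reduced to its essence: a reusable
# field mapping between a source and a target application. Names invented.
BASE_TIP = {
    "source": "crm_app",
    "target": "erp_app",
    "field_map": {"account_name": "customer", "account_id": "customer_id"},
}

def customize_tip(tip, answers):
    """Apply wizard-style answers (extra field mappings) to a base TIP,
    returning a new customized copy and leaving the template untouched."""
    custom = {**tip, "field_map": dict(tip["field_map"])}
    custom["field_map"].update(answers.get("extra_fields", {}))
    return custom

def run_integration(tip, record):
    """Translate one source record into the target application's schema."""
    return {dst: record[src] for src, dst in tip["field_map"].items() if src in record}

# A customer whose wizard answers add a region mapping to the stock template.
tip = customize_tip(BASE_TIP, {"extra_fields": {"region": "sales_region"}})
print(run_integration(tip, {"account_name": "Acme", "account_id": 7, "region": "West"}))
# {'customer': 'Acme', 'customer_id': 7, 'sales_region': 'West'}
```

The design point the sketch illustrates is that the template stays shared and reusable; only a thin layer of answers differs per customer.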

The integration services can be accessed via the Cast Iron Cloud, through on-premises deployments, and via both physical and virtual appliances, says Cast Iron.

Curt Monash of DBMS2 says that the move by Cast Iron isn’t the first such offering, but seems to be the most comprehensive:
The most comprehensive integration-as-a-service story I’ve heard may be the one Cast Iron Systems is rolling out. Cast Iron is hosting with OpSource any integration you can get in the Cast Iron appliance. To emphasize this, pricing is identical to that of the rental option for the appliance ($1K/month in the simplest two-endpoint cases), and customers are encouraged to switch between appliance and cloud usage as they see fit.
At the same time, Cast Iron announced a SaaS partner program, which it calls Powered by Cast Iron. This is designed to provide integration solutions for SaaS independent software vendors, value-added resellers, system integrators, and OEMs by combining leading products, service methodologies, and customized sales and marketing strategies.

Among the SaaS providers already participating in the program are Gearworks, Intelenex, Serene Corp., Taleo, and Xactly.

Monday, October 6, 2008

With Systinet 3.0, HP broadens SOA governance role to encompass services lifecycle, business processes, IT service management

Hewlett-Packard trotted out the new HP SOA Systinet 3.0 registry today, capping a year of announcements that create a service-oriented architecture (SOA) lifecycle portfolio and extend the governance function broadly -- a cradle-to-grave approach spanning from design time to run time and all the way up to project portfolio management (PPM).

The newest market-leading Systinet UDDI registry forms the cockpit for managing not only services but, with the newly added Business Process Execution Language (BPEL) support, business processes, too. HP plans to push this master-management value further into IT operations and IT service management, as well as into a PPM role for the registry.

HP SOA Systinet 3.0 is designed with broadening the use of SOA governance, and IT governance, in mind. HP Software sees SOA moving toward more enterprise-wide deployment. To get there, the role of the registry itself needs to expand. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Brad Shimmin at Current Analysis has a comprehensive write-up on the earlier announcements and HP's direction for the product.

The new SOA infrastructure component captures more than UDDI information; it encompasses best practices and CMDB information, and sets the stage for a wider "culture of governance" to emerge in enterprises, said Kelly Emo, SOA product marketing manager at HP Software.

If you've been following HP's acquisitions and development efforts of the past five years, you'll see a distinct pattern of putting the pieces together for a total or master management capability. The goal is not simply putting all the management data in a common repository, but of elevating the visibility into management across more aspects of IT and business processes.

That visibility and the access to the right systems in the right business context then provides the means to further automate IT and SOA activities, to capture best practices and instantiate them back into how IT performs, with repeatability and scale.

This latest product release caps a series of significant acquisitions by HP, from Systinet to Opsware. The cradle to grave story of comprehensive IT management and automation is not yet complete, but the strategy is clear. And the pivotal role of the registry is also clear.

The movement is to expand SOA governance, but perhaps more importantly, expand governance in general across more of what IT touches. Rules, roles, business context, policy, development-to-deployment lifecycles, operational efficiency, projects and services -- all need to be brought into a contextual whole. Not by a common product set, but via standards, technology provider inclusion, and with a methodological and cultural commonality emphasis. There really isn't another place to try and find this common framework for stitching together the disparate aspects of IT management -- the SOA registry is as good as we have nowadays.

Of course, the trends in the market make a move toward comprehensive IT service management via automation -- not reactive and disjointed manual stop-gaps -- imperative. As enterprises take up virtualization, cloud computing, SOA, master data management, and such IT shared services approaches typified by ITIL 3.0, then the scale, complexity and range of inter-dependent IT assets needs a better master.

HP is placing a large bet that the HP SOA Systinet 3.0 registry will fill the roles of eyes, ears, and execution coordinator for more of what makes IT tick.

More information on the release.

WSO2 eases enterprise data availability for SOA access, consumption

A new data services offering from WSO2 allows a database administrator (DBA) or anyone with knowledge of SQL to access enterprise data and expose it as services and operations through a Web services application programming interface (API).

WSO2 Data Services, the latest open-source product from the Mountain View, Calif. company, helps DBAs and programmers contribute to a company’s service-oriented architecture (SOA) by creating WS-* style Web services and REST-style Web resources based on enterprise data. [Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.]

Users can enter queries and map them into services and operations. Once the query or stored procedure has been exposed as a service, it can be accessed across the network as a service or Web resource.

In its initial release, the application supports access to data stored in relational databases such as Oracle, MySQL and IBM DB2, as well as the comma-separated values (CSV) file format, and Excel spreadsheets. It allows users to authenticate, encrypt and sign services using the WS-Security and HTTP security standards. In addition, support for the WS-ReliableMessaging standard provides enterprise-level reliability.

Current Analysis's Brad Shimmin has some good points on the release (log in required).

Features of WSO2 Data Services 1.0 include:
  • Data aggregation, which allows administrators to create services that aggregate data from multiple data sources.
  • Wizards for easy configuration.
  • XML configuration file format.
  • A "try-it" tool that lets users test the data services they have created within the Data Services console.
  • Dual REST and WS-* support. REST resources access data using a unique URL for each record; WS-* services use typical Web service access to expose data.
  • Built-in caching to eliminate the system overhead of repeatedly returning the same XML response to clients.
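The core idea above, mapping a SQL query to a service that returns structured results, can be sketched in a few lines. This is not WSO2's configuration format or API; the table, names, and XML shape below are invented for illustration, with SQLite standing in for an enterprise data source.

```python
import sqlite3
from xml.etree import ElementTree as ET

def rows_to_xml(rows, columns, wrapper="Employees", row_name="Employee"):
    """Serialize query rows into a simple XML document, one element per row."""
    root = ET.Element(wrapper)
    for row in rows:
        item = ET.SubElement(root, row_name)
        for col, val in zip(columns, row):
            ET.SubElement(item, col).text = str(val)
    return ET.tostring(root, encoding="unicode")

def run_data_service(conn, sql):
    """Execute a configured query and expose the result set as XML,
    the kind of payload a data service would return to its callers."""
    cur = conn.execute(sql)
    columns = [d[0] for d in cur.description]
    return rows_to_xml(cur.fetchall(), columns)

# Demo with an in-memory database standing in for an enterprise system.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, dept TEXT)")
conn.execute("INSERT INTO employees VALUES ('Ada', 'Engineering')")
print(run_data_service(conn, "SELECT name, dept FROM employees"))
```

In a real data-services product the same mapping would sit behind a WS-* endpoint or a per-record REST URL, with security and caching layered on top, but the DBA's contribution remains just the query and the result mapping.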
The new data services product will be available for download beginning today from the WSO2 Web site. As an open-source product there are no licensing or subscription fees, although service and support options are available.

Thursday, October 2, 2008

BriefingsDirect Insights analysts examine HP-Oracle Exadata, 'extreme' BI, virtualization and cloud computing news

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsors: Active Endpoints, Hewlett-Packard.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Welcome to the latest BriefingsDirect Insights Edition, Vol. 30, a periodic discussion and dissection of software, services, SOA and compute cloud-related news and events, with a panel of IT analysts.

In this episode, recorded Sept. 26, 2008, our experts examine the HP-Oracle announcements at Oracle OpenWorld, cloud computing and "on-premises" clouds, and recent virtualization news from VMware, HP, Red Hat and Citrix.

Please join noted IT industry analysts and experts Joe McKendrick, an independent analyst and ZDNet blogger; Brad Shimmin, principal analyst at Current Analysis; Jim Kobielus, senior analyst at Forrester Research; and Dave Linthicum, independent SOA consultant. Our discussion is hosted and moderated by yours truly.

Here are some excerpts:
Oracle announced the release, in partnership with HP, of a very high-end data warehousing appliance. They may not use the word "appliance," but that's in fact what it is. It's called the HP Oracle Database Machine. It encompasses and includes the Oracle Exadata Storage Server, which is a grid-based storage server.

What Oracle and HP have essentially done is take a page from the Netezza book, because that is, of course, the feature of the Netezza performance system. What they did essentially is they also shot across Teradata's bow, because this is Oracle's petabyte-scale data-warehousing platform.

I was shocked, shocked, absolutely simply shocked. This is because historically Oracle has strayed so far away from the appliance market. I am glad to see this happening. ... Now if only they would release parts of their middleware as an appliance, I would be very happy.

Larry Ellison indicated that they seem to have some plans for that. They really resisted details -- but they seem to have some plans to "appliance-ize," if that's the word, more and more of the Oracle Fusion Middleware stack.

It almost seems now that Oracle has anointed HP at some level as a preferred hardware supplier on storage, if not also other aspects of hardware. What does that mean for EMC and some of the other storage hardware providers?

I think that all of those relationships will come under strain from this. There is no question about that. I think there are going to be a lot of far-ranging ripples from this relationship that will change the way the market functions.

It certainly moves the business intelligence (BI) arena forward. ... Now there is a trend emerging. I am sure Oracle has an eye on this as well. It's toward open source. We are seeing more open source in data warehouses too. This is open source at the warehouse level itself, at the database level itself. [Sun Microsystems'] MySQL, for example, has been pointing in this direction, PostgreSQL as well. [And there's Ingres.]

Now with Oracle and HP cooperating, why shouldn't we expect Sun to come out with something quite similar, but with MySQL as the database, and their [Sparc/UltraSparc] processing, and their rack, cooling and InfiniBand, and of course, their storage? If Sun does that then IBM will certainly come out with something around DB2.

There is an emphasis on simplifying data warehousing, making data warehousing simple for the masses. Microsoft, love them or hate them, has been doing a lot of work in this area by increasing the simplicity of its data warehouse and making it available at more of a commodity level for the small to medium size business space.

One of the other important outcomes, from my point of view, this week at Oracle OpenWorld was the fact that Oracle, now in conjunction with Amazon's Elastic Compute Cloud, has an Oracle cloud -- the existing Amazon cloud can take Oracle database licenses. ... Using tools that Oracle is providing, they can move their data to back it up, or move databases entirely to be persistent in the cloud, in Amazon's S3 service.

I think that this is one step in the direction that, in essence, we're going back in time a bit, moving back into the time-sharing space. A lot of things are going to be pushed back out into the universe through economies of scale, and also through the value of communities. It can just be a much cheaper and more cost-effective way of doing it. I think it's going to be a huge push in the next two years.

I think that Oracle is going to have a cloud offering, IBM is going to have a cloud offering, Sun is going to have a cloud offering, and it's going to be the big talk in the industry over the next two or three years. I think they are just going to get out there and fight it out.

I think you are going to have a number of start-ups, too. They are going to have huge cloud offerings as well. They are going to compete with the big guys. ... Quite frankly, I think, maybe the more agile, smaller companies may win that war.

These vendors are basically tripping over themselves and rushing out to the market, way before these private clouds have even established themselves. Yet the vendors are declaring that they have the infrastructure and the approach to do it. It sort of reminds me of a platform, or even operating system, land grab -- that getting there first and establishing some of the effective standards and coming up with industry-common implementations gives them an opportunity to at some level or format create the de facto portability means.

As we go forward, I think that's the destination. If you look at how everything is going, I think everything is going to be pushed up into the cloud. People are basically going to have virtual platforms in the cloud, and that's how they are going to drive it. Just from a cost standpoint, everything we just discussed, the advantages are going to be for those who get there first.

I think that very much like the early adopters of the Web, back in the 1990s, this is going to be the same kind of a land grab, and the same kind of land rush that's going to occur. Ultimately you are going to find 60 percent to 80 percent of the business processes over the next 10 years are going to be outsourced.
Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsors: Active Endpoints, Hewlett-Packard.

Read a full transcript of the discussion.

Special offer: Download a free, supported 30-day trial of Active Endpoints' ActiveVOS at www.activevos.com/insight.

Tuesday, September 30, 2008

Improved insights and analysis from IT systems logs help reduce complexity risks from virtualization

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Read complete transcript of the discussion.

Virtualization has certainly taken off, but less attention gets directed to how to better manage virtualization, to gain better security using virtualization techniques, and also to find methods for compliance and regulation of virtualized environments -- but without the pitfalls of complexity and confusion.

We seem to be at a tipping point in terms of everyone doing virtualization, or wanting to do it, or wanting to do even more. IT managers experimenting with virtualization are seeking to reduce costs, to improve the efficiency with which they use their assets, or to address issues they might have with energy cost, energy capacity, or sometimes even space capacity in the data center. But the paybacks from virtualization can be lost or diminished when management does not keep pace with complexity. Poorly run or mismanaged virtualized environments are a huge missed opportunity.

Now's the time when virtualization best practices are being formed. The ways to control and fully exploit virtualization are in demand, along with the tools to gain analysis and insights into how systems are performing in a dynamic, virtualized state.

To help learn about new ways that systems log tools and analysis are aiding the ramp-up to virtualization use, I recently spoke with Charu Chaubal, senior architect for technical marketing at VMware; Chris Hoff, chief security architect at Unisys; and Dr. Anton Chuvakin, chief logging evangelist and a security expert at LogLogic.

Here are some excerpts:
The reasons people are virtualizing are cost -- cost savings and then cost avoidance -- usually seconded by agility and flexibility. It’s also about being able, as an IT organization, to serve your constituent customers in a manner that is more in line with the way the business functions, which is, in many cases, quite a fast pace -- with the need to be flexible.

Adding virtualization to the technology that people use in such a massive way as it's occurring now brings up the challenges of how do we know what happens in those environments. Is there anybody trying to abuse them, just use them, or use them inappropriately? Is there a lack of auditability and control in those environments? Logs are definitely one of the ways, or I would say a primary way, of gaining that visibility for most IT compliance, and virtualization is no exception.

As a result, as people deploy VMware and applications on virtual platforms, the challenge is knowing what actually happens on those platforms, in those virtual machines (VMs), and with the applications. Logging, and LogLogic, play a critical role not only in collecting those bits and pieces, but in creating a big picture of that activity across the organization.

Virtualization definitely solves some problems, but at the same time it brings in new things that people really aren't used to dealing with. For example, it used to be that if you monitored a server, you knew where it was, how to monitor it, and what applications ran there.

In virtual environments that's still true, but there's another layer: the server may go somewhere else, and you have to monitor where it was, where it is now, and basically keep monitoring as servers come up and down, disappear, and get moved.
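That kind of lifecycle tracking is exactly what log analysis enables. As a minimal sketch -- the log format, event names, and host names below are hypothetical, not actual VMware or LogLogic formats -- replaying lifecycle events from logs lets you answer "where is this VM right now?":

```python
import re

# Hypothetical log lines recording VM lifecycle events. Real VMware or
# LogLogic formats will differ; this only illustrates the tracking idea.
LOG_LINES = [
    "2008-10-01T09:00:00 vm=web01 event=power_on host=esx-a",
    "2008-10-01T12:30:00 vm=web01 event=migrate host=esx-b",
    "2008-10-02T08:15:00 vm=db01 event=power_on host=esx-a",
    "2008-10-02T18:00:00 vm=web01 event=power_off host=esx-b",
]

PATTERN = re.compile(r"vm=(\S+) event=(\S+) host=(\S+)")

def current_locations(lines):
    """Replay lifecycle events to find where each VM runs (None if powered off)."""
    where = {}
    for line in lines:
        match = PATTERN.search(line)
        if not match:
            continue  # skip lines that aren't lifecycle events
        vm, event, host = match.groups()
        if event in ("power_on", "migrate"):
            where[vm] = host
        elif event == "power_off":
            where[vm] = None
    return where

print(current_locations(LOG_LINES))
# {'web01': None, 'db01': 'esx-a'}
```

The same replay approach extends naturally to auditing -- keeping the full event history per VM rather than only the latest location.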

The benefits of virtualization today ... are even more exciting and interesting. They're going to fundamentally continue to change what we do and how we do it as we move forward. Visibility is very important, but understanding the organizational and operational impacts that real-time infrastructure and virtualization bring is going to be an interesting challenge for folks to get their hands around.

When you migrate from a physical to a virtual infrastructure, you still have servers, applications running on those servers, and people managing those servers. That means you still need the same audit and security technologies you use today. You shouldn't stop. You shouldn't throw away your firewalls. You shouldn't throw away your log-analysis tool, because you still have servers and applications.

They might be easier to monitor in virtual environments; sometimes they might be harder. But even though virtualization does change a few things, you shouldn't abandon what's working for you in the physical environment. The fact that you have applications and servers serving business purposes shouldn't stop you from continuing the useful things you're doing now.

Now, an additional layer on top of what you already have adds the new things that come with virtualization. The fact that a server might be there one day and gone the next -- or not be there one day, then be built up, used for a while, and removed -- definitely brings new challenges to security monitoring and security auditing, to figuring out who did what, where.

The customers understood that they have to collect the logs from the virtual platforms, and that LogLogic has the ability to collect any type of log. They started with a log-collection effort, so they could always go back and say, "We've got this data somewhere, and you can go and investigate it."

In parallel with those collection efforts, we at LogLogic also built up a package of content to analyze the logs -- reports and searches to help customers go through the data. So it was really about building analytic content to make sense of the data, once a customer's collection effort included logs from the virtual platform.

All the benefits we get out of virtualization today are just the beginning -- a springboard for what we're going to see in terms of automation, which is great. But we keep hitting the same problem set as we pogo along this continuum: trying hard to unite this with the notion of governance, and making sure that just because you can, doesn't mean you should. In certain instances, business processes and policies might prescribe that you not do things that would otherwise be harmful from your perspective.

It's that delicate balance of security versus operational agility that we need to get much better at, and much more intelligent about, as we use virtualization as an enabler. That's going to bring some really interesting and challenging things to the forefront in the way IT operates -- benefits, and then differences.
Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: LogLogic.

Monday, September 29, 2008

Oracle and HP explain history, role and future for new Exadata Server and Database Machine

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Read complete transcript of the discussion.

The sidewalks were still jammed around San Francisco's Moscone Center and the wonderment of an Oracle hardware announcement was still palpable across the IT infrastructure universe late last week. I sat down with two executives, from Hewlett-Packard and Oracle, to get the early deep-dive briefing on the duo's Exadata appliance shocker.

Oracle Chairman and CEO Larry Ellison caught the Oracle OpenWorld conference audience by surprise the day before by rolling out the Exadata line of two hardware-software configurations. The integrated servers re-architect the relationship between Oracle's 11g database and high-performance storage. Exadata, in essence, gives new meaning to "attached" storage for Oracle databases. It mimics the close pairing of data and logic execution that such cloud providers as Google use with MapReduce technologies. Ellison referred to the storage servers as "programmable."

Exadata also re-architects the HP-Oracle relationship, making HP an Oracle storage partner extraordinaire -- thereby upsetting the status quo of the worldwide IT storage, database, and data-warehouse markets.

Furthermore, Exadata leverages parallelism and high-performance industry-standard hardware to bring "extreme business intelligence" to more enterprises, all in a neat standalone package that's forklift-ready. HP and Oracle designers describe the scale as beyond 10 terabytes and into the petabyte range, with typical performance gains of 10x to 72x for the high-end Exadata "Machine."

The unveiling clearly deserves more detail and more understanding. Listen as I interview Rich Palmer, director of technology and strategy for the industry standard servers group at HP, along with Willie Hardie, vice president of Oracle database product marketing, for the inside story on Exadata.

The interview comes as part of a series of sponsored discussions with IT executives I've done from the Oracle OpenWorld conference. See the full list of podcasts and interviews.

Read complete transcript of the discussion.

Listen to the podcast. Download the podcast. Find it on iTunes/iPod. Learn more. Sponsor: Hewlett-Packard.

Greenplum pushes envelope with MapReduce and parallelism enhancements to its extreme-scale data offering

Greenplum has delivered on its promise to wrap MapReduce into the newest version of its data solutions. The announcement from the data warehousing and analytics supplier comes to a fast-changing landscape, given last week's HP-Oracle Exadata announcements.

It seems that data infrastructure vendors are rushing to the realization that older database architectures have hit a wall in terms of scale and performance. The general solution favors exploiting parallelism to the hilt and aligning database and logic functions in close proximity, while also exploiting MapReduce approaches to provide super-scale data delivery and analytics performance.

Greenplum's Database 3.2 takes on all three, but makes significant headway in embedding the MapReduce parallel-processing data-analysis technique pioneered by Google. The capability is accompanied by new tooling that extends the technology's reach. The result is Web-scale analytics and performance for enterprises and carriers -- or cloud compute data models for the masses. [Disclosure: Greenplum is a sponsor of BriefingsDirect podcasts.]

The newest offering from the San Mateo, Calif.-based Greenplum provides users new capabilities for analytics, as well as in-database compression, and programmable parallel analytic tools.

With the new functionality, users can combine SQL queries and MapReduce programs into unified tasks executed in parallel across thousands of cores. The in-database compression, Greenplum says, can increase performance and reduce storage requirements dramatically.
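Greenplum's actual interface for mixing SQL and MapReduce isn't shown here, but the underlying equivalence the announcement relies on -- a SQL aggregation expressed as map and reduce phases that can each run in parallel -- can be sketched generically. The row data and function names below are illustrative assumptions, not Greenplum's API:

```python
from collections import defaultdict

# Rows a SQL scan might produce: (region, sales). The equivalent of
# SELECT region, SUM(sales) FROM t GROUP BY region, expressed as map
# and reduce phases. (Illustrative only -- not Greenplum's interface.)
rows = [("east", 100), ("west", 50), ("east", 25), ("north", 10)]

def map_phase(rows):
    """Emit (key, value) pairs; in a real engine each segment maps its slice in parallel."""
    return [(region, sales) for region, sales in rows]

def reduce_phase(pairs):
    """Sum values per key; each reducer would handle one partition of keys."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

print(reduce_phase(map_phase(rows)))
# {'east': 125, 'west': 50, 'north': 10}
```

Because both phases operate on independent slices of data, the same program parallelizes across thousands of cores without changing its logic -- which is the property that lets SQL and MapReduce tasks share one execution framework.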

The programmable analytics allow mathematicians and statisticians to use the statistical language R or build custom functions using linear algebra and machine learning primitives and run them in parallel directly against the database.

Greenplum's massively parallel, shared-nothing architecture fully utilizes each core, with linear scalability to thousands of processors. This means that Greenplum's open source-powered database software can scale to support the demands of petabyte data warehousing. The company's standards-based approach enables customers to build high-performance data warehousing systems on low-cost commodity hardware.
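The core of a shared-nothing design is that each row lives on exactly one segment, chosen by hashing a distribution key, so segments can scan and aggregate independently. A minimal sketch of that placement rule -- the segment count and row shapes are assumptions for illustration, not Greenplum internals:

```python
import hashlib

NUM_SEGMENTS = 4  # assumed cluster size for illustration

def segment_for(key):
    """Stable hash of the distribution key, modulo the segment count."""
    digest = hashlib.md5(str(key).encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_SEGMENTS

def distribute(rows, key_index=0):
    """Assign each row to exactly one segment based on its distribution key."""
    segments = [[] for _ in range(NUM_SEGMENTS)]
    for row in rows:
        segments[segment_for(row[key_index])].append(row)
    return segments

rows = [("cust1", 10), ("cust2", 20), ("cust1", 5)]
segments = distribute(rows)
# All rows sharing a key land on the same segment, so per-key work
# (joins, aggregations on the key) never crosses segment boundaries.
```

Linear scalability follows from this property: adding segments re-divides the key space without requiring any segment to see another segment's data.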

Database 3.2 offers a new GUI and infrastructure for monitoring database performance and usage. These seamlessly gather, store, and present comprehensive details about database usage and about current and historical query internals, down to the iterator level, making it ideal for profiling queries and managing system utilization.

Now that HP and Oracle have taken the plunge and integrated hardware and software, we can expect that other hardware makers will be seeking software partners. Obviously IBM has DB2, Sun Microsystems has MySQL, but Dell, Hitachi, EDS and a slew of other hardware and storage providers may need to respond to the HP-Oracle challenge.

On Greenplum's blog, Ben Werther, director, Professional Services & Product Management at Greenplum, says: "Oracle has been getting beat badly in the high-end warehousing space ... Once you cut through the marketing, this is really about swapping out EMC storage for HP commodity gear, taking money from EMC's pocket and putting it in Oracle's."

It will also be interesting to watch how the new bedfellows shake out: what Microsoft does with DATAllegro, what happens with Ingres, and whether Sun with MySQL can enter this higher-end data-performance echelon. This could mean that players like Greenplum and Aster Data Systems get some calling cards from a variety of suitors. The Sun-Greenplum match-up makes sense at a variety of levels.

Stay tuned. This market is clearly heating up.