Tuesday, February 16, 2010

HP ‘trims’ SharePoint web doc management risks, builds advanced workflow tools

Today’s enterprises are creating web-based content at breakneck speed. Much of this digital content becomes bona fide business records that demand document management designed with regulatory compliance and legal discovery in mind.

That’s why Hewlett-Packard (HP) recently rolled out a web-based records management solution specifically designed to help Microsoft SharePoint customers lower business risks. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Dubbed HP Total Records Information Management (TRIM) 7, the latest version of HP’s advanced records management solution aims to help organizations transparently manage Microsoft SharePoint Server records – including documents and information stored on SharePoint Server blogs, wikis, discussions, forms, calendars and workflows – in a single environment.

A Content 2.0 explosion

As HP explains it, TRIM 7 opens the door for consolidation and simplified management of stored content in multiple formats. Using HP TRIM 7, organizations can capture, search and manage physical and electronic files with complete transparency.

“The explosion in Content 2.0 blogs, wikis and discussions creates new information management challenges for organizations trying to meet an escalating set of regulations,” says Jonathan Martin, vice president and general manager of Information Management Solutions at HP. “HP TRIM allows customers to marry records management best practices and governance with dynamic collaboration platforms such as SharePoint.”


An end-to-end solution

HP TRIM 7 offers two modules to address the records management needs of SharePoint products and technologies: HP TRIM Records Management and HP TRIM Archiving.

HP TRIM Records Management aims to improve business records management via transparent access to SharePoint Server content held in HP TRIM directly from the SharePoint Server workspace.

Since the U.S. Department of Defense has awarded HP TRIM its 5015.2 v3 certification, HP notes, organizations are assured the highest levels of records management control for enterprise content. HP has also made improvements that promise faster indexing and search capabilities, along with shorter response times for legal discovery, compliance requests and audits.

Closing the records management loop, HP TRIM Archiving works to help customers lower the risk of data loss while reclaiming storage and system resources from SharePoint Server. This module can archive either specific list objects in SharePoint Server or complete SharePoint Server sites. All this means organizations can take entire SharePoint Server sites offline without losing access to the information.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Electric Cloud updates software production offerings with parallelization features

Electric Cloud advanced the software production management field today with improvements to two key products: ElectricAccelerator and ElectricCommander 3.5.

ElectricAccelerator boasts a new feature that provides parallel processing and subbuild technology. Dubbed "Electrify," the patented technology promises to speed development on private or public compute clouds by applying the benefits of parallelization to new development tools and tasks.

With Electrify, developers can conduct parallel testing or data modeling on their desktop, in a private cloud or on a dedicated server. Meanwhile, the subbuild technology works to help developers avoid unnecessary or broken builds by identifying only the components required for the current project. [Disclosure: Electric Cloud is a sponsor of BriefingsDirect podcasts.]
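The parallelization idea behind Electrify can be sketched in a few lines. This is a hedged illustration only, not Electric Cloud's implementation: it fans independent, compute-heavy tasks (stand-ins for tests or data-modeling runs) across local worker processes, the same pattern Electrify applies across a cluster or cloud. All names here are invented.

```python
from multiprocessing import Pool

def run_task(name):
    # Stand-in for a compute-intensive test or data-modeling task.
    checksum = sum(i * i for i in range(50_000)) % 9973
    return name, checksum

def run_parallel(tasks, workers=4):
    # Independent tasks fan out across local cores; Electrify applies
    # the same idea across machines in a private or public cloud.
    with Pool(processes=workers) as pool:
        return dict(pool.map(run_task, tasks))

if __name__ == "__main__":
    print(run_parallel(["unit", "integration", "model"]))
```

Because the tasks share no state, the wall-clock win scales with the number of workers, which is the core claim behind cluster-backed builds and tests.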

Removing production bottlenecks

“Our goal is to remove the bottlenecks in software production wherever they exist,” explains Electric Cloud CEO Mike Maciag. “ElectricAccelerator speeds Make, NMAKE, Visual Studio, and Ant builds by 10-20x. With Electrify we are broadening the technology to enable these benefits for virtually any compute-intensive development task.”

Maciag offers the example of teams standardizing on tools like SCons. With Electrify, he says, those teams can leverage the benefits of centralization to speed builds, reduce hardware costs and curb server sprawl. The technology also makes way for developers to support multiple configurations through ElectricAccelerator’s virtualization capabilities. All this means more control for developers and fewer headaches for IT.

Commanding the cloud

Electric Cloud's ElectricCommander 3.5 offers a customizable and extensible version of its tool for automating and managing the build-test-deploy process in software development. Developers can customize ElectricCommander 3.5 to extract and display data from the defect tracker along with relevant build and test results. This lets build managers track the status of each fix and receive notification when QA has resolved the issue.
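The defect-status tracking described above can be sketched generically. This is a hypothetical illustration, not the ElectricCommander API: it joins defect records to the build carrying the candidate fix and surfaces the ones QA has resolved since the last notification pass.

```python
from dataclasses import dataclass

@dataclass
class Defect:
    defect_id: str
    status: str       # e.g. "open", "in-test", "resolved"
    fix_build: str    # build that contained the candidate fix

def newly_resolved(defects, already_notified):
    # Surface fixes QA has resolved that the build manager
    # has not yet been notified about.
    return [d for d in defects
            if d.status == "resolved" and d.defect_id not in already_notified]

defects = [
    Defect("BUG-101", "resolved", "build-204"),
    Defect("BUG-102", "in-test",  "build-205"),
    Defect("BUG-103", "resolved", "build-205"),
]
print([d.defect_id for d in newly_resolved(defects, {"BUG-101"})])  # ['BUG-103']
```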

ElectricCommander 3.5 also offers user interface (UI) customization that lets development teams or managers build a custom screen for creating and executing a build or test request with the appropriate parameters.

In other words, the UI is purpose-built for the developer’s role or environment. The new version also automates and manages what Electric Cloud calls “error-prone, manual pieces of the build-test-deploy process” to make software production faster and more efficient.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Friday, February 12, 2010

UShareSoft rolls out on-demand application delivery platform

UShareSoft is working its way deeper into the cloud this week with two new software-as-a-service (SaaS) products that promise to make the lives of IT admins a little easier by cutting engineering costs and speeding time to value.

The UForge Appliance Factory helps IT pros assemble software appliances, while the Open Appliance Studio serves as a framework for automatically deploying solutions in the field. The two products are designed to work hand in hand, and UShareSoft hopes they will become the means of choice for building and assembling optimized technology stacks for virtual data center and cloud offerings.

Predictable creation and cloning

UForge Appliance Factory works to let IT professionals predictably create, re-use, clone and maintain a complete software stack. UShareSoft promises its tools will simplify the delivery of software to physical, virtualized and cloud environments, including Amazon and VMware vCenter, for scale-up and scale-out computing.

France Telecom is among the customers currently testing the new products. UShareSoft expects customers to see advantages such as independence of image format. The company also expects its products to give organizations the ability to control their own software and governance processes.

UShareSoft’s automated process


How will UForge Appliance Factory deliver these benefits? By automating more of the process and relying less on manual tasks to create optimized stacks.

This approach, the company says, helps reduce errors and saves time. For example, UForge Appliance Factory offers one-click generation to many of the industry-standard image formats, including Amazon AMI. The Appliance Factory also offers granular construction, cloning and maintenance tools, along with a catalogue of more than 60 best-of-breed open-source projects.

Open Appliance Studio aims to take it one step further by letting IT admins turn an existing software stack into a vApp. The goal is to help independent software vendors (ISVs) better differentiate their products from the competition by giving them the ability to deliver self-contained, multi-node offerings that can be deployed in minutes to any cloud.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Thursday, February 11, 2010

Smart Grid for data centers better manages electricity to slash IT energy spending, frees up wasted capacity

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Nowadays, CIOs need to both cut costs and increase performance. Energy has never been more important in working toward this productivity advantage.

It's now time for IT leaders to gain control over energy use -- and misuse -- in enterprise data centers. More often than not, very little energy capacity analysis and planning is being done on data centers that are five years old or older. Even newer data centers don’t always gather and analyze the available energy data being created amid all of the components.

Finally, smarter, more comprehensive energy planning tools and processes are being directed at this problem. It requires a lifecycle approach that moves data centers toward fuller automation.

And so automation software for capacity planning and monitoring has been newly designed and improved to best match long-term energy needs and resources in ways that cut total costs, while gaining the available capacity from old and new data centers.

Such data gathering, analysis and planning can break the inefficiency cycle that plagues many data centers, where hotspots and cooling needs are mismatched, and underused, unneeded servers burn energy needlessly. These so-called Smart Grid solutions jointly cut data center energy costs, reduce carbon emissions, and can dramatically free up capacity from overburdened or inefficient infrastructure.

By gaining far more control over energy use and misuse, solutions such as Hewlett Packard's (HP) Smart Grid for Data Center can increase capacity from existing facilities by 30-50 percent.

This podcast features two executives from HP to delve more deeply into the notion of Smart Grid for Data Center. Now join Doug Oathout, Vice President of Green IT Energy Servers and Storage at HP, and John Bennett, Worldwide Director of Data Center Transformation Solutions at HP. The discussion is moderated by Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Bennett: Data center transformation (DCT) is focused on three core concepts, and energy is a key focus for all of them to work. The drivers behind data center transformation are customers who are trying to reduce their overall IT spending, either flowing it to the bottom line or, in most cases, trying to shift that spending away from management and maintenance and onto business projects.

We also see increasing mandates to improve sustainability. It might be expressed as energy efficiency in handling energy costs more effectively or addressing green IT.

DCT is really about helping customers build out a data center strategy and an infrastructure strategy that is aligned to their business plans, goals, and objectives. That infrastructure might be a traditional shared infrastructure model. It might be a fabric infrastructure model, of which HP’s converged infrastructure is probably the best and most complete example in the marketplace today. And it may indeed mean moving to a private cloud or, as I believe, some combination of the above for a lot of customers.

The secret is doing so through an integrated roadmap of data-center projects, like consolidation, business continuity, energy, and such technology initiatives as virtualization and automation.

Problem area

Energy has definitely been a major issue for data-center customers over the past several years. The increased computing capability and demand has increased the power needed in the data center. Many data centers today weren’t designed for modern energy consumption requirements. Even data centers that were designed five years ago are running out of power, as they move to these dense infrastructures. Of course, older facilities are even further challenged. So, customers can address energy by looking at their facilities.

Increasingly, we're finding that we need to look at management -- managing the infrastructure and managing the facilities in order to address the energy cost issues and the increasing role of regulation and to manage energy related risk in the data center.

That brings us not only to energy as a key initiative in DCT, but on Smart Grid for Data Center as a key way of managing it effectively and dynamically.

Oathout: What we're really talking about is a problem around energy capacity in data centers. Most IT professionals or IT managers never see an energy bill from the utility. It's usually handled by the facilities team. They never really concentrate on solving the energy consumption problem.

Where problems have arisen in the past is when a facility person says that they can’t deploy the next server or storage unit, because they're out of capacity to build that new infrastructure to support a line of business. They have to build a new data center. What we're seeing now is customers starting to peel the onion back a little bit, trying to find out where the energy is going, so they can increase the life of their data center.

To date, very few clients have deployed comprehensive software strategies or facility strategies to corral this energy consumption problem. Customers are turning their focus to how much energy is being absorbed by what, and then how to get the capacity of the data center increased so they can support new workloads.

What we're seeing today is that software, hardware, and people need to come together in a process that John described in DCT, an energy audit, or energy management.

All those things need to come together, so that customers can now start taking apart their data center, from an analysis perspective, to find out where they are either over-provisioned or under-provisioned, from a capacity standpoint, so they know where all the energy is going. Then, they can then take some steps to get more capability out of their current solution or get more capability out of their installed equipment by measuring and monitoring the whole environment.

Adding resources

The concept of converged infrastructure applies to data center energy management. You can deploy a particular workload onto an IT infrastructure that is optimally designed to run efficiently, and to continue running efficiently, so that you know you're getting the most productive work from the least energy, with the most energy-efficient equipment infrastructure sitting underneath it.

As workloads grow over time, you then have the auditing capability built into the software ... so that you can add more resources to that pool to run that application. You're not over-provisioning from the start and you're not under-provisioning, but you're getting the optimal settings over time. That's what's really important for energy, as well as efficiency, as well as operating within a data center environment.

You must have tools, software, and hardware that are not only efficient, but can be optimized and run in an optimized way over a long period of time.

Collect information

The key to that is to understand where the power is going. One of the first things we recommend to a client is to look at how much power is being brought into a data center and then where is it going.

What you want to do is start collecting that information through software to find out how much power is being absorbed by the different pieces of IT equipment and associate that with the workloads that are running on them. Then, you have a better view of what you're doing and how much energy you're using.

Then, you can do some analysis and use some applications like HP SiteScope to do some performance analysis, to say, "Could I match that workload to some other platform in the infrastructure or am I running it in optimal way?"

Over time, you can migrate some of your older legacy workloads to newer, more efficient IT equipment, and thereby build up a buffer in your data center, so that you can then deploy new workloads in that same data center.

You use that software to your benefit, so that you're freeing up capacity, so that you can support the new workload that the businesses need.
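The audit loop described above, collect per-device power, associate it with workloads, then pick migration candidates, can be sketched as follows. The readings and thresholds are invented for illustration; real numbers would come from facility metering and analysis tools such as HP SiteScope.

```python
# Hypothetical per-server readings: (name, average watts, CPU utilization).
readings = [
    ("legacy-01", 450, 0.08),
    ("legacy-02", 430, 0.12),
    ("blade-01",  300, 0.55),
    ("blade-02",  310, 0.60),
]

def total_power(readings):
    # Total draw for the sampled equipment, in watts.
    return sum(watts for _, watts, _ in readings)

def migration_candidates(readings, util_threshold=0.15):
    # Servers burning significant power at low utilization are the
    # first candidates to consolidate onto newer, efficient gear.
    return [name for name, _, util in readings if util < util_threshold]

print(total_power(readings))           # 1490
print(migration_candidates(readings))  # ['legacy-01', 'legacy-02']
```

Retiring the two flagged legacy boxes frees most of their 880 watts for new workloads, which is exactly the "buffer" the interview describes.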

The energy curve today is growing at about 11 percent annually; that's the growth in the amount IT is spending on energy in a data center.



Bennett: That's really key, Doug, as a concept, because the more you do at this infrastructure level, the less you need to change the facilities themselves. Of course, the issue with facilities-related work is that it can affect both quality of service and outages and may end up costing you a pretty penny, if you have to retrofit or design new data centers.

Oathout: Smart Grid for Data Centers gives a CIO or a data-center manager a blueprint to manage the energy being consumed within their infrastructure. The first thing that we do with a Data Center Smart Grid is map out what is hooked up to electricity in the data center, everything from PDUs, UPSs, and air handlers to the IT equipment servers, networking and storage. It's really understanding how that all works together and how the whole topology comes together.

The second thing we do is visualize all the data. It's very hard to say that this server, that server, or that piece of facilities equipment uses this much power and has this kind of capacity. You really need to see the holistic picture, so you know where the energy is being used and understand where the issues are within a data center.

It's really about visualizing that data, so you can take action on it. Then, it's about setting up policies and automating those procedures to reduce the energy consumption or to manage energy consumption that you have in the data center.

Today, our servers and our storage are much more efficient than the ones we had three or four years ago, but we also add the capability to power cap a lot of the IT equipment. Not only can you get an analysis that says, "Here is how much energy is being consumed," you can actually set caps on the IT equipment that says you can’t use more than this. Not only can you monitor and manage your power envelope, you can actually get a very predictable one by capping everything in your data center.

You know exactly how much the max power is going to be for all that equipment. Therefore, you can do much better planning. You get much more efficiency out of your data center, and you get more predictable results, which is one of the things IT really strives for: meeting SLAs by delivering those predictable results, day in and day out.
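The planning benefit of capping is simple arithmetic: once every device is capped, worst-case draw is bounded by the sum of the caps. A hedged sketch with invented numbers:

```python
# Hypothetical per-device power caps, in watts.
caps = {"rack-a-servers": 8000, "rack-b-servers": 7500, "storage": 4000}

def worst_case_draw(caps):
    # With caps enforced by the equipment, total draw never exceeds this.
    return sum(caps.values())

def provisioned_power(caps, headroom=0.10):
    # Plan facility power as the capped maximum plus a chosen safety margin.
    return worst_case_draw(caps) * (1 + headroom)

print(worst_case_draw(caps))  # 19500
```

Without caps, the planner must budget for every device's nameplate maximum; with them, the budget is the (much smaller) sum of enforced limits plus headroom.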

Mapping infrastructure

So, really, Data Center Smart Grid for the infrastructure is about mapping the infrastructure. It's about visualizing it to make decisions. Then, it's about automating and capping what you’ve got, so you have more predictable results and you're managing it, so that you're not having outages, you're not having problems in your data centers, and you're meeting your SLAs.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.

Tuesday, February 9, 2010

AmberPoint finally gets acquired as Oracle fills in more remaining stack holes

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

Thanks go out to Oracle on Feb. 8 for finally putting us out of our suspense. AmberPoint was one of a dwindling group of still-standing software independents delivering run-time governance for SOA environments.

It’s a smart move for Oracle as it patches some gaps in its Enterprise Manager offering, not only in SOA runtime governance, but also with business transaction management – and potentially – better visibility to non-Oracle systems. Of course, that visibility will in part depend on the kindness of strangers as AmberPoint partners like Microsoft and Software AG might not be feeling the same degree of love going forward.

We’re surprised that AmberPoint was able to stay independent for as long as it had, because the task that it performs is simply one piece of managing the run-time. When you manage whether services are connecting, delivering the right service levels to the right consumers, ultimately you are looking at a larger problem because services do not exist on their own desert island.

Neither should runtime SOA governance. As we’ve stated again and again, it makes little sense to isolate run-time governance from IT Service Management. The good news is that with the Oracle acquisition, there are potential opportunities, not only for converging runtime SOA governance with application management, but as Oracle digests the Sun acquisition, providing full visibility down to infrastructure level.

But let’s not get ahead of ourselves here, as the emergence of a unified Oracle-on-Sun turnkey stack won’t happen overnight. And the challenge of delivering an integrated solution will be as much cultural as technical, as the jurisdictional boundary between software development and IT operations blurs. But we digress.

Nonetheless, over the past couple of years, AmberPoint itself has begun reaching out from its island of SOA runtime, as it extended its visibility to business transaction management. AmberPoint is hardly alone here, as we’ve seen a number of upstarts like AppDynamics or BlueStripe (typically formed by veterans of Wily and HP/Mercury) burrowing down into the space of instrumenting transactions from hop to hop. Transaction monitoring and optimization will become the next battleground of application performance management, and it is one that IBM, BMC, CA, HP, and Compuware are hardly likely to passively watch from the sidelines. [Disclosure: CA, HP and Compuware are sponsors of BriefingsDirect podcasts.]

Last one standing

As for whether run-time SOA governance demands a Switzerland-style independent vendor approach, that leaves it up to the last one standing, SOA Software, to fight the good fight. Until now, AmberPoint and SOA Software have competed for the affections of Microsoft; AmberPoint has offered an Express web services monitoring product that is a free plug-in for Visual Studio (a version is also available for Java); SOA Software offers extensive .NET versions of its service policy, portfolio, repository, and service manager offerings.

Nonetheless, although AmberPoint isn’t saying anything outright about the WebLogic (formerly BEA’s, now Oracle’s) share of its 300-customer installed base, that platform was first among equals when it came to R&D investment and presence. BEA previously OEM’ed the AmberPoint management platform, an arrangement that Oracle ironically discontinued; well, in this case, the story ends happily ever after. As for SOA Software, we would be surprised if this deal didn’t push it into a closer embrace with Microsoft.

Postscript: Thanks to Anne Thomas Manes for updating me on AmberPoint’s alliances. They are/were with SAP, TIBCO Software, and HP, in addition to Microsoft. Their Software AG relationship has faded in recent years. [Disclosure: TIBCO is a sponsor of BriefingsDirect podcasts.]

Of course, all this M&A activity rearranges the dance floor in interesting ways. Oracle currently OEMs HP’s Systinet as its SOA registry, an arrangement that might get awkward now that Oracle’s getting into the hardware business. That will place virtually all of AmberPoint’s relationships into question.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.