Tuesday, March 9, 2010

TIBCO rolls out Spotfire 3.1 with spotlight on predictive analytics

In a move to mainstream predictive analytics, TIBCO Software today rolled out the latest version of its Spotfire platform.

Dubbed Spotfire 3.1, the latest iteration promises a natural-language statistical experience, aiming to help anyone in an organization get fact-based answers to the questions that drive revenue.

The company says its software is not just for analytics gurus but also marketing professionals, business development managers and others who need forward-looking business intelligence in a hurry. [TIBCO Software is a sponsor of BriefingsDirect podcasts.]

"Unlike traditional business intelligence tools, which for the most part aggregate historical trends only, Spotfire 3.1 projects them forward with what-if scenarios," says Mark Lorion, vice president of marketing for TIBCO Spotfire. "Anyone in the company can ask questions on demand and our analytics will provide future predictions based on behind-the-scenes data-driven methods. Users don't have to understand the methods. They just have to ask the questions – and they get answers instantly rather than waiting days like you would with today's business intelligence (BI) tools."

Spotfire 3.1 in action

Let’s say you’re trying to promote a new product in the consumer goods market. Spotfire 3.1 lets you choose input variables based on what you suspect might be driving the advertisement response, such as price, discounts, packaged offers, age of the respondent or length of time as a customer. You would then press a button that asks, "Are these related?"

After you push that button, Spotfire 3.1 works behind the scenes to run predictive models, using analytics and statistics to compile sensitivity analysis and correlations, then return a colorful graph that shows the response rate and which factors are most closely correlated to people clicking on your advertisement.
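
To make this concrete, here is a minimal sketch of that kind of correlation screen in plain Python. It is illustrative only, not Spotfire's engine: the campaign data is invented, and the point is simply that the "Are these related?" button ultimately rests on statistics of this flavor, scoring each candidate driver by how strongly it correlates with the observed response.

    # Illustrative sketch with invented data -- not Spotfire's actual engine.
    import random

    random.seed(42)

    # Hypothetical respondent records: candidate drivers plus observed response.
    records = []
    for _ in range(500):
        price = random.uniform(5, 50)
        discount = random.uniform(0, 0.4)
        tenure_months = random.uniform(0, 60)
        age = random.uniform(18, 75)
        # Assume (for the demo) that discount and price drive the response.
        responded = 1 if discount * 2 - price / 100 + random.gauss(0, 0.2) > 0.2 else 0
        records.append({"price": price, "discount": discount,
                        "tenure_months": tenure_months, "age": age,
                        "responded": responded})

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy)

    response = [r["responded"] for r in records]
    for driver in ("price", "discount", "tenure_months", "age"):
        r = pearson([rec[driver] for rec in records], response)
        print(f"{driver:>14}: correlation with response = {r:+.2f}")

A tool like Spotfire adds the predictive models, sensitivity analysis and visualization on top, but this is the kind of question the button answers behind the scenes.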

While BI gives you historical data, the predictive analytics aspect of Spotfire 3.1 offers insights into what could happen next time you run a similar promotion. It can also help you fine-tune your promotions by targeting the customers that clicked on your ad, or offering different promotions to different audiences – and it does it almost instantly.

Lorion says that, unlike traditional BI tools or static spreadsheets, Spotfire 3.1 also includes conditional coloring plus lasso and axis marking, which allow for better analysis of patterns, clusters and correlations among sets of variables. The software's multiple-scale bar charts and combination bar-and-line plots offer analysis of unstructured, "free-dimensional" data to identify key outliers and trends in the data.

“IT organization and statistician groups aren’t able to respond quickly enough to the many questions that arise from business users, so they go to their gut,” Lorion says. “Spotfire lets you make fact-based decisions rather than gut-based decisions.”

Predictive analytics challenges

Of course, predictive analytics software is not a new concept, and Lorion admits that the predictions are only as good as the quality and breadth of the available data. But predictive analytics is gaining momentum in the enterprise marketplace.

IBM bought predictive analytics firm SPSS last July for $1.2 billion. And IDC predicts the $1.4 billion market for advanced analytics, of which predictive analytics is a subset, will grow 10 percent annually through 2011. Despite tight IT budgets, Lorion is optimistic about the space and the company’s offering.

“The economic downturn has been good for the analytics space because customers need to make reductions and predictions – but they need to be smart about it,” Lorion says. “Companies don’t want to hire PhDs to make sense of their statistics. But we need to drive awareness of our product and educate the market that the power of predictive analytics isn’t in the hands of only a couple of statisticians.”

Spotfire 3.1 works in tandem with Spotfire Application Data Services to let companies analyze data from various sources, including SAP NetWeaver BI, SAP ERP, Salesforce.com, Siebel eBusiness Applications, and the Oracle E-Business Suite.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Cast Iron launches integration platform to help pull hybrid cloud models together

In a move to tackle a persistent cloud computing challenge, Cast Iron Systems just rolled out a new platform that aims to help companies large and small securely integrate public clouds, private clouds and on-premise applications.

Dubbed OmniConnect, the cloud integration solution offers a single platform rather than multiple products or on-premise tools to accomplish cloud integrations.

Five pillars undergird OmniConnect: complete integrations; a complete cloud experience; reusability of connectivity and processes; portable, embeddable, and brandable environments; and centralized cloud management.

"Cloud application use is exploding, but just because you like Salesforce.com doesn't mean you are going to throw out SAP, Oracle or other applications you have on-premise. It's a hybrid world where companies have a combination of cloud and on-premise locations," says Chandar Pattabhiram, vice president of Channel and Product Marketing for Cast Iron Systems. "You don't maximize the value of your cloud applications unless you get all the data into it – so you need integration."

Complete integrations

Integration can get complex in a hurry with a growing number of applications in the enterprise, such as Salesforce, Google Apps, WebEx and ADP. Companies could take a do-it-yourself approach but it won't scale over time. Companies could also use an on-demand vendor for cloud-to-cloud scenarios, or hire an on-premise integration firm. Cast Iron Systems, though, is pushing OmniConnect as a better solution.

"Fifty-six percent of CIOs in a Gartner survey said they are transitioning away from the cloud because too many choices make it too difficult," Pattabhiram says. "Our new platform is meant to solve this problem by bridging the on-premise and cloud worlds. We offer complete integrations that include data migration, process integration, and UI mashup capabilities."

OmniConnect, for example, lets SaaS applications access, cleanse, and synchronize data stored in legacy systems in real time, and it completes processes such as quote-to-order, purchase-to-pay, and order-to-cash without leaving the Cast Iron OmniConnect environment. The platform can also mash up data from disparate sources and display it in a single view without taking the data out of one application and putting it into another.

Users can configure their integration processes in the cloud, run them in a multi-tenant cloud-based environment, and monitor all integrations from a single cloud-based console. And the Cast Iron Secure Connector aims to overcome data security issues by offering a secure channel that exchanges encrypted or firewalled data between enterprise applications and Cast Iron’s multi-tenant cloud service.
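
The integration pattern behind those scenarios is easy to picture in miniature. Here is a deliberately simplified Python sketch of the read-cleanse-upsert flow; the endpoints and field names are invented, and the real platform is configured graphically rather than coded:

    # Conceptual sketch only: endpoints and field names are invented stand-ins
    # for an on-premise CRM table and a SaaS contact store. The shape of the
    # flow: read records from the legacy system, cleanse them, upsert to SaaS.
    on_premise_rows = [
        {"CUST_ID": "C001", "NAME": "acme corp", "REGION": "EMEA"},
        {"CUST_ID": "C002", "NAME": "globex   ", "REGION": "AMER"},
    ]
    saas_contacts = {}  # keyed by external id

    def transform(row):
        """Map legacy field names onto the SaaS schema (the 'cleanse' step)."""
        return {"externalId": row["CUST_ID"],
                "accountName": row["NAME"].strip().title(),
                "territory": row["REGION"].lower()}

    def upsert(contact):
        """Insert-or-update in the SaaS store; a real flow would call its API."""
        saas_contacts[contact["externalId"]] = contact

    for row in on_premise_rows:
        upsert(transform(row))

    print(saas_contacts)

In the real platform the interesting work is everything around this loop -- scheduling, monitoring, error handling and the secure channel -- which is exactly what a single integration platform promises to standardize.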

Reusability, portability and management

Cast Iron also announced a new Connector Development Kit that works to streamline building connections to new applications and data sources. The kit allows IT gurus to re-use connectivity created in OmniConnect to snap in connections to public clouds, private clouds, and on-premise applications. OmniConnect also offers reusable templates of the most common processes.

Portability is another feature that Cast Iron is boasting about. The software lets users make individual integrations, or the entire OmniConnect platform, portable into any public cloud, private cloud or on-premise data center environment. Infrastructure providers can also embed and brand the platform as their own integration-as-a-service offering. ADP, Dell and Cisco are already reselling the service.

Finally, a cloud-based management console makes it possible for users to monitor multiple integrations across customer deployments in a single location. Management APIs are available for IT and SaaS providers to view the monitoring data within their private or public clouds. Cast Iron also announced support for Amazon Web Services customers through integration-as-a-service.

"Security and integration are the two biggest concerns cited in Gartner's study," says Pattabhiram. "That's why you see mega-brands partnering with us. They want to have an enterprise grade solution to help their customers adopt their cloud applications. There is significant value in having one platform rather than multiple solutions to bridge private cloud, public cloud and on-premise applications."
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, March 2, 2010

Cloud Security Alliance research defines top threats and best paths to secure cloud computing

Security. It's one of the major issues that keeps cloud computing from working its way deeper and more quickly into the enterprise IT mainstream.

But what are the potential threats around using cloud services? How can companies make sure business processes and data remain secured in the cloud? And how can CIOs accurately assess the risks and benefits of cloud adoption strategies?

Hewlett-Packard (HP) and the Cloud Security Alliance (CSA) answer these and other questions in a new research report entitled "Top Threats to Cloud Computing Report."

The report, which was highlighted during the Cloud Security Summit at the RSA conference this week, taps the knowledge of information security experts at 29 enterprises, solutions providers and consulting firms that deal with demanding and complex cloud environments. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Mastering next-gen IT

As Cloud Security Alliance Founder Jim Reavis sees it, cloud services are the next generation of IT that enterprises must master – and it's imperative that companies understand and mitigate security threats that accompany the cloud.

"The objective of this report was to not only identify those threats which are most germane to IT organizations but also help organizations understand how to proactively protect themselves," Reavis said. "This is the first deliverable in our cloud threat research initiative, which will feature regular updates to reflect participation from a greater number of experts and to keep pace with the dynamic nature of new threats."

Cloud computing abuse

The Top Threats to Cloud Computing Report shines a light on vulnerabilities that threaten to hinder cloud service offerings from reaching their full potential. HP and the Cloud Security Alliance warn companies to be aware of the abuse and nefarious use of cloud computing. The report specifically points to the Zeus botnet and InfoStealing Trojan horses as prime examples of malicious software that has compromised sensitive private resources in cloud environments.

Beyond malicious software, the report pegs sites that rely on multiple application programming interfaces (APIs) as typically representing the weakest security link. That's because one insecure API can impact a larger set of members using the evolving social Web, which presents data from disparate sources.
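
The report describes the threat rather than prescribing fixes, but one common defense against the weak-link problem is to have every API in the chain authenticate its caller, for example by signing each request with a shared secret. A minimal sketch using Python's standard library, with an invented key and payload:

    # Common mitigation sketch (not prescribed by the report): HMAC-sign every
    # request so each API in a mashup chain can authenticate its caller.
    import hashlib
    import hmac

    SHARED_SECRET = b"demo-secret-rotate-me"  # hypothetical per-client key

    def sign(payload: bytes) -> str:
        """HMAC-SHA256 signature the caller attaches to each request."""
        return hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()

    def verify(payload: bytes, signature: str) -> bool:
        """Receiving API recomputes and compares in constant time."""
        return hmac.compare_digest(sign(payload), signature)

    request_body = b'{"user": "alice", "action": "read_feed"}'
    signature = sign(request_body)

    print(verify(request_body, signature))            # True: untampered
    print(verify(b'{"user": "mallory"}', signature))  # False: forged

An API that skips this kind of check, or implements it carelessly, becomes exactly the insecure link that can expose every other member of the mashup.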

Rounding out the list of common cloud threats covered in the report are malicious insiders, shared technology vulnerabilities, data loss and leakage, and account, service and traffic hijacking.

I'll be moderating a panel in San Francisco in conjunction with RSA later this week on the very subject of cloud security with Jeremiah Grossman, founder and Chief Technology Officer of WhiteHat Security; Chris Hoff, Director of Cloud & Virtualization Solutions at Cisco Systems and a Founding Member of the CSA; and Andy Ellis, Chief Security Architect at Akamai Technologies. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

We'll be rebroadcasting the panel "live" with call-in questions and answers at noon ET on March 31. More details to come.

For now, the RSA-debuted full report is available on the CSA Web site: http://cloudsecurityalliance.org/topthreats/csathreats.v1.0.pdf.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, March 1, 2010

Open source solutions for SOA: Check your bias at the door

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

Most experienced practitioners of Service-Oriented Architecture (SOA) and Enterprise Architecture (EA), ourselves included, would assert that architecture and implementation are not interdependent. That is to say, architecture describes ways of doing things, whereas implementations are specific ways of doing those things.

Adopting a particular architecture doesn't require any particular implementation, and vice versa: implementing something a particular way doesn't imply or require any specific architecture. As such, any good architect should know that the best solutions are always context specific – give the business, users, whomever the constituency is, the best solution based upon their needs rather than any assumption made ahead of time.

Getting this truism out of the way, why is it then that so many IT organizations prematurely discard Open Source Software (OSS) from their SOA implementations? While OSS may not be suitable for every implementation all the time, it is becoming feasible for an increasing number of SOA implementations.

To make it absolutely clear, ZapThink is not advocating dumping all your vendor solutions in favor of an OSS stack; however, we believe that the current economic and technology environment is making OSS solutions more credible, feasible, cost-effective, and potent as the industry matures. In this ZapFlash, we’ll look at the current state of OSS for SOA and why this might be the right time to reevaluate your biases and assumptions about the “readiness” of OSS for SOA.

Open source, free software, and community development

First, it is important to get our definitions straight. As aptly defined in Wikipedia, open source software is “computer software for which the source code and certain other rights normally reserved for copyright holders are provided under a software license that meets the Open Source Definition or that is in the public domain. OSS licenses permit users to use, change, and improve the software, and to redistribute it in modified or unmodified forms.” OSS differs from commercial software in that ownership, maintenance, and the rights to change the software do not rest with a specific company or group of companies.

The term open source is frequently, although not always, used in conjunction with the idea of free software. In this terminology, free sometimes means that it costs nothing to acquire the license, but that’s not exactly how it’s defined by the Free Software Foundation (FSF). The FSF defines Free and open source software (F/OSS or FOSS) as the freedom to copy and re-use the software, or in other words, “free as in free speech, not as in free beer”. This means that FOSS licenses give users the rights to copy, modify, share, redistribute, and otherwise contribute to the advancement of the technology, but don’t necessarily imply anything about total cost.

Muddying the waters is the idea of Commercial Open Source Software (COSS). In COSS, the community has rights to certain aspects of modifying, sharing, and enhancing the software, whereas other rights are reserved for the company. We’ve seen many instances of COSS in the SOA landscape in particular: firms offering “freemium” or “Community Edition” products for free as an entry point, with commercially licensed and maintained products offered as a premium. So what’s the problem with OSS? Simply put, three big issues: Fear, Uncertainty, and Doubt (FUD).

OSS SOA FUD

Let’s start with uncertainty. From a SOA perspective, the big uncertainty about OSS rests on two main questions: Are there a sufficient number of OSS offerings to cover the scope of things we need for our SOA implementations, and are those OSS projects of sufficient quality to meet our needs? If companies did indeed start with these questions, they would quickly find an increasing number of widely implemented, well-tested OSS solutions for a wide range of SOA development, infrastructure, and management needs.

For certain, if you are looking for products that offer so-called Enterprise Service Bus (ESB) functionality, then there are a plethora of Open Source solutions. Companies have successfully implemented Mule ESB, Apache Axis2, Apache Synapse and Apache ServiceMix.

For SOA development, there are a wide variety of OSS options, most notably the Eclipse project. Not only has IBM’s OSS contribution of Eclipse made major inroads throughout IT development, it has spawned many associated development frameworks, such as the Swordfish SOA framework and the Equinox OSGi bundling framework.

Many open source projects are integrated or built on top of the Eclipse platform. There are now even open source SOA registry and management solutions including Mule Galaxy, SOPERA, WSO2’s open source registry offering, and the Membrane SOA Management tool. There are a wide range of OSS Business Process Management (BPM) and BPEL runtime engines including ActiveBPEL, Apache ODE, Orchestra, and a plethora of others.

In total, these tools have seen tens of millions of downloads and hundreds of thousands of implementations. Furthermore, individuals and companies have poured tens of thousands of hours of development and maintenance time into these tools.

Are these of the same quality as tools from vendors with decades of product development history? You can’t make the blanket statement that implementations based on OSS are less robust than vendor solutions.

Many open source tools build upon the experience of users who have previously used commercial offerings and thus aim to mimic or improve the functionality and performance of those solutions.

Furthermore, just how stable are those vendor tools anyway? After a decade implementing one vendor’s infrastructure suite, you may find that the vendor got acquired not once but twice or even three times as its acquirers in turn got acquired, with the final product set “mish-moshed” among a dozen other acquisitions with no firm roadmap, an ill-defined integration plan from the vendor, and license and maintenance fees that make little sense.

In many ways, the simplicity and lack of confusion of the OSS suite is making more sense given the chaos of the product portfolios in the rapidly consolidating vendor marketplace right now.

Early vendor death and consolidation chaos

This brings us to the other two issues raised on OSS solutions: fear and doubt. Brenda Michelson from Elemental Links did a very good job outlining some of the considerations for open source in the enterprise IT environment.

Many architects refuse to even consider OSS solutions out of the often unfounded fear that they are unsupported. While it is true that many good OSS solutions require paid support to achieve the response time and care necessary, we would argue that money is well spent.

With commercial companies providing support for OSS offerings you get the best of both worlds: community development, testing, and enhancement at low or no cost, and professional support whose time and value are known quantities.

Even if you choose a commercial vendor, you’re going to be paying for support anyway. In what respect are OSS solutions any worse off in this case? It is ludicrous to assert that a vendor’s solutions are of such high quality that the need for support is less than that of OSS solutions.

In fact, we find the contrary. When you purchase commercial vendor offerings, you pay for the licenses, maintenance, and support, in addition to your integration costs, and you don’t even get the benefit of getting others’ contributions.

Much of the doubt about OSS is planted by vendors who have vested interests in making sure you continue to feed them millions of dollars of license and maintenance revenue. But given that many enterprise IT vendors are folding, getting acquired, or abandoning their product lines, we see a greater risk in toeing a strictly commercial vendor line.

Without the source code and enhancements in the community, when a vendor gives up the ghost, stops developing their product, or gets “mish-moshed”, the code simply disappears. No one is there to support a dead company’s products or a dying product line.

In this regard, OSS presents less of a risk because the code is out there in the community, available for anyone to pick up. From a SOA perspective, you want to have as few dependencies as possible on your infrastructure or a single vendor’s solutions. As such, for many, OSS makes a whole lot of sense.

An OSS and SOA case study (courtesy of the SOA-C)

Recently, the SOA Consortium (SOA-C) held a case study contest to elicit the best SOA implementations and architecture designs. One of the winners was BlueStar Energy, which built a relatively sizable SOA implementation entirely on OSS solutions.

Some of the lessons they learned are things we often espouse: incremental delivery, standards-based interfaces, consumer heterogeneity, loose coupling, and composability. If you read the case study, you can see that the design principles had a decidedly non-vendor bias. They wanted control over their environment, and this meant creating a specification that required implementation neutrality.

The consequence of the way that BlueStar designed their architecture is that they found OSS solutions best fit the bill for their needs. Their Business Integration Suite consists of distributed, scalable and reliable open source components such as an enterprise service bus, a business process management system and a messaging fabric.

The end result is that between the adoption of SOA, open source and offshore development, the company estimates it will save $24 million over the course of five years. For many of our readers, the BlueStar case study probably describes your environment as well. The case study is worthy of a close read!

ZapThink take

We at ZapThink have no vested interest in espousing a particular position that OSS or commercial vendor offerings are inherently better than the other. As mentioned, all good architects need to consider the context for their implementations.

For some companies, a vendor approach is best (especially in mainframe-based legacy environments where OSS simply doesn’t exist). But for others, we believe that biases dominate the discussion. Enterprise architecture does not demand vendor solutions. You can choose to implement aspects of your EA entirely on your own. Or you can buy technology from a handful of vendors. Or you can grab open source solutions online. There’s no bias in the architecture – why do you have bias and why is there bias in the marketplace?

The best place to start is where BlueStar Energy started: focus on the goals and needs of the architecture first. Define your architecture in a vendor-neutral, implementation agnostic way. Then, when it does come time to consider your implementation, start with a gap analysis.

Which tools do you already have that suit the need that you don’t need to buy again? Which infrastructure and tools do you need to acquire to fill the gaps? For those gap fillers, consider OSS and vendor solutions equally and evaluate them on an equal footing. You might be surprised to find what truly fits the bill for your SOA implementation needs.
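
That gap analysis needn’t involve heavyweight tooling. As a minimal sketch, with the requirements, weights and scores all invented for illustration, the evaluation can be as simple as scoring every candidate, OSS and commercial alike, against the same architecture-driven criteria:

    # Invented requirements and scores; the point is identical criteria for all.
    requirements = {"message routing": 3, "BPEL support": 2,
                    "registry": 2, "vendor independence": 3}

    already_owned = {"registry"}  # tools you already have; no need to buy again

    candidates = {
        "Commercial suite": {"message routing": 5, "BPEL support": 5,
                             "registry": 4, "vendor independence": 1},
        "OSS stack":        {"message routing": 4, "BPEL support": 4,
                             "registry": 4, "vendor independence": 5},
    }

    for name, scores in candidates.items():
        total = sum(weight * scores[req]
                    for req, weight in requirements.items()
                    if req not in already_owned)  # only score the gaps
        print(f"{name}: weighted fit across the gaps = {total}")

The numbers matter less than the discipline: the criteria come from the architecture, and both kinds of solutions face them on equal footing.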

Check your FUD at the door. Make sure you aren’t losing an advantage by prematurely eliminating OSS from your SOA infrastructure mix.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.




Friday, February 26, 2010

HP rolls out data center services aimed at boosting IT ROI for global SMBs

For more information on virtualization and how it provides a foundation for Private Cloud, plan to attend the HP Cloud Virtual Conference taking place in March. To register for this event, go to:
Asia, Pacific, Japan - March 2
Europe Middle East and Africa - March 3
Americas - March 4

In a move to tap into the small- to mid-sized business (SMB) data center market, Hewlett-Packard (HP) just rolled out a set of services aimed at helping smaller outfits drive the same IT efficiencies as larger enterprises.

The portfolio is designed to improve efficiency and increase IT budget flexibility, while mitigating risks and maximizing return on investment (ROI) from existing IT skills and assets. The services also help customers cope with rapid change and simplify the management of multi-vendor environments. HP also launched procurement options for custom integration, operations and improvement services. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

“Our new services are based on drivers that impact owners of small- to mid-sized data centers,” said Ian Jagger, worldwide marketing manager of Infrastructure and Operations for HP’s Technology Services Group. “These services help our customers deal with the challenge of managing IT complexity and sprawl, space and infrastructure limitations, and limited IT budgets and staff.”

Improving operational efficiency

Recognizing the SMB organization's requirements around speed, efficiency and 24/7 resource accessibility with shared virtual IT services, HP is delivering four new services designed to help clients gain tighter environment-wide control and broader, deeper visibility into support-related functions.

HP Multivendor Support Services works to help clients increase service levels and reduce the complexity and costs of managing heterogeneous IT environments. By exercising global buying power among vendors and suppliers, HP said it can effectively lower the cost of support contracts.

“We have been offering multi-vendor support solutions to our customers,” says Dionne Morgan, worldwide solutions marketing manager for HP’s Technology Services group. “In addition to IBM and Dell servers, we also now support Sun servers and Sun Solaris 10 for HP ProLiant servers. And for HP Integrity servers we’re now supporting Novell, SUSE Linux and Microsoft Windows Server 2008.”

On the operational efficiency front, HP also announced HP Insight Remote Support, which monitors a customer’s environment around the clock and provides remote diagnostics, troubleshooting and a support solution, with newly added support for VMware virtual environments. Meanwhile, HP Active Chat offers real-time Web chat support for problem resolution, and the HP Data Center Training Symposium helps companies develop a custom training plan to increase the effectiveness of IT staff.

Increasing computing capacity

HP also announced value assessment services structured for data centers up to 5,000 square feet in size. The services work to help SMBs find ways to increase computing capacity and cut energy costs.

The new services include Basic Capacity Analysis for Smaller Footprints Assessment, Infrastructure Condition and Capacity Analysis for Smaller Footprints Assessment, and Energy Efficiency Analysis for Smaller Footprints Assessment.

“These services are entirely differentiated because only licensed engineers can deliver these services and HP’s competitors don’t have licensed engineers,” Jagger says. “Our competitors have to partner with specialist companies to deliver these services. We’re also restructuring these services to be sold by our channel partners.”

Offering flexible purchase options

Finally, HP promises to make it easier for SMBs to procure value services that will help them better manage limited resources and drive business value from their technology infrastructure through HP Units of Service and HP Proactive Select Services.

“We’ve taken the customized services available from our technical services portfolio and converted them into what we call Units of Service,” Jagger says. “A Unit of Service is a deliverable at a highly granular level. Any given custom service could be made up of multiple Units of Service.”

HP Units of Service gives SMBs access to value services from HP through channel partners that aim to maximize ROI and set the stage for business growth. For example, SMBs can tap into HP custom data center consulting services such as relocation, integration, operations and improvement.

HP Proactive Select Services let clients move to a variable budget model, acquiring expert resources on-demand to address changing data center needs. HP has included Server Firmware Update Installation Service, Technical Online Seminars, Virtual Tape Library Health Check and LeftHand SAN/iQ Update Service to its portfolio.

“With these services, companies can focus their IT staff on strategic IT investments that differentiate them in the marketplace,” Jagger says. “What you’re seeing here is more and more services brought to customers at a value level through the channel that allows them to focus where they can drive the greatest ROI from staff.”

The SMB IT services and support market is ripe for efficiency gains and lower total costs. And the SMB arena is also a prime market for upcoming cloud and hybrid-sourced services. So now everything as a service can go anywhere.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.


Thursday, February 25, 2010

Citrix Online acquires Paglo, launches GoToManage to tear down IT management boundaries for the cloud era

In a move to enter the burgeoning SaaS-based IT management market, Citrix Online announced its acquisition of Menlo Park, Calif.-based Paglo Labs on Wednesday. The first fruit of the acquisition is an integrated web-based platform for monitoring, controlling and supporting IT infrastructure.

Dubbed GoToManage, the new service lets Citrix Online tap into the growing demand for software-as-a-service (SaaS)-based IT management, a market Forrester Research predicts will reach $4 billion in 2013. Citrix Online is positioning the latest addition to its online services portfolio as an affordable alternative to premise-based software. [Disclosure: Paglo is a sponsor of BriefingsDirect podcasts. Learn more about Paglo's offerings and value.]

I expect that as more enterprises experiment with and adopt mixed-hosted services -- including cloud, SaaS, IaaS, and outsourced ecosystem solutions -- web-based management capabilities will become a requirement. In order to manage across boundaries, you need management reach that has mastered those boundaries. On-premises and traditional IT management is clearly not there yet.

Elizabeth Cholawsky, vice president of Products and Services at Citrix Online, explains the reasoning behind the acquisition:
“Our customers increasingly tell us they are interested in adding IT management services to our remote support capabilities. With the growing acceptance of SaaS and the increasing use of IT services in small- and medium-sized businesses, we decided IT management reinforced our remote support strategy.”

The Paglo puzzle piece

According to IDC, Citrix Online was the remote support market leader in 2008 with a 34.7 percent global share via its GoToAssist services. IDC also pegs Citrix Online as the third largest SaaS vendor in the world based on 2007 revenue, but Citrix Online needed Paglo-like log analysis technology in order to offer its customers the next puzzle piece in its full SaaS picture.

Paglo has made a name for itself providing SaaS-based IT search and management services. In short, Paglo helps companies harness and analyze the information explosion coming from all their computer, server, network and log data. Paglo helps companies improve operating efficiencies, gain a clearer understanding of true IT costs and meet compliance requirements.

Now, Paglo serves as the foundation for GoToManage. GoToManage creates an IT "system of record" that gives businesses the ability to discover and identify all network devices, monitor critical servers and applications in real time, manage network usage, and track configuration changes. Like other Citrix Online products, GoToManage can be accessed from anywhere and doesn’t require costly server infrastructure.
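
To give a feel for the monitoring half of that description, here is a bare-bones reachability check in Python. The hosts and ports are placeholders, and a service like GoToManage layers discovery, log collection and dashboards on top of primitives of this sort:

    # Illustrative only -- not the GoToManage implementation. A minimal
    # "monitor critical servers" check via TCP reachability.
    import socket

    # Hypothetical watch list; replace with real hosts and service ports.
    CRITICAL_SERVICES = [
        ("intranet.example.com", 80),   # placeholder web server
        ("db.example.com", 5432),       # placeholder database
    ]

    def is_up(host: str, port: int, timeout: float = 2.0) -> bool:
        """True if a TCP connection to host:port succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    for host, port in CRITICAL_SERVICES:
        print(f"{host}:{port} is {'UP' if is_up(host, port) else 'DOWN'}")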

A seamless transition?

With GoToManage, Citrix Online is once again disrupting the traditional IT model. Brian de Haff, CEO of Paglo, expects a seamless integration for Paglo customers and GoToAssist customers that tap into the new service. With behind-the-scenes integration completed, customers can click on a link and instantly access GoToManage. De Haff also expects Paglo customers to adopt GoToAssist and use the two services in tandem.

“When we look across the Paglo customer base, the integration of monitoring with remote support is by far the number one requested feature that customers are asking for,” de Haff says. “So bringing these technologies together is a terrific win for the customers of both companies.”

Cholawsky declined to comment on whether Citrix Online will make additional acquisitions to add to its portfolio, which also includes GoToMyPC, GoToMeeting, GoToAssist, GoToWebinar, GoToTraining and GoView. What she did say is that Citrix Online is in the midst of a large growth spurt, which she expects to continue.

“We’re constantly looking at partners and acquisitions,” Cholawsky says. “With the venture capital investments in smaller companies with great technologies over the past couple of years, acquisitions are a terrific way to grow our company. But whether we develop more organically or go out and partner closely or do more acquisitions, we’ll be investing heavily in the SaaS market.”

Financial terms of the Paglo acquisition were not disclosed.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Tuesday, February 23, 2010

Survey: IT executives experimenting with mostly 'private' cloud architectures

If you want a realistic view of cloud computing adoption – along with an understanding of what motivates IT executives to invest in the cloud, what concerns remain, and what initiatives are planned – you can’t limit your frame to a single industry. The full picture only becomes clear through a cross section of the research, manufacturing, government and education fields.

That’s the approach Platform Computing took at a recent supercomputing conference. The company late last year surveyed 95 IT executives across a number of fields to offer insight into how organizations are experimenting with cloud computing and how they view the value of private clouds. [Disclosure: Platform Computing is a sponsor of BriefingsDirect podcasts.]

The results: Nearly 85 percent intend to keep their cloud initiatives within their own firewall.

“When deploying a private cloud, organizations will need a management framework that can leverage existing hardware and software investments and support key business applications,” says Peter Nichol, general manager of the HPC Business Unit at Platform Computing. “This survey reaffirms the benefits that private clouds offer – a more flexible and dynamic infrastructure with greater levels of self-service and enterprise application support.”

Most organizations surveyed are experimenting with cloud computing – and experimenting is the key word. Eighty-two percent don’t foresee cloud bursting initiatives any time soon. This suggests an appreciation for private cloud management platforms that are independent of location and ownership, and can provide the needed security in a world of strict regulations around transparency and privacy.

Security is chief concern

Forty-nine percent cite security as a chief concern with cloud computing. Another 31 percent point to the complexity of managing clouds, while only 15 percent say cost is an issue. Indeed, security concerns are a force driving many IT execs toward private rather than public clouds. Forty-five percent of organizations are considering establishing private clouds as they experiment with ways to improve efficiency, increase their resource pools and build a more flexible infrastructure.

There seems to be some naïveté over the cloud. Nearly three-quarters of those surveyed don’t expect their IT organization or infrastructure to change in the face of cloud computing. But that is not a realistic expectation. The move to cloud computing is an evolutionary one, and IT organizations must themselves evolve to meet the demands of their organizations and users. Ultimately, a willingness to evolve begins with an appreciation of the cloud’s value.

“Cloud computing has provided the impetus for IT to make a much needed shift, but many in the industry are still struggling to understand the value of the cloud,” says Randy Clark, chief marketing officer at Platform Computing. “As organizations continue to experiment with cloud to move toward better efficiency and cost-savings, it is best to bear in mind that to ensure success, the adoption of cloud computing should follow a sequence of evolutionary steps rather than an overnight revolution.”
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Complex systems engineering helps scale SOA the right way

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Ever since ZapThink published our Business Agility as an Emergent Property of SOA ZapFlash, we've been explaining in our Licensed ZapThink Architect course how SOA implementations must be complex systems in order to deliver on emergent properties like business agility. Yet even though we've expanded our treatment of Complex Systems Engineering (CSE) in the latest version of the course, the reaction of most of our students is typically one of perplexity.

Not that we're really surprised, however. Breaking away from the Traditional Systems Engineering (TSE) way of thinking is a huge leap for most technologists, as it shakes to the foundation how they think about architecture, not just SOA in particular, but even more fundamentally, the role IT plays in the enterprise.

Complex systems: Order from chaos in nature

Complex systems theory is especially fascinating because it describes how many natural phenomena occur. Whenever there is an emergent property in nature -- that is, a property of a system as a whole that the elements of the system do not exhibit -- then that system is a complex system.

Everything from the human mind to the motion of galaxies is an emergent property of its respective system. Fair enough, but those are all natural complex systems, and we're charged with implementing an artificial, human-made complex system. How we take the lessons from nature and apply them in the IT shop is the question that engenders the perplexity we see on our students' faces.

There is a fundamental flaw in this distinction, however. Making such a distinction between natural and artificial systems is basically a TSE way of thinking because it separates people from their tools. In a traditional IT system, people are the "users," but not inherently part of the system. In many complex systems, however, people aren't just part of the system, they are the system.

In fact, any large group of people behaves as a complex system. For example, take a stadium full of people doing the wave. Each individual in the crowd decides whether or not to participate based upon the behavior of other people, but the wave itself has "a mind of its own" -- in other words, the wave behavior is an emergent property of the crowd. Another example would be a traffic jam. An accident in opposing traffic will slow down your side of the freeway every time, even though each individual knows that slowing down to look will cause a jam. You and hundreds of people like you can decide not to slow down to look in order to avoid creating a jam, but the jam forms nevertheless.

In the wave example, no technology of any kind takes a role, while in the traffic example, vehicles affect the behavior of the system to a certain extent. In fact, changing the technology can have a dramatic impact on the behavior of the system: If the traffic consisted of trains instead of automobiles, your train might not slow down at all for a problem on a neighboring track. But regardless of whether it's made up of trains or automobiles, the system includes individual people making individual decisions based upon their personal point of view within the system, and emergent properties result, just as they do in a natural system with no people involved at all.
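
The traffic example is easy to reproduce in code. The sketch below is a simplified Nagel-Schreckenberg cellular automaton, a standard toy model of highway traffic (the parameters are arbitrary): each simulated driver follows three purely local rules -- accelerate, don't hit the car ahead, occasionally dawdle -- and yet jams emerge that no individual driver intends.

    # Simplified Nagel-Schreckenberg traffic model on a circular road.
    import random

    random.seed(1)
    ROAD, CARS, VMAX, STEPS, P_SLOW = 60, 18, 5, 15, 0.3

    positions = sorted(random.sample(range(ROAD), CARS))
    speeds = [0] * CARS

    for _ in range(STEPS):
        # Gap to the car ahead (cars are kept sorted by position).
        gaps = [(positions[(i + 1) % CARS] - positions[i]) % ROAD
                for i in range(CARS)]
        for i in range(CARS):
            speeds[i] = min(speeds[i] + 1, VMAX)     # rule 1: accelerate
            speeds[i] = min(speeds[i], gaps[i] - 1)  # rule 2: don't collide
            if speeds[i] > 0 and random.random() < P_SLOW:
                speeds[i] -= 1                       # rule 3: random dawdling
        positions = [(p + v) % ROAD for p, v in zip(positions, speeds)]
        order = sorted(range(CARS), key=lambda i: positions[i])
        positions = [positions[i] for i in order]
        speeds = [speeds[i] for i in order]
        row = ["."] * ROAD
        for p in positions:
            row[p] = "#"
        print("".join(row))  # clusters of '#' are the emergent jams

Run it and you can watch clusters of slow cars form and drift backward against the direction of travel: an emergent property visible nowhere in the rules for any single car.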

The enterprise as a complex system

Any human organization is, in fact, a complex system, including those unwieldy beasts we refer to as enterprises. Enterprises all have policies and managers and lines of control, but the overall behavior of the enterprise emerges from the individual behaviors of the participants in it. Furthermore, the emergent behaviors of corporations and governments may depend entirely on the people who belong to such enterprises, independent of technology. But when we do include technology in our enterprises, we can dramatically affect the emergent behavior of those systems, just as switching from cars to trains changes how traffic behaves.

So, what do you get when you take traffic and subtract the people? A parking lot! Without the people, what was a complex system is now little more than a collection of individual, traditional systems, namely the cars themselves. Each auto is a traditional system in the sense that the properties it exhibits are the properties its manufacturer designed into it. The best you can expect with TSE, after all, is to deliver a system that does what it's supposed to do.

Too often in the enterprise, people confuse complex systems with collections of traditional systems, which is just as big a mistake as confusing a parking lot full of empty cars with a traffic jam. In fact, architects are often the first to make this mistake. Of course, it's certainly true that some architects are too focused on the technology, leaving people out of the equation altogether, but even for those architects who include people in the architecture, they often do so from a TSE perspective rather than a CSE approach. But no matter how hard you try, designing better steering wheels and leather seats and the like won't prevent traffic jams!

Complex systems thinking and SOA

In traditional systems thinking, then, we have systems and users of those systems, where the users have requirements for the systems. If the systems meet those requirements then everybody's happy.

In complex systems thinking, we have systems made up of technology and people, where the people make decisions and perform actions based upon their own individual circumstances. They interact with the technology in their environments as appropriate, and the technology responds to those interactions based upon the requirements for the complex system as a whole. In many cases, the technology provides a feedback loop that helps the people achieve their individual requirements, just as brake lights in a traffic jam help reduce the chance of collisions.

Such complex systems thinking has been a common theme in many of ZapThink's articles for several years now. Here are some examples:
  • In Best Effort SOA and the SOA Quality Star, we discuss how the business agility requirement complicates the SOA quality challenge. Because agility is an emergent property, we have to establish continuous quality policies that ensure that the delivered system is sufficiently agile. As a result, there's always a trade-off between agility and quality we call "Best Effort SOA."

  • In The Buckaroo Banzai Effect: Location Independence, Service-Oriented Architecture, and the Cloud, we explore the "Next Big Thing" as SOA, Cloud Computing, Web 2.0, and mobile presence converge. Our conclusion? "The Next Big Thing isn't a cloud in the sense of abstracted data centers full of technology; it's a cloud of people, communicating, creating, and conducting business, where the technology is hidden in the mist."

  • In Resilience: The Missing Word in the SOA Conversation, we discuss how SOA implementations must be resilient, that is, they must have self-righting tendencies that help them recover from adverse forces in their environment. Resilience is a property of the component systems in a SOA implementation that allows the overall system to exhibit the emergent property of business agility.

  • Finally, in the more recent The Christmas Day Bomber, Moore's Law, and Enterprise IT, we introduce the concept of a "metapolicy feedback loop" that explicitly describes the relationship between humans tackling governance in the enterprise and the governance technology they leverage for the task. Only by taking a complex systems approach to the problem of governance do organizations have any chance of dealing with the explosion in the quantity and complexity of information in the enterprise over time.
The common element in all of these arguments is the feedback loop between people and technology at the component level, which enables the overall system to continue to meet requirements as those requirements change -- the essence of business agility.

The ZapThink take

If you still find yourself perplexed by this whole complex systems story, it might help to point out that complex systems aren't necessarily complicated. In fact, in a fundamental way they are really quite simple. Traffic jams may be difficult to understand, but individuals driving cars are not.

Best practices like Metadata-driven governance, the Business Service abstraction, and infrastructure and implementation variability, to name a few, are well within reach of today's SOA initiatives. And the great thing about complex systems is that if you take care of the nuts and bolts, the big picture ends up taking care of itself.

For organizations that don't take a complex systems approach to SOA, however, the risks are enormous. As traditional systems scale, they become less agile. Ask any architect who's attempted to hardwire several disparate pieces of middleware together in a large enterprise -- yes, maybe you can get such a rat's nest to work, but it will be expensive and inflexible. If you want to scale your SOA implementation so that it continues to deliver business agility even on the enterprise scale, then the complex systems approach is absolutely essential.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.