Thursday, July 23, 2009

Cloud security depends on the human element

This BriefingsDirect guest post comes to you courtesy of Andras Robert Szakal, director of software architecture for the U.S. Federal Software Group at IBM and a member of The Open Group board of directors. He can be contacted here.

By Andras Robert Szakal

I’m spending a considerable amount of time these days thinking about cyber security. Not just contemplating the needs of my customers but working with other industry thought leaders within IBM and externally on how to address this convoluted, politically charged, complex and possibly lucrative domain. Much of our thinking has centered around new government programs and game changing technologies that seek to solve the increasing cyber security challenge.

Yesterday another cyber security report was released, and instead of new technology it suggested the problem was with us – the people – as in us humans. The study suggests that we need more smart, educated folks to thwart those evil hackers and prevent nation-state attacks. Oh, and embedded in the report is the notion that technical folks would like to have some runway in their careers. Gee, that’s a novel idea that we have been pushing as part of the professional certification programs within The Open Group over the last five years. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

I’m sitting in the cloud computing stream today at the 23rd Open Group Enterprise Architecture Practitioners Conference in Toronto. Not surprisingly every presenter and panel eventually settles on the subject of cloud resiliency and securability and the need for skilled architects capable of effectively applying these technologies.

I can’t help but think that most of the industry’s challenges with implementing shared services and effectively protecting the infrastructure stem from organizations not having the proper architectural skills. Most vulnerabilities are seeded in bad architectural decisions. Likewise, many of the poorly constructed services, SOA, cloud or otherwise, are the result of poor design and a lack of architectural skill. The challenge here is that high-value architects are difficult to grow and often more difficult to retain.

Back to the cyber report released July 22.

The fact is that most organizations have created an artificial barrier between IT professionals and business professionals. The line of business professionals, management and executives are more valued than the techies running the IT shop. Some headway has been made in the integration between IT and the business. But for the most part they still exist as separate entities. No wonder the cyber report suggests that prospective high valued cyber security specialists and architects don’t see a future in a cyber security career.

How do we address this challenge?

Let me offer a few ideas. First, ensure these folks have architectural as well as cyber security skills. This will allow them to think in the context of the business and find opportunities to move from IT to a line-of-business position as their careers grow. Ultimately, the IT teams must be integrated into the business itself. As the report suggests, it’s necessary to establish a career path for technologists, but more importantly, technical career paths must be integrated into the overall career framework of the business.


Tuesday, July 21, 2009

Enterprises seek better ways to discover, manage and master their information explosion headaches

Listen to the podcast. Download the podcast. Find it on iTunes/iPod and Podcast.com.

Read a full transcript. Download the transcript. Learn more. Sponsor: Hewlett-Packard.

Join a free HP Solutions Virtual Event on July 28 on four main IT themes. Learn more. Register.

Businesses of all stripes need better means of access, governance, and data lifecycle best practices, given the vast ocean of new information coming from many different directions.

By getting a better handle on the information explosion, enterprises can gain clarity in understanding what is really going on within their businesses, and, especially these days, across dynamic market environments.

The immediate solution approach requires capturing, storing, managing, finding, and using information better. We’ve all seen a precipitous drop in the cost of storage and a dramatic rise in the incidence of data from all kinds of devices and across more kinds of business processes, from sensors to social media.

To help better understand how to best manage and leverage information, even as it’s exploding around us, I recently spoke with Suzanne Prince, worldwide director of information solutions marketing at Hewlett-Packard (HP).

Here are some excerpts:
Prince: We’re noticing major shifts going on in the business environment, which are partially driven by the economy, but they were already happening anyway.

We’re moving more into the collaboration age, with flatter organizations. And the way information is consumed is changing rapidly. We live in the always-on age, and we all expect and want instant access, instant gratification for whatever we want. It’s just compounding the problems.

We did a survey in February of this year in several countries around the world. It covered both IT and line-of-business decision makers. The top business priority for the people we talked to, far above everything else, was having the right information at the right time, when needed. It was above reducing operating costs, and even above reducing IT costs. So what it’s telling us is that business managers see this need for information as business critical.

[Yet] you often hear people saying that information is life -- it’s the lifeblood of an organization. But, in reality, that analogy breaks down pretty quickly, because it does not run smoothly through veins. It’s sitting in little pockets everywhere, whether it’s the paper files ... that get lost, on your or my memory sticks, on our laptops, or in the data center.

The growth in unstructured content is double the growth that’s going on in the structured world. ... For the longest time now, IT has really focused on the structured side of data, stuff that’s in databases. But, with the growth of content that was just mentioned -- whether it's videos, Twitter tweets, or whatever -- we’re seeing a massive uptick in the problems around content storage.

The whole category of information governance really comes into play when you start talking about cloud computing, because we’ve already talked about the fact that we’ve got disparate sources, pockets of information throughout an organization. That’s already there now. Now, you open it up with cloud and you’ve got even more.

There are quality issues, security issues, and data integration issues, because you most likely want to pull information from your cloud applications or services and integrate that with something like a customer relationship management (CRM) system to be able to pull business intelligence (BI) out.

You also need to have a governance plan that brings together business and IT. This is not just an IT problem, it’s a business problem and all parties need to be at the table.

Another area to look at is content repository consolidation, or data mart consolidation. I’m talking about consolidating the content and data stores.

You really need to look at deleting what I would call "nuisance information," so that you’re not storing things you don’t need to. In other words, if I’m emailing you to see if you’d like to come have a cup of coffee, that doesn’t need to be stored. So, it’s about optimizing storage and optimizing your data center infrastructure.
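To make the "nuisance information" idea concrete, a retention policy might filter trivial messages out of the archive pipeline before they consume storage. This is a minimal, hypothetical sketch: the keyword list and size threshold are invented for illustration, not HP's actual criteria.

```python
# Hypothetical sketch of a "nuisance information" filter for an archive
# pipeline. Keywords and the size threshold are illustrative only.

NUISANCE_KEYWORDS = {"coffee", "lunch", "thanks!", "fyi"}
MIN_ARCHIVE_BYTES = 200  # very short notes are rarely worth retaining

def should_archive(subject: str, body: str) -> bool:
    """Return True if a message looks worth retaining."""
    text = (subject + " " + body).lower()
    if len(body.encode("utf-8")) < MIN_ARCHIVE_BYTES:
        return False
    return not any(word in text for word in NUISANCE_KEYWORDS)

messages = [
    ("Coffee?", "Want to grab a cup of coffee at 3?"),
    ("Q3 contract", "Attached is the signed Q3 services contract for review. " * 5),
]
kept = [subj for subj, body in messages if should_archive(subj, body)]
```

In practice such rules would sit inside a records-management system with legal-hold exceptions, but the principle is the same: decide at ingest time what never needs to be stored.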

People are now looking at information as the issue. Before they would look at the applications as the issue. Now, there's the realization that, when we talk about IT, there is an "I" there that says "Information." In reality, the work product of IT is information. It’s not applications. Applications are what move it around, but, at the end of the day, information is what is produced for the business by IT.

In HP Labs, we have eight major focus areas, and I would categorize six of them as being focused on information -- the next set of technology challenges. It ranges all the way from content transformation, which is the complete convergence of the physical and digital information, to having intelligent information infrastructure. So, it’s the whole gamut. But, six out of eight of our key projects are all based on information, information processing, and information management.

TIBCO joins governance ecology support with HP's SOA lifecycle software

TIBCO Software is expanding its governance solutions for service-oriented architecture (SOA) and will now provide support for Hewlett-Packard (HP) SOA Systinet lifecycle governance software.

ActiveMatrix Policy Manager and Service Performance Manager from TIBCO combined with HP SOA Systinet are designed to reduce risk and improve management of SOA environments, including identifying and defining appropriate services, managing the lifecycle of service assets, and measuring effectiveness. [Disclosure: TIBCO and HP are sponsors of BriefingsDirect podcasts.]

Governance has proved important when adopting SOA solutions by preventing delays in software delivery caused by compliance and interoperability issues, security risks, and poor service quality. The topic has been a hot one at this week's Open Group conference in Toronto.

Companies that have been early adopters of end-to-end governance have seen significant results. One global telecom company saw a 327 percent return on investment (ROI) over three years, something it attributed to well-managed SOA governance, according to TIBCO.
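For context on what a figure like that means: ROI is conventionally computed as (benefit minus cost) divided by cost, so 327 percent over three years implies total benefits of roughly 4.27 times the governance investment. The dollar amounts below are hypothetical, chosen only to illustrate the arithmetic.

```python
# ROI = (benefit - cost) / cost. These numbers are hypothetical,
# picked only to show what a 327% figure implies.
cost = 1_000_000        # hypothetical three-year governance spend
benefit = 4_270_000     # total benefit implied by a 327% ROI
roi = (benefit - cost) / cost  # 3.27, i.e. 327 percent
```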

We should expect to see more governance ecology cooperation like this one. And these same vendors should support standards and collaboration efforts like the Jericho Forum and Cloud Security Alliance, if the promise of cloud computing is to be realized. Vendors and/or cloud providers that try to provide it all their own way will only delay or sabotage the benefits that cloud can provide.

Open Group conference shows how security standards and governance hold keys to enterprise cloud adoption

This BriefingsDirect guest post comes courtesy of Jim Hietala, vice president of security, The Open Group. You can reach him here.

By Jim Hietala

Spending the early part of this week in The Open Group Security Forum meetings, I have been struck by the commonality of governance, risk, compliance, and audit issues between physical IT infrastructure today, and virtual and cloud environments in the (very) near future. Issues such as:
  • Moving away from manual compliance processes, toward automated test, measurement, and reporting on compliance status for large IT infrastructure. When you are talking about physical infrastructure, manual compliance is difficult, expensive in labor cost, and sub-optimal -- given that many organizations choose to sample just a few representative systems for compliance, rather than actually testing the entire environment. When you are talking about virtual environments and cloud services, manual compliance processes just won’t work; automation will be key.

  • Incompatible log formats output by physical devices continue to be a problem for the industry, one that manifests itself in problems for security information and event management systems, log management systems, and auditors. Ditto for virtual and cloud environments, at much larger scale.

  • Managing security configurations across physical versus virtual and cloud environments provides similar challenges. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]
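To make the "scalable automation" point concrete, here is a minimal sketch of automated compliance checking: every host's configuration is evaluated against a machine-readable policy instead of manually sampling a few representative systems. The policy rules and host data are invented for illustration and do not reflect ACEML, XDAS, or any specific standard.

```python
# Hypothetical sketch: check every host against a declarative policy,
# rather than manually sampling a few "representative" systems.
# Policy keys and host configs are illustrative only.

POLICY = {
    "ssh_root_login": False,   # root SSH must be disabled
    "disk_encryption": True,   # disks must be encrypted
}

hosts = {
    "web-01": {"ssh_root_login": False, "disk_encryption": True},
    "db-01":  {"ssh_root_login": True,  "disk_encryption": True},
}

def compliance_report(hosts, policy):
    """Return {host: [names of settings that violate the policy]}."""
    report = {}
    for name, config in hosts.items():
        failures = [key for key, required in policy.items()
                    if config.get(key) != required]
        report[name] = failures
    return report

report = compliance_report(hosts, POLICY)
```

The same loop that covers two hosts covers ten thousand virtual machines, which is exactly why automation, rather than sampling, is the only approach that scales to cloud environments.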

Emerging-standards work from the Security Forum, originally conceived as a solution to some of these issues in traditional IT environments (in-house, physical servers), will have important applications in cloud and virtualization scenarios. In fact, with the scale and agility provided by these environments, it is hard to think about adequately addressing audit and compliance concerns without standards that provide for “scalable automation.”

The Automated Compliance Expert Markup Language standards initiative will address issues of security configuration and of compliance alerting and reporting across physical, virtual, and cloud environments. The revised XDAS standard from The Open Group will address audit incompatibility issues. Both of these standards efforts are works in progress at the present time, and our standards process is truly an open one. If your organization is a customer organization grappling with these issues, or a vendor whose product might benefit from implementing these standards, we invite you to learn more.


SOA and security: Are services the problem or the solution?

This guest post comes courtesy of Dr. Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group. You can reach him here.

By Dr. Chris Harding

I’m with the SOA Work Group at The Open Group conference in Toronto this week (see http://www.opengroup.org/toronto2009-apc).

The Work Group has been busy recently, completing its Governance Framework, helping to complete The Open Group’s Service Integration Maturity Model, and working with members of OASIS and the OMG to finish the joint paper “Navigating the SOA Open Standards Landscape Around Architecture,” which explains how the architecture-focused SOA standards of these bodies relate to each other.

There was so much to do that we started our discussions last weekend, and we made good progress on our Practical Guide to Using TOGAF for SOA, and on our SOA Reference Architecture. Today we moved on to the thorny question of SOA and Security, which we discussed in a joint session with The Open Group's Security Forum. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Security is often seen as a major problem for SOA but – and this was the thread we pursued in today’s discussion – perhaps this is looking at the problem the wrong way round.

Certainly, there are security problems associated with service chains, where some of the services in the chain may be outside the control of – or even not known to – the consumer, and where the identity of the consumer may not be known to all the services in the chain.

But really these problems are due, not to the use of services, but to the use of distributed software modules with multiple owners. They would arise whether the underlying facilities were provided as services or in some other form – as object methods that can be invoked remotely, for example. They have become associated with SOA because that is the form that cross-domain distributed computing usually takes these days.

In fact, SOA gives us a way of addressing these security problems. Security is a matter of assessing and mitigating risks. The service principle provides an excellent basis for doing this.

The consumer can ask questions that help establish the levels of risk.

“What services am I using?” “Who provides them?” “What level of security are they contracted to provide?” “How far do I believe that they can and will meet their contractual obligation?” The answers to such questions enable the consumer to decide what security mechanisms to deploy.
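Those questions translate naturally into data a consumer can reason over. The sketch below models a service chain and flags weak links; the trust scores, assurance levels, and threshold are invented for illustration and are not part of any Open Group standard.

```python
# Hypothetical sketch: score each service in a chain by its contracted
# security level and our trust in the provider, then flag weak links.
# All numbers and names here are illustrative.

services = [
    # (name, provider, contracted_level 0-3, trust_in_provider 0.0-1.0)
    ("payment",   "in-house",    3, 1.0),
    ("geocode",   "vendor-a",    2, 0.8),
    ("analytics", "unknown-sub", 1, 0.3),  # a sub-provider we barely know
]

def weak_links(services, min_effective=1.5):
    """Effective assurance = contracted level x trust; flag anything low."""
    return [name for name, _, level, trust in services
            if level * trust < min_effective]

flagged = weak_links(services)
```

The output of such an analysis is precisely what drives the consumer's decision about which compensating security mechanisms to deploy, and, as noted below, what level of security the consumer can in turn contract to offer.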

And, where the consumer is in turn providing services to others, the analysis can help determine the contractual level of security that can reasonably be offered for those services.

This is not to say that SOA solves the security problems of cross-domain distributed computing. These problems are difficult, and there are aspects – such as the lack of a commonly-accepted standard identity framework – that SOA does not address. But, looked at in the right way, it is a positive, rather than a negative, factor. And that’s something!

Harding is Forum Director for SOA and Semantic Interoperability at The Open Group. He has been with The Open Group for over ten years, and is currently responsible for managing and supporting its work on semantic interoperability, SOA, and cloud computing. Chris can be contacted at c.harding@opengroup.org.

Engine Yard launches robust Ruby cloud-based deployment platform service

Engine Yard is working to make life easier for Ruby on Rails developers. The San Francisco-based application automation and management start-up rolled out two new products on Monday with an eye toward the cloud.

Ruby on Rails is a Web programming framework that's rapidly emerging as one of the most popular ways to develop Web sites and Web applications. Popular Web 2.0 applications like Twitter, Hulu and Scribd are built using Ruby on Rails, and Ruby usage has increased by 40 percent in 2009 alone, according to Evans Data. Even though only 14 percent of developers are using Ruby, Evans predicts 20 percent will adopt the technology by 2010.

Engine Yard is preparing for Ruby growth in the next 12 months and beyond with its latest offerings: Engine Yard Cloud and Flex. Engine Yard Cloud is a services platform that leverages 100 man-years of experience deploying, managing and scaling some of the world's largest Rails sites and makes that know-how accessible to companies looking to run Rails in the cloud. Meanwhile, Flex is a cloud service plan for production-level Rails applications.

Tackling Tough Issues

What Engine Yard is doing, in effect, is taking Ruby a step beyond application development. These new tools tackle tougher issues like deployment, maintenance, scalability, uptime and performance -- skills most developers either don't have or don't want to acquire. Cloud management solutions abound, but Engine Yard is charging forward with a platform that specifically addresses the needs of developers building applications in Rails.

Unlike an infrastructure cloud, Engine Yard Cloud provides application-aware auto-scaling, auto-healing and monitoring and a highly optimized, pre-integrated Rails runtime stack. Engine Yard Cloud is also backed by 24x7 Premium Support from Engine Yard. It runs on the Amazon EC2 infrastructure cloud.

Pricing for the Flex Plan starts at $349 per month. Pricing for Engine Yard Premium Support starts at $475 per month. Engine Yard Cloud will be generally available in August.

"Companies like Amazon and Rackspace are doing a good job at the hardware resource provisioning level," said Tom Mornini, CTO of Engine Yard. "But they don't actually help you with assembling your raw virtual machines, storage, object stores and file systems into an application architecture. Engine Yard Cloud is the layer on top of the hardware that helps you get from raw resources to functioning application architecture."

Under the Hood


With its Flex plan, Engine Yard Cloud serves customers running production applications that want to leverage the on-demand flexibility of a cloud but also need application-level scaling, reliability and support. Developer features like automated deployment from source check-ins make it easier to handle the rapid application changes driven by agile development.

Behind the scenes, Engine Yard Cloud is automatically scaling applications. Engine Yard can come to the rescue of a site that's under stress or low in memory by adding more application capacity on the fly. Here's how it works: Essentially, the technology provisions a new Amazon virtual machine, lays down the operating system, lays down Ruby on Rails, lays down the source code, hooks it up with a load balancer, and assembles the monitoring so the developer -- who is not a systems administrator -- doesn't have to.

Engine Yard Cloud also offers reliability features to make sure sites don't go down, such as an automatic database replica and an auto-healing capacity in case a server fails in the application tier. Engine Yard Cloud even offers what it calls "one-click cloning" that lets developers duplicate production sites -- even if they are running 15 or 20 or more servers -- in order to perform testing or stage new code.

This all comes together as integrated, app-stack-in-one-cloud automation. I expect this will also be of interest for private clouds. And I'm hip to the notion of a personal cloud as a means to ease the deployment of robust apps.

Competing in the Cloud

On the Ruby front, Engine Yard has a strong position in the market. Engine Yard's competitors are Joyent, Rails Machine, Devunity and RailsCluster, among others.

But Engine Yard isn't just competing with vendors in the Ruby space. It's competing with other platforms. Google App Engine is doing something similar for Java. Microsoft is shipping Azure in November. Even if Engine Yard dominates on the Ruby front, there's still a battle for market share in cloud platforms.

BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached here and here.