Wednesday, February 8, 2012

User meta data wars going way too far, Google

I'm a big fan of Google, always have been. But the thirst for pulling in more users to its Google+ social network is about to turn my admiration south.

Now Google is not alone in sliding down the slippery slope of user information invasion. But they are getting too good at it, and they have a huge exploitation potential that others do not.

Google+ seems to now -- I just noticed it today -- require me to click a little box NOT to send my Google+ posts to all the contacts in MY Gmail address book that are not already on Google+.

That's right. When I have something to post to my circles of social connections on Google+, I have to opt out of having Google send a copy of that post to all the people in my own address book via unsolicited email -- also known as spam. Kind of defeats the purpose of having circles in the first place, right?

This puts me in the place of shilling for Google+ unless I opt out. Not necessarily evil, but not benign, either.

Incidentally, if I wanted to jam all my posts to all my contacts, to spam them, I'd just blast it out to my contacts as my own email. No need for Google+.

So today I'm being held up as a spammer to those I care about most, those I intentionally put in my address book -- data that I thought was still ****MY**** data even if it is -- gulp -- in the cloud on Google or iCloud or ... oh my, wherever else my once-private address book is now being sucked into.

But I do not want to spam my contacts. I'd be a fool to. And Google should not want to spam my contacts either, even if they do have Facebook envy to a foolish level.

To be fair, a lot of other Facebook wannabes are also resorting to user address book shenanigans. Path just got a whole lot of flak for outright downloading address books. Not sure if that was a bug or a feature.

And some site called ApnaCircle last month had me scrambling to stop email invites to join it from going out again and again to my contacts. That was not my intent. So I deleted my account, but had to manually delete all my contacts there too or the emails kept going out.

This is not how word of mouth marketing or social networking is supposed to work, folks. I kind of feel like my pocket has been picked of the little black book I keep there for my contacts. My contacts. Did I give up the rights to my contacts when I placed them in an address book on Gmail? Maybe I did, but not for long.

No, this filching of user data is social networking run amok, and it needs to stop.

NewSQL pioneer Clustrix delivers free software-only kit to demo shard-less MySQL scaling, unveils a poster child use at Twoo

There's a lot to like about MySQL databases if you're a start-up, until success comes knocking a bit too fast.

When big data demand soars, MySQL can sour on completing the needed transactions on time. Sharding the application and data resources has been about the only answer, other than to painfully and expensively cut and run to another database approach like NoSQL.

This was the problem facing Massive Media when its social networking site Twoo rapidly grew to four million users in six months. By using the Clustrix distributed relational database system, Massive Media gained high scale-out transactional performance and automated fault tolerance, said Clustrix.

And that has now made Twoo the poster child for Clustrix, a San Francisco start-up funded by Sequoia and USVP. Its co-founder, Paul Mikesell, also co-founded Isilon, which was sold to EMC for $2.25 billion.

Recognizing the huge uptake of MySQL -- while also understanding the database's limits -- prompted Clustrix to build a NewSQL alternative, first via a hardware appliance play, and now, this week, broadening to a software-only environment that simulates the hardware components of the Clustrix database appliance.

On Tuesday, Clustrix announced the availability of the free Clustrix Development Kit, allowing users to try out the NewSQL system that its backers say scales to an "unlimited number of users, transactions or data."

New class of database

Clustrix fits into the new class of hybrid SQL-NoSQL database solutions, which combine compatibility with many SQL applications with the scalability of NoSQL ones. Other such solutions include Database.com with ODBC/JDBC drivers, NuoDB, Xeround, and VoltDB, according to InfoQ.

"We are seeing increased interest in NewSQL database technologies that enable users to scale their databases without having to resort to complex manual sharding," said Matt Aslett, research manager, data management and analytics at 451 Research, in a release. "Clustrix's combination of an SSD-based appliance and MySQL compatibility is a compelling alternative for enterprises struggling to manage with sharding MySQL."

Clustrix uniquely offers a hardware solution that provides linear scalability by simply adding hardware appliance nodes to the database cluster as demand mounts. The appliances sport a 4- or 8-core processor, 24-48GB RAM, and 448-896GB SSD, and the entire cluster is seen and managed as one database, according to InfoQ. Pricing starts at about $100,000.

Eliminating the need for database sharding, which Clustrix CEO Robin Purohit calls "a toxic event," is huge because of the manual work required of developers (three times the code), the complexity of not being able to do transactions across shards, and the difficulty of doing joins and other operations across the sharded data. You might recall that Purohit was an executive at HP Software before he joined Clustrix last October.
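
To make that sharding pain concrete, here is a minimal, hypothetical Python sketch -- not Clustrix code -- of the routing and scatter-gather logic developers end up hand-writing once a MySQL database is manually sharded. The in-memory dicts stand in for separate MySQL instances.

```python
# A minimal, purely illustrative sketch of why manual MySQL sharding gets painful.
# The two dicts below stand in for two separate MySQL instances; in a real
# deployment each would be its own server with its own connections and its own
# transaction scope.

SHARD_COUNT = 2
shards = [  # each element plays the role of one MySQL shard
    {"users": {}, "orders": {}},
    {"users": {}, "orders": {}},
]

def shard_for(user_id: int) -> dict:
    """Routing logic the application must carry everywhere: pick a shard by key."""
    return shards[user_id % SHARD_COUNT]

def create_user(user_id: int, name: str) -> None:
    shard_for(user_id)["users"][user_id] = {"name": name}

def add_order(user_id: int, order_id: int, amount: float) -> None:
    shard_for(user_id)["orders"][order_id] = {"user_id": user_id, "amount": amount}

def orders_joined_with_users() -> list:
    """A 'join' across shards: the database can't do it, so the application
    fetches from every shard and stitches the results together itself."""
    rows = []
    for shard in shards:  # scatter ...
        for order in shard["orders"].values():
            user = shard["users"].get(order["user_id"], {})
            rows.append({"user": user.get("name"), "amount": order["amount"]})
    return rows           # ... and gather, all in hand-written code

if __name__ == "__main__":
    create_user(1, "alice")   # lands on shard 1
    create_user(2, "bob")     # lands on shard 0
    add_order(1, 101, 19.99)
    add_order(2, 102, 5.00)
    # Transferring credit from alice to bob would touch two shards, so there is
    # no single BEGIN ... COMMIT that covers it -- the application has to fake
    # atomicity itself, which is exactly the pain Purohit calls "toxic."
    print(orders_joined_with_users())
```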

The value of the hybrid SQL-NoSQL database solutions reminds me of where server virtualization was a few years ago. A very good thing can quickly become a bad thing when sprawl and complexity undercut the benefits.

If Clustrix and its brethren can allow MySQL's value to grow unencumbered via NewSQL, then it will be of interest to more than start-ups. Enterprises building new applications for cloud, mobile, and transaction-intensive big data uses may well be seduced by the NewSQL way as well. And there will be a lot of skilled developers and DBAs at their disposal who know MySQL well.

You may also be interested in:

Five tips enterprise architects can learn from the Winchester Mystery House

This guest post comes courtesy of E.G. Nadhan of HP Enterprise Services.

By E.G.Nadhan, HP Enterprise Services

Not far from where The Open Group Conference was held in San Francisco this week is the Winchester Mystery House, once the personal residence of Sarah Winchester, widow of the gun magnate William Wirt Winchester. It took 38 years to build this house. Extensions and modifications were primarily based on a localized requirement du jour. Today, the house has several functional abnormalities that have no practical explanation.

To build a house right, you need a blueprint that details what is to be built, where, why and how based on the home owner's requirements (including cost). As the story goes, Sarah Winchester's priorities were different. However, if we don't follow this systematic approach as enterprise architects, we are likely to land up with some Winchester IT houses as well.

Or, have we already? Enterprises are always tempted to address the immediate problem at hand with surprisingly short timelines. Frequent implementations of sporadic, tactical additions evolve into a Winchester Architecture. Right or wrong, Sarah Winchester did this by choice. If enterprises of today land up with such architectures, it can only be by chance and not by choice.

Choice not chance

So, here are my tips to architect by choice rather than chance:
  1. Establish your principles: Fundamental architectural principles must be in place that serve as a rock solid foundation upon which architectures are based. These principles are based on generic, common-sense tenets that are refined to apply specifically to your enterprise.
  2. Install solid governance: The appropriate level of architectural governance must be in place with the participation from the stakeholders concerned. This governance must be exercised, keeping these architectural principles in context.
  3. Ensure business alignment: After establishing the architectural vision, Enterprise Architecture must lead with a clear definition of the overarching business architecture, which defines the manner in which the other architectural layers are realized. Aligning business to IT is one of the primary responsibilities of an enterprise architect.
  4. Plan for continuous evaluation: Enterprise Architecture is never really done. There are constant triggers (internal and external) for implementing improvements and extensions. Consumer behavior, market trends and technological evolution can trigger aftershocks within the foundational concepts that the architecture is based upon.
  5. Standardize: All that said, enterprises must be agile in order to react to such demands. A standardized and modularized approach is key. Standardization can be implemented in various shapes and forms. It could be the Architectural Development Method (TOGAF), the reference architecture for a Service Oriented Approach or the manner in which infrastructure services are provisioned across SOA and Cloud solutions.
Thus, it is interesting that The Open Group conference was miles away from the Winchester House. By choice, I would expect enterprise architects to go to The Open Group Conference. By chance, if you do happen by the Winchester House and are able to relate it to your Enterprise Architecture, please follow the tips above to architect by choice, and not by chance.

If you have instances where you have seen the Winchester pattern, do let me know by commenting here or following me on Twitter @NadhanAtHP.

This blog post was originally posted on HP’s Transforming IT Blog.

This guest post comes courtesy of E.G. Nadhan of HP Enterprise Services.

You may also be interested in:

Tuesday, February 7, 2012

Open Group security gurus dissect the cloud: Higher or lower risk?

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

For some, any move to the cloud -- at least the public cloud -- means a higher risk for security.

For others, relying more on a public cloud provider means better security. There’s more of a concentrated and comprehensive focus on security best practices that are perhaps better implemented and monitored centrally in the major public clouds.

And so which is it? Is cloud a positive or negative when it comes to cyber security? And what of hybrid models that combine public and private cloud activities, how is security impacted in those cases?

We posed these and other questions to a panel of security experts at last week's Open Group Conference in San Francisco to deeply examine how cloud and security come together -- for better or worse.

The panel: Jim Hietala, Vice President of Security for The Open Group; Stuart Boardman, Senior Business Consultant at KPN, where he co-leads the Enterprise Architecture Practice as well as the Cloud Computing Solutions Group; Dave Gilmour, an Associate at Metaplexity Associates and a Director at PreterLex Ltd.; and Mary Ann Mezzapelle, Strategist for Enterprise Services and Chief Technologist for Security Services at HP.

The discussion was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Is this notion of going outside the firewall fundamentally a good or bad thing when it comes to security?

Hietala: It can be either. Talking to security people in large companies, frequently what I hear is that with adoption of some of those services, their policy is either let’s try and block that until we get a grip on how to do it right, or let’s establish a policy that says we just don’t use certain kinds of cloud services. Data I see says that that’s really a failed strategy. Adoption is happening whether they embrace it or not.

The real issue is how you do that in a planned, strategic way, as opposed to letting services like Dropbox and other kinds of cloud collaboration services just happen. So it’s really about getting some forethought around how do we do this the right way, picking the right services that meet your security objectives, and going from there.

Gardner: Is cloud computing good or bad for security purposes?

Boardman: It’s simply a fact, and it’s something that we need to learn to live with.

What I've noticed through my own work is a lot of enterprise security policies were written before we had cloud, but when we had private web applications that you might call cloud these days, and the policies tend to be directed toward staff’s private use of the cloud.

Then you run into problems, because you read something in policy -- and if you interpret that as meaning cloud, it means you can’t do it. And if you say it’s not cloud, then you haven’t got any policy about it at all. Enterprises need to sit down and think, "What would it mean to us to make use of cloud services and to ask as well, what are we likely to do with cloud services?"

Gardner: Dave, is there an added impetus for cloud providers to be somewhat more secure than enterprises?

Gilmour: It depends on the enterprise that they're actually supplying to. If you're in a heavily regulated industry, you have a different view of what levels of security you need and want, and therefore what you're going to impose contractually on your cloud supplier. That means that the different cloud suppliers are going to have to attack different industries with different levels of security arrangements.

The problem there is that the penalty regimes are always going to say, "Well, if the security lapses, you're going to get off with two months of not paying" or something like that. That kind of attitude isn't going to go in this kind of security.

What I don’t understand is exactly how secure cloud provision is going to be enabled and governed under tight regimes like that.

An opportunity

Gardner: Jim, we've seen in the public sector that governments are recognizing that cloud models could be a benefit to them. They can reduce redundancy. They can control and standardize. They're putting in place some definitions, implementation standards, and so forth. Is the vanguard of correct cloud computing with security in mind being managed by governments at this point?

Hietala: I'd say that they're at the forefront. Some of these shared government services, where they stand up cloud and make it available to lots of different departments in a government, have the ability to do what they want from a security standpoint, not relying on a public provider, and get it right from their perspective and meet their requirements. They then take that consistent service out to lots of departments that may not have had the resources to get IT security right, when they were doing it themselves. So I think you can make a case for that.

Gardner: Stuart, being involved with standards activities yourself, does moving to the cloud provide a better environment for managing, maintaining, instilling, and improving on standards than enterprise by enterprise by enterprise? As I say, we're looking at a larger pool and therefore that strikes me as possibly being a better place to invoke and manage standards.

Boardman: Dana, that's a really good point, and I do agree. Also, in the security field, we have an advantage in the sense that there are quite a lot of standards out there to deal with interoperability, exchange of policy, exchange of credentials, which we can use. If we adopt those, then we've got a much better chance of getting those standards used widely in the cloud world than in an individual enterprise, with an individual supplier, where it’s not negotiation, but "you use my API, and it looks like this."


Having said that, there are a lot of well-known cloud providers who do not currently support those standards and they need a strong commercial reason to do it. So it’s going to be a question of the balance. Will we get enough specific weight of people who are using it to force the others to come on board? And I have no idea what the answer to that is.

Gardner: We've also seen that cooperation is an important aspect of security, knowing what’s going on on other people's networks, being able to share information about what the threats are, remediation, working to move quickly and comprehensively when there are security issues across different networks.

Is that a case, Dave, where having a cloud environment is a benefit? That is to say more sharing about what’s happening across networks for many companies that are clients or customers of a cloud provider rather than perhaps spotty sharing when it comes to company by company?

Gilmour: There is something to be said for that, Dana. Part of the issue, though, is that companies are individually responsible for their data. They're individually responsible to a regulator or to their clients for their data. The question then becomes that as soon as you start to share a certain aspect of the security, you're de facto sharing the weaknesses as well as the strengths.

So it’s a two-edged sword. One of the problems we have is that until we mature a little bit more, we won’t be able to actually see which side is the sharpest.

Gardner: So our premise that cloud is good and bad for security is holding up, but I'm wondering whether the same things that make you a risk in a private setting -- poor adhesion to standards, no good governance, too many technologies that are not being measured and controlled, not instilling good behavior in your employees and then enforcing that -- wouldn’t this be the same either way? Is it really cloud or not cloud, or is it good security practices or not good security practices? Mary Ann?

No accountability

Mezzapelle: You're right. It’s a little bit of that "garbage in, garbage out," if you don’t have the basic things in place in your enterprise, which means the policies, the governance cycle, the audit, and the tracking, because it doesn’t matter if you don’t measure it and track it, and if there is no business accountability.

David said it -- each individual company is responsible for its own security, but I would say that it’s the business owner that’s responsible for the security, because they're the ones that ultimately have to answer that question for themselves in their own business environment: "Is it enough for what I have to get done? Is the agility more important than the flexibility in getting to some systems or the accessibility for other people, as it is with some of the ubiquitous computing?"

So you're right. If it’s an ugly situation within your enterprise, it’s going to get worse when you do outsourcing, out-tasking, or anything else you want to call within the cloud environment. One of the things that we say is that organizations not only need to know their technology, but they have to get better at relationship management, understanding who their partners are, and being able to negotiate and manage that effectively through a series of relationships, not just transactions.

Gardner: If data and sharing data is so important, it strikes me that cloud component is going to be part of that, especially if we're dealing with business processes across organizations, doing joins, comparing and contrasting data, crunching it and sharing it, making data actually part of the business, a revenue generation activity, all seems prominent and likely.

So to you, Stuart, what is the issue now with data in the cloud? Is it good, bad, or just the same double-edged sword, and it just depends how you manage and do it?

Boardman: Dana, I don’t know whether we really want to be putting our data in the cloud, so much as putting the access to our data into the cloud. There are all kinds of issues you're going to run up against, as soon as you start putting your source information out into the cloud, not the least privacy and that kind of thing.

A bunch of APIs

What you can do is simply say, "What information do I have that might be interesting to people? If it’s a private cloud in a large organization elsewhere in the organization, how can I make that available to share?" Or maybe it's really going out into public. What a government, for example, can be thinking about is making information services available, not just what you go and get from them that they already published. But “this is the information," a bunch of APIs if you like. I prefer to call them data services, and to make those available.

So, if you do it properly, you have a layer of security in front of your data. You're not letting people come in and do joins across all your tables. You're providing information. That does require you then to engage your users in what is it that they want and what they want to do. Maybe there are people out there who want to take a bit of your information and a bit of somebody else’s and mash it together, provide added value. That’s great. Let’s go for that and not try and answer every possible question in advance.
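
Boardman's point about exposing access to data, rather than the data itself, can be sketched in a few lines. This is a purely illustrative Python example; the table, fields, and service are invented here, not anything from the panelists.

```python
# Hypothetical illustration of a "data service" layer: callers get curated
# information, never direct access to the underlying tables.

# Raw source data stays behind the service boundary (names invented here).
_RAW_CUSTOMERS = [
    {"id": 1, "name": "Ann", "postcode": "94105", "ssn": "xxx-xx-1234"},
    {"id": 2, "name": "Raj", "postcode": "94105", "ssn": "xxx-xx-5678"},
]

def customers_per_postcode() -> dict:
    """The published data service: an aggregate that is useful to consumers
    but exposes none of the sensitive columns (no SSNs, no names)."""
    counts: dict = {}
    for row in _RAW_CUSTOMERS:
        counts[row["postcode"]] = counts.get(row["postcode"], 0) + 1
    return counts

# A consumer -- possibly in another organization or a public cloud -- calls the
# service and can mash the result up with its own data, but it can never run
# ad hoc joins against the raw customer table.
if __name__ == "__main__":
    print(customers_per_postcode())   # {'94105': 2}
```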

Gardner: Dave, do you agree with that, or do you think that there is a place in the cloud for some data?

Gilmour: There's definitely a place in the cloud for some data. I get the impression that there is going to drive out of this something like the insurance industry, where you'll have a secondary cloud. You'll have secondary providers who will provide to the front-end providers. They might do things like archiving and that sort of thing.

Now, if you have that situation where your contractual relationship is two steps away, then you have to be very confident and certain of your cloud partner, and it has to actually therefore encompass a very strong level of governance.

The other issue you have is that you've got then the intersection of your governance requirements with that of the cloud provider’s governance requirements. Therefore you have to have a really strongly -- and I hate to use the word -- architected set of interfaces, so that you can understand how that governance is actually going to operate.

Gardner: Wouldn’t data perhaps be safer in a cloud than if they have a poorly managed network?

Mezzapelle: There is data in the cloud and there will continue to be data in the cloud, whether you want it there or not. The best organizations are going to start understanding that they can’t control it that way and that perimeter-like approach that we've been talking about getting away from for the last five or seven years.

So what we want to talk about is data-centric security, where you understand, based on role or context, who is going to access the information and for what reason. I think there is a better opportunity for services like storage, whether it’s for archiving or for near term use.

There are also other services that you don’t want to have to pay for 12 months out of the year, but that you might need independently. For instance, when you're running a marketing campaign, you already share your data with some of your marketing partners. Or if you're doing your payroll, you're sharing that data through some of the national providers.

Data in different places

So there already is a lot of data in a lot of different places, whether you want cloud or not, but the context is, it’s not in your perimeter, under your direct control, all of the time. The better you get at managing it wherever it is specific to the context, the better off you will be.

Hietala: It’s a slippery slope [when it comes to customer data]. That’s the most dangerous data to stick out in a cloud service, if you ask me. If it's personally identifiable information, then you get the privacy concerns that Stuart talked about. So to the extent you're looking at putting that kind of data in a cloud, looking at the cloud service and trying to determine if we can apply some encryption, apply the sensible security controls to ensure that if that data gets loose, you're not ending up in the headlines of The Wall Street Journal.
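
Hietala's advice to apply encryption before sensitive data reaches a cloud service might look roughly like the sketch below. It uses the third-party Python cryptography package's Fernet API; the "cloud bucket" is simulated with a dict, and the key handling is deliberately oversimplified.

```python
# Minimal sketch: encrypt personally identifiable information client-side
# before it ever leaves for a cloud store, so a leak of the stored object
# alone does not expose the data. Requires the third-party "cryptography"
# package; the "upload" here is just a dict standing in for a cloud bucket.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # keep this in your own key management system,
fernet = Fernet(key)            # never alongside the data in the cloud

cloud_bucket = {}               # stand-in for an object store in a public cloud

def put_record(record_id: str, pii: str) -> None:
    cloud_bucket[record_id] = fernet.encrypt(pii.encode("utf-8"))

def get_record(record_id: str) -> str:
    return fernet.decrypt(cloud_bucket[record_id]).decode("utf-8")

if __name__ == "__main__":
    put_record("cust-42", "Mary Ann Mezzapelle, 555-01-9999")
    print(cloud_bucket["cust-42"][:16], "...")   # ciphertext at rest
    print(get_record("cust-42"))                 # plaintext only client-side
```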

Gardner: Dave, you said there will be different levels on a regulatory basis for security. Wouldn’t that also play with data? Wouldn't there be different types of data and therefore a spectrum of security and availability to that data?

Gilmour: You're right. If we come back to Facebook as an example, Facebook is data that, even if it's data about our known customers, it's stuff that they have put out there with their will. The data that they give us, they have given to us for a purpose, and it is not for us then to distribute that data or make it available elsewhere. The fact that it may be the same data is not relevant to the discussion.

Three-dimensional solution

That’s where I think we are going to end up with not just one layer or two layers. We're going to end up with a sort of a three-dimensional solution space. We're going to work out exactly which chunk we're going to handle in which way. There will be significant areas where these things crossover.

The other thing we shouldn’t forget is that data includes our software, and that’s something that people forget. Software nowadays is out in the cloud, under current ways of running things, and you don't even always know where it's executing. So if you don’t know where your software is executing, how do you know where your data is?

It's going to have to be just handled one way or another, and I think it's going to be one of these things where it's going to be shades of gray, because it cannot be black and white. The question is going to be, what's the threshold shade of gray that's acceptable.

Gardner: Mary Ann, to this notion of the different layers of security for different types of data, is there anything happening in the market that you're aware of that’s already moving in that direction?

Mezzapelle: The experience that I have is mostly in some of the business frameworks for particular industries, like healthcare and what it takes to comply with the HIPAA regulation, or in the financial services industry, or in consumer products where you have to comply with the PCI regulations.

There has continued to be an issue around information lifecycle management, which is categorizing your data. Within a company, you might have had a document that you coded private, confidential, top secret, or whatever. So you might have had three or four levels for a document.

You've already talked about how complex it's going to be as you move into trying to understand, not only for that data, that the name Mary Ann Mezzapelle happens to be in five or six different business systems over 100 instances around the world.

That's the importance of something like an enterprise architecture that can help you understand that you're not just talking about the technology components, but the information, what they mean, and how they are prioritized or critical to the business, which sometimes comes up in a business continuity plan from a system point of view. That's where I've advised clients on where they might start looking to how they connect the business criticality with a piece of information.

One last thing. Those regulations don't necessarily mean that you're secure. It makes for good basic health, but that doesn't mean that it's ultimately protected. You have to do a risk assessment based on your own environment and the bad actors that you expect and the priorities based on that.

Leaving security to the end

Boardman: I just wanted to pick up here, because Mary Ann spoke about enterprise architecture. One of my bugbears -- and I call myself an enterprise architect -- is that, we have a terrible habit of leaving security to the end. We don't architect security into our enterprise architecture. It's a techie thing, and we'll fix that at the back. There are also people in the security world who are techies and they think that they will do it that way as well.

I don’t know how long ago it was published, but there was an activity to look at bringing the SABSA Methodology from security together with TOGAF. There was a white paper published a few weeks ago.

The Open Group has been doing some really good work on bringing security right in to the process of EA.

Hietala: In the next version of TOGAF, which has already started, there will be a whole emphasis on making sure that security is better represented in some of the TOGAF guidance. That's ongoing work here at The Open Group.

Gardner: As I listen, it sounds as if the in-the-cloud or out-of-the-cloud security continuum is perhaps the wrong way to look at it. If you have a lifecycle approach to services and to data, then you'll have a way in which you can approach data uses for certain instances, certain requirements, and that would then apply to a variety of different private cloud, public cloud, and hybrid cloud scenarios.

Is that where we need to go, perhaps have more of this lifecycle approach to services and data that would accommodate any number of different scenarios in terms of hosting access and availability? The cloud seems inevitable. So what we really need to focus on are the services and the data.

Boardman: That’s part of it. That needs to be tied in with the risk-based approach. So if we have done that, we can then pick up on that information and we can look at a concrete situation, what have we got here, what do we want to do with it. We can then compare that information. We can assess our risk based on what we have done around the lifecycle. We can understand specifically what we might be thinking about putting where and come up with a sensible risk approach.

You may come to the conclusion in some cases that the risk is too high and the mitigation too expensive. In others, you may say, no, because we understand our information and we understand the risk situation, we can live with that, it's fine.

Gardner: It sounds as if we are coming at this as an underwriter for an insurance company. Is that the way to look at it?

Current risk

Gilmour: That’s eminently sensible. You have the mortality tables, you have the current risk, and you just work the two together and work out what's the premium. That's probably a very good paradigm to give us guidance actually as to how we should approach intellectually the problem.

Mezzapelle: One of the problems is that we don’t have those actuarial tables yet. That's a little bit of an issue for a lot of people when they talk about, "I've got $100 to spend on security. Where am I going to spend it this year? Am I going to spend it on firewalls? Am I going to spend it on information lifecycle management assessment? What am I going to spend it on?" That’s some of the research that we have been doing at HP is to try to get that into something that’s more of a statistic.

So, when you have a particular project that does a certain kind of security implementation, you can see what the business return on it is and how it actually lowers risk. We found that it’s better to spend your money on getting a better system to patch your systems than it is to do some other kind of content filtering or something like that.

Gardner: Perhaps what we need is the equivalent of an Underwriters Laboratories (UL) for permeable organizational IT assets, where the security stamp of approval comes in high or low. Then, you could get your insurance insight -- maybe something for The Open Group to look into. Any thoughts about how standards and a consortium approach would come into that?

Hietala: I don’t know about the UL for all security things. That sounds like a risky proposition.

Gardner: It could be fairly popular and remunerative.

Hietala: It could.

Mezzapelle: An unending job.

Hietala: I will say we have one active project in the Security Forum that is looking at trying to allow organizations to measure and understand risk dependencies that they inherit from other organizations.

So if I'm outsourcing a function to XYZ corporation, being able to measure what risk am I inheriting from them by virtue of them doing some IT processing for me, could be a cloud provider or it could be somebody doing a business process for me, whatever. So there's work going on there.

I heard just last week about an NSF-funded project here in the U.S. to do the same sort of thing, to look at trying to measure risk in a predictable way. So there are things going on out there.

Gardner: We have to wrap up, I'm afraid, but Stuart, it seems as if currently it's the larger public cloud providers, the likes of Amazon and Google among others, that might be playing the role of all of these entities we are talking about. They are their own self-insurer. They are their own underwriter. They are their own risk assessor, like a UL. Do you think that's going to continue to be the case?

Boardman: No, I think that as cloud adoption increases, you will have a greater weight of consumer organizations who will need to do that themselves. You look at the question that it’s not just responsibility, but it's also accountability. At the end of the day, you're always accountable for the data that you hold. It doesn’t matter where you put it and how many other parties they subcontract that out to.

The weight will change

So there's a need to have that, and as the adoption increases, there's less fear and more, "Let’s do something about it." Then, I think the weight will change.

Plus, of course, there are other parties coming into this world, the world that Amazon has created. I'd imagine that HP is probably one of them as well, but all the big names in IT are moving in here, and I suspect that also for those companies there's a differentiator in knowing how to do this properly in their history of enterprise involvement.

So yeah, I think it will change. That's no offense to Amazon, etc. I just think that the balance is going to change.

Gilmour: Yes. I think that's how it has to go. The question that then arises is, who is going to police the policeman and how is that going to happen? Every company is going to be using the cloud. Even the cloud suppliers are using the cloud. So how is it going to work? It’s one of these never-decreasing circles.

Mezzapelle: At this point, I think it’s going to be more evolution than revolution, but I'm also one of the people who've been in that part of the business -- IT services -- for the last 20 years and have seen it morph in a little bit different way.

Stuart is right that there's going to be a convergence of the consumer-driven, cloud-based model, which Amazon and Google represent, with an enterprise approach that corporations like HP are representing. It’s somewhere in the middle where we can bring the service level commitments, the options for security, the options for other things that make it more reliable and risk-averse for large corporations to take advantage of it.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

HP provides more picks and shovels to cloud miners

In two separate recent announcements, HP has affirmed its goal of being the neutral supplier of choice for all things cloud.

Last week, HP delivered HP Discovery and Dependency Mapping Advanced (DDMA) Content Pack 10, bringing with it the ability to better manage cloud instances across the enterprise-public cloud continuum, including deep discovery of virtualized workloads' performance inside of Amazon and VMware vCloud clouds.

Then on Tuesday, HP further thrust its global market-leading LoadRunner performance testing suite -- via partners -- into development clouds, known as platform as a service (PaaS) providers. This is clearly aimed at the fast-growing mobile development and greenfield SMB development spaces.

Interestingly, neither the cloud operations efficiency benefits of the updated DDMA nor the HP LoadRunner-in-the-Cloud offering will be initially offered inside of any HP public clouds. These formerly enterprise-targeted development and operations tools are being extended to more private and public cloud uses -- but via cloud ecosystems, partners and channels. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Picks and shovels

While HP is not taking the arrival of its own public cloud offerings off the table -- indeed they have committed to them in the past -- they seem to be happy for now to develop the picks and shovels and provide them to the miners and the current mine owners.

The strategy lessens the potential for conflict that other cloud providers such as Microsoft, Google, Amazon, Salesforce.com and VMware can face (no mention yet of Microsoft Azure). And it makes HP more amenable as a supplier to those public clouds, which may be of interest to them, given both HP's technologies and their vast and global installed base of enterprise customers.

Digging more deeply into the news items, the DDMA Content Pack 10 brings a critical part of the HP IT Performance Suite to more types of cloud uses, as well as back into more kinds of mainframes, particularly for the IBM iSeries servers. Reaching more deeply into legacy workloads and across various cloud and hybrid models allows for more automation of those apps and runtimes, and fosters far better change management when those loads need to be adjusted to accommodate varying demands.

HP is also enabling any IP-pingable device to be discovered, mapped, and managed via the various online deployments. The overall benefit is more of a lifecycle approach to management of apps and devices across legacy and hybrid environments, and a single view -- as a business service -- of all the parts that support the apps and processes, regardless of their locations.

Discovery capabilities have also been added for HP ServiceGuard, the GlassFish open-source server and VMware Datastore. In addition, integration has been enhanced to include CiscoWorks LAN Management Solution (LMS), Aperture VISTA, NNMi, Application Signature and Service-Now. Functionality has also been added to the integration with Troux. Finally, Content Pack 10 provides new features such as support for SAP JCo3, Oracle VM Server for SPARC, UCMDB-to-XML export and a BMC Atrium pull adapter.

Three partners

On the LoadRunner news today, HP has worked so far with three partners that will take the LoadRunner on-demand services out to their specific customers, on the public clouds of their choice. The initial partners are Orasi Software Inc., Genilogix and J9 Technologies. These partners will set the pricing, but the performance testing services are delivered on a pay-as-you-go basis.

"This is unique. It's the easiest, lowest-cost way to bring LoadRunner capabilities to the cloud," said Matt Morgan, senior director, Product and Solution Marketing, Software, HP.

Incidentally, the testing phase of the cloud PaaS proposition is essential for quick devops and RAD benefits. It further allows any investments that enterprises have made in LoadRunner to be extended via the cloud providers to developers working on new mobile projects, or for them to control and view testing results when using third-party developers.

By straddling the cloud-enterprise ecosystem, HP may be able to bring more value to the channel partners and end users -- especially SMBs -- than trying to build the whole cloud first and putting in services later. It's the ecosystem of services, after all, not the location of them, that matters most.

You may also be interested in:

Sunday, February 5, 2012

San Francisco Conference observations: Enterprise transformation, enterprise architecture, SOA and a splash of cloud computing

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.

By Chris Harding, The Open Group

This week I have been at The Open Group conference in San Francisco. The theme was Enterprise Transformation which, in simple terms, means changing how your business works to take advantage of the latest developments in IT.

Evidence of these developments is all around. I took a break and went for coffee and a sandwich, to a little cafe down on Pine and Leavenworth that seemed to be run by and for the Millennium generation. True to type, my server pulled out a cellphone, with a device attached through which I swiped my credit card; an app read my screen-scrawled signature and the transaction was complete.

Then dinner. We spoke to the hotel concierge, she tapped a few keys on her terminal and, hey presto, we had a window table at a restaurant on Fisherman's Wharf. No lengthy phone negotiations with the Maitre d'. We were just connected with the resource that we needed, quickly and efficiently.

The power of ubiquitous technology to transform the enterprise was the theme of the inspirational plenary presentation given by Andy Mulholland, Global CTO at Capgemini. Mobility, the Cloud, and big data are the three powerful technical forces that must be harnessed by the architect to move the business to smarter operation and new markets.

Jeanne Ross of the MIT Sloan School of Management shared her recipe for architecting business success, with examples drawn from several major companies. Indomitable and inimitable, she always challenges her audience to think through the issues. This time we responded with, "Don't small companies need architecture too?" Of course they do, was the answer, but the architecture of a big corporation is very different from that of a corner cafe.

Corporations don't come much bigger than Nissan. Celso Guiotoko, Corporate VP and CIO at the Nissan Motor Company, told us how Nissan are using enterprise architecture for business transformation. Highlights included the concept of information capitalization, the rationalization of the application portfolio through SOA and reusable services, and the delivery of technology resource through a private cloud platform.

The set of stimulating plenary presentations on the first day of the conference was completed by Lauren States, VP and CTO Cloud Computing and Growth Initiatives at IBM. Everyone now expects business results from technical change, and there is huge pressure on the people involved to deliver results that meet these expectations. IT enablement is one part of the answer, but it must be matched by business process excellence and values-based culture for real productivity and growth.

My role in The Open Group is to support our work on Cloud Computing and SOA, and these activities took all my attention after the initial plenary. If you had thought, five years ago, that no technical trend could possibly generate more interest and excitement than SOA, Cloud Computing would now be proving you wrong.

Interest in SOA continues

But interest in SOA continues, and we had a SOA stream including presentations of forward thinking on how to use SOA to deliver agility, and on SOA governance, as well as presentations describing and explaining the use of key Open Group SOA standards and guides: the Service Integration Maturity Model (OSIMM), the SOA Reference Architecture, and the Guide to using TOGAF for SOA.

We then moved into the Cloud, with a presentation by Mike Walker of Microsoft on why Enterprise Architecture must lead Cloud strategy and planning. The “why” was followed by the “how”: Zapthink's Jason Bloomberg described Representational State Transfer (REST), which many now see as a key foundational principle for Cloud architecture. But perhaps it is not the only principle; a later presentation suggested a three-tier approach with the client tier, including mobile devices, accessing RESTful information resources through a middle tier of agents that compose resources and carry out transactions (ACT).
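
As a rough illustration of that three-tier idea (and only an illustration -- this is not taken from the presentations), the Python sketch below shows a middle-tier agent composing two RESTful information resources for a client. The resource calls are stubbed as local functions; in practice each would be an HTTP GET against a REST URI like the example.com placeholders in the comments.

```python
# Hypothetical sketch of the three-tier pattern: client tier -> agent tier ->
# RESTful information resources. The resources are stubbed locally so the
# sketch runs on its own.

def get_customer_resource(customer_id: int) -> dict:
    # stands in for: GET https://api.example.com/customers/{id}
    return {"id": customer_id, "name": "Ann", "home_airport": "SFO"}

def get_departures_resource(airport: str) -> list:
    # stands in for: GET https://api.example.com/airports/{code}/departures
    return [{"flight": "XY123", "to": "BOS"}, {"flight": "XY456", "to": "NRT"}]

def agent_departures_for_customer(customer_id: int) -> dict:
    """Middle-tier agent: composes the two resources into one representation
    for the client tier (which could be a mobile app)."""
    customer = get_customer_resource(customer_id)
    flights = get_departures_resource(customer["home_airport"])
    return {"customer": customer["name"], "departures": flights}

if __name__ == "__main__":
    print(agent_departures_for_customer(1))
```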

In the evening we had a CloudCamp, hosted by The Open Group and conducted as a separate event by the CloudCamp organization. The original CloudCamp concept was of an "unconference" where early adopters of Cloud Computing technologies exchange ideas. Its founder, Dave Nielsen, is now planning to set up a demo center where those adopters can experiment with setting up private clouds. This transition from idea to experiment reflects the changing status of mainstream cloud adoption.

The public conference streams were followed by a meeting of the Open Group Cloud Computing Work Group. This is currently pursuing nine separate projects to develop standards and guidance for architects using cloud computing. The meeting in San Francisco focused on one of these - the Cloud Computing Reference Architecture. It compared submissions from five companies, also taking into account ongoing work at the U.S. National Institute of Standards and Technology (NIST), with the aim of creating a base from which to create an Open Group reference architecture for Cloud Computing. This gave a productive finish to a busy week of information gathering and discussion.

Ralph Hitz of Visana, a health insurance company based in Switzerland, made an interesting comment on our reference architecture discussion. He remarked that we were not seeking to change or evolve the NIST service and deployment models. This may seem boring, but it is true, and it is right. Cloud Computing is now where the automobile was in 1920. We are pretty much agreed that it will have four wheels and be powered by gasoline. The business and economic impact is yet to come.

So now I'm on my way to the airport for the flight home. I checked in online, and my boarding pass is on my cellphone. Big companies, as well as small ones, now routinely use mobile technology, and my airline has a frequent-flyer app. It's just a shame that they can't manage a decent cup of coffee.

This guest post comes courtesy of Chris Harding, Forum Director for SOA and Semantic Interoperability at The Open Group.

You may also be interested in:

Wednesday, February 1, 2012

EMC's Hadoop strategy cuts to the chase

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

To date, Big Storage has been locked out of Big Data. It’s been all about direct attached storage for several reasons. First, Advanced SQL players have typically optimized architectures from data structure (using columnar), unique compression algorithms, and liberal usage of caching to juice response over hundreds of terabytes. For the NoSQL side, it’s been about cheap, cheap, cheap along the Internet data center model: have lots of commodity stuff and scale it out. Hadoop was engineered exactly for such an architecture; rather than speed, it was optimized for sheer linear scale.

Over the past year, most of the major platform players have planted their table stakes with Hadoop. Not surprisingly, IT household names are seeking to somehow tame Hadoop and make it safe for the enterprise.

Up 'til now, anybody with armies of the best software engineers that Internet firms could buy could brute-force their way to scaling out humungous clusters and, if necessary, invent their own technology, then share and harvest from the open source community at will. That's hardly a suitable scenario for the enterprise mainstream, so the common thread behind the diverse strategies of IBM, EMC, Microsoft, and Oracle toward Hadoop has been, not surprisingly, to make Hadoop more approachable.

What’s been conspicuously absent so far was a play from Big Optimized Storage. The conventional wisdom is that SAN or NAS are premium, architected systems whose costs might be prohibitive when you talk petabytes of data.

Similarly, so far there has been a different operating philosophy behind the first generation implementations from the NoSQL world that assumed that parts would fail, and that five nines service levels were overkill. And anyway, the design of Hadoop brute forced the solution: replicate to have three unique copies of the data distributed around the cluster, as hardware is cheap.
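
As a rough, hypothetical sketch of that design point, the Python snippet below simulates HDFS-style placement: every block gets three replicas on different nodes, so losing any one cheap machine still leaves two live copies. Real HDFS sets the factor with its dfs.replication setting and adds rack awareness; this toy version just spreads copies round-robin.

```python
# Rough, hypothetical simulation of the Hadoop-style answer to cheap hardware:
# keep three copies of every block on different nodes so a single failure
# cannot lose data. This only models placement, nothing else.
import itertools

NODES = [f"node{i}" for i in range(1, 7)]        # six commodity servers
REPLICATION = 3

def place_block(block_id: int) -> list:
    """Choose three distinct nodes for this block's replicas (round-robin)."""
    start = block_id % len(NODES)
    ring = itertools.islice(itertools.cycle(NODES), start, start + REPLICATION)
    return list(ring)

def surviving_copies(placement: dict, failed_node: str) -> dict:
    """After a node dies, every block should still have at least two replicas."""
    return {blk: [n for n in nodes if n != failed_node]
            for blk, nodes in placement.items()}

if __name__ == "__main__":
    placement = {blk: place_block(blk) for blk in range(5)}
    print(placement)
    print(surviving_copies(placement, "node2"))   # no block drops below 2 copies
```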

As Big Data gains traction in the enterprise, some of it will certainly fit this pattern of something being better than nothing, as the result is unique insights that would not otherwise be possible. For instance, if your running analysis of Facebook or Twitter goes down, it probably won’t take the business with it. But as enterprises adopt Hadoop – and as pioneers stretch Hadoop to new operational use cases such as what Facebook is doing with its messaging system – those concepts of mission-criticality are being revisited.

And so, ever since EMC announced last spring that its Greenplum unit would start supporting and bundling different versions of Hadoop, we’ve been waiting for the other shoe to drop: When would EMC infuse its Big Data play with its core DNA, storage?

Today, EMC announced that its Isilon networked storage system was adding native support for Apache Hadoop’s HDFS file system. There were some interesting nuances to the rollout.

Big vendors feeling their way

It’s interesting to see how IT household names are cautiously navigating their way into unfamiliar territory. EMC becomes the latest, after Oracle and Microsoft, to calibrate their Hadoop strategy in public.

Oracle announced its Big Data appliance last fall before it lined up its Hadoop distribution. Microsoft ditched its Dryad project built around its HPC Server. Now EMC has recalibrated its Hadoop strategy; when it first unveiled that strategy last spring, the spotlight was on MapR’s proprietary alternatives to the HDFS file system of Apache Hadoop. It’s interesting that vendors’ initial announcements have either been vague, or have been tweaked as they’ve waded into the market. More about EMC’s shift below.


For EMC, HDFS is the mainstream

MapR’s strategy (and IBM’s along with it, regarding GPFS) has prompted debate and concern in the Hadoop community about commercial vendors forking the technology. As we’ve ranted previously, Hadoop’s growth will be tied not only to the megaplatform vendors that support it, but also to the third-party tools and solutions ecosystem that grows around it.

For such a thing to happen, ISVs and consulting firms need to have a common target to write against, and having forked versions of Hadoop won’t exactly grow large partner communities.

Regarding EMC, the original strategy was two Greenplum Hadoop editions: a Community Edition with a free Apache distro and an Enterprise Edition that bundled MapR, both under the Greenplum HD branding umbrella. At first blush, it looked like EMC was going to earn the bulk of its money from the proprietary side of the Hadoop business.

What’s significant is that the new announcement of Isilon support pertains only to the HDFS open source side. More to the point, EMC is rebranding and subtly repositioning its Greenplum Hadoop offerings: Greenplum HD is the Apache HDFS edition with the optional Isilon support, and Greenplum MR is the MapR version, which is niche-targeted toward advanced Hadoop use cases that demand higher performance.

Coming atop recent announcements from Oracle and Microsoft, which have come out clearly on the side of OEM’ing Apache rather than anything limited or proprietary, this amounts to an unqualified endorsement of Apache Hadoop/HDFS as not only the formal, but also the de facto standard.

This reflects emerging conventional wisdom that the enterprise mainstream is leery of lock-in to anything that smells proprietary for technology where they are still on the learning curve. Other forks may emerge, but they will not be at the base file system layer. This leaves IBM and MapR pigeonholed – admittedly, there will be API compatibility, but clearly both are swimming upstream.

Central Storage is newest battleground

As noted earlier, Hadoop’s heritage has been the classic Internet data center scale-out model. The advantage is that, leveraging Hadoop’s highly linear scalability, organizations could expand their clusters quite easily by adding more commodity servers and disks. Pioneers or purists would scoff at the notion of an appliance approach because it was always simply scaling out inexpensive, commodity hardware, rather than paying premiums for big vendor boxes.

In blunt terms, the choice is whether you pay now or pay later. As mentioned before, do-it-yourself compute clusters require sweat equity – you need engineers who know how to design, deploy, and operate them. The flipside is that many, arguably most, corporate IT organizations either lack the skills or the capital. There are various solutions to what might otherwise appear to be a Hobson’s Choice:

  • Go to a cloud service provider that has already created the infrastructure, such as what Microsoft is offering with its Hadoop-on-Azure services;
  • Look for a happy, simpler medium such as Amazon’s Elastic MapReduce on its DynamoDB service;
  • Subscribe to SaaS providers that offer Hadoop applications (e.g., social network analysis, smart grid as a service) as a service;

  • Get a platform and have a systems integrator put it together for you (key to IBM’s BigInsights offering, and applicable to any SI that has a Hadoop practice)
  • Go to an appliance or engineered systems approach that puts Hadoop and/or its subsystems in a box, such as with Oracle Big Data Appliance or EMC’s Greenplum DCA. The systems engineering is mostly done for you, but the increments for growing the system can be much larger than simply adding a few x86 servers here or there (Greenplum HD DCA can scale in groups of 4 server modules). Entry or expansion costs are not necessarily cheap, but then again, you have to balance capital cost against labor.
  • Surrounding Hadoop infrastructure with solutions. This is not a mutually exclusive strategy; unless you’re Cloudera or Hortonworks, which make their business bundling and supporting the core Apache Hadoop platform, most of the household names will bundle frameworks, algorithms, and eventually solutions that in effect place Hadoop under the hood. For EMC, the strategy is their recent announcement of a Unified Analytics Platform (UAP) that provides collaborative development capabilities for Big Data applications. EMC is (or will be) hardly alone here.

With EMC’s new offering, the scale-up option tackles the next variable: storage. This is the natural progression of a market that will address many constituencies, and where there will be no single silver bullet that applies to all.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

You may also be interested in:

Tuesday, January 31, 2012

Enterprise architects play key role in transformation, data analytics value -- but they need to act fast, say Open Group speakers

Good data management, analytics, and helping to shape the goals of the business are keys to transforming the enterprise through impactful enterprise architecture (EA). That was the theme, from different perspectives, presented by a series of plenary speakers this week at The Open Group Conference in San Francisco.

Jeanne Ross, Director and Principal Research Scientist at MIT's Center for Information Systems Research, opened Monday's plenary session, telling the attendees that the stakes are high for EA, which needs to show swift success in the new digital economy. Enterprise architects also now need to help their organizations better use new services and instill a "value cycle." [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Coming from the siloed past in IT, companies are now moving to business service-driven processes across various resources, Ross said. But they need to recognize the forces around consumption of such services, not just the implementation.

Making good data management -- a "single source of truth" -- a priority is also at the heart of making EA valuable, said Ross. Ensuring the quality of data and the speed of data refresh will do more to raise enterprise architects' standing than just about anything else, she said. Ross studies how firms develop competitive advantage through the implementation and reuse of digitized platforms.

She is also the co-author of three books: IT Governance: How Top Performers Manage IT Decision Rights for Superior Results, Enterprise Architecture As Strategy: Creating a Foundation for Business Execution, and IT Savvy: What Top Executives Must Know to Go from Pain to Gain.

I also interviewed Ross on enterprise transformation issues before the conference.

IT-enablement isn't enough, Ross said, because companies typically under-utilize new systems and applications. It's not that we can't build them, she said of systems, but that companies aren't using them to their potential. Architects need to consider this and then market and evangelize solutions.

And EAs need to be more involved with making quality data take center stage in their companies. "You don't get good analytics with bad data," Ross said. "The secret to good EA is to put information in every person's hands so they can use data better." And that in turn will help transform the business and spur added innovation using IT systems and good architecture principles.

Most senior executives aren't very good at combining business and technology strategies, Ross said, and she outlined the architect's elevated role in helping their bosses deliver increased business value:
  • Help senior execs clarify business goals
  • Identify architectural capabilities that can be readily exploited
  • Present options and their implications for business goals
  • Build capabilities incrementally
She closed out, getting applause from the audience, by predicting, "Some day CIOs are going to report to the enterprise architect, because that's the way it ought to be."

Impressive cost reduction

The second plenary speaker, Celso Guiotoko, Corporate Vice President and CIO of Nissan Motor Co., Ltd., told attendees that business value sits at the top of Nissan's IT principles, with information as an asset coming next, followed by reducing complexity.

Using these principles, Nissan in 2005 developed "BEST" as an IT mid-term plan and significantly improved the efficiency of its information systems. BEST is an acronym for business alignment, EA, selective sourcing, and technology simplification.

This was followed in 2009 with the development of the "Change" program, which provided the basis for further advances by changing people, technology, and "process." And, in 2011, the next IT mid-term plan "VITESSE" was launched, designed to bring direct profit to the company. VITESSE encompasses value, innovation, technology, simplification, and service excellence. Through the various initiatives, Nissan has reduced IT cost by over 40 percent, going from a cost per user of $1.09 to $0.63.
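
For what it's worth, the quoted per-user figures square with the headline number, assuming they are directly comparable:

$$\frac{1.09 - 0.63}{1.09} \approx 0.42$$

That is, roughly a 42 percent drop in IT cost per user, consistent with the "over 40 percent" claim.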

The transformed enterprise

Andy Mulholland, Global Chief Technology Officer and Corporate Vice President at Capgemini, focused on the transformed enterprise and cloud trends, as well as the effect of new devices and social networking. Forty million tablets and 70 million smartphones are having a huge impact on how workers and consumers expect to work and shop.

The "bring your own device" phenomenon is forcing a change in thinking for enterprises, Mulholland said, as two environments are developing -- inside IT and outside IT. Typically back-end activities operate inside the firewall, while front-end people and activities operate outside the firewall, yet people nowadays want to be able to use smartphones and tablets for both personal and work tasks.

This has led to a situation in which workers are increasingly going outside IT to buy services. Mulholland quoted a Gartner prediction that up to 35 percent of IT expenditures will be outside the IT department by 2015. Other industry analysts like IDC have placed the figure higher.

Because of this, IT faces a huge “re-integration project” to bring together the inside and outside services in a rational way, Mulholland said, adding that the transformed enterprise needs to focus on the productivity of people and innovative business models.

I interviewed Mulholland a few weeks ago and we delved even deeper into the cloud duality issues now coming to the fore of enterprise technology issues and planning. I was also intrigued by a Wall Street Journal piece today on how the US faces a new tech boom. It was aligned with much of what Mulholland was saying.

The key to doing this “re-integration project,” according to Mulholland, is governance, and the industry really lacks a good cloud governance model, meaning that many businesses are already in trouble. However, enterprises shouldn't let that get in the way of progress. Mulholland advised, "If business wants something radically different from you, don't try to stop it. Try to understand it and take control of it."

Driving IT transformation

Lauren States, Vice President and Chief Technology Officer, Cloud Computing and Growth Initiatives, IBM, said that transforming the enterprise requires a heavy emphasis on analytics and a successful integration of analytics and IT.

States drew on IBM's decades-long journey of constant transformation, relying on business process excellence, values-based culture, and IT-enablement. This has led to $1.5 billion in IT savings since 2005 as well as avoiding over $20 million in expenses over five years with a private analytics cloud, she said.

According to States, CMOs are overwhelmingly underprepared for the data explosion; they recognize the need to invest in and integrate technology and analysis, and to treat analytics as a business differentiator.

CEOs and CIOs are both highly focused on insights, clients, and people skills, States said, feeding into what she called the "new reality": the need to harvest and pass along insights and build trusted relationships.

States' takeaway: We're at the beginning of a major change, much like the PC revolution three decades ago. The cloud's sweet spot now, she says, is in bringing new innovation and insights to marketing, sales and customer service.

No need to wait

Speaker Bill Rouse, executive director of the Tennenbaum Institute at Georgia Tech, said that many enterprises wait too long to change, with the decision to transform dragging on until the damage is beyond repair. As evidence, he noted that in the past 25 years, 1,000 companies have dropped from the Fortune 500 list -- showing that enterprise transformation has a high failure rate, and that waiting for the right time to change is a risky business plan.

Moreover, enterprises seeking transformation need to look at the full ecosystem a business operates in to transform effectively, said Rouse. Business ecosystems are co-creating high-value services, expanding transformation across supply chains, and that is an important new dimension, he added.

Using analytics better to support evidence-based decision making is transformative and should be a priority, says Rouse. And architecture-oriented thinking can be transformative in itself, he said.

Cyber security threats

On the topic of cyber security, plenary speaker Joseph Menn, cyber security correspondent for the Financial Times and author of Fatal System Error: The Hunt for the New Crime Lords Who are Bringing Down the Internet, made it clear that business as usual won't do.

Menn has covered security since 1999, first for the Los Angeles Times and now for the Financial Times. Fatal System Error is his third book; he also wrote All the Rave: The Rise and Fall of Shawn Fanning's Napster. I also recently interviewed him.

"It's in no one's interest to tell us how bad it really is" when it comes to cyber crime and security, said Menn. And the Stuxnet affair is huge as a harbinger of things to come, he said.

As a result, more taxpayer money will be needed for effective government-level defenses against cyber attacks, he suggested. But government intervention won't do the job alone. Increasingly, corporations will need to play more than just defense on attacks, many of which come from Russia and China and from groups that blend state and criminal interests.

Counterattacks may be a strong defense when it comes to cyber risks, and the US government may "turn blind eye," says Menn. We may even see cyber crime bounty hunters whom corporations hire on the QT to go after those who attack them, he said.

Meanwhile, IT groups and enterprise architects can play a bigger role. Knowing what you have helps you know when something has been taken, so improve tracking of assets, Menn told them. He also suggested that companies keep their most critical data offline, and protect their intellectual property by burying it in and among fake data.

Allen Brown, President and CEO of The Open Group, said that more than 400 corporations are now members of The Open Group, showing strong growth over the past 12 years since its founding. TOGAF 9 certification rates are growing rapidly worldwide, he said.

FACE standard

In other news from The Open Group on Monday, The Future Airborne Capability Environment (FACE) Consortium announced the official release of the FACE Technical Standard, which provides guidelines for creating a common operating environment to support applications across multiple Department of Defense avionics systems. See my interview on FACE as it was just getting under way.

The standard is designed to enhance the U.S. military aviation community’s ability to address issues of limited software reuse and accelerate and enhance warfighter capabilities, as well as enabling the community to take advantage of new technologies more rapidly and affordably.

It is the consortium's hope that this standard will accelerate the open and secure development of products within the Department of Defense's airborne community by enabling industry-government collaboration.

The FACE technical standard will enable developers to create and deploy a wide catalog of applications for use across the spectrum of military aviation systems through a common operating environment. Product development efforts by industry and procurements by government customer organizations are already underway based on the FACE standard.

“The introduction of the FACE Technical Standard is an important milestone in extending interoperability among the armed forces and creating a common platform for avionics that enables systems to work together across each of the branches of the U.S. military,” said Brown.

And on Tuesday, The Open Group announced the arrival of ArchiMate 2.0, the latest version of the organization's open and independent modeling language for enterprise architecture. This version is more tightly aligned to TOGAF, so enterprise architects using the language can improve the way key business and IT stakeholders collaborate and adapt to change.

ArchiMate 2.0 improves collaboration through clearer understanding across multiple functions, including business executives, enterprise architects, systems analysts, software engineers, business process consultants and infrastructure engineers, according to the release. The new standard enables the creation of fully integrated models of an organization's Enterprise Architecture, the motivation behind it, and the programs, projects and migration paths to implement it.

"By combining TOGAF and ArchiMate, TOGAF becomes more easy to apply in any organization," said Harmen van den Berg, partner and co-founder at BiZZdesign. "Having a reference model makes them both easier to apply in any industry or vertical."

He added: "Architects like to make models, and this now helps them to use those models to create change in the organization, for something that means more to the business."

Making the EA function a chief weapon of enterprise transformation in a time of roiling change and complexity -- that's the main message from the conference. No time to wait.
