Monday, March 11, 2013

Fighting in the cloud service orchestration wars

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Combine the supercharged cloud computing marketplace with the ubergeek cred of the open source movement, and you’re bound to have some Mentos-in-Diet-Coke moments. Such is the case with today’s cloud service orchestration (CSO) platforms. At this moment in time, the leading CSO platform is OpenStack. Dozens of vendors and cloud service providers (CSPs) have piled on this effort, from Rackspace to HP to Dell, and most recently, IBM has announced that they’re going all in as well. Fizzy to be sure, but all Coke, no Mentos.

Then there are CloudStack, Eucalyptus, and a few other OpenStack competitors. With all the momentum of OpenStack, it might seem that these open source alternatives are little more than also-rans, doomed to drop further and further behind the burgeoning leader. But there’s more to this story. This is no techie my-open-source-is-better-than-your-open-source battle of principle, of interest only to the cognoscenti. On the contrary: big players are now involved, and they’re placing increasingly large bets. Add a good healthy dose of Mentos – only this time, the Mentos are money.

Understanding the CSO Marketplace

Look around the infrastructure-as-a-service (IaaS) market. Notice that elephant in the corner? That’s Amazon Web Services (AWS). The IaaS market simply doesn’t make sense unless you realize that AWS essentially invented IaaS. And by invented, we mean actually got it to work. Which, if you think about it, is rather atypical for most technology vendors. Your average software vendor will identify a new market opportunity, take some old stuff they’ve been struggling to sell, give it a nice new coat of PowerPoint, and shoehorn it into the new market. If customers bite, then the vendor will devote resources to making the product actually do what it’s supposed to do. Eventually. We hope.

But AWS is different. Amazon.com is an online reseller, not a software vendor. They think more like Wal-Mart than IBM. They figured out elasticity at scale, added customer self-service, and christened it IaaS. Then they grew it exponentially, defining what cloud computing really means. Today, they leverage their market dominance and economies of scale to continually lower prices, squeezing their competitors’ margins to nothing. It worked for Rockefeller’s Standard Oil, and it works for Wal-Mart. Now it’s working for Amazon.

But as with any market, there are always competitors looking to carve off a bit of opportunity for themselves. Given AWS’s dominance, however, there are two basic approaches to competing with Amazon: do what AWS is doing but try to do it a bit better (say, with Rackspace’s promise of better customer service), or do something similar to AWS but different enough to interest some segment of the market (leading in particular to the enterprise public cloud space populated by the likes of Verizon Terremark and Savvis, to name a few).

And then there are the big vendors like HP and IBM, who not only offer a range of enterprise software products, but who also offer enterprise data center managed services and associated consulting. Such vendors want to play two sides of this market: they want to be public cloud providers in their own right, and also offer “turnkey” cloud gear to customers who want to build their own private clouds.

Enter OpenStack. Both of the aforementioned vendors, as well as the smaller players, realize that piecing together their own cloud offerings will never enable them to catch up to AWS. Instead, they’re joining forces to build out a common cloud infrastructure platform that supports the primary capabilities of IaaS (compute, storage, database, and network), as well as providing the infrastructure platform for platform-as-a-service (PaaS) and software-as-a-service (SaaS) capabilities down the road. The open source model is perfect for such collaboration, as the Apache license allows contributors to take the shared codebase and build out whatever proprietary add-ons they like.
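
To make "compute" concrete: provisioning on an OpenStack cloud boils down to a couple of authenticated REST calls. Here is a minimal sketch in Python against the Keystone v2 and Nova v2 APIs of that era; the endpoint URL, credentials, and image/flavor IDs are placeholders, not a definitive implementation.

    # Minimal sketch: boot a server through the OpenStack Compute (Nova) v2 REST API.
    # The Keystone endpoint, credentials, and image/flavor IDs are illustrative.
    import requests

    KEYSTONE = "https://keystone.example.com:5000/v2.0"  # hypothetical auth endpoint

    # 1. Authenticate with Keystone v2 password credentials to obtain a token.
    auth = {"auth": {"tenantName": "demo",
                     "passwordCredentials": {"username": "demo", "password": "secret"}}}
    resp = requests.post(KEYSTONE + "/tokens", json=auth).json()
    token = resp["access"]["token"]["id"]

    # 2. Pull the Compute endpoint out of the service catalog returned with the token.
    nova_url = next(svc["endpoints"][0]["publicURL"]
                    for svc in resp["access"]["serviceCatalog"]
                    if svc["type"] == "compute")

    # 3. Ask Nova to boot a server from an image and a flavor.
    server = {"server": {"name": "web-01",
                         "imageRef": "IMAGE-UUID",  # placeholder image ID
                         "flavorRef": "1"}}         # placeholder flavor ID
    boot = requests.post(nova_url + "/servers", json=server,
                         headers={"X-Auth-Token": token})
    print(boot.status_code, boot.json())

The point of a common platform is that this same call sequence should work against any OpenStack cloud, whichever vendor happens to operate it.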

Most challenging benefits

Perhaps the most touted, and yet most challenging, benefit of the promised all-OpenStack world is the holy grail of workload portability. In theory, if you’re running your workloads on one OpenStack-based cloud, you should be able to move them, lock, stock, and barrel, to any other OpenStack-based cloud, even if it belongs to a different CSP. Workload portability is the key to cloud-based failover and disaster recovery, cloud bursting, and multi-cloud deployments. Today, workload portability requires a single proprietary platform, and only VMware offers such portability. AWS offers a measure of portability within its own cloud, but will face challenges supporting portability between itself and other providers. As a result, if OpenStack can get portability to work properly, participating CSPs will have a competitive lever against Amazon.
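
At the IaaS layer, portability largely reduces to moving machine images and their metadata between clouds. Here is a minimal sketch of that idea, assuming Glance v1-style image endpoints on both sides; the URLs, tokens, and formats are illustrative, and a real migration also has to carry networks, volumes, and configuration.

    # Minimal sketch: copy a machine image from OpenStack cloud A to cloud B
    # via Glance v1-style REST calls. Endpoints and tokens are illustrative;
    # real workload migration also involves networks, volumes, and metadata.
    import requests

    SRC = "https://glance.cloud-a.example.com:9292/v1"  # hypothetical
    DST = "https://glance.cloud-b.example.com:9292/v1"  # hypothetical
    SRC_TOKEN, DST_TOKEN, IMAGE_ID = "token-a", "token-b", "IMAGE-UUID"

    # 1. Stream the image bits out of cloud A.
    download = requests.get(SRC + "/images/" + IMAGE_ID,
                            headers={"X-Auth-Token": SRC_TOKEN}, stream=True)

    # 2. Stream them into cloud B, re-declaring the basic image metadata.
    upload = requests.post(DST + "/images",
                           headers={"X-Auth-Token": DST_TOKEN,
                                    "x-image-meta-name": "migrated-image",
                                    "x-image-meta-disk_format": "qcow2",
                                    "x-image-meta-container_format": "bare",
                                    "Content-Type": "application/octet-stream"},
                           data=download.raw)
    print(upload.status_code)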

Achieving a strong competitive position against AWS with OpenStack is easier said than done, however. OpenStack is a work in progress, and many bits and pieces are still missing. Open source efforts take time to mature, and meanwhile, AWS keeps growing. In response, the players in this space are taking different tacks to build mature offerings that have a hope of carving off a viable chunk of the IaaS marketplace:
  • Rackspace is trying to capitalize on its OpenStack leadership position and the aforementioned customer service to provide a viable alternative to AWS. They are also touting the workload portability benefits of OpenStack. But downward pricing pressure, combined with the holes in OpenStack’s capabilities, is pounding Rackspace’s stock price.
  • Faced with the demise of its traditional PC business, Dell is focusing on its Boomi B2B integration product, recently rechristened as cloud integration. Cloud integration is a critical enabler of hybrid clouds, but doesn’t address the workload portability challenge. As a result, Dell’s cloud marketing efforts are focused on the benefits of integration over portability. Dell’s recent acquisition of Quest Software also hints at a Microsoft application migration strategy for Dell Cloud.
  • HP wants to rush its enterprise public cloud offering to market, and it doesn’t want to wait for OpenStack to mature. Instead, it’s hammering out its own version of OpenStack, essentially forking the OpenStack codebase to its own ends, according to Nnamdi Orakwue, vice president for Dell Cloud. Such a move may pay off for HP, but increases the risk that the HP add-ons to OpenStack will have quality issues.
  • IBM recently announced that they are “all in” with OpenStack with the rollout of IBM SmartCloud Orchestrator, built on the platform. But IBM has a problem: the rest of their SmartCloud suite isn’t built on OpenStack, leaving them to scramble to rewrite a number of existing products leveraging OpenStack’s incomplete codebase, while in the meantime integrating the mishmash of SmartCloud components at the PowerPoint layer.
  • Red Hat is making good progress hammering out what they consider an “enterprise” deployment of OpenStack. As perhaps the leading enterprise open source vendor, they are well-positioned to lead this segment of the market, but it remains to be seen whether enterprise customers will want to build all-open-source private clouds in the near term, as the products gradually mature. On the other hand, IBM has a history of leveraging Red Hat’s open source products, so an IBM/Red Hat partnership may move SmartCloud forward more quickly than IBM might be able to accomplish on its own.

CSO Wild Card: CloudStack

There are several more players in this story, but one more warrants a discussion: Citrix. The desktop virtualization leader had been one face in the OpenStack crowd, but they suddenly decided to switch horses and take a contrarian strategy. They ditched OpenStack and turned to CloudStack, the platform they had picked up with their 2011 acquisition of Cloud.com. Then they switched CloudStack’s licensing model from the GPL (derivative products must also be licensed under the GPL) to Apache (it’s OK to build proprietary offerings on top of the open source codebase), and subsequently passed the entire CloudStack effort along to the Apache Software Foundation, where it’s now in incubation.

There are far fewer players on the CloudStack team than on OpenStack’s, and its core value proposition is quite similar to OpenStack’s, so at first glance, Citrix’s move raises eyebrows. After all, why bail on the market leader to join the underdog? But look more closely, and it seems that Citrix may be onto something.

First, Citrix’s open source cloud strategy is not all about CloudStack. They’re also heavily invested in Xen. Xen is one of the two leading open source virtualization platforms, and provides the underpinnings of many commercial virtualization products on the market today. Citrix’s 2007 acquisition of XenSource positioned them as a Xen leader, and they’ve been driving development of the Xen codebase ever since.

Citrix’s heavy investment in Xen bucks the conventional virtualization wisdom: since Xen’s primary competitor, KVM (Kernel-based Virtual Machine), is distributed as part of standard Linux distros, KVM is the no-brainer choice for the virtualization component of open source CSOs. After all, it’s essentially part of Linux, so any CSP (save those focusing on Windows-centric IaaS) doesn’t have to lift a finger to build its offerings on KVM. Citrix, however, picked up on a critical fact: KVM is simply not as good as Xen. And now that Citrix has been pushing Xen to mature for half a dozen years, Xen is a far better choice for building turnkey cloud solutions than KVM. So Citrix combined Xen and CloudStack into a single cloud architecture they dubbed Windsor, which forms the basis of their CloudPlatform offering.

And therein lies the key to Citrix’s contrarian strategy: CloudPlatform is a turnkey cloud solution for customers who want to deploy private clouds – or as close to turnkey as today’s still nascent cloud market can offer. Citrix is passing on the opportunity to be their own CSP (at least for now), instead focusing on driving CloudStack and Xen maturity to the point that they can put together a complete cloud infrastructure software offering. In other words, they are focusing on a niche and giving it all they’ve got.

The ZapThink Take

If this ZapFlash makes comprehending the IaaS marketplace look like herding cats, that’s because it is. AWS has gotten so big, so fast, and their products are so good, that everyone else is scrambling to put something together that will carve off a piece of what promises to be an immense market. But customers are holding the cards, because everyone knows how AWS works, which means that everyone knows how IaaS is supposed to work. If a vendor or CSP brings an offering to market that doesn’t compare with AWS on quality, functionality, or cost, then customers will steer clear, no matter how good the contenders’ PowerPoints are.

But as with feline wrangling, it’s anybody’s guess where this tabby or that calico is heading next. If anyone truly challenges Amazon’s dominance, who will it be? Rackspace? IBM? Dell? Or any of the dozens of other four-legged critters just looking for a warm spot in the sun? And then there’s the turnkey cloud solution angle. Today, building out your own private cloud is difficult, expensive, and fraught with peril. But if tomorrow brings simple, low cost, low risk private clouds to the enterprise, how will that impact the public CSP marketplace? You pays your money, you takes your chances. But today, the safe IaaS choice is AWS, unless you have a really good reason for selecting an alternative.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.


Thursday, March 7, 2013

Cloud, mobile bringing new value to Agile development methods, even in bite-sized chunks

As IT aligns itself with business goals, Agile software development is increasingly enabling developers to better create applications that meet user needs quickly. And now the rise of mobile apps development is further accelerating the power of Agile methods.

Though it’s been around for decades, Agile’s tenets of collaboration, incremental development, speed, and flexibility resonate with IT leaders who want developers to focus on working with users to develop applications. This method stands in contrast to the more rigid and traditional process of collecting user requirements, taking months to create a complete application, and delivering the application to users with the hope that it fits the bill and that requirements haven’t changed during the process.

In fact, in today’s world, where business leaders can shop for the technology they need with any cloud or software-as-a-service (SaaS) provider they choose, IT must ensure enterprise applications are built collaboratively to meet needs, or lose out to the competition.

“In many cases today, the business has alternatives, thanks to cloud -- all the services they could need are available with a credit card,” says Raziel Tabib, Senior Product Manager of Application Lifecycle Management with HP Software. “IT has to work to be the preferred solution. If the IT department wants to maintain its position, it has to make the best tools to meet business needs. Developers have to get engaged with end users to ensure they are meeting those needs.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP Software recently released HP Agile Manager, a SaaS-based solution for planning and executing agile projects. And the division itself has embraced some of the principles of agile, which have, for example, helped it move from an 18-month release cycle to product releases and refreshes every month, says Tabib.

Pick and choose

However, Agile is far from an all-or-nothing proposition, particularly for large organizations with developers distributed across the globe that may have a harder time adopting certain agile work styles, he warns.

“We’re not saying any organization can just look at the agile manifesto and start tomorrow with scrum meetings and everything will work well,” Tabib says. “We have engineers in Israel, Prague, and Vietnam. While some agile practices are easy to pick up, others are really difficult to adopt, when you’re talking about organizations at that scale.”

That’s okay, he adds -- organizations should be encouraged to cherry pick the elements of agile that make sense to embrace, blend them with more traditional approaches to application development, and still reap benefits.

A report published in September 2012 by Forrester Consulting on behalf of HP supports the idea that Agile is one of many disciplines that can be used to develop applications that meet users’ needs.

The report, entitled Agile Software Development and the Factors that Drive Success, surveyed 112 professionals regarding application development habits and success. It found that companies already successful in application development used Agile techniques to make them even better.

For example, respondents cited the Agile practice of limiting the amount of work in progress to reduce the impact of sudden business change, which meant that requirements didn’t grow stale while waiting for coding to begin -- but their overall success was based on more than just implementing agile.

And it found that respondents at companies that weren’t as successful with application development also reported using aspects of agile. The upshot of the survey was that simply adopting agile did not ensure success. “Agile software development is one tool in a vast toolbox,” reads the report. “But a fool with a tool is still a fool.”

I think Agile will get even more of a boost in value as developers move toward a "mobile first" approach, which seems tightly coupled with fast, iterative apps improvement schedules.

One of the neat things about a mobile first orientation is that it forces long-overdue simplification and ease of use in apps. When new apps are designed for their mobile device deployment first, the dictates of the mobile constraints prevail.

Combine that with Agile, and the guiding principles of speed and keeping user requirements dominant help keep projects from derailing. Revisions and updates remain properly constrained. Mobile First discourages the snowballing of big applications, instead encouraging releases of smaller, more manageable apps.

Mobile First design benefits combined with Agile methods can be well extended across SaaS, cloud, VDI, web, and even client-server applications.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Friday, March 1, 2013

Complexity from big data and cloud trends makes architecture tools like ArchiMate and TOGAF more powerful, says expert panel

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

We recently assembled a panel of enterprise architecture (EA) experts to explain how such simultaneous and complex trends as big data, cloud computing, security, and overall IT transformation can be helped by the combined strengths of The Open Group Architecture Framework (TOGAF®) and the ArchiMate® modeling language.

The panel consisted of Chris Forde, General Manager for Asia-Pacific and Vice President of Enterprise Architecture at The Open Group; Iver Band, Vice Chair of The Open Group ArchiMate Forum and Enterprise Architect at The Standard, a diversified financial services company; Mike Walker, Senior Enterprise Architecture Adviser and Strategist at HP and former Director of Enterprise Architecture at Dell; Henry Franken, the Chairman of The Open Group ArchiMate Forum and Managing Director at BIZZdesign, and Dave Hornford, Chairman of the Architecture Forum at The Open Group and Managing Partner at Conexiam. I served as the moderator.

This special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today." [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Is there something about the role of the enterprise architect that is shifting?

Walker: There is less of a focus on the traditional things we’ve come to think of as EA, such as standards, governance, and policies, and more of a focus on emerging areas such as soft skills, business architecture, and strategy.

To this end, I see a lot of work in the realm of engaging directly with the executive chain to understand the key value drivers for the company and rationalize where they want to go with their business. So we're moving into a business-transformation role in this practice.

At the same time, we've got to be mindful of the disruptive external technology forces coming in as well. EA can’t divorce itself from the other aspects of architecture. So the role that enterprise architects play becomes more and more important and elevated in the organization.

Two examples of this disruptive technology being focused on at the conference are big data and cloud computing. Both are having an impact on our businesses, not because of some new business idea, but because technology is available to enhance or provide new capabilities to our business. EAs still have to understand these new technology innovations and determine how they will apply to the business.

We need to get really good enterprise architects, and it’s difficult to find good ones. There is a shortage right now, especially given that a lot of focus is being put on the EA department to really deliver sound architectures.

Not standalone

Gardner: We've been talking a lot here about big data, but usually that's not just a standalone topic. It's big data and cloud, or cloud, mobile, and security.

So with these overlapping and complex relationships among multiple trends, why are EA and things like the TOGAF framework and the ArchiMate modeling language especially useful?

Band: One of the things that has been clear for a while now is that people outside of IT don't necessarily have to go through the technology function to avail themselves of these technologies any more. Whether they ever had to is really a question as well.

One of the things that EA is doing, especially in the practice that I work in, is using approaches like the ArchiMate modeling language to effect clear communication among the business, IT, partners, and other stakeholders. That's what I do in my daily work, overseeing our major systems modernization efforts. I work with major partners, some of which are offshore.

I'm increasingly called upon to make sure that we have clear processes for making decisions and clear ways of visualizing the different choices in front of us. We can't always unilaterally dictate the choice, but we can make the conversation clearer by using frameworks like the TOGAF standard and the ArchiMate modeling language, which I use virtually every day in my work.

Hornford: The fundamental benefit of these tools is the organization realizing its capability and strategy. I just came from a session where a fellow quoted a Harvard study, which said that around a third of executives thought their company was good at executing on its strategy. He highlighted that this means that two-thirds are not good at executing on their strategy.

If you're not good at executing on your strategy and you've got big data, mobile, consumerization of IT and cloud, where are you going? What's the correct approach? How does this fit into what you were trying to accomplish as an enterprise?

An enterprise architect who is doing their job is bringing together the strategy, goals, and objectives of the organization, along with its capabilities and the techniques that are available, whether offshoring, onshoring, cloud, or big data, so that the organization is able to move forward to where it needs to be, as opposed to where it's going to randomly walk to.

Forde: One of the things that has come out in several of the presentations is capability-based planning, a technique in EA for getting your arms around this thing from a business-driver perspective. Just to polish what Dave said a little bit, it's connecting all of those things. We see enterprises talking about a capability-based view of things on that basis.

Gardner: Let's get a quick update. The TOGAF framework, where are we and what have been the highlights from this particular event?

Minor upgrade

Hornford: In the last year, we've published a minor upgrade, TOGAF version 9.1, which was based on cleaning up consistency in the language of the TOGAF documentation. What we're working on right now is a significant new release, the next release of the TOGAF standard, which is dividing the TOGAF documentation to make it more consumable, more consistent, and more useful.

Today, the TOGAF standard has guidance on how to do something mixed into the framework of what you should be doing. We're peeling those apart. So with that peeled apart, we won't have guidance that is tied to classic application architecture in a world of cloud.

What we find when we have done work with the Banking Industry Architecture Network (BIAN) for banking architecture, Sherwood Applied Business Security Architecture (SABSA) for security architecture, and the TeleManagement Forum is that the concepts in the TOGAF framework work across industries and across trends. We need to move the guidance into a place where we can be far nimbler about how to tie cloud to my current strategy, or how to tie consumerization of IT to onshoring.

Franken: The ArchiMate modeling language turned 2.0 last year. The ArchiMate 1.0 standard is the language to model out the core of your EA. The ArchiMate 2.0 standard added two extensions to make it better aligned with the process of EA.

In line with the TOGAF standard, the first is being able to model out the motivation: why you're doing EA, the stakeholders, and the goals that drive them. The second extension to the ArchiMate standard is being able to model out planning and migration.

So with the core EA and these two extensions, together with the TOGAF standard's process at work, you have a good basis for getting EA to work in your organization.

Gardner: Mike, fill us in on some of your thoughts about the role of information architecture vis-à-vis the larger business architect and enterprise architect roles.

Walker: Information architecture is an interesting topic in that it hasn’t been getting a whole lot of attention until recently.

Information architecture is an aspect of enterprise architecture that enables an information strategy or business solution through the definition of the company's business information assets, their sources, structure, classification and associations that will prescribe the required application architecture and technical capabilities.

Information architecture is the bridge between the business architecture world and the application and technology architecture activities.

The reason I say that is because information architecture is a business-driven discipline that details the information strategy of the company. As we know, and as we’ve heard in the conference keynotes, such as the NASA, big data, and security presentations, the preservation and classification of that information is vital to understanding what your architecture should be.

Least mature

From an industry perspective, this is one of the least mature areas, as far as being incorporated into a formal discipline. The TOGAF standard actually has a phase dedicated to it in data architecture. Again, there are still lots of opportunities to grow and to incorporate additional methods, models, and tools from the enterprise information management discipline.

Enterprise information management not only captures traditional topic areas like master data management (MDM), metadata, and unstructured types of information architecture, but also focuses on information governance and on the architecture patterns and styles implemented in MDM, big data, and so on. There is a great deal of opportunity there.

From the role of information architects, I’m seeing more and more traction in the industry as a whole. I've dealt with an entire group that’s focused on information architecture and building up an enterprise information management practice, so that we can take our top-line business strategies and understand what architectures we need to put there.

This is a critical enabler for global companies, because oftentimes they're restricted by regulation, typically handled at a government or regional level. This means we have to understand how we build our architecture around the information. So it's not about the application, but rather the data that it processes, moves, or transforms.

Gardner: Up until not too long ago, the conventional thinking was that applications generate data. Then you treat the data in some way so that it can be used, perhaps by other applications, but that the data was secondary to the application.

But there's some shift in that thinking now more toward the idea that the data is the application and that new applications are designed to actually expand on the data’s value and deliver it out to mobile tiers perhaps. Does that follow in your thinking that the data is actually more prominent as a resource perhaps on par with applications?

Walker: You're spot on, Dana. Before the commoditization of these technologies, which resided on premises, we could get away with starting at the application layer and working our way back, because we had access to the source code or hardware behind our firewalls. We could throw servers at the problem, and we used to put the firewalls in front of the data to solve the problem with infrastructure. So we didn’t have to treat information as a first-class citizen. Times have changed, though.

Information access and processing is now democratized, and it’s being pushed as the first point of presentment. A lot of times this is on a mobile device, and even then it’s not the corporation’s mobile device, but your personal device. So how do you handle that data?

It's the same way with cloud, and I’ll give you a great example of this. I was working as an adviser for a company, and they were looking at their cloud strategy. They had made a big bet on one of the big infrastructure and cloud service providers. They looked first at the features and functions that that cloud provider could provide, and not necessarily at the information requirements. They ran into two major issues that were essentially showstoppers, and they had to pull off that infrastructure.

The first one was in that specific cloud provider’s terms of service around intellectual property (IP) ownership. Essentially, the company was being forced to give up its IP rights.

Big business

As you know, IP is big business these days, so that was a showstopper. The second issue was that the arrangement broke core regulatory requirements around being able to discover information.

So focusing on the application to make sure it meets your functional needs is important. However, we should take a step back, look at the information first, and make sure that the requirements of the people in your organization who can’t say no are satisfied.

Gardner: Data architecture: is it different from EA and business architecture, or is it a subset? What’s the relationship, Dave?

Hornford: Data architecture is part of an EA. I won’t use the word subset, because a subset starts to imply that it is a distinct thing that you can look at on its own. You cannot look at your business architecture without understanding your information architecture. When you think about big data, cool. We've got this pile of data in the corner. Where did it come from? Can we use it? Do we actually have legitimate rights, as Mike highlighted, to use this information? Are we allowed to mix it and who mixes it?

When we look at how our businesses are optimized, they normally optimize around work product, what the organization is delivering. That’s very easy. You can see who consumes your work product. With information, you often have no idea who consumes your information. So now we have provenance, we have source, and, as we move to global companies, we have the trends around consumerization, cloud, and simply tightening cycle time.
If we look at data in isolation, I have to understand how the system works and how the enterprise’s architecture fits together.

Gardner: Of course, the end game for a lot of the practitioners here is to create that feedback loop of a lifecycle approach, rapid information injection and rapid analysis that could be applied. So what are some of the ways that these disciplines and tools can help foster that complete lifecycle?

Band: The disciplines and tools can facilitate the right conversations among different stakeholders. One of the things that we're doing at The Standard is building cadres equally balanced between people in business and IT.

We're training them in information management, going through a particular curriculum, and having them study for an information management certification that introduces a lot of these different frameworks and standard concepts.

Creating cadres

We want to create these cadres to be able to solve tough and persistent information management problems that affect all companies in financial services, because information is a shared asset. The purpose of the frameworks is to ensure proper stewardship of that asset across disciplines and across organizations within an enterprise.

Hornford: The core is from the two standards that we have, the ArchiMate standard and the TOGAF standard. The TOGAF standard has, from its early roots, focused on the components of EA and how to build a consistent method of understanding what I'm trying to accomplish, where I am, and where I need to be to reach my goal.

When we bring in the ArchiMate standard, I have a language, a visual descriptor, that allows me to cross all of those domains in a consistent description, so that I can do that traceability. When I pull this lever, or I have this regulatory impact, or I have this constraint, what does it hit me with?

If I don’t do this, if I don’t use the framework of the TOGAF standard, or I don’t use the discipline of formal modeling in the ArchiMate standard, we're going to do it anecdotally. We're going to trip. We're going to fall. We're going to have a non-ending series of surprises, as Mike highlighted.

"Oh, terms of service. I am violating the regulations. Beautiful. Let’s take that to our executive and tell him right as we are about to go live that we have to stop, because we can't get where we want to go, because we didn't think about what it took to get there." And that’s the core of EA in the frameworks.

Walker: To build on what Dave has just talked about, and going back to your first question, Dana, about the value statement of TOGAF from a business perspective: the business value of TOGAF is that you get a repeatable and predictable process for building out architectures that properly manages risk and reliably produces value.

The TOGAF framework provides a methodology to ask what problems you're trying to solve and where you are trying to go with your business opportunities or challenges. That leads to business architecture, which is really a rationalization, in technical or architectural terms, of the corporate strategy.

From there, what you want to understand is information -- how does that translate, and what information architecture do we need to put in place? You get into all sorts of things around risk management, and it goes on from there to what we were talking about earlier about information architecture.

If the TOGAF standard is applied properly, you can achieve the same result every time. That is what interests business stakeholders, in my opinion. And the ArchiMate modeling language is great because, as we talked about, it provides very rich visualizations, so that people can not only show a picture but also tie information together. Different from other aspects of architecture, information architecture is less about the boxes and more about the lines.

Quality of the individuals

Forde: Building on what Dave was saying earlier, and also what Iver was saying: while the process and the methodology and the tools are of interest, it’s the discipline and the quality of the individuals doing the work that count.

Iver talked about how the conversation is shifting and the practice is improving to build communications groups that have a discipline to operate around. What I hear implied, and what I know specifically occurs, is that we end up with assets that are well described and reusable.

And there is a point at which you reach a critical mass, where these assets become an accelerator for decision making. So the ability of the enterprise, and the decision makers in the enterprise at the right level, to respond is improved, because they have a well-disciplined foundation beneath them.

They have a set of assets that are reasonably well-known, at the right level of granularity for them to absorb the information, and the conversation is structured so that the technical people and the business people are in the right room together to talk about the problems.

This is actually a fairly sophisticated set of operations that I'm discussing, and it doesn't happen overnight, but it is definitely one of the things that we see occurring with our members in certain cases.

Hornford: I want to build on what Chris said. It’s actually the word "asset." While he was talking, I was thinking about how people have talked about information as an asset. Most of us don’t know what information we have, how it’s collected, or where it is, but we know we have a valuable asset.

I'll use an analogy. I have a factory some place in the world that makes stuff. Is that an asset? If I know that my factory is able to produce a particular set of goods and it’s hooked into my supply chain here, I've got an asset. Before that, I just owned a thing.

I was very encouraged listening to what Iver talked about. We're building cadres. We're building out this approach and I have seen this. I'm not using that word, but now I'm stealing that word. It's how people build effective teams, which is not to take a couple of specialists and put them in an ivory tower, but it’s to provide the method and the discipline of how we converse about it, so that we can have a consistent conversation.

When I tie it with some of the tools from the Architecture Forum and the ArchiMate Forum, I'm able to consistently describe it, so that I now have an asset I can identify, consume and produce value from.

Business context

Forde: And this is very different from data modeling. We're not talking about entity relationships, junk at the level of technical detail, or third normal form and that kind of stuff. We're talking about a conversation that’s occurring around the business context of what needs to go on, supported by the right level of technical detail when you need to go there in order to clarify.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Friday, February 22, 2013

The Open Group panel explores how the Big Data era now challenges the IT status quo

Listen to the podcast. Find it on iTunes. Watch the video. Read a full transcript or download a copy. Sponsor: The Open Group.

We recently assembled a panel of experts to explore how big data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into big data as an isolated function, but should anticipate the strategic effects and impacts of big data -- as well the simultaneous complicating factors of cloud computing and mobile -- as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM's Federal Division; Jim Hietala, Vice President for Security at The Open Group, and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.

And this special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today." [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Threaded factors

An interesting thread for me throughout the conference was to factor where big data begins and plain old data, if you will, ends. Of course, it's going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data -- and it's not just the size of the data itself.

Therefore, big data means doing things differently -- not just to manage the velocity, volume, and variety of the data, but to really think about data fundamentally differently. And we need to think about security, risk, and governance. If it's a "boundaryless organization" when it comes to your data, either as a product, a service, or a resource, then the control and management of which data should be exposed, which should be opened, and which should be very closely guarded all need to be factored, determined, and implemented.

Here are some excerpts from the on-stage discussion:
Dana Gardner: You mentioned that big data to you is not a factor of the size, because NASA's dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you've actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces of the data together, either it doesn't fit as well as you thought, or you're successful and you continue to do the same thing, gathering archives of information.

At the point where you realize there might even be something else that you want to do with the data, different from what you planned originally, that’s when we have to pivot a little bit and say, "Now I need to treat this as a living archive. It's an 'it may live beyond me' type of thing." At that point, I think you treat it as setting up the infrastructure for being used later, whether it be by you or someone else. That's an important transition to make, and it might be what one could define as big data.

Gardner: Andras, does that square with where you are in your government interactions -- that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn’t changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you have certainly the three or four Vs, depending on how you look at it, but the importance of data is in its veracity, and your ability to understand or to be able to use that data before the data's shelf life runs out.

Some data has a shelf life that's long lived. Other data has very little shelf life, and you would use different approaches to being able to utilize that information. It's ultimately not about the data itself, but it’s about gaining deep insight into that data. So it’s not storing data or manipulating data, but applying those analytical capabilities to data.

Gardner: Bob, we've seen the price points on storage go down so dramatically. We've seen people just decide to hold on to data that they wouldn’t have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set of data and variety of data, when it comes to planning and executing as architects?

Weisman: One of the major issues is that organizations are normally holding two orders of magnitude more data than they need. It’s a huge overhead, both in terms of the applications architecture, which has a code base larger than it should be, and also in terms of the technology architecture, which is supporting a horrendous number of servers and a whole bunch of technology that they don't need.

The issue for the architect is to figure out what data is useful, and to institute a governance process, so that you can have data lifecycle management and proper disposition, focus the organization on the information, data, and knowledge that is going to provide business value, and help the organization innovate and gain a competitive advantage.

Can't afford it

And in terms of government, just improve service delivery, because there's waste right now on information infrastructure, and we can’t afford it anymore.

Gardner: So it's difficult to know what to keep and what not to keep. I've actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.
Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn't come later in the process. It should come early. What are some of the precepts that you think are important in applying good security practices to big data?

Hietala: One of the big challenges is that many of the big-data platforms weren’t built from the get-go with security in mind. So some of the controls that you've had available in your relational databases, for instance, aren't there: you move over to the big data platforms, and the access control and authorization mechanisms are not there today.

Planning the architecture, looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that the data is just data, and you apply the same security to it. Do you think that’s the case with big data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it's like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn't translate directly into the virtualized world. The same thing is true with big data.

When you're talking about those volumes of data, applying encryption and applying various security controls, you have to think about how those things are going to scale. That may require new solutions from new technologies and that sort of thing.
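
One control pattern that does scale is masking sensitive fields before they ever land in the big data platform. Here is a minimal sketch using only Python's standard library; the field names and key handling are invented for illustration, and a real deployment would keep the key in a managed secrets store.

    # Minimal sketch: pseudonymize sensitive fields before loading records
    # into a big data platform, so analysts can join on masked keys without
    # seeing raw identifiers. Key handling here is illustrative only.
    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-managed-key"  # hypothetical key

    def pseudonymize(value):
        # Deterministic masking: equal inputs yield equal tokens, so joins still work.
        return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

    record = {"customer_id": "C-10442", "zip": "97204", "item": "prenatal vitamins"}
    record["customer_id"] = pseudonymize(record["customer_id"])
    print(record)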

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons that you've learned that you are aware of in terms of the best of openness, but also with the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that's developed or originated the data, the better.

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don't know where it came from, you're forced to put it into a secure environment. And that's just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.
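
Gerty's tag-at-the-source advice is straightforward to mechanize: attach sensitivity and provenance metadata the moment a record is created, so nothing arrives on a hard drive of unknown origin. A minimal sketch follows; the envelope schema is invented for illustration.

    # Minimal sketch: wrap each record with provenance and sensitivity tags at
    # creation time, so downstream systems can route it without guessing.
    # The envelope schema is invented for illustration.
    import datetime
    import json

    def tag(payload, origin, sensitivity):
        return {"payload": payload,
                "origin": origin,            # who or what produced the data
                "sensitivity": sensitivity,  # e.g., "public" or "restricted"
                "created": datetime.datetime.utcnow().isoformat() + "Z"}

    record = tag({"temp_k": 288.4}, origin="sensor-17", sensitivity="public")
    print(json.dumps(record))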

Gardner: Let’s look at big data through the lens of some other major trends right now. Let’s start with cloud. You mentioned that at NASA, you have your own private cloud that you're using a lot, of course, but you're also now dabbling in commercial and public clouds. Frankly, the price points that these cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid cloud on-premises off-premises, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that, planning for the right destination of data, based on the right mix of other requirements?

Weisman: It's a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it's a price point that’s compelling, but the risk analysis that I have done means that I'm going to have to set up my own private cloud.

Right now, everybody's saying the public cloud is going to be the way to go. Vendors are going to have to be very sensitive to that, and many are, at this point in time, addressing a lot of the needs of some of the large client bases. So it’s not one-size-fits-all, and it’s more than just a price for service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the cloud and big data come together in a way that’s intriguing to you?

Szakal: Actually it’s a great question. We could take the rest of the 22 minutes talking on this one question. I helped lead the President’s Commission on big data that Steve Mills from IBM and -- I forget the name of the executive from SAP -- led. We intentionally tried to separate cloud from big data architecture, primarily because we don't believe that, in all cases, cloud is the answer to all things big data. You have to define the architecture that's appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made in enterprise market management, for example Coremetrics: several of these services that we now offer help customers gain deep insight into how their retail market or supply chain behaves.

Born in the cloud

All of that information is born in the cloud. But if you're talking about actually using cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that, and easier to stand up a hybrid environment for managing the amount of data. But I think that you have to think about whether your data is real-time data, whether it's data to which you could apply some of these new technologies, like Hadoop MapReduce-type solutions, or whether it's traditional data warehousing.

Data warehouses are going to continue to exist and they're going to continue to evolve technologically. You're always going to use a subset of data in those data warehouses, and it's going to be an applicable technology for many years to come.
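
For readers who haven't met the MapReduce style Szakal mentions: the model splits a job into a mapper that emits key-value pairs and a reducer that folds each key's values together, which is what lets the work spread across a cluster. Here is a minimal Hadoop Streaming-style pair in Python, runnable standalone with a shell pipeline; the tab-separated input format, with the key in the first field, is assumed for illustration.

    # Minimal flavor of Hadoop MapReduce (Hadoop Streaming style): count
    # occurrences per key, reading stdin and writing stdout. Try it without
    # Hadoop as: cat events.txt | python mr.py map | sort | python mr.py reduce
    import sys
    from itertools import groupby

    def mapper():
        for line in sys.stdin:                 # one tab-separated event per line
            key = line.strip().split("\t")[0]
            if key:
                print(key + "\t1")

    def reducer():
        pairs = (line.strip().split("\t") for line in sys.stdin)  # sorted by key
        for key, group in groupby(pairs, key=lambda kv: kv[0]):
            print(key + "\t" + str(sum(int(count) for _, count in group)))

    if __name__ == "__main__":
        mapper() if sys.argv[1] == "map" else reducer()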

Gardner: So suffice it to say, an enterprise architect who is well versed in both cloud infrastructure requirements, technologies, and methods, as well as big data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It's about enabling our architects and finding individuals who have deep skills in this unique set: analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and big data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it's around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it’s security and risk, but it's also just rethinking the way we use data in a mobile tier. If we can do it safely, and that’s a big IF, how important should it be for organizations to start thinking about making this data available to all of these devices, pouring it out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of "if," because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, whatever the platform is, and the organization being able to lock down its data on those devices, forgetting about whether it’s the organization's device or my device. There’s a set of issues around that that the security industry is just starting to get its arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile ability that the data gets more valuable the more you can use it and apply it, and then the more you can apply it, the more data you generate that makes the data more valuable, and we start getting into that positive feedback loop?

Gerty: Absolutely. It's almost an appreciation of what more people could do if they could get at the problem. We're getting to the point where, if it's available on your desktop, you’re going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It's more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking by trying to bring the fruits of big data through these analytics to more users rather than just the BI folks or those that are good at SQL queries? Does this change the game by actually making an application on a mobile device, simple, powerful but accessing this real time updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He's got a big, bulky glove, and he might have a heads-up display in front of him, but he really needs to know exactly a certain piece of information at the right moment, dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It's very analogous to what the day-to-day professional will use trying to find out that quick e-mail he needs to know or which meeting to go to -- which one is more important -- and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that’s valuable.

Weisman: From an enterprise architecture point of view, my background is mainly defense and government, and defense mobile computing has been around for decades. So you've always been dealing with that.

The main thing is that, in many cases, if they're coming up with information, the whole presentation layer is turning into another architecture domain, with information visualization and also with your security controls and an integrated identity management capability.

It's like you were saying about the astronaut getting it right. He doesn't need to know everything that’s happening in the world. He needs to know about his heads-up display, the stuff that's relevant to him.

So it's getting the right information to the person in an authorized manner, in a way that he can visualize and make sense of that information, be it straight data, analytics, or whatever. The presentation layer, ergonomics, and visual communication are going to become very important in the future for that. There are also a lot of problems to solve: rather than doing it at the application level, you're doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of big data cut across how we think about security, how we think about the UI, and how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery -- if you don't want to have the data on that end device, if you want to solve some of the issues around control and governance, and if you want to be able to manage just how much data gets into that UI, not too much, not too little?

Do you think that some of these concerns we're addressing will push people to look even harder, maybe more aggressively, at desktop and application virtualization -- as they say, keep it on the server and deliver out just the deltas?

Hietala: That’s an interesting point. I’ve run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don't have to worry about what's actually happening on the physical device, and when the devices connect, the security threat goes away. So we may see more of that as a solution.

Gardner: Andras, do you see that some of the implications of big data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktops, but I think it was really a security question you were asking. You're certainly going to see a growing number of virtualized desktop environments.

Ultimately, though, our networks still aren't stable enough, or at a high enough bandwidth, to really make that a useful exercise for all but the most menial users in the enterprise. From a security point of view, there is a lot still to be solved.

And part of the challenge in the cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within and across those machines from an enterprise perspective. So we're going to see more solutions proliferate in this area to try to solve some of the management issues, as well as the security issues, but we're a long way away from that.

Gerty: Big data, by itself, isn't magical. It doesn't have the answers just by being big. If you need more, you have to pry deeper into it. That's what that example shows: they realized it early enough that they were able to make something good of it.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk in scare stories and horror stories. One example from last year struck me: one of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see that if, out of a telltale basket of 20 products, you're buying 15 of them and your purchase behavior has changed, they could tell. The privacy implications of that are somewhat concerning.

In this case, the retailer was sending out coupons related to someone being pregnant. The teenage girl, who was pregnant, hadn't told her family yet. Her father found the coupons, and there was alarm in the household -- and at the local retail store, when the father went and confronted them.
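To make the mechanics concrete, here is a minimal sketch of the kind of basket-matching inference Hietala describes. Everything in it is hypothetical -- the product list, the threshold, and the scoring are stand-ins for what was surely a far more sophisticated proprietary model.

# A toy version of the inference described above: score a shopper against
# a hypothetical basket of "signal" products. The real retailer's model
# was proprietary and far more sophisticated than simple set overlap.
SIGNAL_PRODUCTS = {
    "unscented lotion", "prenatal vitamins", "cotton balls",
    "zinc supplements", "large tote bag",   # ...stand-ins for ~20 items
}

def pregnancy_score(recent_purchases, threshold=0.75):
    """Flag a shopper whose recent basket overlaps heavily with the
    signal list -- the kind of inference that raises privacy concerns."""
    hits = len(SIGNAL_PRODUCTS & set(recent_purchases))
    score = hits / len(SIGNAL_PRODUCTS)
    return score, score >= threshold

Even a toy scorer like this makes the privacy point: a handful of individually innocuous purchases, taken together, can reveal something deeply personal.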

Privacy implications

There are privacy implications to the use of big data. When you put powerful new technology in marketing people's hands, things sometimes go awry. So I'd throw that out as a cautionary tale: there is that aspect to this. When you can see across people's buying transactions and the like, there are privacy considerations that we'll have to address -- as an industry and as a society.
Listen to the podcast. Find it on iTunes. Watch the video. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, February 19, 2013

NetIQ unveils two appliances for better access control to leverage cloud and social media use

NetIQ today announced new appliances that help enable businesses and other organizations to simply and securely tap the power of two megatrends -- cloud and social media.

CloudAccess 1.1 is a single sign-on virtual appliance that gives users access to cloud services without complex and risky controls. Users within the company can access permissioned cloud services without having to keep track of numerous, often-changing usernames and passwords.

The IT team retains control of which services users can access, while making any changes in authentication for each individual site transparent to the end user. Administrators can provision services for employees on an as-needed basis, and easily de-provision those services when an employee leaves the company or no longer requires access because of a role change or some other reason.
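As a rough illustration of what provisioning and de-provisioning look like under the hood, here is a sketch using the SCIM 2.0 standard (RFC 7644), which many identity products use for exactly this task. The endpoint, token, and field values are placeholders, and NetIQ's actual connector mechanics may differ.

import requests

# Hypothetical SCIM 2.0 endpoint and bearer token -- placeholders,
# not NetIQ's actual API.
SCIM_BASE = "https://idp.example.com/scim/v2"
HEADERS = {"Authorization": "Bearer <token>",
           "Content-Type": "application/scim+json"}

def provision_user(user_name, email):
    """Create an account for a new employee in a SaaS application."""
    payload = {
        "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
        "userName": user_name,
        "emails": [{"value": email, "primary": True}],
        "active": True,
    }
    resp = requests.post(f"{SCIM_BASE}/Users", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["id"]          # provider-assigned user ID

def deprovision_user(user_id):
    """Revoke access when an employee leaves or changes roles."""
    resp = requests.delete(f"{SCIM_BASE}/Users/{user_id}", headers=HEADERS)
    resp.raise_for_status()

De-provisioning on a role change is the same DELETE call, which is why centralizing it in one appliance beats chasing accounts across individual SaaS admin consoles.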

Feature-rich connectors automatically provision users to popular cloud-based applications such as Google Apps, Salesforce, and Office 365, and some 200 verified connectors cover Security Assertion Markup Language (SAML)-enabled cloud applications. CloudAccess 1.1 also includes a Connector Toolkit that allows IT personnel and partners to extend these federation capabilities to any SAML-enabled third-party software-as-a-service (SaaS) application.
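For readers unfamiliar with how SAML federation works at the wire level, the sketch below builds a minimal SP-initiated AuthnRequest and encodes it for the standard HTTP-Redirect binding -- roughly the exchange a connector performs on a user's behalf. The entity IDs and URLs are invented placeholders, not any product's real endpoints.

import base64
import datetime
import uuid
import zlib
from urllib.parse import urlencode

# Placeholder entity IDs and endpoints -- illustrative only.
SP_ENTITY_ID = "https://sso.example-enterprise.com/metadata"
IDP_SSO_URL = "https://saas.example-app.com/saml/sso"
ACS_URL = "https://sso.example-enterprise.com/acs"

def build_authn_request():
    """Build a minimal SAML 2.0 AuthnRequest (SP-initiated SSO)."""
    now = datetime.datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%SZ")
    return (
        '<samlp:AuthnRequest '
        'xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol" '
        'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion" '
        'ID="_{id}" Version="2.0" IssueInstant="{now}" '
        'AssertionConsumerServiceURL="{acs}">'
        '<saml:Issuer>{issuer}</saml:Issuer>'
        '</samlp:AuthnRequest>'
    ).format(id=uuid.uuid4().hex, now=now, acs=ACS_URL, issuer=SP_ENTITY_ID)

def redirect_url():
    """DEFLATE + base64 encode the request for the HTTP-Redirect binding."""
    raw = zlib.compress(build_authn_request().encode())[2:-4]  # raw DEFLATE
    return IDP_SSO_URL + "?" + urlencode(
        {"SAMLRequest": base64.b64encode(raw).decode()})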

The issue of access ease and control still vexes web apps, never mind cloud and social platforms. It's clearly an issue that needs to be solved if users and enterprises alike are to adopt at the pace they want.

“Prior to CloudAccess 1.1, existing approaches to managing user access to external resources were difficult and manual – made even more complex in light of the demands of today’s dynamic organizations,” said Kent Purdy, solution marketing manager at NetIQ. “CloudAccess 1.1 is not just delivering cloud single sign-on, but also simplifying IT’s ability to successfully turn SaaS, cloud, mobility, and other disruptive trends into business-enabling opportunities."

Log data

Access logs also let IT administrators see how often cloud services are being used, which allows them to determine whether various services are still cost-effective for the company. The logs also provide visibility into which employees accessed which services -- and for how long.
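A trivial sketch of that kind of cost-effectiveness check: aggregate logins and session time per service from an access log, and flag rarely used subscriptions for review. The CSV columns here are assumptions for illustration, not CloudAccess's actual log format.

import csv
from collections import Counter, defaultdict

def usage_report(log_path, min_logins=10):
    """Summarize per-service usage from a hypothetical CSV access log
    with columns: timestamp, user, service, session_minutes."""
    logins = Counter()
    minutes = defaultdict(float)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            logins[row["service"]] += 1
            minutes[row["service"]] += float(row["session_minutes"])
    for service, count in logins.most_common():
        flag = "" if count >= min_logins else "  <- review subscription?"
        print(f"{service}: {count} logins, {minutes[service]:.0f} min{flag}")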

The second appliance solution is SocialAccess 1.0, which helps organizations -- retailers, commerce hubs, state and local governments -- rapidly engage with customers and constituents by allowing them to use their unique social identity and profile information from providers such as Facebook, Twitter, Google, and others.

Until now, such access required individuals to create and maintain a unique username and password for each site, which is costly for the organization and inconvenient for the individual. SocialAccess 1.0 enables large-scale “bring your own identity” (BYOI) services that simplify how organizations interact with stakeholders and develop greater levels of customer intimacy, all while increasing brand loyalty and reducing IT costs.
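Social sign-on of this kind is typically built on OAuth 2.0's authorization-code flow. The sketch below shows the shape of that exchange; the provider endpoints, client credentials, and scope are all placeholders rather than any specific provider's API.

import secrets
import requests
from urllib.parse import urlencode

# Placeholder provider endpoints and client credentials -- the real values
# come from the social identity provider's developer console.
AUTHORIZE_URL = "https://social.example.com/oauth/authorize"
TOKEN_URL = "https://social.example.com/oauth/token"
PROFILE_URL = "https://social.example.com/api/me"
CLIENT_ID, CLIENT_SECRET = "my-app", "<secret>"
REDIRECT_URI = "https://www.example.gov/callback"

def login_redirect():
    """Step 1: send the visitor to the provider's consent screen."""
    state = secrets.token_urlsafe(16)   # CSRF protection; store per session
    return AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code", "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI, "scope": "profile", "state": state})

def fetch_profile(code):
    """Steps 2-3: exchange the one-time code for a token, then read the
    profile instead of creating yet another local username/password."""
    token = requests.post(TOKEN_URL, data={
        "grant_type": "authorization_code", "code": code,
        "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET,
        "redirect_uri": REDIRECT_URI}).json()["access_token"]
    return requests.get(PROFILE_URL,
                        headers={"Authorization": "Bearer " + token}).json()

The organization never sees or stores a password; it only consumes the identity the user already maintains elsewhere, which is the core of the BYOI value proposition.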


Because it's an appliance, it's quick to deploy and easy to use for retailers, commerce hubs, state and local governments, and others seeking rapid engagement with stakeholders -- without the need to build, manage, and maintain an identity store.

The impact of social media on corporate decision-making came into focus last week, when bourbon-maker Beam, Inc. announced plans to cut the alcohol content of its Maker's Mark brand by watering it down in order to meet growing demand. Within days, social media -- Facebook and Twitter -- were filled with furious protests over the move, leading Beam to reverse its decision. The impact of social media is by no means a flash in the pan.

Demanding access

“Consumers are demanding convenient access to more services from more endpoints than ever and organizations need to be able to seize the opportunities that social identity, mobile computing, cloud and other trends naturally create,” said Geoff Webb, director, Solution Strategy at NetIQ. “BYOI is a great example of the opportunity to build on existing processes, improve existing services and respond more rapidly to customers."

One early adopter of the SocialAccess appliance is the New York City Department of Information Technology and Telecommunications (DoITT). The department serves a network of 120 agencies, boards, and offices, as well as more than 8 million residents, 300,000 employees, and approximately 50 million visitors a year.

The department was looking for a way for people to log into NYC.gov and have a personalized experience. Using SocialAccess and social media sign-on, users were spared the need to create and maintain another online identity.

CloudAccess 1.1 is offered on a subscription basis or with a perpetual license. For more information, visit www.netiq.com/cloudaccess. SocialAccess 1.0 is licensed on a per-user basis. For more information, visit www.netiq.com/socialaccess.
