Friday, February 22, 2013

The Open Group panel explores how the Big Data era now challenges the IT status quo

Listen to the podcast. Find it on iTunes. Watch the video. Read a full transcript or download a copy. Sponsor: The Open Group.

We recently assembled a panel of experts to explore how big data changes the status quo for architecting the enterprise. The bottom line from the discussion is that large enterprises should not just wade into big data as an isolated function, but should anticipate the strategic effects and impacts of big data -- as well the simultaneous complicating factors of cloud computing and mobile -- as soon as possible.

The panel consisted of Robert Weisman, CEO and Chief Enterprise Architect at Build The Vision; Andras Szakal, Vice President and CTO of IBM's Federal Division; Jim Hietala, Vice President for Security at The Open Group; and Chris Gerty, Deputy Program Manager at the Open Innovation Program at NASA. I served as the moderator.




And this special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today." [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Threaded factors

An interesting thread for me throughout the conference was to factor where big data begins and plain old data, if you will, ends. Of course, it's going to vary quite a bit from organization to organization.

But Gerty from NASA, part of our panel, provided a good example: It’s when you run out of gas with your old data methods, and your ability to deal with the data -- and it's not just the size of the data itself.

Therefore, big data means doing things differently -- not just managing the velocity, volume, and variety of the data, but thinking about data fundamentally differently. And we need to think about security, risk, and governance. If it's a "boundaryless organization" when it comes to your data -- whether as a product, a service, or a resource -- then the control and management of which data should be exposed, which should be opened, and which should be very closely guarded all need to be factored, determined, and implemented.

Here are some excerpts from the on-stage discussion:
Dana Gardner: You mentioned that big data to you is not a factor of the size, because NASA's dealing with so much. It’s when you run out of steam, as it were, with the methodologies. Maybe you could explain more. When do you know that you've actually run out of steam with the methodologies?

Gerty: When we collect data, we have some sort of goal in mind of what we might get out of it. When we put the pieces from the data together, either it maybe doesn't fit as well as you thought, or you are successful and you continue to do the same thing, gathering archives of information.

At that point, when you realize there might even be something else that you want to do with the data, different from what you planned originally, that's when we have to pivot a little bit and say, "Now I need to treat this as a living archive. It's an 'it may live beyond me' type of thing." At that point, I think you treat it as setting up the infrastructure for being used later, whether it be by you or someone else. That's an important transition to make and might be what one could define as big data.

Gardner: Andras, does that square with where you are in your government interactions -- that data now becomes a different type of resource, and that you need to know when to do things differently?

Szakal: The importance of data hasn't changed. The data itself, the veracity of the data, is still important. Transactional data will always need to exist. The difference is that you certainly have the three or four Vs, depending on how you look at it, but the importance lies in the data's veracity and in your ability to understand or use that data before its shelf life runs out.

Some data has a shelf life that's long lived. Other data has very little shelf life, and you would use different approaches to being able to utilize that information. It's ultimately not about the data itself, but it’s about gaining deep insight into that data. So it’s not storing data or manipulating data, but applying those analytical capabilities to data.

Gardner: Bob, we've seen the price points on storage go down so dramatically. We've seen people just decide to hold on to data that they wouldn't have before, simply because they can and they can afford to do so. That means we need to try to extract value and use that data. From the perspective of an enterprise architect, how are things different now, vis-à-vis this much larger set of data and variety of data, when it comes to planning and executing as architects?

Weisman: One of the major issues is that organizations are normally holding two orders of magnitude more data than they need. It's a huge overhead, both in terms of the application architecture, which has a code base larger than it should be, and the technology architecture, which is supporting a horrendous number of servers and a whole lot of technology they don't need.

The issue for the architect is to figure out what data is useful, institute a governance process so that you can have data lifecycle management and proper disposition, and focus the organization on the information, data, and knowledge that is going to provide business value to the organization and help it innovate and gain a competitive advantage.

Can't afford it

And in terms of government, just improve service delivery, because there's waste right now on information infrastructure, and we can’t afford it anymore.

Gardner: So it's difficult to know what to keep and what not to keep. I've actually spoken to a few people lately who want to keep everything, just because they want to mine it, and they are willing to spend the money and effort to do that.
Jim Hietala, when people do get to this point of trying to decide what to keep, what not to keep, and how to architect properly for that, they also need to factor in security. It shouldn't come late in the process. It should come early. What are some of the precepts that you think are important in applying good security practices to big data?

Hietala: One of the big challenges is that many of the big-data platforms weren't built from the get-go with security in mind. Some of the controls that you've had available in your relational databases, for instance, aren't there when you move over to the big-data platforms; the access-control and authorization mechanisms don't exist today.

Planning the architecture, looking at bringing in third-party controls to give you the security mechanisms that you are used to in your older platforms, is something that organizations are going to have to do. It’s really an evolving and emerging thing at this point.

Gardner: There are a lot of unknown unknowns out there, as we discovered with our tweet chat last month. Some people think that the data is just data, and you apply the same security to it. Do you think that’s the case with big data? Is it just another follow-through of what you always did with data in the first place?

Hietala: I would say yes, at a conceptual level, but it's like what we saw with virtualization. When there was a mad rush to virtualize everything, many of those traditional security controls didn't translate directly into the virtualized world. The same thing is true with big data.

When you're talking about those volumes of data, applying encryption, applying various security controls, you have to think about how those things are going to scale. That may require new solutions from new technologies and that sort of thing.
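As a concrete illustration of that scaling point, here is a minimal sketch of field-level encryption applied to records before they land in a big-data store, using Python's cryptography library. The record layout, the sensitive-field names, and the inline key generation are assumptions made up for the example, not features of any particular platform.

```python
# Minimal sketch: encrypt only the sensitive fields of each record before it
# is written to a big-data store, so bulk analytics can still run over the
# non-sensitive columns. Field names and the inline key are illustrative
# assumptions; a real deployment would pull the key from a key-management
# service.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
cipher = Fernet(key)

SENSITIVE_FIELDS = {"customer_name", "email"}  # assumed schema

def protect(record: dict) -> dict:
    """Return a copy of the record with sensitive fields encrypted."""
    out = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            out[field] = cipher.encrypt(str(value).encode()).decode()
        else:
            out[field] = value
    return out

if __name__ == "__main__":
    raw = {"customer_name": "Jane Doe", "email": "jane@example.com",
           "purchase_total": 42.50}
    print(protect(raw))  # sensitive fields are ciphertext; totals stay usable
```

Because each record is handled independently, the same function can run in parallel across as many workers as the platform provides, which is one way to reason about whether a control keeps up with the volume.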

Gardner: Chris Gerty, when it comes to that governance, security, and access control, are there any lessons you've learned about getting the best of openness while still having the ability to manage the spigot?

Gerty: Spigot is probably a dangerous term to use, because it implies that all data is treated the same. The sooner that you can tag the data as either sensitive or not, mostly coming from the person or team that's developed or originated the data, the better.

Kicking the can

Once you have it on a hard drive, once you get crazy about storing everything, if you don't know where it came from, you're forced to put it into a secure environment. And that's just kicking the can down the road. It’s really a disservice to people who might use the data in a useful way to address their problems.

We constantly have satellites that are made for one purpose. They send all the data down. It’s controlled either for security or for intellectual property (IP), so someone can write a paper. Then, after the project doesn’t get funded or it just comes to a nice graceful close, there is that extra step, which is almost a responsibility of the originators, to make it useful to the rest of the world.
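Gerty's point about tagging data as sensitive or not at the point of origination can be sketched very simply. The label vocabulary and metadata fields below are assumptions invented for illustration; they are not drawn from any NASA system.

```python
# Minimal sketch of tagging a dataset with a sensitivity label and provenance
# at the moment it is created, so downstream users don't have to guess later.
# The label values and fields are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class Sensitivity(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    RESTRICTED = "restricted"

@dataclass
class TaggedDataset:
    name: str
    originator: str              # the person or team that produced the data
    sensitivity: Sensitivity
    created: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def can_release(ds: TaggedDataset) -> bool:
    """Data tagged public can be opened up; everything else is held back."""
    return ds.sensitivity is Sensitivity.PUBLIC

if __name__ == "__main__":
    telemetry = TaggedDataset("dust_telemetry", "instrument-team",
                              Sensitivity.PUBLIC)
    print(can_release(telemetry))  # True: tagged at origin, safe to share
```

The point is not the data structure itself but that the tag travels with the data from day one, so nobody downstream has to default to locking it away.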

Gardner: Let’s look at big data through the lens of some other major trends right now. Let’s start with cloud. You mentioned that at NASA, you have your own private cloud that you're using a lot, of course, but you're also now dabbling in commercial and public clouds. Frankly, the price points that these cloud providers are offering for storage and data services are pretty compelling.

So we should expect more data to go to the cloud. Bob, from your perspective, as organizations and architects have to think about data in this hybrid cloud on-premises off-premises, moving back and forth, what do you think enterprise architects need to start thinking about in terms of managing that, planning for the right destination of data, based on the right mix of other requirements?

Weisman: It's a good question. As you said, the price point is compelling, but the security and privacy of the information is something else that has to be taken into account. Where is that information going to reside? You have to have very stringent service-level agreements (SLAs) and in certain cases, you might say it's a price point that’s compelling, but the risk analysis that I have done means that I'm going to have to set up my own private cloud.

Right now, everybody's saying the public cloud is going to be the way to go. Vendors are going to have to be very sensitive to that, and many are, at this point in time, addressing a lot of the needs of some of their large client bases. So it's not one-size-fits-all, and it's more than just a price for a service. Architecture can bring down the price pretty dramatically, even within an enterprise.

Gardner: Andras, how do the cloud and big data come together in a way that’s intriguing to you?

Szakal: Actually it’s a great question. We could take the rest of the 22 minutes talking on this one question. I helped lead the President’s Commission on big data that Steve Mills from IBM and -- I forget the name of the executive from SAP -- led. We intentionally tried to separate cloud from big data architecture, primarily because we don't believe that, in all cases, cloud is the answer to all things big data. You have to define the architecture that's appropriate for your business needs.

However, it also depends on where the data is born. Take many of the investments IBM has made in enterprise marketing management -- Coremetrics, for example -- several of these are services that we now offer to help customers gain deep insight into how their retail market or supply chain behaves.

Born in the cloud

All of that information is born in the cloud. But if you're talking about actually using cloud as infrastructure and moving around huge sums of data or constructing some of these solutions on your own, then some of the ideas that Bob conveyed are absolutely applicable.

I think it becomes prohibitive to do that, and easier to stand up a hybrid environment for managing that amount of data. But you have to think about whether your data is real-time data, whether it's data to which you could apply some of these new technologies, like Hadoop MapReduce-type solutions, or whether it's traditional data warehousing.

Data warehouses are going to continue to exist and they're going to continue to evolve technologically. You're always going to use a subset of data in those data warehouses, and it's going to be an applicable technology for many years to come.
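For readers unfamiliar with the Hadoop MapReduce-style processing Szakal mentions, here is a framework-free sketch of the map, shuffle, and reduce phases in plain Python. The sales-by-region records are invented purely for illustration.

```python
# Minimal sketch of the MapReduce pattern without Hadoop itself: map each
# record to (key, value) pairs, group them by key (the "shuffle"), then
# reduce each group. The sample records are illustrative assumptions.
from collections import defaultdict
from typing import Iterable, Tuple

def map_phase(record: dict) -> Iterable[Tuple[str, float]]:
    # Emit one (region, amount) pair per sales record.
    yield record["region"], record["amount"]

def reduce_phase(key: str, values: Iterable[float]) -> Tuple[str, float]:
    # Sum all amounts observed for a region.
    return key, sum(values)

def run(records):
    groups = defaultdict(list)
    for record in records:                   # map
        for key, value in map_phase(record):
            groups[key].append(value)        # shuffle: group by key
    return [reduce_phase(k, v) for k, v in groups.items()]  # reduce

if __name__ == "__main__":
    sales = [{"region": "east", "amount": 10.0},
             {"region": "west", "amount": 4.0},
             {"region": "east", "amount": 6.5}]
    print(run(sales))  # [('east', 16.5), ('west', 4.0)]
```

In a real cluster the map and reduce phases run in parallel across many nodes, which is what suits the pattern to data that has outgrown a single server, while the warehouse remains the right tool for the curated subset Szakal describes.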

Gardner: So suffice it to say, an enterprise architect who is well versed in both cloud infrastructure requirements, technologies, and methods, as well as big data, will probably be in quite high demand. That specialization in one or the other isn’t as valuable as being able to cross-pollinate between them.

Szakal: Absolutely. It's enabling our architects and finding deep individuals who have this unique set of skills, analytics, mathematics, and business. Those individuals are going to be the future architects of the IT world, because analytics and big data are going to be integrated into everything that we do and become part of the business processing.

Gardner: Well, that’s a great segue to the next topic that I am interested in, and it's around mobility as a trend and also application development. The reason I lump them together is that I increasingly see developers being tasked with mobile first.

When you create a new app, you have to remember that this is going to run in the mobile tier and you want to make sure that the requirements, the UI, and the complexity of that app don’t go beyond the ability of the mobile app and the mobile user. This is interesting to me, because data now has a different relationship with apps.

We used to think of apps as creating data and then the data would be stored and it might be used or integrated. Now, we have applications that are simply there in order to present the data and we have the ability now to present it to those mobile devices in the mobile tier, which means it goes anywhere, everywhere all the time.

Let me start with you, Jim, because it's security and risk, but it's also just rethinking the way we use data in a mobile tier. If we can do it safely, and that's a big if, how important should it be for organizations to start thinking about making this data available to all of these devices and letting it pour out into that mobile tier as much as possible?

Hietala: In terms of enabling the business, it’s very important. There are a lot of benefits that accrue from accessing your data from whatever device you happen to be on. To me, it is that question of "if," because now there’s a whole lot of problems to be solved relative to the data floating around anywhere on Android, iOS, whatever the platform is, and the organization being able to lock down their data on those devices, forgetting about whether it’s the organization device or my device. There’s a set of issues around that that the security industry is just starting to get their arms around today.

Mobile ability

Gardner: Chris, any thoughts about this mobile ability that the data gets more valuable the more you can use it and apply it, and then the more you can apply it, the more data you generate that makes the data more valuable, and we start getting into that positive feedback loop?

Gerty: Absolutely. It's almost an appreciation of what more people could do if they could get at the problem. We're getting to the point where, if it's available on your desktop, you're going to find a way to make it available on your device.

Those same security questions probably need to be answered anyway, but making it mobile compatible is almost an acknowledgment that there will be someone who wants to use it. So let me go that extra step to make it compatible and see what I get from them. It's more of a cultural benefit that you get from making things compatible with mobile.

Gardner: Any thoughts about what developers should be thinking about in trying to bring the fruits of big data, through these analytics, to more users rather than just the BI folks or those who are good at SQL queries? Does this change the game by actually making an application on a mobile device simple and powerful, yet able to access this real-time, updated treasure trove of data?

Gerty: I always think of the astronaut on the moon. He's got a big, bulky glove and he might have a heads-up display in front of him, but he really needs to know exactly the right piece of information at the right moment, while dealing with bandwidth issues, dealing with the environment, a foggy helmet, whatever.

It's very analogous to what the day-to-day professional deals with, trying to find that one quick e-mail he needs or which meeting to go to -- which one is more important -- and it all comes down to putting your developer in the shoes of the user. So anytime you can get interaction between the two, that's valuable.

Weisman: From an enterprise architecture point of view, my background is mainly defense and government, and in defense, mobile computing has been around for decades. So you've always been dealing with that.

The main thing is that, in many cases, the whole presentation layer is turning into another architecture domain, with information visualization and also with your security controls and an integrated identity-management capability.

It's like you were saying about the astronaut getting it right. He doesn't need to know everything that's happening in the world. He needs to see, on his heads-up display, the stuff that's relevant to him.

So it's getting the right information to the right person in an authorized manner, in a way that he can visualize and make sense of, be it straight data, analytics, or whatever. The presentation layer, ergonomics, and visual communication are going to become very important in the future for that. There are also a lot of problems to solve, because rather than doing it at the application level, you're doing it entirely in one layer.

Governance and security

Gardner: So clearly the implications of data are cutting across how we think about security, how we think about UI, how we factor in mobility. What we now think about in terms of governance and security, we have to do differently than we did with older data models.

Jim Hietala, what about the impact on spurring people toward more virtualized desktop delivery, if you don't want to have the data on that end device, if you want to solve some of the issues around control and governance, and if you want to be able to manage just how much data gets into that UI -- not too much, not too little?

Do you think that some of these concerns we're addressing will push people to look even harder, maybe more aggressively, at how they approach desktop and application virtualization -- as they say, keep it on the server and deliver out just the deltas?

Hietala: That's an interesting point. I've run across a startup in the last month or two that is doing just that. The whole value proposition is to virtualize the environment. You get virtual gold images. You don't have to worry about what's actually happening on the physical device, and when the devices connect, the security threat goes away. So we may see more of that as a solution.

Gardner: Andras, do you see that some of the implications of big data, far-fetched as it may be, are propelling people to cultivate their servers more and virtualize their apps, their data, and their desktops right up to the end devices?

Szakal: Yeah, I do. I see IBM providing solutions for virtual desktop, but I think it was really a security question you were asking. You're certainly going to see an additional number of virtualized desktop environments.

Ultimately, our network still is not stable enough, or at a high enough bandwidth, to really make that a useful exercise for all but the most menial users in the enterprise. From a security point of view, there is still a lot to be solved.

And part of the challenge in the cloud environment that we see today is the proliferation of virtual machines (VMs) and the inability to actually contain the security controls within those machines and across these machines from an enterprise perspective. So we're going to see more solutions proliferate in this area and to try to solve some of the management issues, as well as the security issues, but we're a long ways away from that.

Gerty: Big data, by itself, isn't magical. It doesn't have the answers just by being big. If you need more, you need to pry deeper into it. That’s the example. They realized early enough that they were able to make something good.

Gardner: Jim Hietala, any thoughts about examples that illustrate where we’re going and why this is so important?

Hietala: Being a security guy, I tend to talk about scare stories and horror stories. One example from last year struck me. One of the major retailers here in the U.S. hit the news for having predicted, through customer purchase behavior, when people were pregnant.

They could look and see that, if you're buying 15 out of a certain 20 things and your purchase behavior has changed, they can tell that. The privacy implications of that are somewhat concerning.

One example was that this retailer was sending out coupons related to somebody being pregnant. The teenage girl who was pregnant hadn't told her family yet. The father found the coupons, and there was alarm in the household -- and at the local retail store, when the father went and confronted them.

Privacy implications

There are privacy implications from the use of big data. When you get powerful new technology in marketing people's hands, things sometimes go awry. So I'd throw that out just as a cautionary tale that there is that aspect to this. When you can see across people's buying transactions, things like that, there are privacy considerations that we’ll have to think about, and that we really need to think about as an industry and a society.
Listen to the podcast. Find it on iTunes. Watch the video. Read a full transcript or download a copy. Sponsor: The Open Group.


Tuesday, February 19, 2013

NetIQ unveils two appliances for better access control to leverage cloud and social media use

NetIQ today announced new appliances that help enable businesses and other organizations to simply and securely access the power of two mega trends -- cloud and social media.

CloudAccess 1.1 is a single sign-on virtual appliance that provides access to cloud services without complex and risky controls. Users within the company can access the permissioned cloud services without having to keep track of numerous and often changing usernames and passwords.

The IT team retains control of which services users can access, while making any changes in authentication for each individual site transparent to the end user. Administrators can provision services for employees on an as-needed basis, while easily de-provisioning those services when the employee leaves the company, or no longer requires access to certain services because of a role change or some other reason.

Feature-rich connectors automatically provision users to popular cloud-based applications, such as Google Apps, Salesforce, Office365, and some 200 verified connectors to security assertion markup language (SAML)-enabled cloud applications. CloudAccess 1.1 also includes a Connector Toolkit that allows IT personnel and partners to extend these federation capabilities to any SAML-enabled third-party software-as-a-service (SaaS) applications.
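The provision-on-need, de-provision-on-departure lifecycle described above is generic enough to sketch without reference to any vendor API. The code below is a hypothetical model of that workflow; the role-to-service mapping and the print statements standing in for connector calls are assumptions, not the NetIQ CloudAccess interface.

```python
# Hypothetical sketch of the provisioning lifecycle described above: grant a
# user the cloud services their role needs, adjust on a role change, and
# revoke everything on departure. No real vendor API is used here; the
# print() calls stand in for whatever connector would do the work.
ROLE_SERVICES = {                       # assumed role-to-service mapping
    "sales": {"crm", "email"},
    "engineering": {"email", "source_control", "ci"},
}

class AccessManager:
    def __init__(self):
        self.granted = {}               # user -> set of provisioned services

    def provision(self, user: str, role: str) -> None:
        wanted = ROLE_SERVICES.get(role, set())
        current = self.granted.get(user, set())
        for service in wanted - current:
            print(f"grant {service} to {user}")       # connector call here
        for service in current - wanted:
            print(f"revoke {service} from {user}")    # role-change cleanup
        self.granted[user] = set(wanted)

    def deprovision(self, user: str) -> None:
        for service in self.granted.pop(user, set()):
            print(f"revoke {service} from {user}")    # employee departure

if __name__ == "__main__":
    mgr = AccessManager()
    mgr.provision("pat", "sales")        # joins sales: crm and email granted
    mgr.provision("pat", "engineering")  # role change: crm revoked, ci added
    mgr.deprovision("pat")               # leaves: everything revoked
```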

The issue of access ease and control still vexes web apps, never mind cloud and social platforms. It's clearly an issue that needs to be solved if users and enterprises alike are to adopt at the pace they want.

“Prior to CloudAccess 1.1, existing approaches to managing user access to external resources was a difficult and manual process – made even more complex in light of the demands of today’s dynamic organizations,” said Kent Purdy, solution marketing manager at NetIQ. “CloudAccess 1.1 is not just delivering cloud single sign-on, but also simplifying IT’s ability to successfully turn SaaS, cloud, mobility, and other disruptive trends into business-enabling opportunities."

Log data

Access logs also let IT administrators see how often cloud services are being used, which will allow them to determine whether various services are still cost-effective for the company. It also provides visibility into which employees accessed which services -- and for how long.

The second appliance solution is SocialAccess 1.0, which helps organizations -- retailers, commerce hubs, state and local governments -- rapidly engage with customers and constituents by allowing them to use their unique social identity and profile information from providers such as Facebook, Twitter, Google, and others.

Until now, such access required individuals to create and maintain a unique username and password for each site, which is costly for the organization and inconvenient for the individual. SocialAccess 1.0 enables large-scale “bring your own identity” (BYOI) services that simplify how organizations interact with stakeholders and develop greater levels of customer intimacy, all while increasing brand loyalty and reducing IT costs.


Because it's an appliance, it's quick to deploy and easy to use for retailers, commerce hubs, state and local governments, and others seeking rapid engagement with stakeholders without the need to build, manage, and maintain an identity store.

The impact of social media on corporate decision-making came into focus last week, when bourbon-maker Beam, Inc. announced plans to cut the alcohol content of its Maker's Mark brand by watering it down in order to meet growing demand. Within days, social media -- Facebook and Twitter -- were filled with furious protests over the move, leading Beam to reverse its decision. The impact of social media is by no means a flash in the pan.

Demanding access

“Consumers are demanding convenient access to more services from more endpoints than ever and organizations need to be able to seize the opportunities that social identity, mobile computing, cloud and other trends naturally create,” said Geoff Webb, director, Solution Strategy at NetIQ. “BYOI is a great example of the opportunity to build on existing processes, improve existing services and respond more rapidly to customers."

One early adopter of the SocialAccess appliance is the New York City Department of Information Technology and Telecommunications (DoITT). The department serves a network of 120 agencies, boards, and offices, as well as more than 8 million residents, 300,000 employees, and approximately 50 million visitors a year.

The department was looking for a way for people to log into NYC.gov and have a personalized experience. Using SocialAccess and social media sign-on, users were spared the need to create and maintain another online identity.

CloudAccess 1.1 is offered on a subscription basis or perpetual license. For more information, visit www.netiq.com/cloudaccess. SocialAccess 1.0 is licensed on a per user basis. For more information, visit www.netiq.com/socialaccess.


Friday, February 15, 2013

Big Data success depends on better risk management practices like FAIR, say conference panelists

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership panel discussion comes to you in conjunction with The Open Group Conference held recently in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today."

The panel of experts explores new trends and solutions in the area of risk management and analysis. Learn now how large enterprises are delivering better risk assessments and risk analysis, and discover how big data can be both an area to protect, but also used as a tool for better understanding and mitigating risks.

The panelists are Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF; Jack Jones, Principal of CXOWARE, and Jim Hietala, Vice President, Security for The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Jack Jones has more than nine years of experience as a Chief Information Security Officer (CISO) and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Jack Freund has more than 14 years of enterprise IT experience, is a visiting professor at DeVry University, and chairs a risk-management subcommittee for ISACA.

Here are some excerpts:
Gardner: Why is the issue of risk analysis so prominent now? What's different from, say, five years ago?

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure -- the likelihood of bad things happening and how bad those things are likely to be.

It's only when we speak in those terms of risk that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They're tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Freund: The problem that we have as a profession, and I think it's a big problem, is that we have allowed ourselves to skip the natural evolution that other IT professionals have already gone through.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and an alerter of terrible things that could happen, without really tying ourselves to the problems the organizations are facing and how we can help them succeed in what they're doing.

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn't really have hacktivism or this notion of an advanced persistent threat (APT). That highly skilled attacker taking aim at governments and large organizations didn't really exist -- or didn't exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don't want to hear about, "I've got 15 vulnerabilities in the mobility part of my organization." They want to understand what’s the risk of bad things happening because of mobility, what we're doing about it, and what’s happening to risk over time.

So it's a combination of changes in the threats and attackers, as well as changes to the IT landscape, that means we have to take a different look at how we measure and present risk to the business.

Gardner: Because we're at a big-data conference, do you share my perception, Jack Jones, that big data can be a source of risk and vulnerability, but also the analytics and the business intelligence (BI) tools that we're employing with big data can be used to alert you to risks or provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of big data and, by definition, it's where your crown jewels, and everything that leads to the crown jewels from an information perspective, are going to be found. It's like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it, and its integration across a lot of different platforms, can be leveraged, but it probably results in a complex landscape to try to secure.

There are a lot of ways into that data, but if you can leverage that same big-data architecture as an approach to information security -- with log data and other threat and vulnerability data and such -- you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are, based on that data.

Freund: If we fast-forward five years, and this is even true today, a lot of people on the cutting edge of big data will tell you the problem isn't so much pulling everything together and figuring out what it can do. They are going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: What parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists today in the way risk assessments are done is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

I think what's really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting things that will happen that allow us to figure out how much money we're going to lose?

That analysis work is largely missing. That's the gap. The prevailing mindset is that if a control is not in place, then there's a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, its own risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon Factor Analysis of Information Risk (FAIR), the risk-management framework that Jack Jones invented. So we're big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we're doing around certification, I can say that it really brings a level of precision and a depth of analysis to risk analysis that's been lacking frequently in IT security and risk management.

Gardner: Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and make all the bad things managed?

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experiences with the FAIR framework, I can say that using FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and just as nerdy, if you will, as you need to be in order to talk to IT security and the other IT functions, to make sure everybody is on the same page and everyone feels their concerns are represented in the risk-assessment functions that are happening.

Gardner: How do you know if you’re doing it right? How do you know if you're moving from yellow to green, instead of to red?

Freund: There are a couple of things in that question. The first is that there's this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So becoming very knowledgeable about the risk posture and the risk tolerance of the organization is key.

That's part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, and we have bigger fish to fry over here that we're going to focus on. So that's one thing.

The second thing, and it's a very good question, is how do we know that we're getting better? How do we trend that over time? Overall, measuring that value for the organization has to be able to show a reduction of risk, or at least a reduction of risk to the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as we have to become comfortable with uncertainty. When you are talking about risk in general, you're talking about forward-looking statements about things that may or may not happen. So, becoming comfortable with the fact that they may or may not happen means that when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic works, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, maximum, and most likely. And that tends to be very, very effective. People can respond to that fairly well.
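To make the minimum, maximum, and most-likely idea concrete, here is a small sketch that treats two FAIR-style inputs -- loss event frequency and loss magnitude -- as three-point ranges, samples them with a triangular distribution, and summarizes the resulting annualized loss exposure. All of the numbers are invented for illustration, and the single multiplication per trial is a deliberate simplification of a full FAIR analysis.

```python
# Simplified sketch: express loss event frequency (events per year) and loss
# magnitude (dollars per event) as (min, most likely, max) ranges, sample
# each with a triangular distribution, and summarize annualized loss
# exposure. Every number is an illustrative assumption.
import random
import statistics

FREQUENCY = (0.5, 2.0, 6.0)            # assumed events per year
MAGNITUDE = (10_000, 75_000, 400_000)  # assumed dollars per event

def tri(low, most_likely, high):
    # random.triangular takes (low, high, mode), so reorder the arguments.
    return random.triangular(low, high, most_likely)

def simulate(trials: int = 10_000):
    return sorted(tri(*FREQUENCY) * tri(*MAGNITUDE) for _ in range(trials))

if __name__ == "__main__":
    losses = simulate()
    print(f"mean annualized loss exposure: ${statistics.mean(losses):,.0f}")
    print(f"90th percentile: ${losses[int(0.9 * len(losses))]:,.0f}")
```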

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that's hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that has defined threat, vulnerability, or whatever differently from some other study, and so the data can't be normalized. It really harms the utility of it. I see data out there and I think, "That looks like it can be really useful." But I hesitate to use it, because I don't understand it -- they don't publish their definitions, their approach, and how they went after it.

There's just so much superficial thinking in the profession on this that, once we dig under the covers, too often I run into stuff that just can't be defended. It doesn't make sense, and therefore the data can't be used. It's an unfortunate situation.

I do think we're heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has gained real traction in terms of the quality of the research they have done and the data they're generating. We're headed in the right direction, but we've got a long way to go.

Gardner: I'm curious how prevalent cyber insurance is, and is that going to be a leveling effect in the industry where people speak a common language -- the equivalent of actuarial tables, but for security in enterprise and cyber security?

Jones: One would dream and hope, but at this point, what I've seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It's not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it's providing the customers becomes a problem.

Looking to the future

Gardner: What's the future of risk management, and what does the cloud trend bring to the table?

Hietala: I'd start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That's an unfortunate reality. You can throw things out in the cloud, but it doesn't absolve you from understanding your risk and then doing things to manage it or transfer it, if there's insurance or whatever the case may be.

That's just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we've standardized the taxonomy piece of the Factor Analysis of Information Risk (FAIR) framework. And we really see an opportunity around the profession going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That's in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the cloud and all these other things, we need to look to the legal system: there is a series of pressures that it is going to apply, and questions of who is going to own that risk and how that plays itself out.

If we look to the Europeans and the way that they're managing risk and compliance, they're still as strict as we in the United States think they may be about things, but there's still a lot of leeway in the ways that laws are written. You're still being asked to do things that are reasonable. You're still being asked to do things that are standard for your industry. But we'd still like the ability to know what that is, and I don't think that's going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What's the most important thing that I care about? And that's why risk management exists, because there’s a certain series of things that we have to deal with. We don't have the resources to do them all, and I don't think that's going to change over time. Regardless of whether the landscape changes, that's the one that remains true.

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that's a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, and then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined, especially if we have an analytic framework underneath it, we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let's run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let's change this other variable and see which combination of dials, when we turn them, makes us most robust to changes in our landscape.

But again, we can't begin to get there until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That's what we're doing with the Factor Analysis of Information Risk (FAIR) framework, but without some sort of framework like that, there's no way you can get there.
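Jones's what-if idea -- change one variable in the landscape, run a few thousand simulations, and compare the outcomes -- can also be sketched in a few lines. The two frequency ranges below represent a hypothetical before-and-after for a single control, and every number is invented purely for illustration.

```python
# Illustrative what-if comparison: simulate annualized loss under a baseline
# and under a scenario where a control reduces how often loss events occur,
# then compare the simulated means. All figures are assumptions.
import random
import statistics

def tri(low, most_likely, high):
    # random.triangular takes (low, high, mode), so reorder the arguments.
    return random.triangular(low, high, most_likely)

def mean_annual_loss(freq_range, mag_range, trials=5_000):
    return statistics.mean(
        tri(*freq_range) * tri(*mag_range) for _ in range(trials))

if __name__ == "__main__":
    magnitude = (20_000, 100_000, 500_000)                       # dollars per event
    baseline = mean_annual_loss((1.0, 3.0, 8.0), magnitude)      # current state
    with_control = mean_annual_loss((0.2, 1.0, 3.0), magnitude)  # one dial turned
    print(f"baseline exposure:    ${baseline:,.0f}")
    print(f"with control applied: ${with_control:,.0f}")
    print(f"simulated reduction:  ${baseline - with_control:,.0f}")
```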
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Friday, February 8, 2013

Three best practices for successful implementation of enterprise architecture using the TOGAF framework and the ArchiMate modeling language


This guest post comes courtesy of The Open Group and BiZZdesign

By Henry Franken, Sven van Dijk and Bas van Gils, BiZZdesign

The discipline of enterprise architecture (EA) was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question: “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure or set of structures for developing a broad range of architectures and consists of a process and a modeling component. The TOGAF framework and the ArchiMate modeling language – both maintained by The Open Group – are two leading and widely adopted standards in this field.

While both the TOGAF framework and the ArchiMate modeling language have a broad (enterprise-wide) scope and provide a practical starting point for an effective EA capability, a key factor is the successful embedding of EA standards and tools in the organization. From this perspective, the implementation of EA means that an organization adopts processes for the development and governance of EA artifacts and deliverables. Standards need to be tailored, and tools need to be configured in the right way in order to create the right fit. Or more popularly stated, “For an effective EA, it has to walk the walk, and talk the talk of the organization.”

EA touches on many aspects such as business, IT (and especially the alignment of these two), strategic portfolio management, project management, and risk management. EA is by definition about cooperation and therefore it is impossible to operate in isolation. Successful embedding of an EA capability in the organization is typically approached as a change project with clearly defined goals, metrics, stakeholders, appropriate governance and accountability, and with assigned responsibilities in place.

With this in mind, we share three best practices for the successful implementation of EA:

Think big, start small

The potential footprint of a mature EA capability is as big as the entire organization, but one of the key factors for success with EA is to deliver value early on. Experience from our consultancy practice proves that a "think big, start small" approach has the most potential for success. This means that the process of implementing an EA capability is a process with iterative and incremental steps, based on a long-term vision. Each step in the process must add measurable value to the EA practice, and priorities should be based on the needs and the change capacity of the organization.

Combine process and modeling

The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities.

The TOGAF standard describes the architecture process in detail. The Architecture Development Method (ADM) is the core of the TOGAF standard. The ADM is a customer-focused and value-driven process for the sustainable development of a business capability. The ADM specifies deliverables throughout the architecture life-cycle with a focus on the effective communication to a variety of stakeholders.

ArchiMate is fully complementary to the content as specified in the TOGAF standard. The ArchiMate standard can be used to describe all aspects of the EA in a coherent way, while tailoring the content for a specific audience. What's more, an architecture repository is a valuable asset that can be reused throughout the enterprise. This greatly benefits communication and cooperation between enterprise architects and their stakeholders.

Use a tool

It is true, “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy to use tool can be a strong driver in pushing the EA initiative forward.

EA brings together valuable information that greatly enhances decision making, whether on a strategic or more operational level. This knowledge not only needs to be efficiently managed and maintained, it also needs to be communicated to the right stakeholder at the right time, and even more importantly, in the right format.

EA has a diverse audience that has business and technical backgrounds, and each of the stakeholders needs to be addressed in a language that is understood by all. Therefore, essential qualifications for EA tools are: rigidity when it comes to the management and maintenance of knowledge and flexibility when it comes to the analysis (ad-hoc, what-if, etc.), presentation, and communication of the information to diverse audiences.

So what you are looking for is a tool with solid repository capabilities and flexible modeling and analysis functionality.

Conclusion

EA brings value to the organization because it answers more accurately the question: "How should we organize ourselves?" Standards for EA help organizations capitalize on their EA investments more quickly. The TOGAF framework and the ArchiMate modeling language are popular, widespread, open, and complete standards for EA, both from a process and a language perspective.

EA becomes even more effective if these standards are used in the right way. The EA capability needs to be carefully embedded in the organization. This is usually a process based on a long term vision and has the most potential for success if approached as “think big, start small.” Enterprise Architects can benefit from tool support, provided that it supports flexible presentation of content, so that it can be tailored for the communication to specific audiences.

More information on this subject can be found on our website: www.bizzdesign.com. Whitepapers are available for download, and our blog section features a number of very interesting posts regarding the subjects covered in this paper.

If you would like to know more or comment on this blog, please do not hesitate to contact us directly.

Henry Franken is the managing director of BiZZdesign and is chair of The Open Group ArchiMate Forum, in which role he led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

Sven van Dijk, MSc, is a consultant and trainer at BiZZdesign North America. He worked as an application consultant on large-scale ERP implementations and as a business consultant on information management and IT strategy projects in various industries, such as finance and construction. He has nearly eight years of experience in applying structured methods and tools for Business Process Management and Enterprise Architecture.

Bas van Gils is a consultant, trainer and researcher for BiZZdesign. His primary focus is on strategic use of enterprise architecture. Bas has worked in several countries, across a wide range of organizations in industry, retail, and (semi)governmental settings.  Bas is passionate about his work, has published in various professional and academic journals and writes for several blogs.

This guest post comes courtesy of The Open Group and BiZZdesign
 
Copyright The Open Group, 2013. All rights reserved



Tuesday, February 5, 2013

US Department of Energy: Proving the cloud service broker model

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Emerging markets don’t generally follow smooth, predictable paths. Rather, they struggle and jerk unexpectedly, much like an eaglet escaping from its shell. Vendors, analysts, and pundits may seek to define such markets, but typically fall short. After all, vendors don’t establish markets. Customers do.

Today, cloud computing is still in its birth throes. Yes, many organizations are now achieving value in the cloud, but many more still struggle to understand its true value proposition as cloud service providers (CSPs) and vendors mature their offerings in the space. One problem: cloud computing is not a single market. It is in fact many interrelated markets, as its core service models -- infrastructure, platform, and software as a service (SaaS) -- fragment as though they were so many pieces of eggshell.

To bring order to this chaos, a new sub-market of the broader cloud-computing market has emerged: the cloud service broker (CSB). Envision some kind of cloud middleman, helping to cut through the plethora of cloud options and services by offering…well, just what a CSB offers isn’t quite clear. And that’s the problem with the whole notion of a CSB. The market has yet to fully define it.

Not that there aren’t plenty of perspectives on just what a CSB should actually do, mind you. If anything, there are too many opinions, prompting arguments among bloggers and confusion among customers.

Gartner claims CSBs should offer aggregation, integration, and customization, while Forrester delineates simple cloud brokers, full infrastructure brokers, and SaaS brokers -- at least initially. And then there's the National Institute of Standards and Technology (NIST), which calls for CSBs to provide aggregation, intermediation, and arbitrage, specifically for brokers that would serve the US federal government.

But poke around the blogosphere, and many other CSB features come to light. Management is a huge requirement -- or two requirements, actually, as some organizations have needs that focus on business management, while others focus more on the technical aspects of management.

And what about assessments? Shouldn’t your broker assess CSPs who wish to join the CSB, providing some kind of thumbs-up before providers can participate? Then there are the questions about the nature and configuration of the CSB itself. Is it internal to the organization, or a third party much like a real-estate broker might be? And finally, is the broker essentially a software solution, or is it an organization or team in its own right, where software plays a support role to what are essentially a set of brokering business processes?

There's only one way to cut through this confusion: talk to an organization that not only figured out what it wanted from a CSB, but also built one itself. The organization in question: the National Nuclear Security Administration (NNSA), an agency of the United States Department of Energy (DOE).

Management and security

According to its Web site, NNSA is responsible for the management and security of the nation’s nuclear weapons, nuclear nonproliferation, naval reactor programs, and related activities. Under the auspices of Deputy Chief Technology Officer Anil Karmel, NNSA and the Los Alamos National Lab (LANL) implemented a CSB they call YOURcloud, in collaboration with partners in the contractor community.

According to Karmel, YOURcloud both leverages and supports the DOE’s Information on Demand (IoD) strategy. It provides a self-service portal for infrastructure-as-a-service (IaaS) offerings across multiple CSPs, including on-premise, community, and public cloud services like Amazon’s Elastic Compute Cloud (EC2). YOURcloud balances a diversity of choices among IaaS providers for various DOE programs while allowing those programs to maintain full autonomy of their cloud workloads.

YOURcloud users include DOE users, laboratory and plant users, other government agency users, support contractors, and members of the public. DOE business use cases for the CSB include rapid deployment of servers to scientists, security controls based on data sensitivity, calculating energy savings, disaster recovery, and capital expenditure reduction. And of course, security is a paramount concern.

Karmel describes YOURcloud as a “Cloud of Clouds.” In other words, it’s a secure hybrid CSB that incorporates both on-premise and public cloud offerings. This approach gives them a unified management control plane for IaaS and IoD, and in fact, this technical management capability is central to the role of the CSB at NNSA.

The central problem that led NNSA to build YOURcloud was its need to deploy cloud services rapidly. Before the debut of the broker, cloud deployments had taken 70 days or more, according to Karmel.

NNSA also required a comprehensive security plan more sophisticated than the capabilities that other CSBs, whether in production or still on the drawing board, could offer. To this end, YOURcloud delivers software-defined security covering network, storage, and compute resources. It provides adaptive security that covers both NNSA's virtual desktop infrastructure (VDI) and its service enclaves.

In fact, the notion of service enclaves is central to how YOURcloud deals with security. It’s possible to partition enclaves so that an organization can use one cloud, while protecting sensitive data from users who lack the credentials to access the information in that cloud.

In essence, enclaves provide a container for both workloads and configurations. After a program creates an enclave, it establishes role-based access control (RBAC) by assigning permissions to the organization’s technical staff. In the future, YOURcloud will also provide a shared services enclave that will provide the foundation for enterprise “app store” functionality for the DOE broadly and NNSA in particular.
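
As a purely illustrative sketch (the class and field names below are hypothetical, not YOURcloud's actual design), an enclave can be modeled as a container that ties workloads and configuration to role-based permissions:

    from dataclasses import dataclass, field
    from typing import Dict, Set


    @dataclass
    class Enclave:
        """Hypothetical model of an enclave: a container for workloads,
        configurations, and the role-based permissions that guard them."""
        name: str
        workloads: Set[str] = field(default_factory=set)
        configuration: Dict[str, str] = field(default_factory=dict)
        # role name -> set of permitted actions, e.g. {"deploy", "configure"}
        role_permissions: Dict[str, Set[str]] = field(default_factory=dict)
        # user id -> assigned role
        user_roles: Dict[str, str] = field(default_factory=dict)

        def grant_role(self, user: str, role: str) -> None:
            """Assign a role (e.g. technical, security, billing contact) to a user."""
            self.user_roles[user] = role

        def is_allowed(self, user: str, action: str) -> bool:
            """RBAC check: a user may act only through the permissions of their role."""
            role = self.user_roles.get(user)
            return role is not None and action in self.role_permissions.get(role, set())


    # Example: a program creates an enclave and assigns its technical staff.
    sandbox = Enclave(
        name="program-x-sandbox",
        role_permissions={"technical": {"deploy", "configure"}, "billing": {"view-costs"}},
    )
    sandbox.grant_role("alice", "technical")
    assert sandbox.is_allowed("alice", "deploy")
    assert not sandbox.is_allowed("alice", "view-costs")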

Critical function

Organization-centric user registration is also a critical function of the CSB. NNSA requires that YOURcloud identify each participating organization's top-level contacts, in part to prevent unnecessary overlap between organizations. Users include technical contacts who select providers, create enclaves, grant permissions, and manage configurations. Security contacts provide organizational firewall control, while billing contacts handle billing statement controls.

Cost reduction is one of the most trumpeted benefits of cloud computing, but the government procurement context complicates the ability of departments to leverage the cloud’s utility model. It’s essential, therefore, for YOURcloud to define the cost structure for IaaS, including the duration of the infrastructure services as well as the mechanism for payment.

Simple pay-as-you-go pricing, however, won’t work for the DOE. The risk with such pricing, of course, is the possibility of an unexpectedly large bill. Such unpredictability is inconsistent with normal government procurement processes. Instead, agencies require full allocation, meaning a fixed price for a maximum level of consumption of cloud services. YOURcloud facilitates this full allocation pricing model, and also enables programs to turn off cloud services and hold them for future use. In effect, delivery of the CSB enables the DOE to save money while simultaneously providing an agnostic platform for innovation.
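
To illustrate the difference with invented numbers (the DOE's actual rates and terms are not part of this story), compare a pay-as-you-go bill, which varies with consumption, to a full-allocation price, which is fixed for a maximum level of consumption:

    def pay_as_you_go_cost(hours_used: float, rate_per_hour: float) -> float:
        """Pay as you go: the bill grows with consumption, so it is hard to predict."""
        return hours_used * rate_per_hour


    def full_allocation_cost(hours_used: float, rate_per_hour: float,
                             max_hours: float) -> float:
        """Full allocation: a fixed price for a maximum level of consumption.
        Requests beyond the allocation are simply not provisioned."""
        if hours_used > max_hours:
            raise ValueError("Request exceeds the allocated capacity")
        return max_hours * rate_per_hour  # the price is known up front


    # Illustrative comparison: under full allocation the agency pays the same,
    # known amount whether it consumes 600 or 900 of its 1,000 allocated hours.
    print(pay_as_you_go_cost(900, 0.10))          # 90.0  -- varies with usage
    print(full_allocation_cost(900, 0.10, 1000))  # 100.0 -- fixed in advance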

Since NNSA is a government agency, it's no surprise that YOURcloud follows NIST's definition of a CSB more closely than Gartner's or Forrester's. In fact, YOURcloud exhibits all three of NIST's CSB capabilities: aggregation, intermediation, and arbitrage. Not only does YOURcloud aggregate pre-approved CSPs, it provides both business intermediation as well as technical intermediation.

The current version of YOURcloud also has limited arbitrage capabilities in the form of a dynamic cost calculator, as well as chargeback and showback functionality (showback refers to providing management with an analysis of the IT costs due to each department, without actually charging those costs back to the departments).
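
A rough sketch of the distinction, again with hypothetical names and figures: showback reports each department's share of IT costs for visibility only, while chargeback produces amounts to actually bill back:

    from typing import Dict


    def showback_report(costs_by_department: Dict[str, float]) -> str:
        """Showback: show management each department's IT costs
        without actually charging those costs back."""
        lines = [f"{dept}: ${cost:,.2f}"
                 for dept, cost in sorted(costs_by_department.items())]
        return "\n".join(lines)


    def chargeback_invoices(costs_by_department: Dict[str, float]) -> Dict[str, float]:
        """Chargeback: return the amounts to invoice each department internally."""
        return dict(costs_by_department)


    usage = {"Program A": 12500.0, "Program B": 8300.0}
    print(showback_report(usage))          # visibility only, no bill issued
    invoices = chargeback_invoices(usage)  # amounts that would actually be billed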

Perhaps the most important asset YOURcloud brings to the table for DOE is how well it supports program autonomy. YOURcloud allows programs within the DOE to maintain full control over their workloads within the context of a common security baseline. Karmel’s cloud-of-clouds approach enables YOURcloud to broker any organization, through any device, to any service. This respect for program autonomy addresses the “not invented here” problem: program managers can leverage the capabilities of YOURcloud without feeling like the broker is pushing them to select services or follow policies that are not in line with their requirements.

It's not clear how well YOURcloud will define the characteristics of CSBs across the entire cloud-computing market, but NNSA's efforts have not gone unnoticed within the federal government. CSBs are a hot topic across both civilian and military agencies, with the General Services Administration (GSA) and the Defense Information Systems Agency (DISA) both fleshing out their respective CSB strategies.

That being said, there is no better way to prove a model than with a working, successful example. By implementing a CSB that supports secure, hybrid cloud environments, NNSA and the DOE have set the bar for the next generation of cloud service brokers.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.
