Friday, February 15, 2013

Big Data success depends on better risk management practices like FAIR, say conference panelists

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

This BriefingsDirect thought leadership panel discussion comes to you in conjunction with The Open Group Conference held recently in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today."

The panel of experts explores new trends and solutions in the area of risk management and analysis. Learn how large enterprises are delivering better risk assessments and risk analysis, and discover how big data can be both an area to protect and a tool for better understanding and mitigating risks.

The panelists are Jack Freund, PhD, the Information Security Risk Assessment Manager at TIAA-CREF; Jack Jones, Principal of CXOWARE, and Jim Hietala, Vice President, Security for The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a sponsor of this and other BriefingsDirect podcasts.]

Jack Jones has more than nine years of experience as a Chief Information Security Officer (CISO), and is the inventor of the Factor Analysis of Information Risk (FAIR) framework. Jack Freund has more than 14 years of enterprise IT experience, is a visiting professor at DeVry University, and chairs a risk-management subcommittee for ISACA.

Here are some excerpts:
Gardner: Why is the issue of risk analysis so prominent now? What's different from, say, five years ago?

Jones: The information security industry has struggled with getting the attention of and support from management and businesses for a long time, and it has finally come around to the fact that the executives care about loss exposure -- the likelihood of bad things happening and how bad those things are likely to be.

It's only when we speak in those terms of risk that we make sense to those executives. And once we do that, we begin to gain some credibility and traction in terms of getting things done.

Gardner: So we really need to talk about this in the terms that a business executive would appreciate, not necessarily an IT executive.

Effects on business

Jones: Absolutely. They're tired of hearing about vulnerabilities, hackers, and that sort of thing. It’s only when we can talk in terms of the effect on the business that it makes sense to them.

Freund: The problem that we have as a profession, and I think it’s a big problem, is that we have allowed ourselves to escape the natural trend that other IT professionals have already followed.

There was a time, years ago, when you could code in the basement, and nobody cared much about what you were doing. But now, largely speaking, developers and systems administrators are very focused on meeting the goals of the organization.

Security has been allowed to miss that boat a little. We have been allowed to hide behind this aura of a protector and an alerter of terrible things that could happen, without really tying ourselves to the problems that organizations are facing and how we can help them succeed in what they're doing.

Hietala: There are certainly changes on the threat side of the landscape. Five years ago, you didn’t really have hacktivism or this notion of an advanced persistent threat (APT). That highly skilled attacker taking aim at governments and large organizations didn’t really exist -- or didn’t exist to the degree it does today. So that has changed.

You also have big changes to the IT platform landscape, all of which bring new risks that organizations need to really think about. The mobility trend, the cloud trend, the big-data trend that we are talking about today, all of those things bring new risk to the organization.

As Jack Jones mentioned, business executives don't want to hear about, "I've got 15 vulnerabilities in the mobility part of my organization." They want to understand what’s the risk of bad things happening because of mobility, what we're doing about it, and what’s happening to risk over time.

So it’s a combination of changes in the threats and attackers, as well as changes to the IT landscape, that means we have to take a different look at how we measure and present risk to the business.

Gardner: Because we're at a big-data conference, do you share my perception, Jack Jones, that big data can be a source of risk and vulnerability, but also the analytics and the business intelligence (BI) tools that we're employing with big data can be used to alert you to risks or provide a strong tool for better understanding your true risk setting or environment?

Crown jewels

Jones: You are absolutely right. You think of big data and, by definition, it’s where your crown jewels, and everything that leads to the crown jewels from an information perspective, are going to be found. It's like one-stop shopping for the bad guy, if you want to look at it in that context. It definitely needs to be protected. The architecture surrounding it, and its integration across a lot of different platforms, can be leveraged as well, and probably results in a complex landscape to try to secure.

There are a lot of ways into that data. But if you can leverage that same big-data architecture as an approach to information security -- with log data and other threat and vulnerability data -- you should be able to make some significant gains in terms of how well-informed your analyses and your decisions are.

Freund: If we fast-forward five years, and this is even true today, a lot of people on the cutting edge of big data will tell you the problem isn’t so much bringing everything together and figuring out what it can do. They're going to tell you that the problem is what we do once we figure out everything that we have. This is the problem that we have traditionally had on a much smaller scale in information security. When everything is important, nothing is important.

Gardner: What parts of organizations aren’t being assessed for risk and should be?

Freund: The big problem that exists largely today, in the way that risk assessments are done, is the focus on labels. We want to quickly address the low, medium, and high things and know where they are. But there are inherent problems in the way that we think about those labels, without doing any of the analysis legwork.

What’s really missing is that true analysis. If the system goes offline, do we lose money? If the system becomes compromised, what are the cost-accounting things that will happen that allow us to figure out how much money we're going to lose?

That analysis work is largely missing. That’s the gap. The gap is if the control is not in place, then there’s a risk that must be addressed in some fashion. So we end up with these very long lists of horrible, terrible things that can be done to us in all sorts of different ways, without any relevance to the overall business of the organization.

Every day, our organizations are out there selling products and offering services, which is, in and of itself, a risky venture. So tying what we do from an information security perspective to that is critical, not just for the success of the organization, but for the success of our profession.
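
As a rough illustration of the kind of cost-accounting Freund is describing -- a hypothetical back-of-the-envelope sketch, not an excerpt from the FAIR standard or from the discussion -- a single outage scenario can be priced out in a few lines:

    # Hypothetical loss analysis for one outage scenario.
    # All figures are invented placeholders, not data from the panel.

    hours_down = 6                # estimated outage duration
    revenue_per_hour = 40_000     # revenue the system normally generates per hour
    recovery_cost = 25_000        # staff time, vendor support, and cleanup

    estimated_loss = hours_down * revenue_per_hour + recovery_cost
    print(f"Estimated loss for this outage scenario: ${estimated_loss:,.0f}")

Even an estimate this crude speaks in the dollars-and-cents terms that, as the panelists note, business executives actually respond to.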

Risk implications

Jones: Businesses have been making these decisions, chasing the opportunity, but generally, without any clear understanding of the risk implications, at least from the information security perspective. They will have us in the corner screaming and throwing red flags in there, and talking about vulnerabilities and threats from one thing or another.

But, we come to the table with red, yellow, and green indicators, and on the other side of the table, they’ve got numbers. Well, here is what we expect to earn in revenue from this initiative, and the information security people are saying it’s crazy. How do you normalize the quantitative revenue gain versus red, yellow, and green?

Gardner: Jim Hietala, do you see it in the same red, yellow, green or are there some other frameworks or standard methodologies that The Open Group is looking at to make this a bit more of a science?

Hietala: Probably four years ago, we published what we call the Risk Taxonomy Standard, which is based upon Factor Analysis of Information Risk (FAIR), the risk management framework that Jack Jones invented. So we’re big believers in bringing that level of precision to doing risk analysis. Having just gone through training for FAIR myself, as part of the standards effort that we’re doing around certification, I can say that it brings a level of precision and a depth of analysis to risk analysis that has frequently been lacking in IT security and risk management.

Gardner: Whose job should this fall under? Who is wearing the white hat in the company and can rally the forces of good and make all the bad things managed?

Freund: The profession of IT risk management is changing. That profession will have to sit between the business and information security inclusive of all the other IT functions that make that happen.

In order to be successful sitting between these two groups, you have to be able to speak the language of both of those groups. You have to be able to understand profit and loss and capital expenditure on the business side. On the IT risk side, you have to be technical enough to do all those sorts of things.

But I think the sum total of those two things is probably only about 50 percent of the job of IT risk management today. The other 50 percent is communication. Finding ways to translate that language and to understand the needs and concerns of each side of that relationship is really the job of IT risk management.

To answer your question, I think it’s absolutely the job of IT risk management to do that. From my own experience with the FAIR framework, I can say that FAIR is the Rosetta Stone for speaking between those two groups.

Necessary tools

It gives you the tools necessary to speak in the insurance and risk terms that the business appreciates. And it gives you the ability to be as technical and, if you will, as nerdy as you need to be in order to talk to IT security and the other IT functions, to make sure everybody is on the same page and everyone feels that their concerns are represented in the risk-assessment functions that are happening.

Gardner: How do you know if you’re doing it right? How do you know if you're moving from yellow to green, instead of to red?

Freund: There are a couple of things in that question. The first is there's this inherent assumption in a lot of organizations that we need to move from yellow to green, and that may not be the case. So, becoming very knowledgeable about the risk posture and the risk tolerance of the organization is a key.

That's part of the official mindset of IT security. When you graduate an information security person today, they are minted knowing that there are a lot of bad things out there, and their goal in life is to reduce them. But that may not be the case. The case may very well be that things are okay now, and we have bigger fish to fry over here that we’re going to focus on. So that's one thing.

The second thing, and it's a very good question, is how do we know that we’re getting better? How do we trend that over time? Overall, measuring that value for the organization has to show a reduction of risk, or at least a reduction of risk to within the risk-tolerance levels of the organization.

Calculating and understanding that requires something that I always phrase as becoming comfortable with uncertainty. When you're talking about risk in general, you're talking about forward-looking statements about things that may or may not happen. So becoming comfortable with the fact that they may or may not happen means that, when you measure them today, you have to be willing to be a little bit squishy in how you’re representing that.

In FAIR and in other academic work, they talk about using ranges to do that. So things like high, medium, and low could be represented in terms of a minimum, maximum, and most likely value. And that tends to be very, very effective. People can respond to that fairly well.
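
To make that concrete, here is a minimal sketch of estimating with ranges rather than labels. It uses a triangular distribution from Python's standard library as a simple stand-in for the calibrated ranges a FAIR analysis would use; the estimates themselves are hypothetical:

    import random
    import statistics

    # Hypothetical (minimum, most likely, maximum) estimates for one risk scenario
    loss_events_per_year = (0.1, 0.5, 2.0)
    loss_per_event = (50_000, 250_000, 1_000_000)     # dollars

    def draw(minimum, most_likely, maximum):
        # random.triangular takes (low, high, mode)
        return random.triangular(minimum, maximum, most_likely)

    # Combine the two ranges into a distribution of annualized loss exposure
    losses = [draw(*loss_events_per_year) * draw(*loss_per_event) for _ in range(10_000)]

    print(f"Median annualized loss exposure: ${statistics.median(losses):,.0f}")
    print(f"90th percentile:                 ${statistics.quantiles(losses, n=10)[-1]:,.0f}")

The output is a range of outcomes rather than a single number, which is exactly the "squishiness" Freund says analysts have to become comfortable with.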

Gathering data

Jones: With regard to the data sources, there are a lot of people out there doing these sorts of studies, gathering data. The problem that's hamstringing that effort is the lack of a common set of definitions, nomenclature, and even taxonomy around the problem itself.

You will have one study that has defined threat, vulnerability, or whatever differently from some other study, and so the data can't be normalized. It really harms the utility of it. I see data out there and I think, "That looks like it could be really useful." But I hesitate to use it, because I don't understand it. They don't publish their definitions, their approach, and how they went after it.

There's just so much superficial thinking in the profession on this, once we dig under the covers. Too often, I run into stuff that just can't be defended. It doesn’t make sense, and therefore the data can't be used. It's an unfortunate situation.

I do think we’re heading in a positive direction. FAIR can provide a normalizing structure for that sort of thing. The VERIS framework, which, by the way, is also derived in part from FAIR, has also gained real traction in terms of the quality of the research they have done and the data they’re generating. We’re headed in the right direction, but we’ve got a long way to go.

Gardner: I'm curious how prevalent cyber insurance is, and whether it will have a leveling effect in the industry, where people speak a common language -- the equivalent of actuarial tables, but for enterprise security and cyber security?

Jones: One would dream and hope, but at this point, what I've seen out there in terms of the basis on which insurance companies are setting their premiums and such is essentially the same old “risk assessment” stuff that the industry has been doing poorly for years. It's not based on data or any real analysis per se, at least what I’ve run into. What they do is set their premiums high to buffer themselves and typically cover as few things as possible. The question of how much value it's providing the customers becomes a problem.

Looking to the future

Gardner: What's the future of risk management, and what does the cloud trend bring to the table?

Hietala: I’d start with a maxim that comes out of the financial services industry, which is that you can outsource the function, but you still own the risk. That's an unfortunate reality. You can throw things out into the cloud, but it doesn’t absolve you from understanding your risk and then doing things to manage it or transfer it, through insurance or whatever the case may be.

That's just a reality. Organizations in the risky world we live in are going to have to get more serious about doing effective risk analysis. From The Open Group standpoint, we see this as an opportunity area.

As I mentioned, we’ve standardized the taxonomy piece of the Factor Analysis of Information Risk (FAIR) framework. And we really see an opportunity going forward to help the risk-analysis community by further standardizing FAIR and launching a certification program for a FAIR-certified risk analyst. That's in demand from large organizations that are looking for evidence that people understand how to apply FAIR and use it in doing risk analyses.

Freund: I always try to consider things as they exist within other systems. Risk is a system of systems. There are a series of pressures that are applied, and a series of levers that are thrown in order to release that sort of pressure.

Risk will always be owned by the organization that is offering that service. If we decide at some point that we can move to the cloud and all these other things, we need to look to the legal system, the series of pressures it is going to apply, who is going to own that risk, and how that plays itself out.

If we look to the Europeans and the way that they’re managing risk and compliance, they’re still as strict as we in the United States think they may be about things, but there's still a lot of leeway in the way that laws are written. You’re still being asked to do things that are reasonable. You’re still being asked to do things that are standard for your industry. But we'd still like the ability to know what that is, and I don't think that's going to go away anytime soon.

Judgment calls

We’re still going to have to make judgment calls. We’re still going to have to do 100 things with a budget for 10 things. Whenever that happens, you have to make a judgment call. What's the most important thing that I care about? And that's why risk management exists, because there’s a certain series of things that we have to deal with. We don't have the resources to do them all, and I don't think that's going to change over time. Regardless of whether the landscape changes, that's the one that remains true.

Jones: If we were to take a snapshot at any given point in time of an organization’s loss exposure, how much risk they have right then, that's a lagging indicator of the decisions they’ve made in the past, and their ability to execute against those decisions.

We can do some great root-cause analysis around that and ask how we got there. But we can also turn that coin around and ask how good we are at making well-informed decisions and then executing against them, and then ask what that implies from a risk perspective downstream.

If we understand the relationship between our current state and past and future states, and we have those linkages defined -- especially if we have an analytic framework underneath it -- we can do some marvelous what-if analysis.

What if this variable changed in our landscape? Let's run a few thousand Monte Carlo simulations against that and see what comes up. What does that look like? Well, then let's change this other variable and see which combination of dials, when we turn them, makes us most robust to change in our landscape.
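
A small sketch of the what-if analysis Jones describes, using hypothetical numbers: run a Monte Carlo simulation over ranged estimates for a baseline, change one variable (here, a proposed control assumed to halve loss event frequency), and compare the resulting loss-exposure distributions.

    import random
    import statistics

    def simulate(frequency_range, magnitude_range, runs=10_000):
        # Each range is a hypothetical (minimum, most likely, maximum) estimate
        def draw(minimum, most_likely, maximum):
            return random.triangular(minimum, maximum, most_likely)
        return [draw(*frequency_range) * draw(*magnitude_range) for _ in range(runs)]

    baseline = simulate((0.1, 0.5, 2.0), (50_000, 250_000, 1_000_000))
    # What-if: a control is assumed to halve loss event frequency
    with_control = simulate((0.05, 0.25, 1.0), (50_000, 250_000, 1_000_000))

    print(f"Baseline median annualized loss:      ${statistics.median(baseline):,.0f}")
    print(f"With control, median annualized loss: ${statistics.median(with_control):,.0f}")

Turning each dial in turn and re-running the simulation shows which changes make the organization most robust, which is the comparison Jones is after.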

But again, we can't begin to get there until we have this foundational set of definitions, frameworks, and such to do that sort of analysis. That's what we’re doing with the Factor Analysis of Information Risk (FAIR) framework. Without some sort of framework like that, there's no way you can get there.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.


Friday, February 8, 2013

Three best practices for successful implementation of enterprise architecture using the TOGAF framework and the ArchiMate modeling language


This guest post comes courtesy of The Open Group and BiZZdesign

By Henry Franken, Sven van Dijk and Bas van Gils, BiZZdesign

The discipline of enterprise architecture (EA) was developed in the 1980s with a strong focus on the information systems landscape of organizations. Since those days, the scope of the discipline has slowly widened to include more and more aspects of the enterprise as a whole. This holistic perspective takes into account the concerns of a wide variety of stakeholders. Architects, especially at the strategic level, attempt to answer the question: “How should we organize ourselves in order to be successful?”

An architecture framework is a foundational structure or set of structures for developing a broad range of architectures and consists of a process and a modeling component. The TOGAF framework and the ArchiMate modeling language – both maintained by The Open Group – are two leading and widely adopted standards in this field.

While both the TOGAF framework and the ArchiMate modeling language have a broad (enterprise-wide) scope and provide a practical starting point for an effective EA capability, a key factor is the successful embedding of EA standards and tools in the organization. From this perspective, the implementation of EA means that an organization adopts processes for the development and governance of EA artifacts and deliverables. Standards need to be tailored, and tools need to be configured in the right way in order to create the right fit. Or more popularly stated, “For an effective EA, it has to walk the walk, and talk the talk of the organization.”

EA touches on many aspects such as business, IT (and especially the alignment of these two), strategic portfolio management, project management, and risk management. EA is by definition about cooperation and therefore it is impossible to operate in isolation. Successful embedding of an EA capability in the organization is typically approached as a change project with clearly defined goals, metrics, stakeholders, appropriate governance and accountability, and with assigned responsibilities in place.

With this in mind, we share three best practices for the successful implementation of EA:

Think big, start small

The potential footprint of a mature EA capability is as big as the entire organization, but one of the key factors for success with EA is to deliver value early on. Experience from our consultancy practice shows that a “think big, start small” approach has the most potential for success. This means that implementing an EA capability is a process of iterative and incremental steps, based on a long term vision. Each step in the process must add measurable value to the EA practice, and priorities should be based on the needs and the change capacity of the organization.

Combine process and modeling

The TOGAF framework and the ArchiMate modeling language are a powerful combination. Deliverables in the architecture process are more effective when based on an approach that combines formal models with powerful visualization capabilities.

The TOGAF standard describes the architecture process in detail. The Architecture Development Method (ADM) is the core of the TOGAF standard. The ADM is a customer-focused and value-driven process for the sustainable development of a business capability. The ADM specifies deliverables throughout the architecture life-cycle with a focus on the effective communication to a variety of stakeholders.

ArchiMate is fully complementary to the content specified in the TOGAF standard. The ArchiMate standard can be used to describe all aspects of the EA in a coherent way, while tailoring the content for a specific audience. What's more, an architecture repository is a valuable asset that can be reused throughout the enterprise. This greatly benefits communication and cooperation between enterprise architects and their stakeholders.

Use a tool

It is true that “a fool with a tool is still a fool.” In our teaching and consulting practice we have found, however, that adoption of a flexible and easy-to-use tool can be a strong driver in pushing the EA initiative forward.

EA brings together valuable information that greatly enhances decision making, whether on a strategic or more operational level. This knowledge not only needs to be efficiently managed and maintained, it also needs to be communicated to the right stakeholder at the right time, and even more importantly, in the right format.

EA has a diverse audience with business and technical backgrounds, and each of the stakeholders needs to be addressed in a language they understand. Therefore, the essential qualifications for EA tools are rigidity when it comes to the management and maintenance of knowledge, and flexibility when it comes to the analysis (ad hoc, what-if, etc.), presentation, and communication of the information to diverse audiences.

So what you are looking for is a tool with solid repository capabilities and flexible modeling and analysis functionality.

Conclusion

EA brings value to the organization because it answers the question “How should we organize ourselves?” more accurately. Standards for EA help organizations capitalize on their investments in EA more quickly. The TOGAF framework and the ArchiMate modeling language are popular, widespread, open, and complete standards for EA, from both a process and a language perspective.

EA becomes even more effective if these standards are used in the right way. The EA capability needs to be carefully embedded in the organization. This is usually a process based on a long term vision and has the most potential for success if approached as “think big, start small.” Enterprise Architects can benefit from tool support, provided that it supports flexible presentation of content, so that it can be tailored for the communication to specific audiences.

More information on this subject can be found on our website: www.bizzdesign.com. Whitepapers are available for download, and our blog section features a number of very interesting posts regarding the subjects covered in this paper.

If you would like to know more or comment on this blog, please do not hesitate to contact us directly.

Henry Franken is the managing director of BiZZdesign and is chair of The Open Group ArchiMate Forum. In that role, Henry led the development of the ArchiMate Version 2.0 standard. Henry is a speaker at many conferences and has co-authored several international publications and Open Group White Papers. Henry is co-founder of the BPM-Forum. At BiZZdesign, Henry is responsible for research and innovation.

Sven van Dijk, MSc, is a consultant and trainer at BiZZdesign North America. He has worked as an application consultant on large-scale ERP implementations and as a business consultant on information management and IT strategy projects in industries such as finance and construction. He has nearly eight years of experience applying structured methods and tools for Business Process Management and Enterprise Architecture.

Bas van Gils is a consultant, trainer and researcher for BiZZdesign. His primary focus is on strategic use of enterprise architecture. Bas has worked in several countries, across a wide range of organizations in industry, retail, and (semi)governmental settings.  Bas is passionate about his work, has published in various professional and academic journals and writes for several blogs.

This guest post comes courtesy of The Open Group and BiZZdesign
 
Copyright The Open Group, 2013. All rights reserved



Tuesday, February 5, 2013

US Department of Energy: Proving the cloud service broker model

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Emerging markets don’t generally follow smooth, predictable paths. Rather, they struggle and jerk unexpectedly, much like an eaglet escaping from its shell. Vendors, analysts, and pundits may seek to define such markets, but typically fall short. After all, vendors don’t establish markets. Customers do.

Today, cloud computing is still in its birth throes. Yes, many organizations are now achieving value in the cloud, but many more still struggle to understand its true value proposition as cloud service providers (CSPs) and vendors mature their offerings in the space. One problem: cloud computing is not a single market. It is in fact many interrelated markets, as its core service models -- infrastructure, platform, and software as a service (SaaS) -- fragment as though they were so many pieces of eggshell.

To bring order to this chaos, a new sub-market of the broader cloud-computing market has emerged: the cloud service broker (CSB). Envision some kind of cloud middleman, helping to cut through the plethora of cloud options and services by offering…well, just what a CSB offers isn’t quite clear. And that’s the problem with the whole notion of a CSB. The market has yet to fully define it.

Not that there aren’t plenty of perspectives on just what a CSB should actually do, mind you. If anything, there are too many opinions, prompting arguments among bloggers and confusion among customers.

Gartner claims CSBs should offer aggregation, integration, and customization, while Forrester delineates simple cloud brokers, full infrastructure brokers, and SaaS brokers -- at least initially. And then there's the National Institute of Standards and Technology (NIST), which calls for CSBs to provide aggregation, intermediation, and arbitrage, specifically for brokers that would serve the US federal government.

But poke around the blogosphere, and many other CSB features come to light. Management is a huge requirement -- or two requirements, actually, as some organizations have needs that focus on business management, while others focus more on the technical aspects of management.

And what about assessments? Shouldn’t your broker assess CSPs who wish to join the CSB, providing some kind of thumbs-up before providers can participate? Then there are the questions about the nature and configuration of the CSB itself. Is it internal to the organization, or a third party much like a real-estate broker might be? And finally, is the broker essentially a software solution, or is it an organization or team in its own right, where software plays a support role to what are essentially a set of brokering business processes?

There’s only one way to cut through this confusion: talk to an organization that not only figured out what it wanted from a CSB, but also built one itself. The organization in question: the National Nuclear Security Administration (NNSA), an agency of the United States Department of Energy (DOE).

Management and security

According to its Web site, NNSA is responsible for the management and security of the nation’s nuclear weapons, nuclear nonproliferation, naval reactor programs, and related activities. Under the auspices of Deputy Chief Technology Officer Anil Karmel, NNSA and the Los Alamos National Lab (LANL) implemented a CSB they call YOURcloud, in collaboration with partners in the contractor community.

According to Karmel, YOURcloud both leverages and supports the DOE’s Information on Demand (IoD) strategy. It provides a self-service portal for infrastructure-as-a-service (IaaS) offerings across multiple CSPs, including on-premise, community, and public cloud services like Amazon’s Elastic Compute Cloud (EC2). YOURcloud balances a diversity of choices among IaaS providers for various DOE programs while allowing those programs to maintain full autonomy of their cloud workloads.

YOURcloud users include DOE users, laboratory and plant users, other government agency users, support contractors, and members of the public. DOE business use cases for the CSB include rapid deployment of servers to scientists, security controls based on data sensitivity, calculating energy savings, disaster recovery, and capital expenditure reduction. And of course, security is a paramount concern.

Karmel describes YOURcloud as a “Cloud of Clouds.” In other words, it’s a secure hybrid CSB that incorporates both on-premise and public cloud offerings. This approach gives them a unified management control plane for IaaS and IoD, and in fact, this technical management capability is central to the role of the CSB at NNSA.

The central problem that led NNSA to build YOURcloud was their desire to deploy cloud services rapidly. Before the debut of the broker, cloud deployments had taken 70 days or more, according to Karmel.

NNSA also required a comprehensive security plan that was more sophisticated than the security capabilities other CSBs -- both in production and on the drawing board -- might offer. To this end, YOURcloud delivers software-defined security covering network, storage, and compute resources. It provides adaptive security that covers both NNSA’s virtual desktop infrastructure (VDI) and its service enclaves.

In fact, the notion of service enclaves is central to how YOURcloud deals with security. It’s possible to partition enclaves so that an organization can use one cloud, while protecting sensitive data from users who lack the credentials to access the information in that cloud.

In essence, enclaves provide a container for both workloads and configurations. After a program creates an enclave, it establishes role-based access control (RBAC) by assigning permissions to the organization’s technical staff. In the future, YOURcloud will also provide a shared services enclave that will provide the foundation for enterprise “app store” functionality for the DOE broadly and NNSA in particular.
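
As a purely hypothetical illustration of that enclave-and-RBAC model -- the structure and names below are invented for this sketch, not taken from YOURcloud itself -- an enclave can be thought of as a container that pairs workloads and configuration with per-user role assignments:

    from dataclasses import dataclass, field

    @dataclass
    class Enclave:
        # Invented fields for illustration; not the actual YOURcloud data model
        name: str
        provider: str                                  # on-premise, community, or public IaaS
        workloads: list = field(default_factory=list)
        roles: dict = field(default_factory=dict)      # user -> role (technical, security, billing)

        def assign(self, user: str, role: str) -> None:
            self.roles[user] = role

        def can_change_firewall(self, user: str) -> bool:
            # Only the security contact role controls the organizational firewall
            return self.roles.get(user) == "security"

    enclave = Enclave(name="program-x", provider="on-premise")
    enclave.assign("alice", "technical")
    enclave.assign("bob", "security")
    print(enclave.can_change_firewall("bob"))    # True
    print(enclave.can_change_firewall("alice"))  # False

The role names mirror the organization contacts described in the next section: technical contacts manage enclaves and permissions, security contacts control the firewall, and billing contacts handle statements.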

Critical function

Organization-centric user registration is also a critical function of the CSB. NNSA requires that YOURcloud identify each participating organization's top-level contacts, in part to prevent unnecessary organizational overlap. Users include technical contacts who select providers, create enclaves, grant permissions, and manage configurations. In particular, security contacts provide organizational firewall control, while billing contacts handle billing statement controls.

Cost reduction is one of the most trumpeted benefits of cloud computing, but the government procurement context complicates the ability of departments to leverage the cloud’s utility model. It’s essential, therefore, for YOURcloud to define the cost structure for IaaS, including the duration of the infrastructure services as well as the mechanism for payment.

Simple pay-as-you-go pricing, however, won’t work for the DOE. The risk with such pricing, of course, is the possibility of an unexpectedly large bill. Such unpredictability is inconsistent with normal government procurement processes. Instead, agencies require full allocation, meaning a fixed price for a maximum level of consumption of cloud services. YOURcloud facilitates this full allocation pricing model, and also enables programs to turn off cloud services and hold them for future use. In effect, delivery of the CSB enables the DOE to save money while simultaneously providing an agnostic platform for innovation.
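
To make the pricing contrast concrete, here is a tiny hypothetical comparison (the rate and usage figures are invented, not DOE numbers): pay-as-you-go scales with whatever is actually consumed, while full allocation fixes the price up front for a stated maximum level of consumption.

    # Hypothetical figures for illustration only
    hourly_rate = 0.40              # dollars per instance-hour
    committed_max_hours = 100_000   # consumption ceiling agreed up front

    full_allocation_price = hourly_rate * committed_max_hours   # fixed and budgetable

    actual_hours_used = 140_000     # an unexpected spike in demand
    pay_as_you_go_bill = hourly_rate * actual_hours_used        # unpredictable after the fact

    print(f"Full allocation (known in advance): ${full_allocation_price:,.0f}")
    print(f"Pay as you go (known only later):   ${pay_as_you_go_bill:,.0f}")

The fixed, budgetable number is what fits normal government procurement; the variable one is the unpredictability agencies are trying to avoid.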

Since NNSA is a government agency, it’s no surprise that YOURcloud follows NIST’s definition of a CSB more closely than Gartner’s or Forrester’s. In fact, YOURcloud exhibits all three of NIST’s CSB capabilities: aggregation, intermediation, and arbitrage. Not only does YOURcloud aggregate pre-approved CSPs, it provides both business intermediation as well as technical intermediation.

The current version of YOURcloud also has limited arbitrage capabilities in the form of a dynamic cost calculator, as well as chargeback and showback functionality (showback refers to providing management with an analysis of the IT costs due to each department, without actually charging those costs back to the departments).

Perhaps the most important asset YOURcloud brings to the table for DOE is how well it supports program autonomy. YOURcloud allows programs within the DOE to maintain full control over their workloads within the context of a common security baseline. Karmel’s cloud-of-clouds approach enables YOURcloud to broker any organization, through any device, to any service. This respect for program autonomy addresses the “not invented here” problem: program managers can leverage the capabilities of YOURcloud without feeling like the broker is pushing them to select services or follow policies that are not in line with their requirements.

It’s not clear how well YOURcloud will define the characteristics of CSBs across the entire cloud-computing market, but NNSA’s efforts have not gone without notice within the federal government. CSBs are a hot topic across both civilian and military agencies, with the General Services Administration (GSA) and the Defense Information Systems Agency (DISA) both fleshing out their respective CSB strategies.

That being said, there is no better way to prove a model than by implementing a working, successful example. By implementing a CSB that supports secure, hybrid Cloud environments, NNSA and the DOE have set the bar for the next generation of Cloud Service Brokers.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.



Tuesday, January 29, 2013

AT&T cloud services built on VMware vCloud Datacenter meet evolving business demands for advanced IaaS

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next BriefingsDirect IT leadership discussion focuses on how global telecommunications giant AT&T has created advanced cloud services for its business customers. We'll see how AT&T has developed the ability to provide virtual private clouds and other computing capabilities as integrated services at scale.

To learn more about implementing cloud technology to deliver and commercialize an adaptive and reliable cloud services ecosystem, we sat down with Chris Costello, Assistant Vice President of AT&T Cloud Services. The interview was conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why are business cloud services such an important initiative for you?

Costello: AT&T has been in the hosting business for over 15 years, and so it was only a natural extension for us to get into the cloud services business to evolve with customers' changing business demands and technology needs.

We have cloud services in several areas. The first is our AT&T Synaptic Compute as a Service. This is a hybrid cloud that allows VMware clients to extend their private clouds into AT&T's network-based cloud using a virtual private network (VPN). And it melds the security and performance of VPN with the economics and flexibility of a public cloud. So the service is optimized for VMware's more than 350,000 clients.

If you look at customers who have internal clouds today or private data centers, they like the control, the security, and the leverage that they have, but they really want the best of both worlds. There are certain workloads where they want to burst into a service provider’s cloud.

We give them that flexibility, agility, and control, where they can simply point and click, using free downloadable tools from VMware, to instantly turn up workloads into AT&T's cloud.

Another capability that we have in this space is AT&T Platform as a Service. This is targeted primarily to independent software vendors (ISVs), IT leaders, and line-of-business managers. It allows customers to choose from 50 pre-built applications, instantly mobilize those applications, and run them in AT&T's cloud, all without having to write a single line of code.

So we're really starting to get into more of the informal buyers, those line-of-business managers, and IT managers who don't have the budget to build it all themselves, or don't have the budget to buy expensive software licenses for certain application environments.

Examples of some of the applications that we support with our platform as a service (PaaS) are things like salesforce automation, quote and proposal tools, and budget management tools.

Storage space

The third key category of AT&T's Cloud Services is in the storage space. We have our AT&T Synaptic Storage as a Service, and this gives customers control over storage, distribution, and retrieval of their data, on the go, using any web-enabled device. In a little bit, I can get into some detail on use cases of how customers are using our cloud services.

This is a very important initiative for AT&T. We're seeing demand from customers of all shapes and sizes. We have a sizable business and effort supporting our small- to medium-sized business (SMB) customers, and we have capabilities that we have tailor-developed just to reach those markets.

As an example, in SMB, it's all about the bundle. It's all about simplicity. It's all about on demand. And it's all about pay per use and having a service provider they can trust.

In the enterprise space, you really start getting into detailed discussions around security. You also start getting into discussions with many customers who already have private networking solutions from AT&T that they trust. When you start talking with clients around the fact that they can run a workload, turn up a server in the cloud, behind their firewall, it really resonates with CIOs that we're speaking with in the enterprise space.

Also in enterprises, it's about having a globally consistent experience. So as these customers are reaching new markets, it's all about not having to stand up an additional data center, compute instance, or what have you, and having a very consistent experience, no matter where they do business, anywhere in the world.

New era for women in tech

Gardner: The fact is that a significant majority of CIOs and IT executives are men, and that’s been the case for quite some time. But I'm curious, does cloud computing and the accompanying shift towards IT becoming more of a services brokering role change that? Do you think that with the consensus building among businesses and partner groups being more important in that brokering role, this might bring in a new era for women in tech?

Costello: I think it is a new era for women in tech. Specifically to my experience in working at AT&T in technology, this company has really provided me with an opportunity to grow both personally and professionally.

I currently lead our Cloud Office at AT&T and, prior to that, ran AT&T’s global managed hosting business across our 38 data centers. I was also lucky enough to be chosen as one of the top women in wireline services.

What drives me as a woman in technology is that I enjoy the challenge of creating offers that meet customer needs, whether they be in the cloud space, things like driving eCommerce, high performance computing environment, or disaster recovery (DR) solutions.

I love spending time with customers. That’s my favorite thing to do. I also like to interact with many partners and vendors that I work with to stay current on trends and technologies. The key to success of being a woman working in technology is being able to build offers that solve customers' business problem, number one.

Number two is being able to then articulate the value of a lot of the complexity around some of these solutions, and package the value in a way that’s very simple for customers to understand.

Some of the challenge and also opportunity of the future is that, as technology continues to evolve, it’s about reducing complexity for customers and making the service experience seamless. The trend is to deliver more and more finished services, versus complex infrastructure solutions.

I've had the opportunity to interact with many women in leadership, whether they be in my peer group, managers who work as part of my team, or mentors I have within AT&T who are senior leaders in the business.

I also mentor three women at AT&T, whether they be in technology, sales, or an operations role. So I'm starting to see this trend continue to grow.

Gardner: You have a lot of customers who are already using your business network services. I imagine there are probably some good cost-efficiencies in moving them to cloud services as well.

Costello: Absolutely. We've embedded cloud capabilities into the AT&T managed network. It enables us to deliver a mobile cloud as well. That helps customers to transform their businesses. We're delivering cloud services in the same manner as voice and data services, intelligently routed across our highly secure, reliable network.

AT&T's cloud is not sitting on top of or attached to our network, but it's fully integrated to provide customers a seamless, highly secure, low-latency, and high-performing experience.

Gardner: Why did you choose VMware and vCloud Datacenter Services as a core to the AT&T Synaptic Compute as a Service?

Multiple uses

Costello: AT&T uses VMware in several of our hosting application and cloud solutions today. In the case of AT&T Synaptic Compute as a Service, we use that in several ways, both to serve customers in public cloud and hybrid, as well as private cloud solutions.

We've also been using VMware technology for a number of years in AT&T’s Synaptic Hosting offer, which is our enterprise-grade utility computing service. And we've been serving customers with server virtualization solutions available in AT&T data centers around the world, which can also be extended into customer or third-party locations.

Just to drill down on some of the key differentiators of AT&T Synaptic Compute as a Service, it’s two-fold.

One is that we integrate with AT&T private networking solutions. Some of the benefits that customers enjoy as a result of that are orchestration of resources, where we'll take the amount of compute storage and networking resources and provide the exact amount of resources at the exact right time to customers on-demand.

Our solutions offer enterprise-grade security. The fact that we've integrated AT&T Synaptic Compute as a Service with our private networking solutions allows customers to extend their cloud into our network using VPN.
An engineering firm can now perform complex mathematical computations and extend from their private cloud into AT&T’s hybrid solution instantaneously, using their native VMware toolset.

Let me touch upon VMware vCloud Datacenter Services for a minute. We think that’s another key differentiator for us, in that we can allow clients to seamlessly move workloads to our cloud using native VMware toolsets. Essentially, we're taking technical complexity and interoperability challenges off the table.

With the vCloud Datacenter program that we're part of with VMware, we're letting customers copy and paste workloads and see all of their virtual machines, whether in their own private cloud environment or in a hybrid solution provided by AT&T. Providing that seamless access to view all of their virtual machines and manage them through a single interface is key to reducing technical complexity and speeding time to market.

We've been doing business with VMware for a number of years. We also have a utility-computing platform called AT&T Synaptic Hosting. We learned early on, in working with customers’ managed utility computing environments, that VMware was the virtualization tool of choice for many of our enterprise customers.

As technologies evolved over time and cloud technologies have become more prevalent, it was absolutely paramount for us to pick a virtualization partner that was going to provide the global scale that we needed to serve our enterprise customers, and to be able to handle the large amount of volume that we receive, given the fact that we have been in the hosting business for over 15 years.

As a natural extension of our Synaptic Hosting relationship with VMware for many years, it only made sense that we joined the VMware vCloud Datacenter program. VMware is baked into our Synaptic Compute as a Service capability. And it really lets customers have a simplified hybrid cloud experience. In five simple steps, customers can move workloads from their private environment into AT&T's cloud environment.

Imagine that you're the IT manager coming in to start your workday. All of a sudden, you hit 85 percent utilization in your environment, but you want to very easily access additional resources from AT&T. You can use the same console that you use to perform your daily job for the data center that you run in-house.

In five clicks, you're viewing your in-house private-cloud resources that are VMware based and your AT&T virtual machines (VMs) running in AT&T's cloud, our Synaptic Compute as a Service capability. That all happens in minutes' time.

Gardner: I should also think that the concepts around the software-defined datacenter and software-defined networking play a part in this. Is that something you are focused on?
If we start with enterprise, the security aspects of the solution had to prove out for the customers that we do business with.

Costello: Software-defined datacenter and software-defined networks are essentially what we're talking about here with some uniqueness that AT&T Labs has built within our networking solutions. We essentially take our edge, our edge routers, and the benefits that are associated with AT&T networking solutions around redundancy, quality of service, etc., and extend that into cloud solutions, so customers can extend their cloud into our network using VPN solutions.

Added efficiency

Previously many customers would have to buy a router and try to pull together a solution on their own. It can be costly and time consuming. There's a whole lot of efficiency that comes with having a service provider being able to manage your compute storage and networking capabilities end to end.

Global scale was also very critical to the customers who we've been talking to. The fact that AT&T has localized and distributed resources through a combination of our 38 data centers around the world, as well as central offices, makes it very attractive to do business with AT&T as a service provider.

Gardner: We've certainly seen a lot of interest in hybrid cloud. Is that one of the more popular use cases?

Costello: I speak with a lot of customers who are looking to virtually expand. They have data-center, systems, and application investments, and they have global headquarters locations, but they don't want to have to stand up another data center and/or ship staff out to other locations. So certainly one use case that's very popular with customers is, "I can expand my virtual data-center environment and use AT&T as a service provider to help me do that."

Another use case that's very popular with our customers is disaster recovery. We see a lot of customers looking for a more efficient way to have business continuity, have the ability to fail over in the event of a disaster, and also get in and test their plans more frequently than they're doing today.

For many of the solutions that are in place today, clients are saying they are expensive and/or they're just not meeting their service-level agreements (SLAs) to the business unit. In one solution that we recently put in place for a client, we placed them in two of AT&T's geographically diverse data centers, wrapped it with AT&T's private-networking capability, and then layered in our AT&T Synaptic Compute as a Service and Storage as a Service.

The customer ended up with a better SLA and a very powerful return on investment (ROI) as well, because they're only paying for the cloud resources when the meter is running. They now have a stable environment, so they can get in and test their plans as often as they'd like, and they're only paying a very small storage fee until they actually need to invoke their plan in the event of a disaster. So DR plans are very popular.

Another use case that’s very popular among our clients is short-term compute. We work with a lot of customers who have massive mathematical calculations and they do a lot of number crunching.

Finally, in the compute space, we're seeing a lot of customers start to hang virtual desktop solutions off of their compute environment. In the past, when I would ask clients about virtual desktop infrastructure (VDI), they'd say, "We're looking at it, but we're not sure. It hasn’t made the budget list." All of a sudden, it’s becoming one of the most highly requested use cases from customers, and AT&T has solutions to cover all those needs.
 
Gardner: Do you think that this will extend to some of the big data and analytics crunching that we've heard about?

Costello: I don’t think anyone is in a better position than AT&T to be able to help customers to manage their massive amounts of data, given the fact that a lot of this data has to reside on very strong networking solutions. The fact that we have 38 data centers around the world, a global reach from a networking perspective, and all the foundational cloud capabilities makes a whole lot of sense.

Speaking about this type of "bursty" use case, we host some of the largest brand-name retailers in the world. When you think about it, a lot of these retailers are preparing for the holidays, and their servers go underutilized much of the year. So how attractive is it to be able to look to AT&T, as a service provider, to provide them robust SLAs and a platform that they only have to pay for when they need to utilize it, versus sitting very underutilized much of the year?

We also host many online gaming customers. When you think about the gamers that are out there, there is a big land rush when the buzz occurs right before the launch of a new game. We work very proactively with those gaming customers to help them size their networking needs well in advance of a launch. Also we'll monitor it in real time to ensure that those gamers have a very positive experience when that launch does occur.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.


Convercent's cloud app aims to help employees implement, measure, and rate corporate values and culture

Convercent, a new company that launched today, aims to fill a void in the governance, risk, and compliance (GRC) market with approachable tools that help employees implement corporate values while offering ways to measure and rate such contributions.

GRC has traditionally provided companies with tools to help customers meet government and industry regulations, enforce corporate policies, and better deal with risk. Yet the areas of corporate culture and values -- which are becoming increasingly important in today’s business climate -- were rarely addressed.

“Our platform allows companies to take the fuzzy-wuzzy of ethics -- to take the sign off the wall -- and turn it into structured data to measure employee actions against the organization’s stated values and culture,” said Patrick Quinlan, Convercent's CEO.

Launching with 52 employees and $10.2 million in venture funding, the start-up hopes to capitalize on the recent trend of companies using their core ethics, culture, and values to drive their business models. Convercent’s founders had earlier invested in a compliance software maker called Business Controls Inc. and are converting the 300 customers from that venture to Convercent.

Whole Foods is an example of such a company -- the grocery store’s brand makes a promise of quality and safety to its customers. If that promise is broken, the brand is damaged and the results could be far more devastating than a regulatory fine, said Quinlan.

Plenty of companies are making corporate values a top priority today, according to Quinlan, who offers the example of Google’s "Don't be evil" credo. Ensuring that employees walk the walk of the company’s ethics is becoming as important as making sure they abide by more traditional corporate or regulatory guidelines.

“Companies like ours spend 85 percent of every dollar on people,” said Quinlan. “If we don’t drive an effective culture, how can we drive performance?”

Cloud application

Convercent integrates corporate values and more traditional GRC activities into a cloud application that features mobile access and a clean user interface. For example, employees can use the application to read a definition of what their company considers community service, see examples of such activities, and log hours spent in service.

Aimed at legal, audit and compliance executives, the tool can also be used to distribute company policies, stay compliant with regulations, educate employees, and align company performance with culture, said CIO Philip Winterburn. It offers a way for companies to report and respond to incidents and craft escalations, investigations and resolutions.

Managers can receive reports on employee or department engagement and generate related scores, turning the ambiguous area of company involvement into a mathematical measurement, according to Winterburn.

The product is available at launch in more than 40 languages and can also produce on-the-fly translations. Mobile access from iOS devices is included at launch, with plans for support of Android devices to follow.

Founders Quinlan, Winterburn, and Barclay Friesen hail from compliance software maker Rivet Software. The three also founded Nebbiolo Ventures, which they describe as an entrepreneurial venture. A video of the Convercent product launch can be found here.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)
