Tuesday, June 21, 2016

Expert panel explores the new reality for cloud security and trusted mobile apps delivery

The next BriefingsDirect thought leadership panel discussion focuses on the heightened role of security in the age of global cloud and mobile delivery of apps and data.

As enterprises and small to medium-sized businesses (SMBs) alike weigh the balance of apps and convenience against security, a new dynamic is emerging: security concerns increasingly dwarf other architecture considerations.

Yet advances in thin clients, desktop virtualization (VDI), cloud management services, and mobile delivery networks are allowing both increased security and edge-application performance gains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about the new reality for end-to-end security for apps and data, please welcome our panel: Stan Black, Chief Security Officer at Citrix; Chad Wilson, Director of Information Security at Children's National Health System in Washington, DC; Whit Baker, IT Director at The Watershed in Delray Beach, Florida; Craig Patterson, CEO of Patterson and Associates in San Antonio, Texas; and Dan Kaminsky, Chief Scientist at White Ops in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, a first major use case of VDI was the secure, stateless client. All the data and apps remain on the server, locked down, controlled. But now that data is increasingly mobile, and we're all mobile. So, how can we take security on the road, so to speak? How do we move past the safe state of VDI to full mobile, but not lose our security posture?

Black: Probably the largest challenge we all have is maintaining consistent connectivity. We're now able to keep data locally or make it highly extensible, whether it’s delivered through the cloud or a virtualized application. So, it’s a mix and a blend. But from a security lens, each one of those service capabilities has a certain nuance that we need to be cognizant of while we're trying to protect data at rest, in use, and in motion.

Gardner: I've heard you speak about bring your own device (BYOD), and for you, BYOD devices have ended up being more secure than company-provided devices. Why do you think that is?

Caring for assets

Black: Well, if you own the car, you tend to take care of it. When you have a BYOD asset, you tend to take care of it, because ultimately, you're going to own that, whether it’s purchased for you with a retainer or what have you.

Often, corporate-issued assets are like a car rental. You might not bring it back the same way you took it. So it has really changed quite a bit. But the containerization gives us the ability to provide as much, if not more, control in that BYOD asset.

Gardner: This also I think points out the importance of behaviors and end-user culture and thinking about security, acting in certain ways. Let's go to you, Craig. How do we get that benefit of behavior and culture as we think more about mobility and security?

Patterson: When we look at mobile, we've had people who would have a mobile device out in the field. They're accustomed to being able to take an email, and that email may have, in our situation, private information -- Social Security numbers, certain client IDs -- on it, things that we really don't want out in the public space. The culture has been, take a picture of the screen and text it to someone else. Now, it’s in another space, and that private information is out there.

You go from working in a home environment, where you text everything back and forth, to having secure information that needs to be containerized, shrink-wrapped, and not go outside a certain control parameter for security. Now, you're having a culture fight [over] utilization. People are accustomed to using their devices in one way and now, they have to learn a different way of using devices with a secure environment and wrapping. That’s what we're running into.

Gardner: We've also heard at the recent Citrix Synergy 2016 in Las Vegas that IT should be able to increasingly say "Yes," that it's an important part of getting to better business productivity.
Dan, how do we get people to behave well in secure terms, but not say "No"? Is there a carrot approach to this?

Kaminsky: Absolutely. At the end of the day, our users are going to go ahead and do stuff they need to get their jobs done. I always laugh when people say, "I can’t believe that person opened a PDF from the Internet." They work in HR. Their job is to open resumes. If they don’t open resumes, they're going to lose their job and be replaced by someone else.

The thing I see a lot is that these software-as-a-service (SaaS) providers are being pressed into service to provide the things that people need. It’s kind of like a rogue IT or an outsourced IT, with or without permission.

The unusual realization that I had is that all these random partners we're getting have random policies and are storing data. We hear a lot of stuff about the Internet of Things (IoT), but I don't know any toasters that have my Social Security number. I know lots of these DocuSign, HelloSign systems that are storing really sensitive documents.

Maybe the solution, if we want people to implement our security technologies, or at least our security policies, is to pay them. Tell them, "If you actually have attracted our users, follow these policies, and we'll give you this amount of money per day, per user, automatically through our authentication layer." It sounds ridiculous, but you have to look at the status quo. The status quo is on fire, and maybe we can pay people to put out their fires.

Quid pro quo

Gardner: Or perhaps there are other quid pro quos that don't involve money? Chad, you work at a large hospital organization and you mentioned that you're 100 percent digital. How did you encourage people with the carrot to adhere to the right policies in a challenging environment like a hospital?

Wilson: We threw out the carrot-and-stick philosophy and just built a new highway. If you're driving on a two-lane highway, and it's always congested, and you want somebody to get there faster, then build a new highway that can handle the capacity and the security. Build the right on- and off-ramps to it and then cut over.

We've had an electronic medical record (EMR) implementation for a while, and we just finished rolling out the EMR to all of our ambulatory spaces. It's all delivered through virtualization on that highway that we built. So, they have access to it wherever they need it.

Gardner: It almost sounds like you're looking at the beginning bowler’s approach, where you put rails up on the gutters, so you can't go too far afield, whether you wish to or not. Whit Baker, tell us a little bit about The Watershed and how you view security behavior. Is it rails on the gutters, carrots or sticks, how does it go?

Baker: I would say rails on the gutters for us. We've completely converted everything to a VDI environment. Whether they're connecting with a laptop over broadband, or with their own home computer or mobile device, that session is completely separated from their own operating system.

So, we're not really worried. Your desktop machine can be completely loaded with malware and whatnot, but when you open that session, you're inside of our system. That's basically how we handle the security. It almost doesn't require the users to be conscious of security.

At the same time, we're still afraid of attachments and things like that. So, we do educational type things. When we see some phishing emails come in, I'll send out scam alerts and things like that to our employees, and they're starting to become self-aware. They are starting to ask, "Should I even open this?" -- those sort of things.

So, it's a little bit of containerization, giving them some rails that they can bounce off of, and education.

Gardner: Stan, thinking about other ways that we can encourage good security posture in the mobility era, authentication certainly comes to mind, multi-factor authentication (MFA). How does that play into keeping people safe?

Behavior elements

Black: It’s a mix of how we're going to deliver the services, but it's also a mix of the behavior elements and the fact that now technology has progressed so much that you can provide a user an entire experience that they actually enjoy. It gives them what they need, inside of a secure session, inside of a secure socket layer, with the inability to go outside of those bowling lanes, if they're not authorized to do so.

Additionally, authentication technologies have come a long way from hard tokens that we used to wear. I've seen people with four, five, or six of them, all in one necklace. I think I might have been one of them.

Multi-factor authentication and the user interface rely on pieces of information that aren't tied to the person's privacy as an individual, like their Social Security number; it's their user experience that enables them to connect seamlessly. Often, when you have a help-desk environment, as an example, you put a time-out on their system. They go from one phone call to another phone call and then they have to log back in.

The interfaces that we have now, MFA, and simplified sign-on enable a person, depending upon what their role is, to connect into the environment they need to do their job quickly and easily.
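
For readers who want to see the mechanics behind a soft-token factor, here is a minimal Python sketch of time-based one-time password (TOTP) verification per RFC 6238, the algorithm most authenticator apps implement. It is illustrative only: the shared secret handling, 30-second step, and single time-window check are simplified defaults, not any vendor's implementation.

    # Minimal TOTP (RFC 6238) sketch; secret handling is simplified.
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
        counter = int(time.time()) // timestep           # moving time factor
        msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    def verify(secret: bytes, submitted: str) -> bool:
        # Constant-time comparison avoids leaking timing information.
        return hmac.compare_digest(totp(secret), submitted)

A production verifier would also accept an adjacent time window for clock skew and throttle repeated attempts.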

Gardner: You mentioned user experience, and maybe that’s the quid pro quo. You get more user experience benefits if you take more precautions with how you behave using your devices.

Dan, any thoughts on where we go with authentication and being able to say, Yes, and encourage people to do the right thing?
Kaminsky: I cannot emphasize enough how important usability is in getting security wins. We've had some major ones. We moved people from Telnet to SSH. Telnet was unencrypted and was a disaster. SSH is encrypted. It is actually the thing people use now because, once you jump through a few hoops, you stop having to type in a password.

You know what VPNs meant? VPNs meant you didn't have to drive into the office on a Sunday. You could be at home and fix the problem, and hours became minutes or seconds. Everything that we do that really works involves making things more useable and enabling people. Security is giving you permission to do this thing that used to be dangerous.

I actually have a lot of hope in the mobility space, because a lot of these mobile environments and operating systems are really quite secure. You hand someone an iPad, and in a year, that iPad is still going to work. There are other systems where you hand someone a device and that device is not doing so well a year from now.

So there are a lot more controls and stability from some of these mobile things that people actually like to use more, and they turn out to also be significantly more secure.

Gardner: Craig, as we're also thinking about ways of keeping people on the straight and narrow path, we're getting more intelligent networks. We're starting to get more data and analytics from those devices and we're able to see what goes on in that network in high detail.

Tell us about the ways in which we can segment and then make zones for certain purposes that may come and go based on policies. Basically, how are intelligent networks helping us provide that usability and security?

Access to data

Patterson: The example that comes to my mind is that in many of the industries, we have partners who come on site for a short period of time. They need access to data. They might be doing inspections for us and they'll be going into a private area, but we don't want them to take certain photos, documents and other information off site after a period of time.

Containerizing data and having zones allows a person to have access while they're on premises, within a certain "electronic wire fence," if you will, or electronic guardrails. Once they go outside of that area, that data is no longer accessible or they've been logged off the system and they no longer have access to those documents.

We had kind of an old-fashioned example where people think they are more secure, because they don't know what they're losing. We had people with file cabinets that were locked and they had the key around their neck. They said, "Why should we go to an electronic documents system where I can see when you viewed it, when you downloaded it, where you moved that document to?" That kind of scared some people.

Then, I walked in with half their file cabinet and I said, "You didn’t even know these were gone, but you felt secure the whole time. Wouldn’t you rather know that it was gone and have been able to institute some security protocols behind it?"

A lot of it goes to usability. We want to make things usable and we have to have access to it, but at the same time, those guardrails include not only where we can access it and at what time, but for how long and for what purposes.

We have mobile devices for which we need to be able to turn the camera functions off in certain parts of our facility. For mobile device management, that's helpful. For BYOD, that becomes a different challenge, and that's when we have to handle giving them a device that we can control, as opposed to BYOD.

Gardner: Stan, another major trend these days is the borderless enterprise. We have supply chains, alliances, ecosystems that provide solutions, an API-first mentality, and that requires us to be able to move outside and allow others to cross over. How does the network-intelligence factor play into making that possible so that we can say, Yes, and get a strong user experience regardless of which company we're actually dealing with?

Black: I agree with the borderless concept. The interesting part of it, though, is with networks knowing where they're connecting to physically. The mobile device has over 20 sensors in it. When you take all of that information and bring it together with whatever APIs are enabled in the applications, you start to have a very interesting set of capabilities that we never had before.

A simple example is, if you're a database administrator and you're administering something inside the European Union (EU), there are very stringent privacy laws that make it so you're not allowed to do that.

We don't have to train the person or make it more difficult for them; we simply disable the capability through geofencing. When one application is talking securely through a socket, all the way to the back end -- from a mobile device all the way into the data center -- you have pretty darn good control. You can also separate duties: system administration is one function, whereas database administration is another, very different thing. One set doesn't see the private data; the other has very clear access to it.
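
As a rough illustration of the geofencing and separation-of-duties ideas Black describes, here is a hypothetical policy check in Python. The roles, actions, and region rules are invented for the example; real products express such policies in their own schemas.

    # Hypothetical policy check: deny by geofence and by separation of duties.
    EU_RESTRICTED_ACTIONS = {"db_admin.read_private", "db_admin.export"}

    def is_action_allowed(role: str, action: str, region: str) -> bool:
        # Separation of duties: system administrators never touch DBA functions.
        if role == "sys_admin" and action.startswith("db_admin."):
            return False
        # Geofence: EU privacy rules block private-data access by outside roles.
        if region == "EU" and action in EU_RESTRICTED_ACTIONS and role != "eu_dba":
            return False
        return True

    # A US-based DBA touching EU private data is denied; an EU DBA is not.
    assert not is_action_allowed("us_dba", "db_admin.read_private", "EU")
    assert is_action_allowed("eu_dba", "db_admin.read_private", "EU")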

Getting visibility

Gardner: Chad, you mentioned how visibility is super important for you and your organization. Tell me a bit about moving beyond the user implications. What about the operators? How do you get that visibility and keep it, and how important is that to maintaining your security posture?

Wilson: If you can't see it, you can’t protect it. No matter how much visibility we get into the back end, if the end user doesn't adopt the application or the virtualization that we've put in place or the highway that we've built, then we're not going to see the end-to-end session. They're going to continue to do workarounds.

So, usability is very important to end-user adoption and adopting the new technologies and the new platforms. Systems have to be easy for them to access and to use. From the back-end, the visibility piece, we look at adopting technology strategically to achieve interoperability, not just point products here and there to bolt them on.

A strategic innovation and a strategic procurement around technology and partnership, like we have with Citrix, allows us to have a consistent delivery of the application and the end user experience, no matter what device they go to, and where they access from in the world. On the back side, that helps us, because we can have that end-to-end visibility of where our data is heading, the authentication right upfront, as well as all the pieces and parts of the network that go into play to deliver that experience.

So, instead of thinking about things from a device-to-device-to-device perspective, we're thinking about one holistic service-delivery platform, and that's the new highway that provides that visibility.

Gardner: Whit, we've heard a lot about the mentality that you should always assume someone unwanted is in your network. Monitoring and response is one way of limiting that. How does your organization acknowledge that bad things can happen, but that you can limit that, and how important is monitoring and response for you in reducing damage?

Baker: In our case, we have several layers of user experience. Through policy, we only allow certain users to do certain things. We're a healthcare system, but we have various medical personnel: doctors, nurses, and therapists, versus people in our corporate billing area and our call center. All of those different roles are basically looking only at the data that they need to be accessing, and through policy, that's fairly easy to do.
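
A minimal sketch of that policy-driven, role-scoped access might look like the following Python snippet; the roles and record fields are hypothetical.

    # Each role sees only the fields its policy grants; everything else is
    # stripped before the data ever reaches the session.
    ROLE_VIEWS = {
        "clinician":   {"patient_id", "diagnosis", "medications"},
        "billing":     {"patient_id", "insurance", "balance"},
        "call_center": {"patient_id", "callback_number"},
    }

    def scoped_view(role: str, record: dict) -> dict:
        allowed = ROLE_VIEWS.get(role, set())
        return {k: v for k, v in record.items() if k in allowed}

    record = {"patient_id": 7, "diagnosis": "asthma", "insurance": "ACME",
              "balance": 120.0, "medications": ["albuterol"],
              "callback_number": "555-0100"}
    print(scoped_view("billing", record))
    # {'patient_id': 7, 'insurance': 'ACME', 'balance': 120.0}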

Gardner: Stan, on the same subject, monitoring and response, assuming that people are in, what is Citrix seeing in the field, and how are you giving that response time as low a latency as possible?

Standard protocol

Black: The standard incident-response protocol is identify, contain, control, and communicate. We're able to shrink what we need to identify. We're able to connect from end-to-end, so we're able to communicate effectively, and we've changed how much data we gather regarding transmissions and communications.

If you think about it, we've shrunk our attack surface; we've shrunk the vulnerable areas, methods, or vectors by which people can enter. At the same time, we've gained incredibly high visibility and fidelity into what is supposed to be going over a wire or wireless, and what is not.

We're now able to shrink the identify, contain, control, and communicate spectrum to a much shorter area and focus our efforts with really smart threat intelligence and incident response people versus everyone in the IT organization and everyone in security. Everyone is looking at the needle in the haystack; now we just have a smaller stack of needles.

Patterson: I had a thought on that, because as we looked at a cloud-first strategy, one of the issues that we looked at was, "We have a voice-over-IP system in the cloud, we have Azure, we have Citrix, we have our NetScaler. What about our firewalls now, and how do we actually monitor intrusion?"

We have file attachments and emails coming through in ways that don't pass through our on-premises firewall or all of our malware detection. So, those are questions that I think all of us are trying to answer, because now we're creating known unknowns and really unknown unknowns. When it happens, we're going to say, "We didn't know that that part could happen."

That’s where part of the industry is, too. Citrix and Microsoft are helping us with that in our environments, but those are still open questions for us. We're not entirely satisfied with the answers yet.

Gardner: Dan, one of the other ways that we want to be able to say, Yes, to our users and increase their experiences as workers is to recognize the heterogeneity -- any cloud, any device, multiple browser types, multiple device types. How do you see the ability to say, Yes, to vast heterogeneity, perhaps at a scale we've never seen before, but at the same time, preserve that security and keep those users happy?

Kaminsky: The reason we have different departments and multiple teams is because different groups have different requirements. They have different needs that are satisfied in ways that we don't necessarily understand. It’s not the heterogeneity that bothers us; it’s the fact that a lot of systems have different risks. We can merge the risks, or simultaneously address them with consistent technologies, like containerization and virtualization, like the sort of centralization solutions out there.

People are sometimes afraid of putting all their eggs in one basket. I'll take one really well-built basket over 50,000 totally broken ones. What I see is: create environments in which users can use whatever makes their job work best, and realize that the risks are not actually that distinct or that unique. The risk patterns of the underlying software are less diverse than the software itself.
Gardner: Stan, most organizations that we speak to say they have at least six, perhaps more, clouds. They're using all sorts of new devices. Citrix has recently come out with the Raspberry Pi, at less than $100, as a viable Windows 10 endpoint. How do we move forward and keep the options open for any cloud and any device?

Multitude of clouds

Black: When you look at the cloud, there is a multitude of public clouds. Many companies have internal clouds. We've seen all of this hyperconvergence, but what has blurred over time are the controls among the cloud, the enterprise, and mobile.

Again, some of what you've seen has been how certain technologies can fulfill controls between the enterprise and the cloud, because cloud is nimble, it’s fast, and it's great.

At the same time, if you don't control it, don’t manage it, or don't know what you have in the cloud, which many companies struggle with, your risk starts to sprawl and you don't even know it's happened.

So it's not adding difficult controls, what I would call classic gates, but transparency, visibility, and thresholds. You're allowed to do this between here and here. An end user doesn't know those things are happening.

Also, weaving analytics into every connection, knowing what that wire is supposed to look like, what that packet is supposed to look like gives you a heck of a lot more control than we've had for decades.

Gardner: Chad, for you and your organization, how would you like to get security visibility in terms of an analytic dashboard, visualization, and alerts? What would you like to see happen in terms of that analytics benefit?

Wilson: It starts with population health and the concept behind it. Population health takes in all the healthcare data, puts it into a data warehouse, and leverages analytics to be able to show trends with, say, kids presenting with asthma or patients presenting with asthma across their lifespan and other triggers. That goes to quality of care.

The same concept should be applied to security. When we bring that data together -- all the various logs, all the various threat vectors, what we're seeing -- we're able to identify trends, not just signatures, and how the bad guys are doing it. Are the bad guys single-vectored, or have they learned the concept of combined arms, like our militaries have? Are they able to put things together to have better impact? And where do we need to put things together to have better protection?

We need to change the paradigm, so when they show their hand once, it doesn't work anymore. The only way that we can do that is by being able to detect that one time when they show their hand. It's getting them to do one thing to show how they are going to attack us. To do that, we have to pull together all the logs, all of the data, and provide analytics and get down to behavior; what is good behavior, what is bad behavior.

That's not a signature that you're detecting for malware; that is a behavior pattern. Today I can do one thing, and tomorrow I can do it differently. That's what we need to be able to get to.
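
To make the contrast with signatures concrete, here is a toy Python sketch of behavior-based detection: build a per-user baseline of activity and flag statistically unusual days. The single feature, the seven-day warm-up, and the three-sigma threshold are arbitrary choices for illustration; real analytics platforms use far richer models.

    from collections import defaultdict
    from statistics import mean, stdev

    class BehaviorBaseline:
        def __init__(self, threshold_sigmas: float = 3.0):
            self.history = defaultdict(list)   # user -> daily event counts
            self.threshold = threshold_sigmas

        def record_day(self, user: str, event_count: int) -> None:
            self.history[user].append(event_count)

        def is_anomalous(self, user: str, event_count: int) -> bool:
            counts = self.history[user]
            if len(counts) < 7:                # need a baseline week first
                return False
            mu, sigma = mean(counts), stdev(counts)
            # Flag days far above this user's own normal behavior.
            return sigma > 0 and (event_count - mu) / sigma > self.threshold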

Getting information

Patterson: I like the illustration that was just used. What we're hoping for with the cloud strategy is that, when there's an attack on one part of the cloud, even if it's someone else that’s in Citrix or another cloud provider, then that is shared, whereas before we have had all these silos that need to be independently secured.

Now, the windows that are open in these clouds that we're sharing are going to be ways that we can protect each one from the other. So, when one person attacks Citrix a certain way, Azure a certain way, or AWS a certain way, we can collectively close those windows.

What I'd like to see in terms of analytics -- and I'll use kind of a mechanical-engineering approach -- is where the windows are open, where the heat loss went, or where there was air intrusion. I'd like to see whether it went to an endpoint that wasn't secured or that I didn't know about. I'd like to know more about what I don't know in my analytics. That's really what I want analytics for, because the things that I know, I know well; I want my analytics to tell me what I don't know yet.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.


Thursday, June 16, 2016

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about trends and developments in DevOps, microservices, containers, and the new direction for composable infrastructure, we're joined by Donnie Berkholz, Research Director at 451 Research, based in Minneapolis. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are things changing so much for apps deployment infrastructure? Why is DevOps newly key for software development? And why are we looking for “composable infrastructure?”

Berkholz: It’s a good question. There are a couple of big drivers behind it. One of them is cloud, probably the biggest one, because of the scale and transience that we have to deal with now, with virtual machines (VMs) appearing and disappearing on such a rapid basis.

We have to have software, processes, and cultures that support that kind of new approach, and IT is getting more and more demands from the line of business to scale and to do more. They're not getting more money or people, and they have to figure out the right approach to deal with this. How can we scale, how can we do more, and how can we be more agile?

DevOps is the approach that’s been settled on. One of the big reasons behind that is the automation. That’s one of what I think of as the three pillars of DevOps, which are culture, automation, and measurement.

Automation is what lets you move from this metaphor of cattle versus pets, moving from the pet side of it, where you carefully name and handcraft each server, to a cattle mindset, where you're thinking about fleets of servers and about services rather than individual servers, VMs, containers, or what have you. You can have systems administrators maintaining 10,000 VMs, rather than 100 or 150 servers by hand. That’s what automation gets you.

More with less

So you're doing more with less. Then, as I said, they're also getting demands from the business to be more agile and deliver it faster, because the businesses all want to compete with companies like Netflix or Zenefits, the Teslas of the world, the software-defined organizations. How can they be more agile, how can they become competitive, if they're a big insurance company or a big bank?

DevOps is one of the key approaches behind that. You get the automation, not just on the server side, but on the application-delivery pipeline, which is really a critical aspect of it. You're moving toward this continuous-delivery approach, and being able to move a step beyond agile to bring agile all the way through to production and to deploy software, maybe even on every commit, which is the far end of DevOps. There are a lot of organizations that aren't there yet, but they're taking steps toward that, moving from deployments every three or six months to every few weeks.
Gardner: So the vision of having that constant iterative process, continuous development, continuous test, continuous deployment -- at the same time being able to take advantage of these new cloud models -- it’s still kind of a tricky equation for people to work out.

What is it that we need to put in place that allows us to be agile as a development organization and to be automated and orchestrated as an operations organization? How can we make that happen practically?

Berkholz: It always goes back to three things -- people, process, and technology. From the people perspective, what I have run into is that there are a lot of organizations that have either development or operational groups, where some of them just can't make this transition.

They can't start thinking about the business impacts of what they're doing. They're focused on keeping the lights on, maintaining the servers, writing the code, and being able to make that transition to focusing on what the business needs. How am I helping the company is the critical step from an individual level, but also from an organizational level.

IT is going through this kind of existential crisis of moving from being a cost center to fighting shadow IT, fighting bring your own device (BYOD), trying to figure out how to bring that all into the fold. The way we think about it, how they do so is a transition toward IT as a service: IT becoming more like a service provider in its own right, pulling in all these external services and providing a better experience in house.

If you think about shadow IT, for example, you think about developers using a credit card to sign up for some public cloud or another. That's all well and good, but wouldn't it be even nicer if they didn't have to worry about the billing, the expensing, the payments, and all that stuff, because IT already provided that for them? That's where things are going, because that's the IT-as-a-service provider model.

Gardner: People, process, technology, and existential issues. The vendors are also facing existential issues; things are changing so fast, and they provide the technology, while the people and the process are up to the enterprise to figure out. What's happening on the technology side, and how are the vendors reacting to allow enterprises to employ the people and put in place the processes that will bring us to this better DevOps automated reality? What can we put in place technically to make this possible?

Two approaches

Berkholz: It goes back to two approaches -- one coming in from the development side and one coming in from the operational side.

From a development side, we're talking about things like continuous-delivery pipelines -- what does the application delivery process look like? Typically, you'd start with something like continuous integration (CI).

Just moving toward an automated testing environment, every commit you make, you're testing the code base against it one way or another. This is a big transition for people to make, especially as you think about moving the next step to continuous delivery, which is not just testing the code base, but testing the full environment and being ready to deploy that to production with every commit, or perhaps on a daily basis.

So that's a continuous-integration, continuous-delivery approach using CI servers. There's a pretty well-known open-source one called Jenkins, and there are many other options, both as-a-service and on-premises. That tends to be step one, if you're coming in from the development side.
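
Conceptually, a CI server reduces to a loop that notices new commits and runs the tests against each one. The Python sketch below shows that bare idea; the polling approach, repository layout, and pytest command are assumptions, and real servers such as Jenkins add queuing, isolated workspaces, and notifications on top.

    import subprocess
    import time

    def head_commit(repo: str) -> str:
        return subprocess.check_output(
            ["git", "-C", repo, "rev-parse", "HEAD"], text=True).strip()

    def run_tests(repo: str) -> bool:
        # Exit code 0 means the commit passed; anything else fails the build.
        return subprocess.run(["python", "-m", "pytest"], cwd=repo).returncode == 0

    def ci_loop(repo: str, poll_seconds: int = 60) -> None:
        last = None
        while True:
            subprocess.run(["git", "-C", repo, "pull", "--ff-only"], check=True)
            current = head_commit(repo)
            if current != last:                # a new commit arrived
                status = "PASS" if run_tests(repo) else "FAIL"
                print(f"{current[:8]}: {status}")
                last = current
            time.sleep(poll_seconds)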

Now, on the operational side, automation is much more about infrastructure as code. That's really the core tenet, and it's embodied by configuration-management software like Puppet, Chef, Ansible, Salt, and maybe CFEngine: defining server configuration as code and maintaining it in version control, just like you would maintain the software that you're building in version control. You can scale easily because you know exactly how a server is created.

You can ask whether that's one mail server or 20; it doesn't really matter. I'm just running the same code again to deploy a new VM, to deploy onto a bare-metal environment, or to deploy a new container. It's all about that infrastructure-as-code approach using configuration-management tools. When you bring those two things together, that's what enables you to really do continuous delivery.

You’ve got the automated application delivery pipeline on the top and you've got the automated server environment on the bottom. Then, in the middle, you’ve got things like service virtualization, data virtualization, and continuous-integration servers all letting you have an extremely reliable and reproducible and scalable environment that is the same all the way from development to production.
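
The infrastructure-as-code idea boils down to a declared desired state plus an idempotent "apply" step that converges each machine toward it. The toy Python sketch below illustrates the principle only; the package names are placeholders, and tools like Puppet, Chef, or Ansible do this with real resource providers and their own languages.

    # Desired state lives in version control; "apply" acts only on the delta,
    # so running it twice (or on the 20th identical mail server) changes nothing.
    DESIRED_STATE = {
        "packages": {"postfix", "fail2ban"},
        "services_running": {"postfix"},
    }

    def apply_state(node: dict, desired: dict) -> dict:
        for pkg in desired["packages"] - node["packages"]:
            print(f"installing {pkg}")
            node["packages"].add(pkg)
        for svc in desired["services_running"] - node["services_running"]:
            print(f"starting {svc}")
            node["services_running"].add(svc)
        return node

    node = {"packages": {"postfix"}, "services_running": set()}
    apply_state(node, DESIRED_STATE)   # installs fail2ban, starts postfix
    apply_state(node, DESIRED_STATE)   # second run: no changes, idempotent
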
Gardner: And when we go to infrastructure as code, when we go to software-based everything, there's a challenge getting there, but there are also some major paybacks. When you're freed up to analyze your software, when you can replicate things rapidly, when you can deploy to a cloud model that works for your economic or security requirements, you get a lot of benefits.

Are we seeing those yet, Donnie?

Berkholz: One of the challenges is that we know there are benefits, but they're very challenging to quantify. When you talk about the benefit of delivering a solution to market faster than your competitors, the benefit is that you're still in business. The benefit is that you're Netflix and you're not Blockbuster. The benefit is that you're Tesla and you're not one of the big-three car manufacturers. Tesla, for example, can ship an update to its cars that lets them self-drive, delivered on-the-fly to people who already purchased the car.

You can't really quantify the value of that easily. What you can quantify is natural selection in action. There's no mandatory requirement that any company survive or that any company make the transition to software-defined. But if you want to survive, you're going to have to take this DevOps mindset, so that you can be more agile, not just as a software group, but as a business.

Gardner: Perhaps one of the ways we can measure this is that we used to look at IT spend as a percentage of capital spend for an enterprise. Many organizations, over the past 20 or 30 years, found themselves spending 50 percent or more of their capital expenditures on IT.

I think they'd like to ratchet back. If we go to IT as a service, if we pay for things at an operations level, if we only pay for what we use, shouldn't we start to see a fairly significant decrease in the total IT spend, versus revenue or profit for most organizations?

Berkholz: The one underlying factor is how important software is to your company. If that importance is growing, you're probably going to spend more as a percentage. But you're going to be generating more margin as a result of that. That's one of the big transitions that are happening, the move from IT as a cost center to IT as a collaborator with the business.

The move is away from your traditional old CIO view of we're going to keep the lights on. A lot of companies are bringing in Chief Digital Officers, for example, because the CIO wasn't taking this collaborative business view. They're either making the transition or they're getting left behind.

Spending increase

I think we'll see IT spend increase as a percentage, because companies are all realizing that, in actuality, they're software companies or they're becoming software companies. But as I said, they are going to be generating a lot more value on top of that spend.

To your point about OPEX and buying things as a service, the piece of advice I always give to companies is to ask: "How many of these things that you're doing are significant differentiators for your company?" Is it really a differentiator for your company to be an expert at automating a delivery pipeline, at automating your servers, at setting up file sharing, or at setting up an internal chat server? None of those, right?

Why not outsource them to people who are experts, people for whom that is the core differentiator and core value creator, and focus on the things that your business cares about?

Gardner: Let's get back to this infrastructure equation. We're hearing about composable infrastructure, software-defined data center (SDDC), microservices, containers and, of course, hybrid cloud or hybrid computing. If I'm looking to improve my business agility, where do I look in terms of understanding my future infrastructure partners? Is my IT organization just a broker, and are they going to work with other brokers? Are we looking at a hierarchy of brokering, with some sort of baseline commoditized set of services underneath?

So, where do we go in terms of knowing who the preferred vendors are? I guess we're sort of looking back at a time when no one got fired for buying IBM, for example. Everyone is saying Amazon is going to take over the world, but I've heard that about other vendors in the past, and it didn't pan out. This is a roundabout way of asking: when you want to compose infrastructure, how do you keep choice, how do you keep from getting locked in, and how do you stay in a market at all times?

Berkholz: Composability is really key. We see a lot of IT organizations that, as you said, used to just buy Big Blue; they were IBM shops. That's no longer a thing in the way that it used to be. There's a lot more fragmentation in terms of technology: programming languages, hardware, JavaScript toolkits, and databases.

Everything is becoming polyglot or heterogeneous, and the only way to cope with that is to really focus on composability. Focus on multi-vendor solutions, focus on openness, opening APIs, and open-source as well, are incredibly important in this composable world, because everything has to be able to piece together.

But the problem is that when you give traditional enterprises a bunch of pieces, it's like having kids create a huge mess on the floor. Where do you even get started? That's one of the challenges they face. The way I always think about it is: what are enterprises looking for? They're looking for a Lego castle, right? They don't want just the Lego pieces, and they don't want that scene in The Lego Movie where the father glues all the blocks together. They don't want to be stuck. That's the old monolithic world.

The new composable world is where you get that castle, and you can take off the tower and put on a new tower if you want to. But you're not given just the pieces; you're given not just something that is composable, but something that is pre-composed for you, for your use case. That generates value, and it looks like what we used to think of as reference architectures -- something sitting on a PowerPoint slide with a fancy diagram.

It’s moving more toward reference architectures in the form of code, where it’s saying, "Here's a piece of code that’s ready to deploy and that’s enabled through things like infrastructure as code."

Gardner: Or a set of APIs.

Ready to go

Berkholz: Exactly. It’s enabled by having all of that stuff ready to go, ready to build in a way that wasn’t possible before. The best-case scenario before was, "Here’s a virtual appliance; have fun with that." Now, you can distribute the code and they can roll that up, customize it, take a piece out, put a piece in, however they want to.

Gardner: Before we close out, Donnie, any words of advice for organizations back to that cultural issue -- probably the more difficult one really? You have a lot of choices of technology, but how you actually change the way people think and behave among each other is always difficult. DevOps, leading to composable infrastructure, leading to this sort of services brokering economy, for lack of a better word, or marketplace perhaps.

What are you telling people about how to make that cultural shift? How do organizations change while still keeping the airplane flying so to speak?

Berkholz: You can’t do it as a big bang. That's absolutely the worst possible way to go about it. If you think about change management, it’s a pretty well-studied discipline at this point. There's an approach I prefer from a guy named John Kotter who has written books about change management. He lays out an eight- or nine-step process of how to make these changes happen. The funny thing about it is that actually doing the change is one of the last steps.

So much of it is about building buy-in, about generating small wins, about starting with an independent team and saying, "We're going to take the mobile apps team and we're going to try a continuous delivery over there. We're not going to stop doing everything for six months as we are trying to roll this out across the organization, because the business isn’t going to stand for that."
They're going to say, "What are you doing over there? You're not even shipping anything. What are you messing around with?" So, you've got to go piece by piece. Say you start by rolling out continuous integration and slowly adding more and more automated tests, while keeping the manual testers alongside, so that you're not dropping any of the quality that you had before. You're actually adding more quality by adding the automation and slowly converting those manual testers over to engineers in test.

That’s the key to it. Generate small wins, start small, and then gradually work your way up as you are able to prove the value to the organization. Make sure while you're doing so that you have executive buy-in. The tool side of things you can start at a pretty small level, but thinking about reorganization and cultural change, if you don’t have executive buy-in, is never going to fly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, June 14, 2016

How IT4IT helps turn IT into a transformational service for digital business innovation

The next BriefingsDirect expert panel discussion examines the value and direction of The Open Group IT4IT initiative, a new reference architecture for managing IT to help business become digitally innovative.

IT4IT was a hot topic at The Open Group San Francisco 2016 conference in January. This panel, conducted live at the event, explores how the reference architecture grew out of a need at some of the world's biggest organizations to make their IT departments more responsive, more agile.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn now how those IT departments within an enterprise and the vendors that support them have reshaped themselves, and how others can follow their lead. The expert panel consists of Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at Hewlett Packard Enterprise (HPE); and Rob Akershoek, Solution Architect IT4IT at Shell IT International. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we bridge the divide between a cloud provider, or a series of providers, and have IT take on a brokering role within the organization? How do we get to that hybrid vision role?

Geneste: We'll get there step-by-step. There's a practical step that’s implementable today. My suggestion would be that every customer or company that selects an outsourcer, that selects a cloud vendor, that selects a product, uses the IT4IT Reference Architecture in the request for proposal (RFP), putting a strong emphasis on the integration.

We see a lot of RFPs that are still silo-based -- which one is the best product for project and portfolio management, which one is the best service management tool -- but it’s not very frequent that we see the integration as being the topnotch value measured in the RFP. That would be one point.

The discussions with the vendors -- again, cloud vendors, outsourcers, or consulting firms -- should start from this: use it as an integration architecture, and tell us how you would do things based on these standardized concepts. That's a practical step that can be employed today.

In a second step, when we go further into the vendor specification, there are vendors today whose products and cloud offerings are closer to the concepts we have in the reference architecture. They're maybe not certified, maybe not using the same terminology, but the concepts are there, or the path to the concepts is shorter.

And then ultimately, steps 3 and 3.5 will be product vendors certified and cloud service offerings certified -- hopefully full integration according to the reference architecture and, eventually, even plug-and-play. We're doing a little bit about plug-and-play, but at least integration.

Gardner: What sort of time frame would you put on those steps? Is this a two-year process, a four-year process, to soon to tell?

Achievable goals

Geneste: That’s a tough one; I suppose the vendors should be responding to it. For the cloud service providers, it's a little bit trickier, but for the consulting firms and the service providers, it's a matter of getting the workforce trained and the concepts spread inside the organization. So within six to 12 months, the critical mass should be there in these organizations. It's tough, but project by project, customer by customer, it's achievable.

Some vendors are on the way, and we've seen several vendors talk about IT4IT in this conference. I know that those have significant efforts on the way and are preparing for vendor certification. It will be probably a multiyear process to get the full suite of products certified, because there is quite a lot to change in the underlying software, but progressively, we should get there.

So, it's having first levels of certification within one to two years, possibly even sooner. I would be interested in knowing what the vendor responses will be.

Gardner: Sue, along the same lines, what do you see needed in order to make the IT department able to exercise the responsibility of delivering IT across multiple players and multiple boundaries?

Desiderio: Again, it’s starting with the awareness and the open communication about IT4IT and, on a specific instance, where that fits in. Depending on the services we're getting from vendors, or whether it's even internal services that we are getting, where do they fit into the whole IT4IT framework, what functions are we getting, what are the key components, and where are our interface points?

Have those conversations upfront in the contract conversations, so that everyone is aware of what we're trying to accomplish and that we're trying to seek that seamless integration between those suppliers and us.

Gardner: Rob, this would appear to be a buyer’s market in terms of their ability to exercise some influence. If they go seeking RFPs, if there are fewer cloud providers than there were general vendors in a traditional IT environment, they should be able to dictate this, don’t you think?

Akershoek: In the cloud world, the consumer doesn't dictate at all. Dictating how an operator should provide us data is the traditional way. That's the problem with the cloud: we want to consume a standard service, so we can't tell the cloud vendor, "Send me your cost data in this format." That won't work, because we don't want the cloud vendor to make something proprietary for us.

That’s the first challenge. The cloud vendors are out there and we don’t want to dictate; we want to consume a standard service. So if they set up a catalog in their way, we have to adopt that. If they do the billing their way, we have to adopt it or select another cloud vendor. That’s the only option you have, select another vendor or adopt the management practices of the cloud vendor. Otherwise, we will continuously have to update it according to our policy. That’s a key challenge.

That’s why managing your cloud vendor is really about the entire value chain. You start with making your portfolio, thinking about what cloud services you put in your offerings. So for platform-as-a-service (PaaS), we use vendor A, and for infrastructure-as-a-service (IaaS), vendor B. That's where it starts: which vendors do I engage with?

And then, going down to the Request to Fulfill, it’s more like what are the products that we're allowed to order and how do we provision those? Unfortunately, the cloud vendors don’t have IT4IT yet, meaning we have to do some work. Let’s say we want to provision the cloud environment. We make sure that all the cloud resources we provision are linked to that subscription, linked to that service, so at least we know the components that a cloud vendor is managing, where it belongs, and which service is consuming that.

Different expectations

Fulton: Rob has a key point here around the expectations being different around cloud vendors, and that’s why IT4IT is actually so powerful. A cloud vendor is not going to customize their interfaces for every single individual company, but we can hold cloud vendors accountable to an open industry standard like IT4IT, if we have detailed out the right levels of interoperability.

To me, the way this thing comes together long term is through this open standard, and then through that RFP process, customer organizations holding their vendors accountable to delivering inside that open standard. In the world of cloud, that’s actually to the benefit of the cloud providers as well.

Akershoek: That’s a key point you make there, indeed.

David: And just to piggyback on what we're saying, it goes back to the value proposition. Why am I doing this? If we have something that's an open standard, it enables velocity. You can identify costs much more easily. It's simpler, and it goes back again to the value proposition, showing these cloud vendors that, because of a standard, I'm able to consume more of your services and consume them more easily, and, because it's a standard, I'm guaranteed to get my value. Again, it's back to the value proposition that the open standard offers.

Gardner: Sue, how about this issue of automation? Is it essential to be largely automated to realize the full benefits of IT4IT or is that more of a nice-to-have goal? What's the relationship between a high degree of automation in your IT organization for the support of these activities and the standard and Reference Architecture?

Automation is key

Desiderio: I'm a believer that automation is key, so we definitely have to get automation throughout the whole end-to-end value chain no matter what. That’s really part of the whole transformation going into this new model.

You see that throughout the whole value chain. We talked about it individually on the different value streams and how it comes back.

I also want to touch on what the right size of company or firm is to pick up IT4IT. I agree with where Philippe was coming from. Smaller shops can pick it up and start leveraging it more quickly, because they don't have that legacy IT, where nothing is built on composite services and everything on a system points at specific servers and networks, instead of being built on services, like a hosting service and a monitoring-and-response service.

For larger IT organizations, there's a lot more change, but it's critical for us to survive and be viable in the future for those IT shops, the larger ones in large organizations, to start adopting and moving forward.

It's not a big bang. We, in a larger IT shop, are going to be running in a mixed mode for a long time to come. It's looking at where to start seeing that business value as you look at new initiatives and things within your organization. How do you start moving into the new model with the new things? How do you start transitioning your legacy systems and whatnot into more of the new way of thinking and looking at that consumption model and what we're trying to do, which is focus on that business outcome.

So it's much harder for the larger IT shops, but the concepts apply to all sizes.

Gardner: Rob, the subject of the moment is size and automation.

Akershoek: I think the principle we just discussed, automation, is a good principle, but if you look at the legacy, as you mentioned, you're not going to automate your legacy, unless you have a good business case for that. You need to standardize your services on many different layers, and that's what you see in the cloud.

Cloud vendors are standardizing extremely, defining standard component services. You have to do the same and define your standard services and then automate all of those. The legacy ones you can't automate or probably don’t want to automate.

So it's more standardization, more standard configurations; then you can automate delivery and Detect to Correct as well. If you have a very complex configuration that changes all the time without any standards, you can't.

The size of the organization doesn't matter. Both large and smaller organizations need to adopt standard cloud practices from the vendors and automate the delivery to make things repeatable.
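To make the standardize-then-automate idea concrete, here is a minimal sketch in Python. The catalog of component services and its fields are invented for illustration; they are not taken from the IT4IT standard. The point is simply that automated fulfillment is only offered for configurations that match a standard.

```python
# Minimal sketch of "standardize, then automate": automation is only
# offered for standard service configurations. All service names and
# fields below are illustrative assumptions.

STANDARD_SERVICES = {
    "hosting.small": {"cpus": 2, "ram_gb": 4, "monitored": True},
    "hosting.large": {"cpus": 8, "ram_gb": 32, "monitored": True},
}

def provision(service_id: str, owner: str) -> dict:
    """Provision a standard service; non-standard requests fall out to manual handling."""
    if service_id not in STANDARD_SERVICES:
        raise ValueError(f"{service_id} is not a standard service; route to manual review")
    spec = dict(STANDARD_SERVICES[service_id], owner=owner)
    # A real pipeline would call a cloud API or an infrastructure-as-code tool here.
    return spec

print(provision("hosting.small", owner="team-a"))
```

Anything outside the standard catalog is rejected rather than automated, which is exactly the trade-off Akershoek describes for complex, ever-changing legacy configurations.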

Desire to grow

David: Small organizations don't want to remain small forever; they actually want to grow, and growth starts with a mindset. By applying the Reference Architecture, even though I don't apply every single point to my one- or two-person shop, it helps me, it positions me, and it gives me the frame of reference, the thinking, to enable growth.

It grows organically, so you don't end up with the legacy baggage that most large companies have. Small companies may get acquired, but at least they have good discipline, or they may acquire others as they grow. The IT4IT Reference Architecture is not just for large companies; it's also for small companies, and I'm saying that as a small-business owner myself.

Akershoek: Can I add to that? If you're starting out in the cloud, maybe the best way is to start with automation, or at least to design for automation, from the beginning. If you have a few thousand servers running in the cloud and you didn't start with that concept, then after a few years in the cloud you already have legacy. So you should think about automation from the start: not for your legacy, of course, but if you're moving to the cloud now, design and build for it immediately.
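One common way to "design for automation from day one" is to declare the estate as data and reconcile it automatically, so it never drifts into unautomatable legacy. The sketch below assumes that pattern; the server names and fields are illustrative only.

```python
# A minimal sketch of declaring infrastructure as data and reconciling it,
# so the cloud estate stays automatable as it grows. Names are illustrative.

desired_state = [
    {"name": "web-01", "image": "web:1.4", "size": "small"},
    {"name": "web-02", "image": "web:1.4", "size": "small"},
]

def reconcile(desired: list, actual: dict) -> None:
    """Bring the actual estate in line with the declared desired state."""
    for server in desired:
        if actual.get(server["name"]) != server:
            print(f"(re)provisioning {server['name']} to match declared state")
            actual[server["name"]] = dict(server)

actual_state: dict = {}
reconcile(desired_state, actual_state)  # first run provisions both servers
reconcile(desired_state, actual_state)  # second run finds nothing to do
```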

Fulton: On this point, one of the directions we're heading is to figure out this very issue, what of the reference architecture applies at what size and evolution in a company’s growth.

As I commented earlier, the entire Reference Architecture applies from day one for companies of any size; it's just a question of whether it's explicit or implicit.

If it's implicit, it's in the head of the founder. You're still doing the elements of the Reference Architecture, or you can still be doing them, in your mind and your thought process, but there are pieces you need to make explicit even when you are, as Charlie likes to say, two people in a garage.

On the automation piece, the key thing that has been happening throughout our industry, at least from my perspective, is that we've been automating within functional components. What the IT4IT Reference Architecture and its vision of value streams allow us to do is rethink automation along the lines of value streams, across functional components. That's where it starts to add considerable value, especially when we can put together interoperability between tooling. That's where automation will take us to the next level as IT organizations.
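As a rough illustration of automating along a value stream rather than inside one tool, the sketch below passes a single data object across component boundaries, from an event console to an incident tool to a change tool. The object shapes and naming are invented for illustration, not defined by the standard.

```python
# A sketch of value-stream automation: one monitoring event flows across
# functional components (event -> incident -> change) as shared data objects.

from dataclasses import dataclass
from itertools import count

_change_ids = count(1)

@dataclass
class Event:
    ci: str        # configuration item the alert fired on
    message: str

@dataclass
class Incident:
    ci: str
    summary: str

def detect_to_correct(event: Event) -> str:
    """Hand one event across component boundaries and return the resulting change id."""
    incident = Incident(ci=event.ci, summary=event.message)  # event console -> incident tool
    change_id = f"CHG-{next(_change_ids):04d}"               # incident tool -> change tool
    print(f"{incident.summary} on {incident.ci} -> {change_id}")
    return change_id

detect_to_correct(Event(ci="web-01", message="disk 95% full"))
```

The value is in the hand-offs: each step could live in a different vendor's tool, as long as the data objects and interfaces are agreed.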

Gardner: As IT4IT matures, becomes adopted, and serves both consumers and providers of services, it seems to me that there will be a similar track for digital business: how you run your business becomes more of a brokering activity at the business level, where a business is really a constituency of different providers across supply chains and, increasingly, across service providers.

Is there a dual track, IT4IT on the IT side and business management of services through a portal or dashboard, something your business analysts and on up would be involved with? Should we let them happen separately? How can we make them more aligned, even highly integrated and synergistic?

Best practices

Geneste: We have best practices in IT4IT that businesses themselves can replicate and use. Certain companies do that a little today: the Ubers and the Airbnbs are disintermediators, connecting with private individuals much of the time, and they effectively apply some of these service-oriented concepts, even though they don't use IT4IT.

Just as much as today, where businesses turn to the likes of HPE for service-management software to help with their business help desks and request management, we're likely to see those best practices applied in terms of the specification of individual conceptual services, service catalogs, and subscription mechanisms. You're right; the concepts could very easily apply to businesses. As to how that would turn out, I would need to do a little more thinking, but from a concept standpoint it truly should be useful.

Desiderio: We're trying to move ourselves up the stack to help the businesses with the services they're providing, so it's very relevant as we look at IT4IT and how we're managing IT services. It also applies to those business services; it's concurrent. It's about evolving, training, and making the business aware of where we're trying to go and how they can leverage that in the services they provide outward.

When you look at adopting this, even down in your IT organization where you have the typical organizational teams, there's a challenge for each IT team to start looking at what they do in terms of the services they provide, instead of just the functions.

That goes all the way up the stack, including the business and the business services; that's IT's job. When we start talking about transformation, we must be aligned with the business, so we understand their business processes and the services they're trying to deliver, and then how we can truly be that business enabler.

Akershoek: I interpret your question as being about shadow IT, and that there is no shadow IT: some IT management activity is simply performed by the business, and, as you mentioned, the business needs to apply IT4IT practices as well. As soon as IT activities are done by the business, such as selecting and managing their own software-as-a-service (SaaS) applications, they need to perform the IT4IT-related activities themselves. They're even starting to configure SaaS services themselves; the business can do the configuration and might even provide the end-user support. In these cases, too, those management activities fit in the IT4IT Reference Architecture model.

Gardner: Dwight, we have a business scorecard, we have an IT scorecard, why shouldn’t they be the same scorecard?

David: I'm always reminded that IT is in place to help the business, right? The business is the function, and IT should be the visible enabler of business success. I would classify this as catching up to business expectations. Could some of the principles we apply in IT be used for the business? Yes, they could, but I see it more the other way around. The value-chain approach came from the business perspective and is being applied to IT. I still see that it's business-driven, but IT is becoming more seamless in enabling the business to achieve its goals.

Application of IT

Fulton: The whole concept of digital business is actually a complete misnomer. I hate it; I think it’s wrong. It’s all about the application of information technology. In the context of what we typically talk about with IT4IT, we're talking about the application of information technology to the management of the IT department.

We also talk about the application of information technology to the transformation of business processes. Most of the time, that happens inside companies, and we're using the principles of IT4IT to do that. When we talk about digital business, usually we're talking about the application of information technology into the transformation of business models of companies. Again, it’s still all about applying information technology to make the company work in a different way. For me, the IT4IT principles, the Reference Architecture, the value streams, will still hold for all of that.

Geneste: The two innovations in the IT4IT Reference Architecture, the Service Backbone and the Request to Fulfill (R2F) value stream, are its greatest novelties.

Are they mature? They're mature enough, and they'll probably keep evolving in their level of maturity. There are a number of areas that are maturing, and some still in design. IT Financial Management, for instance, is one I'm working on, along with the service costing within it, which I think will be ready in time for version 2.1; the idea is to include it as guidance there.

The value streams themselves are also mature and almost complete. There are a number of improvements we can make to all of them, but overall the Reference Architecture is usable today as an architecture to start with. It's not quite ready for vendor certification, although that's upcoming, and there are a number of implementations that would benefit from using the current IT4IT Reference Architecture 2.0.

Gardner: Sue, where do you see the most traction and growth, and what would you like to see improved?

Desiderio: An easy entry point is Detect to Correct, because it's one of the value streams that's better known and understood, so it's an easier way into the whole IT4IT Value Chain than some of the other value streams.

The service model, as we've said all along, is the backbone of the whole IT value chain. Although it's well-formed and in a good, mature state, there's still plenty of work to do to make it consumable for IT organizations, so they understand all the phases of the life cycle and all the data objects that make up the Service Backbone. That's something we're currently working on for the 2.1 version, so that we have better examples and can show how it applies in a real IT organization; it's not just what's in the documentation today.

More detail

Akershoek: I don't think it's about positive and negative in this case, but more about areas we need to work out in more detail, like defining the service-broker role you see in the new IT organization [and] how you interface with your external service providers. We've identified a number of areas where the IT organization has key touch points with these vendors. Take the service catalog: you need to synchronize catalog information with the external vendors and aggregate it into your own catalog.

But there's also the fulfillment API: how do you communicate a request to your suppliers or different technology stacks, and how do you get the consumption and cost data back in? We define that today in the IT4IT standard, but we need to go a level of detail lower: how do we actually integrate with vendors and our service providers?

So interfacing with the vendors in the ecosystem happens on many different levels: the catalog level, request fulfillment where you actually provision, the cost and consumption data, and those kinds of aspects.
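Two of those touch points, catalog aggregation and routing a request to the owning vendor, might look like the following minimal sketch. The vendor interfaces and field names here are invented for illustration; the standard does not prescribe them.

```python
# A sketch of two vendor touch points: aggregating external catalogs into
# your own, and routing a subscription request to the owning vendor.

def aggregate_catalogs(vendor_catalogs: dict) -> dict:
    """Merge per-vendor offerings into one internal catalog, keyed vendor:offer."""
    catalog = {}
    for vendor, offers in vendor_catalogs.items():
        for offer_id, offer in offers.items():
            catalog[f"{vendor}:{offer_id}"] = offer
    return catalog

def fulfill(catalog: dict, offer_key: str, requester: str) -> dict:
    """Route a request to the owning vendor; a real broker would call the
    vendor's fulfillment API and later pull consumption and cost data back."""
    if offer_key not in catalog:
        raise KeyError(f"{offer_key} is not in the aggregated catalog")
    vendor, offer_id = offer_key.split(":", 1)
    return {"vendor": vendor, "offer": offer_id, "requester": requester, "status": "requested"}

catalog = aggregate_catalogs({"cloudA": {"vm-small": {"price": 20}},
                              "cloudB": {"managed-db": {"price": 50}}})
print(fulfill(catalog, "cloudA:vm-small", requester="ops-team"))
```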

Another topic is the link to security and identity and access management. That's an area we still need to clarify: how all the subscriptions to a service link into that access-management capability, which is part of the subscription and, of course, the fulfillment. We didn't identify it as a separate functional component.
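One plausible reading of that subscription-to-access link is sketched below: fulfilling a subscription grants the entitlement in the identity and access management (IAM) capability, and cancelling it revokes the entitlement, so access always follows the subscription life cycle. The structures are purely illustrative; as Akershoek notes, IT4IT does not define this as a separate functional component.

```python
# A sketch of access rights following the subscription life cycle.
# iam_grants stands in for an identity and access management system.

iam_grants: set = set()

def subscribe(user: str, service: str) -> None:
    """Fulfillment of a subscription includes granting access."""
    iam_grants.add((user, service))

def unsubscribe(user: str, service: str) -> None:
    """Ending the subscription revokes the matching entitlement."""
    iam_grants.discard((user, service))

subscribe("alice", "crm")
assert ("alice", "crm") in iam_grants
unsubscribe("alice", "crm")
assert ("alice", "crm") not in iam_grants
```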

Gardner: Dwight, where are you most optimistic and where would you put more emphasis?

David: I'll start with the latter. More emphasis needs to go to our approach to Detect to Correct. Often I see people thinking about Detect to Correct in the traditional, reactive mode, rather than understanding that the model also applies to the new, fast-changing, user-friendly economy and to hybrid IT. A change in thinking about how we apply the value streams would also help us.

Many of us have a lot of gray hairs, myself included, and we revert to the old way of thinking instead of the way we should be moving forward. That's the area where we can gain the most.

What's really good, though, is that a lot of people understand Detect to Correct, so it's easy to adopt in terms of understanding the Reference Architecture; it's a good entry point into IT4IT. That's where I see the actual benefit. I would encourage us to make it useful: use it, try it. That's when the most benefit happens.

Gardner: And Michael, room for optimism and room for improvement?


Management Guide

Fulton: I want to build on Dwight's point about trying it by sharing the one thing I'm most excited about, particularly this week: the Management Guide, and very specifically chapter 5 of the Management Guide. I hope all of you got a chance to grab a copy; if you haven't, I recommend downloading it from The Open Group website. That chapter is absolutely rich in content about how to actually implement IT4IT.

And I tip my hat to Rob, who, along with several other people, did a great piece of work there. If you want to pick up the standard and use it, start with chapter 5 of the Management Guide. You may not need to go much further, because it's great content to work with. I'm very excited about that.

From the standpoint of where we need to continue to evolve and grow as a standard, we've touched on some of the individual pieces already. At a higher level, the supporting activities in general still need to evolve to the level of detail that we have for the value streams. That's a key area for me.

The next area I would highlight, and I know we're actively starting work on this, is getting down to the level of detail where we can do data interoperability: outlining the specifics needed to define APIs between the functional components in such a way that we can ultimately get back to The Open Group vision of Boundaryless Information Flow.
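To suggest what that data interoperability might look like in practice, here is a minimal sketch: tools agree on a shared schema for a data object (an incident here), so the object can cross component boundaries through an API. The field names and schema are illustrative assumptions, not the standard's actual data model.

```python
# A sketch of data interoperability between functional components: a minimal
# shared-schema contract lets an incident object cross tool boundaries.

import json

INCIDENT_FIELDS = {"id", "ci", "status"}  # hypothetical shared-schema contract

def export_incident(incident: dict) -> str:
    """Serialize an incident for another component, enforcing the shared schema."""
    missing = INCIDENT_FIELDS - incident.keys()
    if missing:
        raise ValueError(f"incident is missing shared-schema fields: {missing}")
    return json.dumps({k: incident[k] for k in sorted(INCIDENT_FIELDS)})

payload = export_incident({"id": "INC-42", "ci": "web-01", "status": "open"})
print(payload)  # another component's tooling can now import this object
```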

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in: