Thursday, June 16, 2016

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about trends and developments in DevOps, microservices, containers, and the new direction for composable infrastructure, we're joined by Donnie Berkholz, Research Director at 451 Research, and he's based in Minneapolis. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are things changing so much for apps deployment infrastructure? Why is DevOps newly key for software development? And why are we looking for “composable infrastructure?”

Berkholz: It’s a good question. There are a couple of big drivers behind it. One of them is cloud, probably the biggest one, because of the scale and transience that we have to deal with now, with virtual machines (VMs) appearing and disappearing on such a rapid basis.

We have to have software, processes, and cultures that support that kind of new approach, and IT is getting more and more demands from the line of business for scale and to do more. They're not getting more money or people, and they have to figure out the right approach to deal with this. How can we scale, how can we do more, and how can we be more agile?

DevOps is the approach that’s been settled on. One of the big reasons behind that is the automation. That’s one of what I think of as the three pillars of DevOps, which are culture, automation, and measurement.

Automation is what lets you move from this metaphor of cattle versus pets, moving from the pet side of it, where you carefully name and handcraft each server, to a cattle mindset, where you're thinking about fleets of servers and about services rather than individual servers, VMs, containers, or what have you. You can have systems administrators maintaining 10,000 VMs, rather than 100 or 150 servers by hand. That's what automation gets you.

More with less

So you're doing more with less. Then, as I said, they're also getting demands from the business to be more agile and deliver it faster, because the businesses all want to compete with companies like Netflix or Zenefits, the Teslas of the world, the software-defined organizations. How can they be more agile, how can they become competitive, if they're a big insurance company or a big bank?

DevOps is one of the key approaches behind that. You get the automation, not just on the server side, but on the application-delivery pipeline, which is really a critical aspect of it. You're moving toward this continuous delivery approach, and being able to move a step beyond agile to bring agile all the way through to production and to deploy software, maybe even on every commit, which is the far end of DevOps. There are a lot of organizations that aren't there yet, but they're taking steps toward that, toward moving from deployments every three months or six months to every few weeks.
Gardner: So the vision of having that constant iterative process, continuous development, continuous test, continuous deployment -- at the same time being able to take advantage of these new cloud models -- it’s still kind of a tricky equation for people to work out.

What is it that we need to put in place that allows us to be agile as a development organization and to be automated and orchestrated as an operations organization? How can we make that happen practically?

Berkholz: It always goes back to three things -- people, process, and technology. From the people perspective, what I have run into is that there are a lot of organizations that have either development or operational groups, where some of them just can't make this transition.

They can't start thinking about the business impacts of what they're doing. They're focused on keeping the lights on, maintaining the servers, and writing the code. Being able to make that transition to focusing on what the business needs -- asking how am I helping the company -- is the critical step, at an individual level but also at an organizational level.

IT is going through this kind of existential crisis of moving from being a cost center to fighting shadow IT, fighting bring your own device (BYOD), trying to figure out how to bring that all into the fold. The way we think about it, how they do so is a transition toward IT as a service: IT becoming more like a service provider in its own right, pulling in all these external services and providing a better experience in house.

If you think about shadow IT, for example, you think about developers using a credit card to sign up for some public cloud or another. That's all well and good, but wouldn't it be even nicer if they didn't have to worry about the billing, the expensing, the payments, and all that, because IT already provided it for them? That's where things are going, because that's the IT-as-a-service provider model.

Gardner: People, process, technology, and existential issues. The vendors are also facing existential issues; things are changing so fast, and they provide the technology, while the people and the process are up to the enterprise to figure out. What's happening on the technology side, and how are the vendors reacting to allow enterprises to employ the people and put in place the processes that will bring us to this better DevOps automated reality? What can we put in place technically to make this possible?

Two approaches

Berkholz: It goes back to two approaches -- one coming in from the development side and one coming in from the operational side.

From a development side, we're talking about things like continuous-delivery pipelines --  what does the application delivery process look like? Typically, you'd start with something like continuous integration (CI).

Just moving toward an automated testing environment, every commit you make, you're testing the code base against it one way or another. This is a big transition for people to make, especially as you think about moving the next step to continuous delivery, which is not just testing the code base, but testing the full environment and being ready to deploy that to production with every commit, or perhaps on a daily basis.

So that's a continuous-integration, continuous-delivery approach using CI servers. There's a pretty well-known open-source one called Jenkins, and there are many others, including as-a-service options alongside the on-premises ones. That tends to be step one, if you're coming in from the development side.
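To make the CI step concrete, here is a minimal sketch (in Python, not a real CI server like Jenkins) of the core loop: notice a new commit, run the test suite against it, and report the result. The repository path and the use of pytest are illustrative assumptions, and real CI servers react to push hooks rather than polling.

```python
import subprocess
import time

REPO = "/path/to/repo"  # hypothetical repository location
POLL_SECONDS = 60       # real CI servers react to push hooks instead of polling

def head_commit():
    """Return the current HEAD commit hash of the repository."""
    out = subprocess.run(["git", "-C", REPO, "rev-parse", "HEAD"],
                         capture_output=True, text=True, check=True)
    return out.stdout.strip()

def run_tests():
    """Run the test suite; a real pipeline would also build and archive artifacts."""
    result = subprocess.run(["python", "-m", "pytest"], cwd=REPO)
    return result.returncode == 0

last_seen = None
while True:
    subprocess.run(["git", "-C", REPO, "pull", "--ff-only"], check=True)
    commit = head_commit()
    if commit != last_seen:
        last_seen = commit
        print(f"{commit[:8]}: {'PASS' if run_tests() else 'FAIL'}")
    time.sleep(POLL_SECONDS)
```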

Now, on the operational side, automation is much more about infrastructure as code. That's really the core tenet, and it's embodied by configuration-management software like Puppet, Chef, Ansible, Salt, maybe CFEngine -- approaches that define server configuration as code and maintain it in version control, just like you would maintain the software that you're building in version control. You can scale easily because you know exactly how a server is created.

You can ask if that's one mail server or is it 20? It doesn’t really matter. I'm just running the same code again to deploy a new VM or to deploy onto a bare-metal environment or to deploy a new container. It’s all about that infrastructure-as-code approach using configuration-management tools. When you bring those two things together, that’s what enables you to really do continuous delivery.
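Here is a toy sketch of the desired-state idea behind those configuration-management tools. It is not Puppet or Ansible syntax; it just illustrates the property described here: the configuration is declared as code, and applying it is idempotent, so the same run converges one server or 10,000. The file paths and contents are invented for illustration.

```python
# Toy illustration of desired-state configuration management; real tools
# like Puppet or Ansible have their own DSLs, modules, and resource types.
import os

DESIRED_FILES = {
    "/etc/motd": "Managed by config management.\n",       # hypothetical files
    "/etc/app/app.conf": "port=8080\nworkers=4\n",
}

def converge(desired):
    """Bring the machine to the desired state; running it twice changes nothing."""
    for path, content in desired.items():
        current = None
        if os.path.exists(path):
            with open(path) as f:
                current = f.read()
        if current != content:
            os.makedirs(os.path.dirname(path), exist_ok=True)
            with open(path, "w") as f:
                f.write(content)
            print(f"updated {path}")
        else:
            print(f"ok {path}")

if __name__ == "__main__":
    converge(DESIRED_FILES)  # the same code deploys server 1 or server 10,000
```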

You’ve got the automated application delivery pipeline on the top and you've got the automated server environment on the bottom. Then, in the middle, you’ve got things like service virtualization, data virtualization, and continuous-integration servers all letting you have an extremely reliable and reproducible and scalable environment that is the same all the way from development to production.
Gardner: And when we go to infrastructure as code, when we go to software-defined everything, there's a challenge getting there, but there are also some major paybacks. When you're freed up to analyze your software, when you can replicate things rapidly, when you can deploy to a cloud model that works for your economic or security requirements, you get a lot of benefits.

Are we seeing those yet, Donnie?

Berkholz: One of the challenges is that we know there are benefits, but they're very challenging to quantify. When you talk about the benefit of delivering a solution to market faster than your competitors, the benefit is that you're still in business. The benefit is that you're Netflix and you're not Blockbuster. The benefit is that you're Tesla and you're not one of the big-three car manufacturers. Tesla, for example, can ship an update that lets its cars self-drive, delivered on the fly to people who already purchased the car.

You can't really quantify the value of that easily. What you can quantify is natural selection in action. There's no mandatory requirement that any company survive or that any company make the transition to software-defined. But, if you want to survive, you're going to have to take this DevOps mindset, so that you can be more agile, not just as a software group, but as a business.

Gardner: Perhaps one of the ways we can measure this is that we used to look at IT spend as a percentage of capital spend for an enterprise. Many organizations, over the past 20 or 30 years, found themselves spending 50 percent or more of their capital expenditures on IT.

I think they'd like to ratchet back. If we go to IT as a service, if we pay for things at an operations level, if we only pay for what we use, shouldn't we start to see a fairly significant decrease in the total IT spend, versus revenue or profit for most organizations?

Berkholz: The one underlying factor is how important software is to your company. If that importance is growing, you're probably going to spend more as a percentage. But you're going to be generating more margin as a result of that. That's one of the big transitions that are happening, the move from IT as a cost center to IT as a collaborator with the business.

The move is away from your traditional old CIO view of we're going to keep the lights on. A lot of companies are bringing in Chief Digital Officers, for example, because the CIO wasn't taking this collaborative business view. They're either making the transition or they're getting left behind.

Spending increase

I think we'll see IT spend increase as a percentage, because companies are all realizing that, in actuality, they're software companies or they're becoming software companies. But as I said, they are going to be generating a lot more value on top of that spend.

To your point about OPEX and buying things as a service, the piece of advice I always give to companies is to ask: How many of these things that you're doing are significant differentiators for your company? Is it really a differentiator for your company to be an expert at automating a delivery pipeline, at automating your servers, at setting up file sharing, at setting up an internal chat server? None of those, right?

Why not outsource them to people who are experts, people for whom that is the core differentiator and the core value creator, and focus on the things that your business cares about?

Gardner: Let's get back to this infrastructure equation. We're hearing about composable infrastructure, software-defined data center (SDDC), micro services, containers and, of course, hybrid cloud or hybrid computing. If I'm looking to improve my business agility where do I look to in terms of understanding my future infrastructure partners? Is my IT organization just a broker and are they going to work with other brokers? Are we looking at a hierarchy of brokering with some sort of a baseline commoditized set of services underneath?

So, where do we go in terms of knowing who the preferred vendors are? I guess we're looking back at a time when no one got fired for buying IBM, for example. Everyone is saying Amazon is going to take over the world, but I've heard that about other vendors in the past, and it didn't pan out. This is a roundabout way of saying: when you want to compose infrastructure, how do you keep choice, how do you keep from getting locked in, and how do you stay in the market at all times?

Berkholz: Composability is really key. We see a lot of IT organizations that, as you said, used to just buy Big Blue; they were IBM shops. That's no longer a thing in the way that it used to be. There's a lot more fragmentation in terms of technology, programming languages, hardware, JavaScript toolkits, and databases.

Everything is becoming polyglot or heterogeneous, and the only way to cope with that is to really focus on composability. Focus on multi-vendor solutions, focus on openness, opening APIs, and open-source as well, are incredibly important in this composable world, because everything has to be able to piece together.

But the problem is that when you give traditional enterprises a bunch of pieces, it's like having kids create a huge mess on the floor. Where do you even get started? That's one of the challenges they face. The way I always think about it is: what are enterprises looking for? They're looking for a Lego castle, right? They don't want the Lego pieces, and they don't want that scene in The Lego Movie where the father glues all the blocks together. They don't want to be stuck. That's the old monolithic world.

The new composable world is where you get that castle, and you can take off the tower and put on a new tower if you want to. You're given not just something that is composable, but something that is pre-composed for you, for your use case. That generates value, unlike the old notion of reference architectures as something sitting on a PowerPoint slide with a fancy diagram.

It’s moving more toward reference architectures in the form of code, where it’s saying, "Here's a piece of code that’s ready to deploy and that’s enabled through things like infrastructure as code."

Gardner: Or a set of APIs.

Ready to go

Berkholz: Exactly. It’s enabled by having all of that stuff ready to go, ready to build in a way that wasn’t possible before. The best-case scenario before was, "Here’s a virtual appliance; have fun with that." Now, you can distribute the code and they can roll that up, customize it, take a piece out, put a piece in, however they want to.

Gardner: Before we close out, Donnie, any words of advice for organizations back to that cultural issue -- probably the more difficult one really? You have a lot of choices of technology, but how you actually change the way people think and behave among each other is always difficult. DevOps, leading to composable infrastructure, leading to this sort of services brokering economy, for lack of a better word, or marketplace perhaps.

What are you telling people about how to make that cultural shift? How do organizations change while still keeping the airplane flying so to speak?

Berkholz: You can’t do it as a big bang. That's absolutely the worst possible way to go about it. If you think about change management, it’s a pretty well-studied discipline at this point. There's an approach I prefer from a guy named John Kotter who has written books about change management. He lays out an eight- or nine-step process of how to make these changes happen. The funny thing about it is that actually doing the change is one of the last steps.

So much of it is about building buy-in, about generating small wins, about starting with an independent team and saying, "We're going to take the mobile apps team and we're going to try a continuous delivery over there. We're not going to stop doing everything for six months as we are trying to roll this out across the organization, because the business isn’t going to stand for that."
They're going to say, "What are you doing over there? You're not even shipping anything. What are you messing around with?" So, you've got to go piece by piece. Say you start by rolling out continuous integration and slowly adding more and more automated tests, while keeping the manual testers alongside, so that you're not dropping any of the quality that you had before. You're actually adding more quality through the automation, and slowly converting those manual testers into engineers in test.

That’s the key to it. Generate small wins, start small, and then gradually work your way up as you are able to prove the value to the organization. Make sure while you're doing so that you have executive buy-in. The tool side of things you can start at a pretty small level, but thinking about reorganization and cultural change, if you don’t have executive buy-in, is never going to fly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, June 14, 2016

How IT4IT helps turn IT into a transformational service for digital business innovation

The next BriefingsDirect expert panel discussion examines the value and direction of The Open Group IT4IT initiative, a new reference architecture for managing IT to help business become digitally innovative.

IT4IT was a hot topic at The Open Group San Francisco 2016 conference in January. This panel, conducted live at the event, explores how the reference architecture grew out of a need at some of the world's biggest organizations to make their IT departments more responsive, more agile.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

We’ll learn now how those IT departments within an enterprise and the vendors that support them have reshaped themselves, and how others can follow their lead. The expert panel consists of Michael Fulton, Principal Architect at CC&C Solutions; Philippe Geneste, a Partner at Accenture; Sue Desiderio, a Director at PricewaterhouseCoopers; Dwight David, Enterprise Architect at Hewlett Packard Enterprise (HPE); and Rob Akershoek, Solution Architect IT4IT at Shell IT International. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we bridge the divide between a cloud provider, or a series of providers, and have IT take on a brokering role within the organization? How do we get to that hybrid vision role?

Geneste: We'll get there step-by-step. There's a practical step that’s implementable today. My suggestion would be that every customer or company that selects an outsourcer, that selects a cloud vendor, that selects a product, uses the IT4IT Reference Architecture in the request for proposal (RFP), putting a strong emphasis on the integration.

We see a lot of RFPs that are still silo-based -- which one is the best product for project and portfolio management, which one is the best service-management tool -- but it's not very frequent that we see integration as the top value measured in the RFP. That would be one point.

The discussions with the vendors -- again, cloud vendors, outsourcers, or consulting firms -- should start from this: use it as an integration architecture, and tell us how you would do things based on these standardized concepts. That's a practical step that can be employed today.

In a second step, when we go further into the vendor specification, there are vendors today whose products and cloud offerings, when you analyze them, are closer to the concepts we have in the reference architecture. They're maybe not certified, maybe not using the same terminology, but the concepts are there, or the path to the concepts is shorter.

And then ultimately, step 3 and 3.5 will be product vendor certified, cloud service offering certified, hopefully full integration according to the reference architecture, and eventually, even plug-and-play. We're doing a little bit about plug-and-play, but at least integration.

Gardner: What sort of time frame would you put on those steps? Is this a two-year process, a four-year process, too soon to tell?

Achievable goals

Geneste: That's a tough one; I suppose the vendors should be responding to it. For the cloud service providers, it's a little bit trickier, but for the consulting firms and the service providers, it should be what it takes to get the workforce trained and to get the concepts spread inside the organization. So within six to 12 months, the critical mass should be there in these organizations. It's tough, but project by project, customer by customer, it's achievable.

Some vendors are on the way, and we've seen several vendors talk about IT4IT in this conference. I know that those have significant efforts on the way and are preparing for vendor certification. It will be probably a multiyear process to get the full suite of products certified, because there is quite a lot to change in the underlying software, but progressively, we should get there.

So, it's having first levels of certification within one to two years, possibly even sooner. I would be interested in knowing what the vendor responses will be.

Gardner: Sue, along the same lines, what do you see needed in order to make the IT department able to exercise the responsibility of delivering IT across multiple players and multiple boundaries?

Desiderio: Again, it’s starting with the awareness and the open communication about IT4IT and, on a specific instance, where that fits in. Depending on the services we're getting from vendors, or whether it's even internal services that we are getting, where do they fit into the whole IT4IT framework, what functions are we getting, what are the key components, and where are our interface points?

Have those conversations upfront in the contract conversations, so that everyone is aware of what we're trying to accomplish and that we're trying to seek that seamless integration between those suppliers and us.

Gardner: Rob, this would appear to be a buyer’s market in terms of their ability to exercise some influence. If they go seeking RFPs, if there are fewer cloud providers than there were general vendors in a traditional IT environment, they should be able to dictate this, don’t you think?

Akershoek: In the cloud world, the consumer doesn't dictate at all; dictating how an operator should provide us data is the traditional way. That's the problem with the cloud: we want to consume a standard service. So we can't tell the cloud vendor, "Send me your cost data in this format." That won't work, because we don't want the cloud vendor to make something proprietary for us.

That’s the first challenge. The cloud vendors are out there and we don’t want to dictate; we want to consume a standard service. So if they set up a catalog in their way, we have to adopt that. If they do the billing their way, we have to adopt it or select another cloud vendor. That’s the only option you have, select another vendor or adopt the management practices of the cloud vendor. Otherwise, we will continuously have to update it according to our policy. That’s a key challenge.

That's why managing your cloud vendor is really about the entire value chain. You start with making your portfolio, thinking about what cloud services you put in your offerings. So for PaaS platforms, we use vendor A, and for infrastructure as a service, vendor B. That's where it starts. Which vendors do I engage with?

And then, going down to the Request to Fulfill, it’s more like what are the products that we're allowed to order and how do we provision those? Unfortunately, the cloud vendors don’t have IT4IT yet, meaning we have to do some work. Let’s say we want to provision the cloud environment. We make sure that all the cloud resources we provision are linked to that subscription, linked to that service, so at least we know the components that a cloud vendor is managing, where it belongs, and which service is consuming that.

Different expectations

Fulton: Rob has a key point here around the expectations being different around cloud vendors, and that’s why IT4IT is actually so powerful. A cloud vendor is not going to customize their interfaces for every single individual company, but we can hold cloud vendors accountable to an open industry standard like IT4IT, if we have detailed out the right levels of interoperability.

To me, the way this thing comes together long term is through this open standard, and then through that RFP process, customer organizations holding their vendors accountable to delivering inside that open standard. In the world of cloud, that’s actually to the benefit of the cloud providers as well.

Akershoek: That’s a key point you make there, indeed.

David: And just to piggyback on what we're saying, it goes back to the value proposition. Why am I doing this? If we have something that's an open standard, it enables velocity. You can identify costs much more easily. It's simpler, and it goes back again to the value proposition, showing these cloud vendors that because of a standard, I'm able to consume more of your services, I'm able to consume your services more easily, and, because it's a standard, I'm guaranteed to get my value. Again, it's back to the value proposition that the open standard offers.

Gardner: Sue, how about this issue of automation? Is it essential to be largely automated to realize the full benefits of IT4IT or is that more of a nice-to-have goal? What's the relationship between a high degree of automation in your IT organization for the support of these activities and the standard and Reference Architecture?

Automation is key

Desiderio: I'm a believer that automation is key, so we definitely have to get automation throughout the whole end-to-end value chain no matter what. That’s really part of the whole transformation going into this new model.

You see that throughout the whole value chain. We talked about it individually on the different value streams and how it comes back.

I also want to touch on what the right size of company or firm is to pick up IT4IT. I agree with where Philippe was coming from. Smaller shops can pick it up and start leveraging it more quickly, because they don't have that legacy IT that wasn't built on composed services, where everything on a system points at specific servers and networks instead of being built on services, like a hosting service or a monitoring-and-response service.

For larger IT organizations, there's a lot more change, but it's critical for us to survive and be viable in the future for those IT shops, the larger ones in large organizations, to start adopting and moving forward.

It's not a big bang. We, in a larger IT shop, are going to be running in a mixed mode for a long time to come. It's looking at where to start seeing that business value as you look at new initiatives and things within your organization. How do you start moving into the new model with the new things? How do you start transitioning your legacy systems and whatnot into more of the new way of thinking and looking at that consumption model and what we're trying to do, which is focus on that business outcome.

So it's much harder for the larger IT shops, but the concepts apply to all sizes.

Gardner: Rob, the subject of the moment is size and automation.

Akershoek: I think the principle we just discussed, automation, is a good principle, but if you look at the legacy, as you mentioned, you're not going to automate your legacy, unless you have a good business case for that. You need to standardize your services on many different layers, and that's what you see in the cloud.

Cloud vendors are standardizing to an extreme, defining standard component services. You have to do the same: define your standard services and then automate all of those. The legacy ones you can't automate, or probably don't want to automate.

So it's more standardization, more standard configurations; then you can automate delivery, and Detect to Correct as well. That's hard to do if you have a very complex configuration that changes all the time without any standards.

The size of the organization doesn’t matter. Both for large and smaller organizations you need to adopt standard cloud practices from the vendors and automate the delivery to make things repeatable.

Desire to grow

David: Small organizations don’t want to remain small all the time; they actually want to grow. Growth starts with a mindset, a thinking mindset. By applying the Reference Architecture, even though you don't apply every single point to my one-man or two-man shop, it then helps me, it positions me, and it gives me the frame of reference, the thinking to enable growth.

It grows organically, so you don't end up with the legacy baggage that most of the large companies have. And small companies may get acquired, but at least they have good discipline, or they may acquire others as they grow. The application of the IT4IT Reference Architecture is not just for large companies; it's also for small companies, and I'm saying that as a small-business owner myself.

Akershoek: Can I add to that? If you're starting out deployed to the cloud, maybe the best way is to start with automation at first or at least design for automation. If you have a few thousand servers running in the cloud and you didn't start with that concept, then you already have legacy after a few years running in the cloud. So, you should start thinking about automation from the start, not with your legacy of course, but if you're now moving to the cloud design, build that immediately.

Fulton: On this point, one of the directions we're heading is to figure out this very issue: which parts of the reference architecture apply at which size and stage in a company's growth.

As I mentioned, I think I made this comment earlier, the entire reference architecture applies from day one for companies of any size; it's just a question of whether it's explicit or implicit.

If it's implicit, it's in the head of the founder. You're still doing the elements, or you can be still doing the elements, of the reference architecture in your mind and your thought process, but there are pieces you need to make explicit even when you are, as Charlie likes to say, two people in a garage.

On the automation piece, the key thing that has been happening throughout our industry related to automation has been, at least in my perspective, when we've been automating within functional components. What the IT4IT Reference Architecture and its vision of value streams allow us to do is rethink automation along the lines of value streams, across functional components. That's where it starts to really add a considerable value, especially when we can start to put together interoperability between tooling on some of these things. That’s where we're going to see automation take us to that next level as IT organizations.

Gardner: As IT4IT matures and becomes adopted and serves both consumers and providers of services, it seems to me that there will be a similar track with digital business of how you run your business, which is going to be more a brokering activity at a business level, that a business is really a constituency of different providers across supply chains, increasingly across service providers.

Is there a dual track for IT4IT on the IT side and for business management of services through a portal, through dashboard, something that your business analyst and on up would be involved with? Should we let them happen separately? How can we make them more aligned and even highly integrated and synergistic?

Best practices

Geneste: We have such best practices in IT4IT that businesses themselves can replicate them and use them for their own purposes. I suppose certain companies do that a little bit today. Take the Ubers and the Airbnbs: they disintermediate by connecting with private individuals a lot of the time, and they effectively have some of these service-oriented concepts today, even though they don't use IT4IT.

Just as much, we see cases today where businesses, for their help desks or for their request management, turn to the likes of HPE for service-management software to help them with their business help desk. We're likely to see those best practices spread, in terms of the specification of individual conceptual services, service catalogs, or subscription mechanisms. You're right; the concepts could very easily apply to businesses. As to how that would turn out, I would need to do a little bit more thinking, but from a concepts standpoint, it truly should be useful.

Desiderio: We're trying to move ourselves up the stack to help the business with the services that they're providing, so it's very relevant as we're looking at IT4IT and how we're managing the IT services. It's also those business services; it's concurrent, it's evolving, and it's training and making the business aware of where we're trying to go and how they can leverage that in the services that they provide outward.

When you look at adopting this, even when you go back down to your IT in your organization where you have your different typical organizational teams, there's a challenge for each IT team to look at the services they're providing and how they start looking at what they do in terms of services, instead of just the functions.

That goes all the way up the stack including the business, the business services, and IT’s job. When we start talking about transformation, we must be aligned with the business so we understand their business processes and the services that they're trying to serve and then how are we truly that business-enabler.

Akershoek: I interpret your question as being about shadow IT: that there is no shadow IT. Some IT management activity is performed by the business and, as you mentioned, the business needs to apply IT4IT practices as well. As soon as IT activities are done by the business, like selecting and managing their own software-as-a-service (SaaS) application, they need to perform the IT4IT-related activities themselves. They're even starting to configure SaaS services themselves; the business can do the configuration, and they might even provide the end-user support. In these cases too, those management activities fit into the IT4IT Reference Architecture model.

Gardner: Dwight, we have a business scorecard, we have an IT scorecard, why shouldn’t they be the same scorecard?

David: I'm always reminded that IT is in place to help the business, right? The business is the function, and IT should be the visible enabler of business success. I would classify that as catching up to business expectations. Could some of the principles that we apply in IT be used for the business? Yeah, they can be, but I see it more the other way around. The whole value-chain view came from the business perspective and is being applied to IT. It's still business-driven, but IT is becoming more seamless in enabling the business to achieve its particular goals.

Application of IT

Fulton: The whole concept of digital business is actually a complete misnomer. I hate it; I think it’s wrong. It’s all about the application of information technology. In the context of what we typically talk about with IT4IT, we're talking about the application of information technology to the management of the IT department.

We also talk about the application of information technology to the transformation of business processes. Most of the time, that happens inside companies, and we're using the principles of IT4IT to do that. When we talk about digital business, usually we're talking about the application of information technology into the transformation of business models of companies. Again, it’s still all about applying information technology to make the company work in a different way. For me, the IT4IT principles, the Reference Architecture, the value streams, will still hold for all of that.

Geneste: The two innovations that we have in the IT4IT Reference Architecture -- the Service Backbone and the Request to Fulfill (R2F) value stream -- are the two greatest novelties of the reference architecture.

Are they mature? They're mature enough, and they'll probably evolve in their level of maturity. There are a number of areas that are maturing, and some that we have in design. The IT Financial Management, for instance, is one that I'm working on, and the service costing within that, which I think we'll get a chance to get ready by version 2.1. The idea is to have it as guidance in version 2.1.

The value streams by themselves are also mature and almost complete. There are a number of improvements we can make to all of them, but I think overall the reference architecture is usable today as an architecture to start with. It's not quite for vendor certification, although that’s upcoming, but there are a number of good things and a number of implementations that would benefit from using the current IT4IT Reference Architecture 2.0.

Gardner: Sue, where do you see the most traction and growth, and what would you like to see improved?

Desiderio: An easy entry point to start with is Detect to Correct because it’s one of the value streams that’s a little bit more known and understood. So that’s an easier point of entry for the whole IT4IT Value Chain, compared to some of the other value streams.

The service model, as we've stated all along, is definitely the backbone to the whole IT value chain. Although it's well-formed and in a good, mature state, there's still plenty of work to do to make that consumable to the IT organizations to understand all the different phases of the life cycle and all the different data objects that make up the Service Backbone. That's something that we're currently working on for the 2.1 version, so that we have better examples. We can show how it applies in a real IT organization, and it’s not just what’s in the documentation today.

More detail

Akershoek: I don’t think it’s about positive and negative in this case, but more about areas that we need to work on in more detail, like defining the service-broker role that you see in the new IT organization [and] how you interface with your external service providers. We've identified a number of areas where the IT organization has key touch points with these vendors, like your service catalog, you need to synchronize catalog information with the external vendors and aggregate it into your own catalog.

But there's also the fulfillment API -- how do you communicate a request to your suppliers or different technology stacks and get the consumption and cost data back in? I think we define that today in the IT4IT standard, but we need to go to a lower level of detail -- how do we actually integrate with vendors and our service providers?

So interfacing with the vendors in the ecosystem happens on many different levels: the catalog level, request fulfillment (where you actually provision), the cost and consumption data, and those kinds of aspects.
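As a rough illustration of the catalog-level integration described here, the hypothetical sketch below pulls offerings from two external vendor catalogs and aggregates them into one internal catalog. The endpoint URLs and field names are invented; as noted above, the IT4IT standard does not yet define this API at this level of detail.

```python
# Hypothetical catalog aggregation: pull offerings from external vendor
# catalogs and merge them into one internal catalog. Endpoints and field
# names are invented for illustration only.
from dataclasses import dataclass
import json
import urllib.request

@dataclass
class CatalogItem:
    vendor: str
    sku: str
    name: str
    monthly_cost: float

VENDOR_CATALOGS = {
    "vendor-a": "https://vendor-a.example.com/catalog.json",
    "vendor-b": "https://vendor-b.example.com/catalog.json",
}

def fetch_catalog(vendor, url):
    """Fetch one vendor's catalog and normalize it into internal items."""
    with urllib.request.urlopen(url) as resp:
        entries = json.load(resp)
    return [CatalogItem(vendor, e["sku"], e["name"], e["monthly_cost"])
            for e in entries]

def aggregate():
    """Merge all vendor catalogs, keyed by (vendor, sku) so syncs are idempotent."""
    internal = {}
    for vendor, url in VENDOR_CATALOGS.items():
        for item in fetch_catalog(vendor, url):
            internal[(item.vendor, item.sku)] = item
    return internal

if __name__ == "__main__":
    for (vendor, sku), item in aggregate().items():
        print(vendor, sku, item.name, item.monthly_cost)
```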

Another topic is the link to security and to identity and access management. It's an area we still need to clarify: how all the subscriptions in a service link into that access-management capability, which is part of the subscription and, of course, the fulfillment. We didn't identify it as a separate functional component.

Gardner: Dwight, where are you most optimistic and where would you put more emphasis?

David: I'll start with the latter. More emphasis needs to be on our approach to Detect to Correct. Oftentimes, I see people thinking about Detect to Correct in the traditional mode of being reactive, as opposed to understanding that this model can be applied even to the new, fast-changing, user-centric economy and within hybrid IT. A change in thinking in the application of the value streams would also help us.

Many of us have a lot of gray hairs, including myself, and we revert to the old way of thinking, as opposed to the way we should be moving forward. That’s the area where we can do the most.

What's really good, though, is that a lot of people understand Detect to Correct. So it’s an easy adoption in terms of understanding the Reference Architecture. It’s a good entry point to the IT4IT Reference Architecture. That’s where I see the actual benefit. I would encourage us to make it useful, use it, and try it. The most benefit happens then.

Gardner: And Michael, room for optimism and room for improvement?


Management Guide

Fulton: I want to build on Dwight’s point around trying it by sharing. The one thing I'm most excited about, particularly this week, is the Management Guide -- very specifically, chapter 5 of the Management Guide. I hope all of you got a chance to grab your copy of that. If you haven’t, I recommend downloading it from The Open Group website. That chapter is absolutely rich in content about how to actually implement IT4IT.

And I tip my hat to Rob, who did a great piece of work, along with several other people. If you want to pick up the standard and use it, start there, start with chapter 5 of the Management Guide. You may not need to go much further, because that’s just great content to work with. I'm very excited about that.

From the standpoint of where we need to continue to evolve and grow as a standard, we've referenced some of the individual pieces, but at a higher level. The supporting activities in general all still need to evolve and get to the level of detail that we have with the value streams. That’s a key area for me.

The next area that I would highlight, and I know we're actively starting work on this, is around getting down to that level of detail where we can do data interoperability, where we can start to outline the specifics that are needed to define APIs between the functional components in such a way that we can ultimately bring us back to that Open Group vision of a boundaryless information flow.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: The Open Group.


Thursday, June 9, 2016

Alation centralizes data knowledge by employing machine learning and crowdsourcing

The next BriefingsDirect Voice of the Customer big-data case study discussion focuses on the Tower of Babel problem for disparate data, and explores how Alation manages multiple data types by employing machine learning and crowdsourcing.

We'll explore how Alation makes data more actionable via such innovative means as combining human experts and technology systems.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how enterprises and small companies alike can access more data for better analytics, please join Stephanie McReynolds, Vice-President of Marketing at Alation in Redwood City, California. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: I've heard of crowdsourcing for many things, and machine learning is more and more prominent with big-data activities, but I haven't necessarily seen them together. How did that come about? How do you, and why do you need to, employ both machine learning and experts in crowdsourcing?

McReynolds: Traditionally, we've looked at data as a technology problem. At least over the last 5-10 years, we’ve been pretty focused on new systems like Hadoop for storing and processing larger volumes of data at a lower cost than databases could traditionally support. But what we’ve overlooked in the focus on technology is the real challenge of how to help organizations use the data that they have to make decisions. If you look at what happens when organizations go to apply data, there's often a gap between the data we have available and what decision-makers are actually using to make their decisions.

There was a study that came out within the last couple of years showing that about 56 percent of managers have data available to them, but they're not using it. So, there's a human gap there. Data is available, but managers aren't successfully applying data to business decisions, and that's where real return on investment (ROI) always comes from. Storing the data, that's just an insurance policy for future use.

The concept of crowdsourcing data, or tapping into experts around the data, gives us an opportunity to bring humans into the equation of establishing trust in data. Machine-learning techniques can be used to find patterns and clean the data. But to really trust data as a foundation for decision-making, human experts are needed to add business context and show how data can be used and applied to solving real business problems.

Gardner: Usually, when you're employing people like that, it can be expensive and doesn't scale very well. How do you manage the fit-for-purpose approach to crowdsourcing where you're doing a service for them in terms of getting the information that they need and you want to evaluate that sort of thing? How do you balance that?

Using human experts

McReynolds: The term "crowdsourcing" can be interpreted in many ways. The approach that we’ve taken at Alation is that machine learning actually provides a foundation for tapping into human experts.

We go out and look at all of the log data in an organization -- in particular, what queries are being used to access data in databases or Hadoop file structures. That creates a foundation of knowledge, so that the machine can learn to identify what data would be useful to catalog or to enrich with human experts in the organization. That's essentially a way to prioritize how to tap into the number of humans that you have available to help create context around that data.

That’s a great way to partner with machines, to use humans for what they're good for, which is establishing a lot of context and business perspective, and use machines for what they're good for, which is cataloging the raw bits and bytes and showing folks where to add value.
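A simplified sketch of that machine-driven prioritization might look like the following: scan the SQL query log, count how often each table is referenced, and route the busiest tables to human experts for documentation. This is not Alation's actual algorithm; a real system would parse SQL properly rather than using a regex, and the log lines here are invented.

```python
# Simplified sketch: rank tables by how often they appear in SQL logs, then
# hand the busiest ones to human experts for documentation. Not Alation's
# actual algorithm; real systems parse SQL properly, not with a regex.
import re
from collections import Counter

TABLE_REF = re.compile(r"\b(?:FROM|JOIN)\s+([\w.]+)", re.IGNORECASE)

def rank_tables(query_log_lines):
    """Count table references across the query log, most-used first."""
    usage = Counter()
    for query in query_log_lines:
        usage.update(t.lower() for t in TABLE_REF.findall(query))
    return usage.most_common()

log = [  # invented example log lines
    "SELECT * FROM sales.orders o JOIN sales.customers c ON o.cid = c.id",
    "SELECT cid FROM sales.orders WHERE total > 100",
    "SELECT * FROM staging.tmp_load",
]

for table, count in rank_tables(log):
    note = "ask an expert to document" if count > 1 else "low priority"
    print(f"{table}: {count} queries -> {note}")
```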
Gardner: What are some of the business trends that are driving your customers to seek you out to accomplish this? What's happening in their environments that requires this unique approach of the best of machine and crowdsourcing and experts?

McReynolds: There are two broader industry trends that have converged and created a space for a company like Alation. The first is just the immense volume and variety of data that we have in our organizations. If we weren't adding more and more data-storage systems to our enterprises, there wouldn't be good groundwork laid for Alation. But perhaps more interesting is a second trend, around self-service business intelligence (BI).

So as we're increasing the number of systems that we're using to store and access data, we're also putting more weight on typical business users to find value in that data and trying to make that as self-service a process as possible. That’s created this perfect storm for a system like Alation which helps catalog all the data in the organization and make it more accessible for humans to interpret in accurate ways.

Gardner: And we often hear in the big data space the need to scale up to massive amounts, but it appears that Alation is able to scale down. You can apply these benefits to quite small companies. How does that work when you're able to help a very small organization with some typical use cases in that size organization?

McReynolds: Even smaller organizations, or younger organizations, are beginning to drive their business based on data. Take an organization like Square, which is a great brand name in the financial services industry, but it’s not a huge organization in and of itself, or Inflection or Invoice2go, which are also Alation customers.

We have many customers that have data analyst teams that maybe start with five people or 20 people. We also have customers like eBay that have closer to a thousand analysts on staff. What Alation provides to both of those very different sizes of organizations is a centralized place, where all of the information around their data is stored and made accessible.

Even if you're only collaborating with three to five analysts, you need that ability to share your queries, to communicate on which queries addressed which business problems, which tables from your HPE Vertica database were appropriate for that, and maybe what Hive tables on your Hadoop implementation you could easily join to those Vertica tables. That type of conversation is just as relevant in a 5-person analytics team as it is in a 1000-person analytics team.

Gardner: Stephanie, if I understand it correctly, you have a fairly horizontal capability that could apply to almost any company and almost any industry. Is that fair, or is there more specialization or customization that you apply to make it more valuable, given the type of company or type of industry?

Generalized technology

McReynolds: The technology itself is a generalized technology. Our founders come from backgrounds at Google and Apple, companies that have developed very generalized computing platforms to address big problems. So the way the technology is structured is general.

The organizations that are going to get the most value out of an Alation implementation are those that are data-driven organizations that have made a strategic investment to use analytics to make business decisions and incorporate that in the strategic vision for the company.

So even if we're working with very small organizations, they are organizations that make data and the analysis of data a priority. Today, it’s not every organization out there. Not every mom-and-pop shop is going to have an Alation instance in their IT organization.

Gardner: Fair enough. Given those organizations that are data-driven, have a real benefit to gain by doing this well, they also, as I understand it, want to get as much data involved as possible, regardless of its repository, its type, the silo, the platform, and so forth. What is it that you've had to do to be able to satisfy that need for disparity and variety across these data types? What was the challenge for being able to get to all the types of data that you can then apply your value to?
McReynolds: At Alation, we see the variety of data as a huge asset, rather than a challenge. If you're going to segment the customers in your organization, every event and every interaction with those customers becomes relevant to understanding who that individual is and how you might be able to personalize offerings, marketing campaigns, or product development to those individuals.

That does put some burden on our organization, as a technology organization, to be able to connect to lots of different types of databases, file structures, and places where data sits in an organization.

So we focus on being able to crawl those source systems, whether they're places where data is stored or whether they're BI applications that use that data to execute queries. A third important data source for us that may be a bit hidden in some organizations is all the human information that’s created, the metadata that’s often stored in Wiki pages, business glossaries, or other documents that describe the data that’s being stored in various locations.

We actually crawl all of those sources and provide an easy way for individuals to use that information on data within their daily interactions. Typically, our customers are analysts who are writing SQL queries. All of that context about how to use the data is surfaced to them automatically by Alation within their query-writing interface so that they can save anywhere from 20 percent to 50 percent of the time it takes them to write a new query during their day-to-day jobs.

Gardner: How is your solution architected? Do you take advantage of cloud when appropriate? Are you mostly on-premises, using your own data centers, some combination, and where might that head to in the future?

Agnostic system

McReynolds: We're a young company. We were founded about three years ago and we designed the system to be agnostic as to where you want to run Alation. We have customers who are running Alation in concert with Redshift in the public cloud. We have customers that are financial services organizations that have a lot of personally identifiable information (PII) data and privacy and security concerns, and they are typically running an on-premise Alation instance.

We architected the system to be able to operate in different environments and have an ability to catalog data that is both in the cloud and on-premise at the same time.

The way that we do that from an architectural perspective is that we don’t replicate or store data within Alation systems. We use metadata to point to the location of that data. For any analyst who's going to run a query from our recommendations, that query is getting pushed down to the source systems to run on-premise or on the cloud, wherever that data is stored.
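A minimal sketch of that metadata-pointer design appears below: the catalog entry holds only connection details and human annotations, and query execution is pushed down to the source system. The class and field names are invented for illustration.

```python
# Sketch of a metadata-only catalog: it stores pointers (connection details,
# table names, human-added context), never the data itself; queries are
# pushed down to the source system. Names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class DataSource:
    name: str
    dsn: str  # e.g., "vertica://analytics-db:5433/prod" (hypothetical)

    def execute(self, sql):
        """Push the query down to the source system; stubbed out for the sketch."""
        print(f"[{self.name}] executing on {self.dsn}: {sql}")

@dataclass
class CatalogEntry:
    table: str
    source: DataSource
    notes: list = field(default_factory=list)  # human-added business context

catalog = {
    "sales.orders": CatalogEntry(
        table="sales.orders",
        source=DataSource("warehouse", "vertica://analytics-db:5433/prod"),
        notes=["Use ship_date, not order_date, for revenue reports"],
    ),
}

entry = catalog["sales.orders"]           # the catalog stores pointers, never rows
print("Context:", "; ".join(entry.notes))
entry.source.execute("SELECT COUNT(*) FROM sales.orders")  # runs at the source
```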

Gardner: And how did HPE Vertica come to play in that architecture? Did it play a role in the ability to be agnostic as you describe it?

McReynolds: We use HPE Vertica in one portion of our product that allows us to provide essentially BI on the BI that's happening. Vertica is a fundamental component of our reporting capability, called Alation Forensics, which is used by IT teams to find out how queries are actually being run on data source systems, which backend database tables are being hit most often, and what that says about the organization and those physical systems.

It gives the IT department insight. Day-to-day, Alation is typically more of a business person’s tool for interacting with data.

Gardner: We've heard from HPE that they expect that IT-department-specific ops-efficiency use case to grow. Do you have any sense of what some of the benefits have been for the IT organizations that get that sort of analysis? What's the ROI?

McReynolds: The benefits of an approach like Alation include getting insight into the behaviors of individuals in the organization. What we’ve seen at some of our larger customers is that they may have dedicated themselves to a data-governance program where they want to document every database and every table in their system, hundreds of millions of data elements.
Using the Alation system, they were able to identify within days the rank-order priority list of what they actually need to document, versus what they thought they had to document. The cost savings comes from taking a very data-driven realistic look at which projects are going to produce value to a majority of the business audience, and which projects maybe we could hold off on or spend our resources more wisely.

One team that we were working with found that about 80 percent of their tables hadn't been used by more than one person in the last two years. In that case, if only one or two people are using those systems, you don't really need to document those systems. That individual or those two individuals probably know what's there. Spend your time documenting the 10 percent of the system that everybody's using and that everyone is going to receive value from.
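As a back-of-the-envelope illustration of that analysis, the hypothetical sketch below counts distinct users per table over a time window and flags tables with at most one user as low documentation priority. The log format and names are invented; a real system would derive this from parsed query logs, as in the earlier sketch.

```python
# Hypothetical usage analysis: count distinct users per table in a time
# window; tables with at most one user are low documentation priority.
# The log tuples and cutoff date are invented for illustration.
from collections import defaultdict
from datetime import datetime

# (user, table, timestamp) tuples, as might be extracted from query logs
QUERY_LOG = [
    ("alice", "sales.orders", datetime(2016, 3, 1)),
    ("bob", "sales.orders", datetime(2016, 4, 2)),
    ("alice", "staging.tmp_load", datetime(2015, 1, 5)),
]

def documentation_priority(log, since):
    """Map each table to 'document' or 'skip' based on distinct recent users."""
    users_per_table = defaultdict(set)
    for user, table, ts in log:
        if ts >= since:
            users_per_table[table].add(user)
    return {t: ("document" if len(u) > 1 else "skip")
            for t, u in users_per_table.items()}

cutoff = datetime(2014, 6, 1)  # roughly a two-year window back from mid-2016
for table, action in documentation_priority(QUERY_LOG, cutoff).items():
    print(table, "->", action)
```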

Where to go next

Gardner: Before we close out, any sense of where Alation could go next? Is there another use case or application for this combination of crowdsourcing and machine learning, tapping into all the disparate data that you can and information including the human and tribal knowledge? Where might you go next in terms of where this is applicable and useful?

McReynolds: If you look at what Alation is doing, it's very similar to what Google did for the Internet in terms of being available to catalog all of the webpages that were available to individuals and service them in meaningful ways. That's a huge vision for Alation, and we're just in the early part of that journey to be honest. We'll continue to move in that direction of being able to catalog data for an enterprise and make easily searchable, findable, and usable all of the information that is stored in that organization.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
