Tuesday, July 28, 2015

How big data technologies Hadoop and Vertica drive business results at Snagajob

The next BriefingsDirect analytics innovation case study interview explores how Snagajob in Richmond, Virginia – one of the largest hourly employment networks for job seekers and employers – uses big data to finally understand its systems' performance in action. The result is a vast improvement in how it provides rapid, richer services to its customers.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

Snagajob recently delivered 4 million new job applications in a single month through their systems. To learn how they're managing such impressive scale, BriefingsDirect sat down with Robert Fehrmann, Data Architect at Snagajob in Richmond, Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us about your job-matching organization. You’ve been doing this successfully since 2000. Let's understand the role you play in the employment market.

Fehrmann: Snagajob, as you mentioned, is America's largest hourly network for employees and employers. The hourly market means we have, relatively speaking, high turnover.
Another aspect, in comparison to some of our competitors, is that we provide an inexpensive service; our subscriptions are on the low end.

Gardner: Tell us how you use big data to improve your operations. I believe that among the first ways that you’ve done that is to try to better analyze your performance metrics. What were you facing as a problem when it came to performance? [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Signs of stress

Fehrmann: A couple of years ago, we started looking at our environment, and it became obvious that our traditional technology was showing some signs of stress. As you mentioned, we really have data at scale here. We have 20,000 to 25,000 postings per day, and we have about 700,000 unique visitors on a daily basis. So data is coming in very, very quickly.

We also realized that we were sitting on a gold mine. We were able to ingest data pretty well, but we had problems getting information and innovation out of our big data lake.

Gardner: And of course, near real time is important. You want to catch degradation in any fashion from your systems right away. How do you then go about getting this in real time? How do you do the analysis?

Fehrmann: We started using Hadoop. I'll use a lot of technical terms here. From our website, we're getting events. Events are routed via Flume directly into Hadoop. We're collecting about 600 million key-value pairs a day, a massive amount of data: about 25 gigabytes daily.
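
As a rough sketch of what this ingestion looks like, the snippet below flattens a site event into key-value records of the kind a Flume sink might land in Hadoop. The event fields and record format here are illustrative assumptions, not Snagajob's actual schema.

```python
def flatten_event(event):
    """Flatten one site event into sorted key-value pairs, roughly the
    shape a Flume sink might deliver into Hadoop. Fields are illustrative."""
    return [(k, str(v)) for k, v in sorted(event.items())]

def serialize(events):
    """Serialize a batch of events as newline-delimited key=value records."""
    lines = []
    for event in events:
        for key, value in flatten_event(event):
            lines.append(f"{key}={value}")
    return "\n".join(lines)

# One page view might expand to several key-value pairs.
event = {"visitor_id": "v123", "action": "search", "zip": "23219", "keyword": "cashier"}
record = serialize([event])

# Sanity check on the quoted scale: 600 million pairs at roughly 40-45
# bytes each works out to about the 25 GB/day figure mentioned.
print(record)
```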

The second piece in this journey to big data was analyzing these events, and that’s where we're using HP Vertica. Our original use case was to analyze a funnel. A funnel is how people come to our site. They're searching for jobs, maybe by keyword, maybe by zip code. A subset of them show interest in a job and click on a posting. A subset of those apply for the job via an application. A subset show interest in an employer, and so on. We had never been able to analyze this funnel.
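
In production this funnel would be a query against Vertica, but the logic can be sketched in a few lines of Python. The stage names and events below are hypothetical, not Snagajob's actual tracking vocabulary.

```python
# Funnel stages in order, from broad to narrow (names are illustrative).
STAGES = ["search", "click_posting", "apply", "view_employer"]

def funnel(events):
    """Count distinct visitors reaching each stage, plus the
    stage-to-stage conversion rate."""
    reached = {stage: set() for stage in STAGES}
    for visitor, stage in events:
        if stage in reached:
            reached[stage].add(visitor)
    report = []
    prev = None
    for stage in STAGES:
        n = len(reached[stage])
        rate = n / prev if prev else 1.0  # conversion vs. previous stage
        report.append((stage, n, round(rate, 2)))
        prev = n
    return report

events = [
    ("a", "search"), ("b", "search"), ("c", "search"), ("d", "search"),
    ("a", "click_posting"), ("b", "click_posting"),
    ("a", "apply"),
]
for stage, n, rate in funnel(events):
    print(stage, n, rate)
```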

The dataset is about 300 to 400 million rows, and 30 to 40 gigabytes. We wanted to make this data available not just to our internal users, but also to external users. Therefore, we set ourselves a goal of a five-second response time: no query on this dataset should run for more than five seconds -- and Vertica and Hadoop gave us a solution for this.

Gardner: How have you been able to increase your performance, reach your key performance indicators (KPIs), and meet service-level agreements (SLAs)? How has this benefited you?

Fehrmann: Another application that we were able to implement is a recommendation engine. The idea is that job seekers who apply for a specific job may not know about all the other jobs that are very similar to it, or that other people have also applied to.
We started analyzing the search results we were getting and implemented a recommendation engine. Sometimes it’s very difficult to have a real before-and-after comparison, but here we could see it clearly: by implementing this recommendation engine, we saw an immediate 11 percent increase in application flow (how many applications a customer is getting from us), one of our key metrics.
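
Fehrmann doesn't say how the engine works, but a minimal version of "people who applied to this job also applied to these" is item-to-item co-occurrence counting, sketched here with made-up jobs and seekers.

```python
from collections import defaultdict
from itertools import combinations

def co_application_counts(applications):
    """Count how often each pair of jobs shares an applicant.
    `applications` maps a job seeker to the set of jobs they applied to."""
    counts = defaultdict(int)
    for jobs in applications.values():
        for a, b in combinations(sorted(jobs), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1
    return counts

def recommend(job, counts, top_n=3):
    """Jobs most often co-applied with `job`, strongest first."""
    related = [(other, n) for (j, other), n in counts.items() if j == job]
    related.sort(key=lambda x: (-x[1], x[0]))
    return [other for other, _ in related[:top_n]]

applications = {
    "seeker1": {"cashier", "barista", "server"},
    "seeker2": {"cashier", "barista"},
    "seeker3": {"cashier", "stocker"},
}
print(recommend("cashier", co_application_counts(applications)))
```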

Gardner: So you took the success from your big-data implementation and analysis capabilities from this performance task to some other areas. Are there other business areas, search yield, for example, where you can apply this to get other benefits?

Brand-new applications

Fehrmann: When we started, we had the idea that we were looking for a solution for migrating our existing environment to a better-performing new one. But what we've seen is that most of the applications we've developed so far are brand-new applications that we hadn't been able to do before.

You mentioned search yield. Search yield is a very interesting aspect. It’s a massive dataset, about 2.5 billion rows and about 100 gigabytes of data as of right now, and it's continuously increasing. For all of the applications, as well as all of the search requests we have collected since we started this environment, we're able to analyze the search yield.

For example, that's how many applications we get for a specific search keyword in real time. By real time, I mean that somebody can run a query against this massive dataset and get results in a couple of seconds. We can analyze specific jobs in specific areas, or specific keywords searched in a specific time period or in a specific part of the country.
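
Conceptually, search yield is just applications divided by searches, grouped by keyword and, as mentioned, optionally by location or time period. A toy version of that aggregation, with invented data, might look like this; at Snagajob's scale it would run as a Vertica query.

```python
from collections import defaultdict

def search_yield(searches, applications):
    """Applications per search, broken down by (keyword, location).
    Both inputs are lists of (keyword, location) tuples; the grouping
    keys are illustrative."""
    s = defaultdict(int)
    a = defaultdict(int)
    for key in searches:
        s[key] += 1
    for key in applications:
        a[key] += 1
    # Yield is defined only for keys that were actually searched.
    return {key: a[key] / s[key] for key in s}

searches = [("cashier", "VA")] * 200 + [("server", "VA")] * 100
applications = [("cashier", "VA")] * 20 + [("server", "VA")] * 5
print(search_yield(searches, applications))
```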

Gardner: And once again, now that you've been able to do something you couldn't do before, what have been the results? How has that changed your business? [Register for the upcoming HP Big Data Conference in Boston on Aug. 10-13.]

Fehrmann: It really allows our salespeople to provide great information during the prospecting phase. If we're prospecting with a new client, we can tell them very specifically that if they're in this industry, in this area, they can expect an application flow, depending on how big the company is, of, let’s say, a hundred applications per day.

Gardner: How has this been a benefit to your end users, those people seeking jobs and those people seeking to fill jobs?

Fehrmann: There are certainly some jobs that people are more interested in than others. On the flip side, if a particular job gets 100 or 500 applications, it's just a fact that only a small number of applicants are going to get that particular job. Now if you apply for a job that isn't as interesting, you have a much, much higher probability of getting the job.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.


Monday, July 27, 2015

Beyond look and feel--The new role that total user experience plays in business apps

The next BriefingsDirect business innovation thought leadership discussion focuses on the heightened role and impact of total user experience improvements for both online and mobile applications and services.

We'll explore how user expectations and rethinking of business productivity are having a profound impact on how business applications are used, designed, and leveraged to help buyers, sellers, and employees do their jobs better.

We’ll learn about the advantages of new advances in bringing instant collaboration, actionable analytics, and contextual support capabilities into the application interface to create a total user experience.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. See a demo.

To examine why applications must have more than a pretty face to improve the modern user experience, we're joined by Chris Haydon, Senior Vice President of Solutions Management for Procurement, Finance and Network at Ariba, an SAP company. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Chris, what sort of confluence of factors has come together to make this concept of user experience so important, so powerful? What has changed, and why must we think more about experience than interface?

Haydon: Dana, it’s a great question. There is the movement of hyper-collaboration, and things are moving faster and faster than ever before.

We're seeing major shifts in how users view themselves and how they want to interact with their applications, their business applications in particular, and, more and more, they're drawing parallels from their consumer lives and bringing that simple consumer-based experience into their daily work. Those are some of the mega trends.

Then, as we step down a little bit, within that is obviously this collaboration aspect and how people prefer to collaborate online at work more than they did through traditional mechanisms such as phone or fax.

Then, there's mobility. If someone doesn’t really have a smartphone in this day and age, certainly they're behind the eight ball.

Last but not least, there's the changing demographic of our workforce. Stats show that in 2015, millennials will become the single largest segment of the workforce.

All of these macro trends and figures are going into how we need to think about our total user experience in our applications.

Something more?

Gardner: For those of us who have been using applications for years and years and have sort of bopped around -- whether we're on a mobile device or a PC -- from application to application, are we just integrating apps so that we don't have to change apps, or is it something more? Is this a whole greater than the existing sum of the parts?

Haydon: It’s certainly something more. It’s more the one-plus-one-equals-three concept here. The intersection of the connectivity powered by business networks with the utility of mobile devices untethered from the desktop can fundamentally change the way people think about their interactions, about business processes, and about the work that needs to be done throughout the course of their work environment. That really is the difference.

This is not just about sprucing up a user interface. This is really about thinking about persona-based interactions in the context of mobility, in a network-oriented or network-centric collaboration.

Gardner: When we think about collaboration, traditionally that’s been among people, but it seems to me that this heightened total user experience means we're actually collaborating increasingly with services. They could be services that recognize that we're at a particular juncture in a business process. They could be services that recognize that we might need help or support in a situation where we've run out of runway and don't know what to do, or even instances where intelligence and analytics are being brought to us as we need it, rather than our calling out to it.

Tell me about this expanding definition of collaboration. Am I right that we're collaborating with more than just other people here?

Haydon: That’s right. It’s putting information in the context of the business process right at the point of demand. Whether that’s predictive, intelligence, third party, the best user interfaces and the best total user experiences are bringing that context to the end user managing that complexity -- but contextualizing it to bring it to their attention as they work through it.

So take a budget check, perhaps with some gamification around the budget: it's saying, "You're under budget; that’s great." That’s an internal metric. Maybe in the future, you start thinking about how others are performing in other segments of the business. If you want to take it even further, how are other potential suppliers doing on their response rates to their customers?

There is a whole new dimension on giving people contextualized information at the point where they need to make a decision, or even recommending the type of decisions they need to make. It could be from third-party sources that can come from a business network outside your firewall, or from smarter analysis and predictive analysis, from the transactions that are happening within the four walls of your firewall, or in your current application or your other business applications.

Gardner: It seems pretty clear that this is the way things are going. The logic behind why the user experience has expanded in its power and its utility makes perfect sense. I'm really enthused about this notion of contextual intelligence being brought to a business process, but it's more than just a vision here.

Pulling this off must be quite difficult. I know that many people have been thinking about doing this, but there just isn't that much of it actually going on yet. So we're at the vanguard.

What are the problems? What are the challenges in pulling this off, in making it happen? It seems to me there are a lot of back-end services involved, and while we focus on the user experience and user interface, we're really talking about sophisticated technology in the data center providing these services.

Cloud as enabler

Haydon: There are a couple of enablers to this. I think the number one enabler here is cloud versus on-premise. When you can see the behavior in real time, in a community aspect, you can actually build infrastructure services around that. In traditional on-premise models, when that’s locked in, all that burden is pushed back onto corporate IT.

The second point is when you're in the cloud and you think about applications that are network-aware, you're able to bring in third-party, validated, trusted information to help make that difference. So there are those challenges.

I also think that it comes down to technology, and technology is moving toward building applications for the end user. When you start thinking about the interactions with the end user and the focus on them, it really drives you to think about how you deliver that contextualized information.

When you have that level of granularity, saying "I'm logging on as an invoice-processing assistant" or "I'm logging on as just a casual ad-hoc requisitioner," and the system knows that, it’s able to be smart and contextualize accordingly. That’s where we really see the future and the vision of how this is all coming together.

Gardner: When we spoke a while back, we noted that the traditional way people got productivity was by switching manually from application to application, whether on-premise or software as a service (SaaS). If they lose the benefit of a common back-end intelligence capability, of network services that are aware, or of coordinated identity management, we still don't get that total user experience, even though the cloud is an important factor here and SaaS is a big part of it.

What brings together the best of cloud, but also the best of that coordinated, integrated total experience when you know the user and all of their policies and information can be brought to bear on this experience and productivity demand?

Haydon: There are a couple of ways of doing that. You could talk here about the concept of hybrid cloud. The reality is that in most companies for the foreseeable future there will be some degree of on-premise applications that continue to drive businesses, and then there will be side-by-side cloud businesses.

So it’s the job of leading practice technology providers, SaaS and on-premise providers, to enable that to happen. There definitely is this notion of having a very robust platform that underpins the cloud product and can be seamlessly integrated to the on-premise product.

Again, from a technology and a trend perspective, that’s where it’s going. So if the provider doesn’t have a solid platform approach to be able to link the disparate cloud services to disparate on-premise solutions, then you can’t give that full context to the end user.

One more thing is thinking about the user interface. The user interface manages that complexity for the end user. The end user really shouldn't need to know the mode of deployment, nor where the service runs. That’s what the new leading user interfaces, and the total experience, are about: guiding you through your workflow, or the work that needs to be done, irrespective of the deployment location of that service.

Ariba roadmap

Gardner: Chris, we last spoke at Ariba Live, the user conference, back in the springtime, and you were describing the roadmap for Ariba and other applications coming through 2015 into 2016.

What’s been happening recently? This week, I think, you've gone general availability (July 2015) with some of these services. Maybe you could quickly describe that. Are we talking about the on-premise apps, the SaaS apps, the mobile apps, all the above? What’s happening?

Haydon: We're really excited about that. In the current releases that came out this week (see a demo), we launched our total user experience approach: working anywhere, embracing the most modern user design interactions in our user interface and in mobility, and, within that, enabling our end users to learn processes in context. All this has been launched in Ariba within the last 14 days.

Specifically, it’s about embracing modern user design principles. We have a great design principle here within SAP called Fiori. So we've taken that design principle and brought it into the world of procurement, on top of our leading-practice capabilities, to deliver this new, updated user experience design.

But we haven’t stopped there. We're embracing, as you mentioned, this mobility aspect and how we can design new interactions so that the user interface on mobile and the user interface on our cloud deployment work as one. That’s a given, but what we're doing differently here is embracing the power and capability of mobile devices, with the cloud and the work that needs to be done.

One example is our process-continuity feature. You can look at your mobile application, see some activities that you might want to track later on, and click or pin that activity on your mobile device. When you come to your desktop to do some work, that pinned activity is visible for you to continue tracking and get your job done.

Similarly, if you're on the go to a meeting, you're able to push some reports down to your tablet or smartphone to look at and review that work on the go.

We're really looking at that full, total user experience, whether you're on the desktop or whether you are on the go on your mobile device, all underpinned by a common user design imperative based upon Fiori.

Gardner: Just to be clear, we're talking about not only this capability across those network services for on-prem, cloud, and mobile, but we're taking this across more than a handful of apps. Tell us a bit about how these Ariba applications and the Ariba Network also involve other travel and expense capabilities. What other apps are involved in terms of line-of-business platform that SAP is providing?

Leading practice

Haydon: From a procurement perspective, obviously we have Ariba’s leading-practice procurement. Alongside that, we have another fantastic solution for contingent labor, statement-of-work labor, and other services, called Fieldglass. We've been working closely with the Fieldglass team to ensure that the user interface we're rolling out on our Ariba procurement applications is consistent with Fieldglass, again based on the Fiori style of design construct.

We're moving toward a point where end users, whether they want to enter detailed time sheets or service entries, or requisition materials and inventory on the Ariba side, find a seamless experience.

We're progressively moving toward that same style of construct for the Concur applications for travel and expense, and even the larger SAP cloud and S/4HANA approaches as well.

Gardner: You mentioned SAP HANA. Tell us how we're not only dealing with this user experience across devices, work modes, and across application types, but now we have a core platform approach that allows for those analytics to leverage and exploit the data that's available, depending on the type of applications any specific organization is using.

It strikes me that we have a possibility of a virtuous adoption cycle; that is to say, the more data used in conjunction with more apps begets more data, begets more insights, begets more productivity. How is HANA and analytics coming to bear on this?

Haydon: We've had HANA running on analytics on the Ariba side for more than 12 months now. The most important thing that we see with HANA is that it's not about HANA in itself. It's a wonderful technology, but what we are really seeing is that the customer interactions change because they're actually able to do different and faster types of iterations.

To us, that's the real power of what HANA gives us from a technology and platform aspect to build on. When you can have real-time analytics across massive amounts of information, put into the context of what an end user does, that to us is where the true business, customer, and end-user benefit comes from leveraging the HANA technology.

So we have it running in our analytics stack, progressively moving that through the rest of our analytics on the Ariba platform. Quite honestly, the sky's the limit as it relates to what that technology can enable us to do. The main focus though is how we give different business interactions, and HANA is just a great engine that enables us to do that.

Gardner: It's a fascinating time if you're a developer, because previously you had to go through a requirements process with the users. Using these analytics, you can measure and see what those users are actually doing as they progress and modernize their processes, and then take that insight back into the next iteration of the application.

So it's interesting that we're talking about total user experience. We could be talking about total developer experience, or even total IT operator experience when it comes to delivering security and compliance. Expand a little bit about how what you are doing on that user side actually benefits the entire life cycle of these applications.

Thinking company

Haydon: It's really exciting. There are other great companies that do this, and SAP, as well as Ariba, is really investing in making sure we're a data-driven, real-time, thinking company.

And you're right. We're rolling out our total user experience in the simplest model: we're providing a toggle, enabling our end users to road-test the new user experience and then switch back. We don't think anyone will want to switch back, but it's great.

That's the same type of experience you have in your personal life, when someone is trialing a new feature on an e-marketplace or in a consumer store and you're able to try the experience and come back. What's great about that is we're getting real-time insight. We know which customers are doing this, we know which personas are doing this, and we know how long their sessions last.

We're able to bring that back to our developers, to our product managers, to our user design center experts, and, just as importantly, back to our customers and our partners, to be able to say, "Here is some information: users doing these types of things aren't on this page; they have been looking for this type of information when they run a query or request."

We feed these types of information into our roadmap, but we also feed them back to our customers so they understand how their employees are working with our applications. As we step forward, we're exposing this in the right way to our partners to help them potentially build applications on top of what we already have on the Ariba platform.

Gardner: So obviously you can look at this at the general level of productivity, but now we can get specific with partners: verticals, geographies, and all the details that come along with business applications, company to company, region to region.

Let’s think about how this comes to market. You've announced the general availability in July 2015 on Ariba, and because this is SaaS, there are no forklifts, there are no downloads, no install, and no worries about configuration data. Tell us how this rolls out and how people can experience it if they've become intrigued about this concept of total user experience. How easy is it for them to then now start taking part in it?

Haydon: First and foremost, and it’s important, our customers entrust their business processes to us, and so it's about zero business disruption, and no downtime is our number one goal.

When we rolled out our global network release to 1.8 million suppliers two weeks ago (see a demo), we had zero downtime on the world’s largest business network. Similarly, as we rolled out our total user experience, we had zero downtime as well. So that’s the first thing: the number one thing is business continuity.

The second thing really is a concept that we think about. It’s called agile adoption. This is again how we let end users and companies of end users adopt our solutions.

Educating customers

We have done an awful lot of work before go-live on educating our customers: providing frequently asked questions and, where required, training materials and updates, all those types of support aspects. But we really believe our work starts on day plus-one, not day minus-one.

How do we work with our customers after this is turned on? By monitoring, so we know exactly what they're doing, and by giving them proactive support and communications when we need to, when we see them either switching back or see usage skewed within a specific customer or end-user group in their company. We'll be actively monitoring them and pushing that forward.

That’s what we really think it’s about. We're taking this end user customer-centric view to roll out our applications, but letting our own customers find their own pathways.

Organic path

Gardner: If users want to go more mobile in how they do their business processes, want to get those reports and analytics delivered to them in the context of their activity, is there an organic path for them or do they have to wait for their IT department?

What do you recommend for people who maybe don’t even have Ariba in their organization? What are some steps they can take to learn more or, from a grassroots perspective, to encourage adoption of this business revolution around an emphasis on total user experience?

Haydon: We have plenty of material from an Ariba perspective, not just about our solutions, but exactly what you're mentioning, Dana, about what is going on there. My first recommendation to everyone would be to educate yourselves and have a look at your business -- how many millennials are in your business, what are the new working paradigms that need to happen from a mobile approach -- and go and embrace it.

The second lesson is that if businesses think that this is not already happening outside of the control of their IT departments, they're probably mistaken. These things are already going on. So I think those are the kind of macro things to go and have a look at.

But, of course, we have a lot more information about Ariba’s total user experience thinking and thought leadership, and about how we go about implementing that in our solutions for our customers. I would encourage anyone to have a look at ariba.com. You'll be able to see more about our total user experience and, like I said, some of the leading-practice thoughts that we have about implementations (see a demo).

Gardner: I'd also encourage people to listen to or read the conversation you and I had just a month or two ago about the roadmap. There's an awful lot that you're working on that people will be able to exploit further to improve their total user experience.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. See a demo. Sponsor: Ariba, an SAP company.


Wednesday, July 22, 2015

Zynga builds big data innovation culture by making analytics open to all developers

The next BriefingsDirect analytics innovation case study interview explores how Zynga in San Francisco exploits big-data analytics to improve its business via a culture of pervasive, rapid analytics and experimentation.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how big data impacts Zynga in the fast-changing and highly competitive mobile gaming industry, BriefingsDirect sat down with Joanne Ho, Senior Engineering Manager at Zynga, and Yuko Yamazaki, Head of Analytics at Zynga. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How important is big data analytics to you as an organization?

Ho: To Zynga, big data is very important. It's a main piece of the company. As part of the analytics department, big data serves the entire company as a source for understanding our users' behavior: our players, what they like, and what they don’t like about our games. We use this data to analyze user behavior, and we also personalize a lot of different game models to fit each user's play pattern.

Gardner: What’s interesting to me about games is that people will not only download them, but that games are upgradable and changeable, and people can easily move. So the feedback loop between your users' actions and the inferences, information, and analysis you gain is rather compressed compared to many other industries.

What is it that you're able to do in this rapid-fire development-and-release process? How is that responsiveness important to you?

Real-time analysis

Ho: Real-time analysis, of course, is critical, and we have our streaming system that can do it. We have our monitoring and alerting system that can alert us whenever we see any drops in user’s install rating, or any daily active users (DAU). The game studio will be alerted and they will take appropriate action on that.

Gardner: Yuko, what sort of datasets are we talking about? When we get into the social realm, we can see some very large datasets. What's the volume and scale we're talking about here?
Yamazaki: We get data of everything that happens in our games. Almost every single play gets tracked into our system. We're talking about 40 billion to 60 billion rows a day, and that's the data that our game product managers and development engineers decide what they want to analyze later. So it’s already being structured and compressed as it comes into our database.
Gardner: That’s very impressive scale. It’s one thing to have a lot of data, but it’s another to be able to make that actionable. What do you do once that data is assembled?

Yamazaki: The biggest success story that I normally tell about Zynga is that we make data available to all employees. From day one, as soon as you join Zynga, you get to see all the data we have through our visualization tools. Even if you're a FarmVille product manager, you get to see what Poker is doing, which makes everything more transparent. There is an account report where you can just click and see how many people have performed a particular game action, for example. That’s how we were able to create this data-driven culture at Zynga.

Gardner: And Zynga is not all that old. Is this data capability something that you’ve had right from the start, or did you come into it over time? 

Yamazaki: Since we began with Poker and Words With Friends, our cluster has scaled 70 times.

Ho: It started off with three nodes, and we've grown to a 230-node cluster.

Gardner: So you're performing the gathering of the data and analysis in your own data centers?

Yamazaki: Yes.

Gardner: When you realized the scale and the nature of your task, what were some of the top requirements you had for your cluster, your database, and your analytics engine? How did you make some technology choices?

Biggest points

Yamazaki: When Zynga was growing, our main focus was to build something that was going to be able to scale and provide the data as fast as possible. Those were the two biggest points that we had in mind when we decided to create our analytics infrastructure.

Gardner: And any other more detailed requirements in terms of the type of database or the type of analytics engine?
Yamazaki: Those are the two big ones. As I mentioned, we also wanted everyone to be able to access the data, so SQL was a great technology to have. It’s much easier to train PMs on SQL than on engineering-oriented technologies such as MapReduce on Hadoop. Those were the three key points as we selected our database.
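As a hypothetical illustration of that accessibility, a daily-active-users count is a one-line query in SQL. The table and column names below are invented for the example:

```sql
-- Count daily active users (DAU) per game for yesterday.
-- A product manager can write and run this directly, with no
-- custom MapReduce job or engineering support required.
SELECT game_name,
       COUNT(DISTINCT player_id) AS dau
FROM game_events
WHERE event_date = CURRENT_DATE - 1
GROUP BY game_name;
```

The equivalent computation as a MapReduce job would require custom code and a build-and-deploy cycle, which is why a SQL interface lowers the bar for non-engineers.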

Gardner: What are the future directions and requirements that you have? Are there things that you’d like to see from HP, for example, in order to continue to be able do what you do at increasing scale?

Ho: We're interested in real-time analytics. There's a feature, aggregate projections, that we're interested in trying. Flex Tables [in HP Vertica] also sound like a very interesting feature that we will attempt to try. And cloud analytics is the third one that we're interested in. We hope HP will mature it, so that we can also test it out in the future.
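For readers unfamiliar with the Flex Tables feature mentioned here, it lets you load semi-structured data such as JSON without declaring a schema up front. A minimal sketch, with placeholder file path and field names:

```sql
-- Create a flexible table with no predefined columns.
CREATE FLEX TABLE mobile_events();

-- Load raw JSON; keys in the documents become queryable fields.
COPY mobile_events FROM '/data/events.json' PARSER fjsonparser();

-- Query a JSON key directly, as if it were a column.
SELECT event_name, COUNT(*)
FROM mobile_events
GROUP BY event_name;
```

This is a sketch of the general usage pattern, not a tested deployment; exact options vary by Vertica version.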
We have 2,000 employees, and at least 1,000 are using our visualization tool on a daily basis.

Gardner: While your analytics has been with you right from the start, you were early in using Vertica?

Ho: Yes.

Gardner: So now that we've determined how important it is, do you have any metrics for what this is able to do for you? Other organizations might be saying that they don't have as much of a data-driven culture as Zynga, but they would like to, and they realize that the technology can now ramp up to such incredible volume and velocity. What do you get back? How do you measure the success when you do big-data analytics correctly?

Yamazaki: Internally, we look at adoption of systems. We have 2,000 employees, and at least 1,000 are using our visualization tool on a daily basis. This is the way we measure adoption of our systems internally.

Externally, the biggest metric is retention. Are players coming back and, if so, was that through the data that we collect? Were we able to do personalization so that they're coming back because of the experience they've had?

Gardner: These are very important to your business, obviously, and it’s curious about that buy-in. As the saying goes, you can lead a horse to water, but you can't make him drink. You can provide data analysis and visualization to the employees, but if they don’t find it useful and impactful, they won’t use it. So that’s interesting with that as a key performance indicator for you.

Any words of advice for other organizations who are trying to become more data-driven, to use analytics more strategically? Is this about people, process, culture, technology, all the above? What advice might you have for those seeking to better avail themselves of big data analytics?

Visualization

Yamazaki: A couple of things. One is to provide end-to-end -- not just data storage, but also visualization. We also have an experimentation system, where I think we have about 400-600 experiments running as we speak. We have a report that shows: you ran this experiment, all these metrics moved because of your experiment, and A is better than B.

We run this other experiment, and there's a visualization you can use to see that data. So providing that end-to-end data and analytics to all employees is one of the biggest pieces of advice I would provide to any companies.

One more thing is to try to get one good win. If you focus too much on technology or scalability, you might be building a battleship when you actually don’t need it yet. Incremental improvement is probably going to take you to the place you need to get to. Just try to get a good big win of increasing installs or active users in one particular game or product and see where it goes.

Gardner: And just to revisit the idea that you've got so many employees and so many innovations going on, how do you encourage your employees to interact with the data? Do you give them total flexibility in terms of experiments? How do they start the process of some of those proof-of-concept type of activities?
Yamazaki: It's all freestyle. They can log whatever they want. They can see whatever they want, except revenue-type data, and they can create any experiments they want. Her team owns this part, but we also make the data available. Some of the games can hit it in real time; we can do real-time personalization using the data that was logged. It’s almost 360-degree data availability for our product teams.
If you focus too much on technology or scalability, you might be building a battleship, when you actually don’t need it yet.

Gardner: It’s really impressive that there's so much of this data mentality ingrained in the company, from the start and also across all the employees, so that’s very interesting. How do you see that in terms of your competitive edge? Do you think the other gaming companies are doing the same thing? Do you have an advantage that you've created a data culture?

Yamazaki: Definitely, in online gaming you have to have big data to succeed. A lot of companies, though, just capture whatever they can, then structure it and make it analyzable afterward. One of the things that we've done well was to impose a structure to start with, so the data is already structured as it arrives.

Product managers are already thinking about what they want to analyze beforehand. It's not as if they just get everything in and then see what happens. They think right away, "Is this analyzable? Is this something we want to store?" We're a lot smarter about what we want to store, and cost-wise, it's a lot more optimized.
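As a hypothetical sketch of what "structured to start with" might look like, the event schema is fixed up front rather than inferred after ingestion. All names below are invented for illustration:

```sql
-- Each game event type is agreed with the product manager before
-- launch, so only analyzable, worth-storing data ever lands here.
CREATE TABLE game_events (
    event_date   DATE         NOT NULL,
    game_name    VARCHAR(64)  NOT NULL,
    player_id    BIGINT       NOT NULL,
    event_name   VARCHAR(64)  NOT NULL,  -- defined beforehand, not free-form
    event_value  INTEGER                 -- only the metric that matters
);
```

Because the structure is decided in advance, the data compresses well on ingest and never needs a separate "make it analyzable" pass, which is the cost optimization Yamazaki describes.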

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, July 20, 2015

How big data powers GameStop to gain retail advantage and deep insights into its markets

The next BriefingsDirect analytics innovation case study interview highlights how GameStop, based in Grapevine, Texas, uses big data to improve how it conducts its business and better serve its customers.

By accessing data sources that were unattainable before and pulling that data out into reports in just a few minutes across nationally distributed retail outlets, GameStop more deeply examines how its campaigns and products are performing.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.

To learn more about how they deploy big data and use the resulting analytics, BriefingsDirect sat down with John Crossen, Data Warehouse Lead at GameStop. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Tell us a little bit about GameStop. Most people are probably familiar with the retail outlets that they see, where you can buy, rent, trade games, and learn more about games. Why is big data important to your organization?

Crossen: We wanted to get a better idea of who our customers are, how we can better serve our customers and what types of needs they may have. With prior reporting, we would get good overall views of here’s how the company is doing or here’s how a particular game series is selling, but we weren’t able to tie that to activities of individual customers and possible future activity of future customers, using more of a traditional SQL-based platform that would just deliver flat reports.

So, our goal was to get a more 360-degree view of our customers, and we realized pretty quickly that, using our existing toolsets and methodologies, that wasn’t going to be possible. That’s where Vertica came into play to drive us in that direction.

Gardner: Just so we have a sense of this scale here, how many retail outlets does GameStop support and where are you located?

Crossen: We're international. There are approximately 4,200 stores in the US and another 2,200 internationally.

Gardner: And in terms of the type of data that you are acquiring, is this all internal data or do you go to external data sources and how do you to bring that together?

Internal data

Crossen: It's primarily internal data. We get data from our website. We have the PowerUp Rewards program that customers can choose to join, and we have data from individual cash registers and all those stores.

Gardner: I know from experience in my own family that gaming is a very fast-moving industry. We’ve quickly gone from different platforms to different game types and different technologies when we're interacting with the games.

It's a very dynamic changeable landscape for the users, as well as, of course, the providers of games. You are sort of in the middle. You're right between the users and the vendors. You must be very important to the whole ecosystem.

Crossen: Most definitely, and there aren’t really many game retailers left anymore. GameStop is certainly the preeminent one. So a lot of customers come not just to purchase a game, but get information from store associates. We have Game Informer Magazine that people like to read and we have content on the website as well.

Gardner: Now that you know where to get the data and you have the data, how big is it? How difficult is it to manage? Are you looking for real-time or batch? How do you then move forward from that data to some business outcome?

Crossen: It’s primarily batch at this point. The registers close at night, and we get data from the registers and load it into HP Vertica. When we started approximately two years ago, we didn't have a single byte in Vertica. Now, we have close to 24 terabytes of data. It's primarily data on individual customers, as well as weblogs and mobile application data.
Gardner: I should think that when you analyze which games are being bought, which ones are being traded, which ones are price-sensitive and move at a certain price or not, you're really at the vanguard of knowing the trends in the gaming industry -- even perhaps before anyone else. How has that worked for you, and what are you finding?

Crossen: A lot of it is just based on determining who is likely to buy which series of games. You won't market the next Call of Duty 3 or something like that to somebody who's buying children's games. We're not going to ask people to buy Call of Duty 3 rather than My Little Pony 6.

The interesting thing, at least with games and video game systems, is that when we sell them new, there's no price movement. Every game is the same price in any store. So we have to rely on other things like customer service and getting information to the customer to drive game sales. Used games are a bit of a different story.

Gardner: Now back to Vertica. Given that you've been using this for a few years and you have such a substantial data lake, what is it about Vertica that works for you? What are learning here at the conference that intrigues you about the future?

Quick reports

Crossen: The initial push with HP Vertica was just to get reports fast. We had processes that literally took a day to run to accumulate data. Now, in Vertica, we can pull that same data out in five minutes. I think that if we spend a little bit more time, we could probably get it faster than half of that.

The first big push was just speed. The second wave after that was bringing in data sources that were unattainable before, like web-click data -- a tremendous amount of data -- loading it into Vertica, and then being able to query it with SQL. This wasn't doable before, and Vertica made it possible. At first, it was faster data; then it was acquiring new data and finding different ways to tie data elements together that we hadn't tied before.

Gardner: How about visualization of these reports? How do you serve up those reports and do you make your inference and analytics outputs available to all your employees? How do you distribute it? Is there sort of an innovation curve that you're following in terms of what they do with that data?
We had processes that literally took a day to run to accumulate data. Now, in Vertica, we can pull that same data out in five minutes.

Crossen: As far as a platform, we use Tableau as our visualization tool. We’ve also used a kind of ad-hoc environment to write direct SQL queries to pull data out, but Tableau serves as the primary tool.

Gardner: In that data input area, what integration technologies are you interested in? What would you like to see HP do differently? Are you happy with the way SQL, Vertica, Hadoop, and other technologies are coming together? Where would you like to see that go?

Crossen: A lot of our source systems are either SQL Server-based or just flat files. For flat files, we use the COPY command to bring data in, and that’s very fast. With Vertica 7, they released the Microsoft SQL Connector.

So we're able to use our existing SQL Server Integration Services (SSIS) data flows and change the output from another SQL table to point directly into Vertica. It uses the COPY command under the covers, and that’s been a major improvement. Before that, we had to stage the data somewhere else and then use the COPY command to bring it in, or try to use Open Database Connectivity (ODBC) to bring it in, which wasn’t very efficient.
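For readers unfamiliar with it, Vertica's COPY statement is the bulk-load path Crossen describes. A minimal sketch of a nightly flat-file load -- the table name and file paths are placeholders:

```sql
-- Bulk-load last night's register extract.
-- DIRECT writes straight to disk storage, which suits large batches;
-- rejected rows are written aside instead of aborting the load.
COPY register_sales
FROM '/staging/registers_2015-07-20.csv'
DELIMITER ','
NULL ''
DIRECT
EXCEPTIONS '/staging/registers_2015-07-20.bad';
```

This is a sketch of the general pattern, not GameStop's actual job; exact options (such as DIRECT) vary by Vertica version.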

20/20 hindsight

Gardner: How about words of wisdom from your 20/20 hindsight? Others are also thinking about moving from a standard relational database environment towards big data stores for analytics and speed and velocity of their reports. Any advice you might offer organizations as they're making that transition, now that you’ve done it?

Crossen: Just to better understand how a column-store database works, and how that's different from a traditional row-based database. It's a different mindset for everything, from data modeling to how you lay out your data.
For example, in a row database you would tend to freak out if you had a 700-column table. In a column store, that doesn’t really matter. So get into the right mindset of how a column-store database works, and don't try to duplicate a row-based system in the column-store system.
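One concrete expression of that mindset shift in Vertica is the projection, which declares sort order, encoding, and segmentation rather than the index tuning a row-store DBA would reach for. A hypothetical sketch, with invented table and column names:

```sql
-- Physical layout is declared per projection, not per index.
-- Sorting on common filter columns and run-length-encoding the
-- low-cardinality ones is what makes even a very wide table cheap
-- to scan: queries read only the columns they reference.
CREATE PROJECTION sales_by_store
(
    sale_date ENCODING RLE,
    store_id  ENCODING RLE,
    sku,
    amount
)
AS SELECT sale_date, store_id, sku, amount
   FROM register_sales
   ORDER BY sale_date, store_id
   SEGMENTED BY HASH(store_id) ALL NODES;
```

The design choice here -- optimizing physical layout for the query workload instead of normalizing for row-at-a-time updates -- is exactly the "don't duplicate a row-based system" advice Crossen gives.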

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, July 13, 2015

A tale of two IT departments, or how cloud governance proves essential in the Bimodal IT era

Welcome to a special BriefingsDirect panel discussion in conjunction with The Open Group's upcoming conference on July 20, 2015 in Baltimore. Our panel of experts examines how cloud governance and enterprise architecture can prove essential in the Bimodal IT era, a period of increasingly fragmented IT.

Not only are IT organizations dealing with so-called shadow IT and myriad proof-of-concept affairs, there is now a strong rationale for fostering what Gartner calls Bimodal IT. There's a strong case to be made for exploiting the strengths of several different flavors of IT, except that -- at the same time -- businesses are asking IT in total to be faster, better, and cheaper.
 
The topic before us then is how to allow for the benefits of Bimodal IT -- or even multimodal IT -- but without IT fragmentation leading to fractured and even broken businesses.
Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy.
Here to update us on the work of The Open Group Cloud Governance initiatives and working groups and to further explore the ways that companies can better manage and thrive with hybrid IT are our guests:
  • Dr. Chris Harding, Director for Interoperability and Cloud Computing Forum Director at The Open Group.
  • David Janson, Executive IT Architect and Business Solutions Professional with the IBM Industry Solutions Team for Central and Eastern Europe and a leading contributor to The Open Group Cloud Governance Project.
  • Nadhan, HP Distinguished Technologist and Cloud Adviser and Co-Chairman of The Open Group Cloud Governance Project.
The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Before we get into an update on The Open Group Cloud Governance initiatives, in many ways over the past decades IT has always been somewhat fragmented. Very few companies have been able to keep all their IT oars rowing in the same direction, if you will. But today things seem to be changing so rapidly that some degree of disparate IT methods are necessary. We might even think of old IT and new IT, and this may even be desirable.

But what are the trends that are driving this need for a multimodal IT? What's accelerating the need for different types of IT, and how can we think about retaining a common governance, and even a frameworks-driven enterprise architecture umbrella, over these IT elements?

Nadhan: Basically, the change that we're going through is really driven by the business. Business today has much more rapid access to the services that IT has traditionally provided. Business has a need to react to its own customers in a much more agile manner than they were traditionally used to.

We now have to react to demands where we're talking days and weeks instead of months and years. Businesses today have a choice. Business units are no longer dependent on the traditional IT to avail themselves of the services provided. Instead, they can go out and use the services that are available external to the enterprise.

To a great extent, the advent of social media has also resulted in direct customer feedback on the sentiment from the external customer that businesses need to react to. That is actually changing the timelines. It is requiring IT to be delivered at the pace of business. And the very definition of IT is undergoing a change, where we need to have the right paradigm, the right technology, and the right solution for the right business function and therefore the right application.

Since the choices have increased with the new style of IT, the manner in which you pair them up, the solutions with the problems, also has significantly changed. With more choices, come more such pairs on which solution is right for which problem. That's really what has caused the change that we're going through.
With more choices, come more such pairs on which solution is right for which problem.

A change of this magnitude requires governance that goes across building up on the traditional governance that was always in play, requiring elements like cloud to have governance that is more specific to solutions that are in the cloud across the whole lifecycle of cloud solutions deployment.

Gardner: David, do you agree that this seems to be a natural evolution, based on business requirements, that we basically spin out different types of IT within the same organization to address some of these issues around agility? Or is this perhaps a bad thing, something that’s unnatural and should be avoided?

Janson: In many ways, this follows a repeating pattern we've seen with other kinds of transformations in business and IT. Not to diminish the specifics about what we're looking at today, but I think there are some repeating patterns here.

There are new disruptive events that compete with the status quo -- with those things that have been optimized, proven, and settled into a consistent groove. Excitement about the new value that can be produced by new approaches generates momentum, and so far this actually sounds like a healthy state of vitality.

Good governance

However, one of the challenges is that the excitement potentially can lead to overlooking other important factors, and that’s where I think good governance practices can help.

For example, governance helps remind people about important durable principles that should be guiding their decisions, important considerations that we don’t want to forget or under-appreciate as we roll through stages of change and transformation.
Attend The Open Group Baltimore 2015
July 20-23, 2015
Register Here
At the same time, governance practices need to evolve so that they can adapt to new things that fit into the governance framework. What are those things and how do we govern them? So governance needs to evolve at the same time.
There is a pattern here with some specific things that are new today, but there is a repeating pattern as well, something we can learn from.

Gardner: Chris Harding, is there a built-in capability with cloud governance that anticipates some of these issues around different styles or flavors or even velocity of IT innovation that can then allow for that innovation and experimentation, but then keep it all under the same umbrella with a common management and visibility?

Harding: There are a number of forces at play here, and there are three separate trends that we've seen, or at least that I have observed, in discussions with members within The Open Group that relate to this.

The first is one that Nadhan mentioned, the possibility of outsourcing IT. I remember a member’s meeting a few years ago, when one of our members who worked for a company that was starting a cloud brokerage activity happened to mention that two major clients were going to do away with their IT departments completely and just go for cloud brokerage. You could see the jaws drop around the table, particularly with the representatives who were from company corporate IT departments.

Of course, cloud brokers haven’t taken over from corporate IT, but there has been that trend toward things moving out of the enterprise to bring in IT services from elsewhere.

That’s all very well to do that, but from a governance perspective, you may have an easy life if you outsource all of your IT to a broker somewhere, but if you fail to comply with regulations, the broker won’t go to jail; you will go to jail.

So you need to make sure that you retain control at the governance level over what is happening from the point of view of compliance. You probably also want to make sure that your architecture principles are followed and retain governance control to enable that to happen. That’s the first trend and the governance implication of it.

In response to that, a second trend that we see is that IT departments have reacted often by becoming quite like brokers themselves -- providing services, maybe providing hybrid cloud services or private cloud services within the enterprise, or maybe sourcing cloud services from outside. So that’s a way that IT has moved in the past and maybe still is moving.

Third trend

The third trend that we're seeing in some cases is that multi-discipline teams within line of business divisions, including both business people and technical people, address the business problems. This is the way that some companies are addressing the need to be on top of the technology in order to innovate at a business level. That is an interesting and, I think, a very healthy development.

So maybe, yes, we are seeing a bimodal splitting in IT between the traditional IT and the more flexible and agile IT, but maybe you could say that that second part belongs really in the line of business departments -- rather than in the IT departments. That's at least how I see it.

Nadhan: I'd like to build on a point that David made earlier about repeating patterns. I can relate to that very well within The Open Group, speaking about the Cloud Governance Project. Truth be told, as we continue to evolve the content in cloud governance, some of the seeding content actually came from the SOA Governance Project that The Open Group worked on a few years back. So the point David made about the repeating patterns resonates very well with that particular case in mind.
I think there's a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

Gardner: So we've been through this before. When there is change and disruption, sometimes it’s required for a new version of methodologies and best practices to emerge, perhaps even associated with specific technologies. Then, over time, we see that folded back in to IT in general, or maybe it’s pushed back out into the business, as Chris alluded to.

My question, though, is how we make sure that these don’t become disruptive and negative influences over time. Maybe governance and enterprise architecture principles can prevent that. So is there something about the cloud governance, which I think really anticipates a hybrid model, particularly a cloud hybrid model, that would be germane and appropriate for a hybrid IT environment?

David Janson, is there a cloud governance benefit in managing hybrid IT?

Janson: There most definitely is. I tend to think that hybrid IT is probably where we're headed. I don’t think this is avoidable. My editorial comment upon that is that’s an unavoidable direction we're going in. Part of the reason I say that is I think there's a repeating pattern here of new approaches, new ways of doing things, coming into the picture.

And then some balancing act goes on, where people look at more traditional ways versus the new approaches people are talking about, and eventually they weigh the strengths and weaknesses of both.

There's going to be some disruption, but that’s not necessarily bad. That’s how we drive change and transformation. What we're really talking about is making sure the amount of disruption is not so counterproductive that it actually moves things backward instead of forward.

I don’t mind a little bit of disruption. The governance processes that we're talking about, good governance practices, have an overall life cycle that things move through. If there is a way to apply governance, as you work through that life cycle, at each point, you're looking at the particular decision points and actions that are going to happen, and make sure that those decisions and actions are well-informed.

We sometimes say that governance helps us do the right things right. So governance helps people know what the right things are, and then the right way to do those things.

Bimodal IT

Also, we can measure how well people are actually adapting to those “right things” to do. What’s “right” can vary over time, because we have disruptive change. What we're seeing with Bimodal IT is one example.

Within a narrower time frame in the process lifecycle, there are points that evolve across that time frame that have particular decisions and actions. Governance makes sure that people are well informed as they're rolling through that about important things they shouldn’t forget. It’s very easy to forget key things and optimize for only one factor, and governance helps people remember that.

Also, just check to see whether we're getting the benefits that people expected out of it -- coming back around afterward to see whether we accomplished what we thought we would, or got off in the wrong direction. So it’s a bit like a steering or feedback mechanism, in that it helps keep the car on the road rather than drifting onto the soft shoulder. Did we overlook something important? Governance is key to making this all successful.

Gardner: Let’s return to The Open Group’s upcoming conference on July 20 in Baltimore and also learn a bit more about what the Cloud Governance Project has been up to. I think that will help us better understand how cloud governance relates to these hybrid IT issues that we've been discussing.

Nadhan, you are the co-chairman of the Cloud Governance Project. Tell us about what to expect in Baltimore with the concepts of Boundaryless Information Flow, and then also perhaps an update on what the Cloud Governance Project has been up to.
Nadhan: When the Cloud Governance Project started, the first question we challenged ourselves with was, what is it and why do we need it, especially given that SOA governance, architecture governance, IT governance, enterprise governance, in general are all out there with frameworks. We actually detailed out the landscape with different standards and then identified the niche or the domain that cloud governance addresses.

After that, we went through and identified the top five principles that matter for cloud governance to be done right. Some of the obvious ones being that cloud is a business decision, and the governance exercise should keep in mind whether it is the right business decision to go to the cloud rather than just jumping on the bandwagon. Those are just some examples of the foundational principles that drive how cloud governance must be established and exercised.

Subsequent to that, we have a lifecycle for cloud governance defined and then we have gone through the process of detailing it out by identifying and decoupling the governance process and the process that is actually governed.

So there is this concept of process pairs that we have going, where we've identified key process pairs, whether it be planning, architecture, reusing a cloud service, subscribing to it, unsubscribing, retiring, and so on. These are some of the defining milestones in the lifecycle.

We've actually put together a template for identifying and detailing these process pairs, and the template has an outline of the process that is being governed, the key phases that the governance goes through, the desirable business outcomes that we would expect because of the cloud governance, as well as the associated metrics and the key roles.
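To make the shape of such a template concrete, here is a minimal sketch in Python. The field names are purely illustrative, not The Open Group's actual template; they simply capture the elements Nadhan lists: the governed process, the governance phases, the desired business outcomes, the associated metrics, and the key roles.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a cloud governance process-pair template.
# Field names are illustrative assumptions, not an Open Group artifact.
@dataclass
class ProcessPair:
    governed_process: str          # the process being governed
    governance_phases: list        # key phases the governance goes through
    desired_outcomes: list         # business outcomes expected from governance
    metrics: dict = field(default_factory=dict)    # metric name -> target
    key_roles: list = field(default_factory=list)  # who participates

# Example instance for one of the lifecycle milestones mentioned above.
subscribe = ProcessPair(
    governed_process="Subscribe to a cloud service",
    governance_phases=["plan", "review", "approve", "monitor"],
    desired_outcomes=["right business decision", "no shadow subscriptions"],
    metrics={"time_to_approval_days": 10, "policy_exceptions": 0},
    key_roles=["enterprise architect", "service owner"],
)
```

Filling in one such record per process pair (planning, architecture, reuse, subscribe, unsubscribe, retire) would give the kind of outline the template describes.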

Real-life solution

The Cloud Governance Framework is actually detailing each one. Where we are right now is looking at a real-life solution. The hypothetical could be an actual business scenario, but the idea is to help the reader digest the concepts outlined in the context of a scenario where such governance is exercised. That’s where we are on the Cloud Governance Project.

Let me take the opportunity to invite everyone to be part of the project to continue it by subscribing to the right mailing list for cloud governance within The Open Group.

Gardner: Just for the benefit of our readers and listeners who might not be that familiar with The Open Group, perhaps you could give us a very quick overview -- its mission, its charter, what we could expect at the Baltimore conference, and why people should get involved, either directly by attending, or following it on social media or the other avenues that The Open Group provides on its website?

Harding: The Open Group is a vendor-neutral consortium whose vision is Boundaryless Information Flow. That is to say the idea that information should be available to people within an enterprise, or indeed within an ecosystem of enterprises, as and when needed, not locked away into silos.

We hold main conferences, quarterly conferences, four times a year and also regional conferences in various parts of the world in between those, and we discuss a variety of topics.

In fact, the main topics for the conference that we will be holding in July in Baltimore are enterprise architecture and risk and security. Architecture and security are two of the key things for which The Open Group is known. Enterprise architecture, particularly with its TOGAF Framework, is perhaps what The Open Group is best known for.

We've been active in a number of other areas, and risk and security is one. We also have started a new vertical activity on healthcare, and there will be a track on that at the Baltimore conference.

There will be tracks on other topics too, including four sessions on Open Platform 3.0. Open Platform 3.0 is The Open Group initiative to address how enterprises can gain value from new technologies, including cloud computing, social computing, mobile computing, big data analysis, and the Internet of Things.

We'll have a number of presentations related to that. These will include, in fact, a perspective on cloud governance, although that will not necessarily reflect what is happening in the Cloud Governance Project. Until an Open Group standard is published, there is no official Open Group position on the topic, and members will present their views at conferences. So we're including a presentation on that.

Lifecycle governance

There is also a presentation on another interesting governance topic, which is on Information Lifecycle Governance. We have a panel session on the business context for Open Platform 3.0 and a number of other presentations on particular topics, for example, relating to the new technologies that Open Platform 3.0 will help enterprises to use.

There's always a lot going on at Open Group conferences, and that’s a brief flavor of what will happen at this one.

Gardner: Thank you. And I'd just add that there is more available at The Open Group website, opengroup.org.

Going back to one thing you mentioned about a standard and publishing that standard, is there a roadmap that we could look to in order to anticipate the next steps or milestones in the Cloud Governance Project? When would such a standard emerge, and when might we expect it?

Nadhan: As I said earlier, the next step is to identify the business scenario and apply it. I'm expecting, with the right level of participation, that it will take another quarter, after which it would go through the internal review with The Open Group and the company reviews for the publication of the standard. Assuming we have that in another quarter, Chris, could you please weigh in on what it usually takes, on average, for those reviews before it gets published?

Harding: You could add on another quarter. It shouldn't actually take that long, but we do have a thorough review process. All members of The Open Group are invited to participate. The document is posted for comment for, I would think, four weeks, after which we review the comments and decide what action actually needs to be taken.

Certainly, it could take only two months to complete the overall publication of the standard from the draft being completed, but it’s safer to say about a quarter.

Gardner: So a really important working document could be available in the second half of 2015. Let's now go back to why a cloud governance document and approach is important when we consider the implications of Bimodal or multimodal IT.

One of the things that Gartner says is that Bimodal IT projects require new project management styles. They didn't say project management products. They didn't say downloads or services from a cloud provider. We're talking about styles.

So it seems to me that, in order to prevent the good aspects of Bimodal IT from being overridden by the negative impacts of chaos and the lack of coordination, we're talking not about a product or a download, but about something that a working group and a standards approach like the Cloud Governance Project can accommodate.

David, why is it that you can't buy this in a box or download it as a product? What do we need to look at in terms of governance across Bimodal IT, and why is a style the appropriate frame? Maybe IT people need to think differently, rather than trying to accomplish this through technology alone?

First question

Janson: When I think of anything like a tool or a piece of software, the first question I tend to have is what is that helping me do, because the tool itself generally is not the be-all and end-all of this. What process is this going to help me carry out?

So, before I would think about tools, I want to step back and think about what are the changes to project-related processes that new approaches require. Then secondly, think about how can tools help me speed up, automate, or make those a little bit more reliable?

It’s an easy thing to think about a tool that may have some process-related aspects embedded in it as sort of some kind of a magic wand that's going to automatically make everything work well, but it’s the processes that the tool could enable that are really the important decision. Then, the tools simply help to carry that out more effectively, more reliably, and more consistently.

We've always seen an evolution in the processes we use in developing solutions, as well as in the tools. Changes in technology require tools to adapt. And as the processes we use get more agile, and we want to be more incremental and see rapid turnarounds in how we're developing things, tools need to evolve with that.

But I'd really start out from a governance standpoint: if we're going to make a change, how do we know that it's really an appropriate one? How do we differentiate this change from just reinventing the wheel? Is this an innovation that really makes a difference, and not just change for the sake of change?

Governance helps people challenge their thinking and make sure that it’s actually a worthwhile step to take to make those adaptations in project-related processes.

Once you've settled on some decisions about evolving those processes, then we'll start looking for tools that help you automate, accelerate, and make consistent and more reliable what those processes are.

I tend to start with the process and think of the technology second, rather than the other way around. That's where governance can help, reminding people of the principles we want to think about. Are you putting the cart before the horse? It helps people challenge their thinking a little bit to be sure they're really going in the right direction.

Gardner: Of course, a lot of what you just mentioned pertains to enterprise architecture generally as well.

Nadhan, when we think about Bimodal or multimodal IT, this to me is going to be very variable from company to company, given their legacy, given their existing style, the rate of adoption of cloud or other software as a service (SaaS), agile, or DevOps types of methods. So this isn’t something that’s going to be a cookie-cutter. It really needs to be looked at company by company and timeline by timeline.

Is this a vehicle for professional services, for management consulting, more than for IT and product? What is the relationship between cloud governance, Bimodal IT, and professional services?

Delineating systems

Nadhan: It’s a great question Dana. Let me characterize Bimodal IT slightly differently, before answering the question. Another way to look at Bimodal IT, where we are today, is delineating systems of record and systems of engagement.

In traditional IT, typically, we're looking at the systems of record. Systems of engagement, with social media and so on, are about live interaction. Those continuously evolving, growing-by-the-second systems of engagement drive the need for big data, security, and certainly the cloud.

The coexistence of both of these paradigms requires making the right move to the cloud for the right reason. So even among the systems of record, some, if not most, do need to be transformed to the cloud, but that doesn't mean all systems of engagement eventually get transformed to the cloud.

There are good reasons why you may actually want to leave certain systems of engagement the way they are. The art really is in combining the historical data that the systems of record have with the continual influx of data that we get through the live channels of social media, and then, using the right level of predictive analytics to get information.

I said a lot there just to characterize Bimodal IT slightly differently, making the point that what is really at play, Dana, is a new style of thinking. It's a new style of addressing problems that have been around for a while.

But a new way to address the same problems, with new solutions and new solution models for the business problems at hand, requires an external perspective. It requires service providers and consulting professionals who have worked with multiple customers, perhaps other customers in the same industry and in other industries, with a healthy dose of innovation.

That's where there is a new opportunity for professional services to work with the CxOs, the enterprise architects, and the CIOs to exercise the right business decision with the right level of governance.

Because of the challenges with the coexistence of both systems of record and systems of engagement and harvesting the right information to make the right business decision, there is a significant opportunity for consulting services to be provided to enterprises today.

Drilling down

Gardner: Before we close off I wanted to just drill down on one thing, Nadhan, that you brought up, which is that ability to measure and know and then analyze and compare.

One of the things that we've seen with IT developing over the past several years as well is that the big data capabilities have been applied to all the information coming out of IT systems so that we can develop a steady state and understand those systems of record, how they are performing, and compare and contrast in ways that we couldn’t have before.

So on our last topic for today, David Janson, how important is it for that measuring capability in a governance context, and for organizations that want to pursue Bimodal IT, but keep it governed and keep it from spinning out of control? What should they be thinking about putting in place, the proper big data and analytics and measurement and visibility apparatus and capabilities?

Janson: That's a really good question. One aspect of this is that, when I talk with people about the ideas around governance, it's not unusual that the first idea they have is the compliance or policing aspect that governance can play. That sounds like interference, sand in the gears, but it really should be the other way around.

A governance framework should actually make it very clear how people should be doing things, what’s expected as the result at the end, and how things are checked and measured across time at early stages and later stages, so that people are very clear about how things are carried out and what they are expected to do. So, if someone does use a governance-compliance process to see if things are working right, there is no surprise, there is no slowdown. They actually know how to quickly move through that.

Good governance has communicated that well enough, so that people should actually move faster rather than slower. In other words, there should be no surprises.

Measuring things is very important, because if you haven't established the objectives you're after and some metrics to help determine whether you're meeting them, then governance is kind of an empty suit, so to speak. You express some ideas that you want to achieve, but you have no way of answering the question of how we know whether this is doing what we want it to do. Metrics are very important around this.

We capture metrics within processes. Then, for the end result, is it actually producing the effects people want? That’s pretty important.

One of the things that we have built into the Cloud Governance Framework is some idea about what are the outcomes and the metrics that each of these process pairs should have in mind. It helps to answer the question, how do we know? How do we know if something is doing what we expect? That’s very, very essential.
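That "how do we know?" check can be sketched in a few lines. The function and metric names below are hypothetical, assumed for illustration only: it compares measured values against the targets a process pair declares, so a governance review can see at a glance which outcomes were met.

```python
# Hypothetical sketch: did a governed process meet its declared targets?
# For simplicity, we assume "lower or equal to target" means "met"
# for every metric (e.g. approval time in days, count of policy exceptions).
def meets_targets(actuals: dict, targets: dict) -> dict:
    """Return, per metric, whether the measured value met its target."""
    return {
        name: actuals.get(name, float("inf")) <= target
        for name, target in targets.items()
    }

result = meets_targets(
    actuals={"time_to_approval_days": 7, "policy_exceptions": 2},
    targets={"time_to_approval_days": 10, "policy_exceptions": 0},
)
# Approval time met its target; policy exceptions did not.
assert result == {"time_to_approval_days": True, "policy_exceptions": False}
```

A real governance framework would of course distinguish "higher is better" metrics from "lower is better" ones; the point is only that declared metrics make the question answerable at all.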

Gardner: I am afraid we'll have to leave it there. We've been examining the role of cloud governance and enterprise architecture and how they work together in the era of increasingly fragmented IT. And we've seen how The Open Group Cloud Governance Initiatives and Working Groups can help allow for the benefits of Bimodal IT, but without necessarily IT fragmentation leading to a fractured or broken business process around technology and innovation.
This special BriefingsDirect thought leadership panel discussion comes to you in conjunction with The Open Group’s upcoming conference on July 20, 2015 in Baltimore. And it’s not too late to register on The Open Group’s website or to follow the proceedings online and via social media such as Twitter and LinkedIn.

Listen to the podcast. Find it on iTunes. Get the mobile app for iOS or Android. Read a full transcript or download a copy. Sponsor: The Open Group.
