Monday, June 27, 2016

CA streamlines cloud and hybrid IT infrastructure adoption through better holistic services monitoring

New capabilities in CA Unified Infrastructure Management (CA UIM) are designed to help enterprises adopt cloud more rapidly and better manage hybrid IT infrastructure heterogeneity across several major cloud environments.

Enterprises and SMBs are now clamoring for hybrid cloud benefits, thanks to the ability to focus on apps and to gain speed for new business initiatives, says Stephen Orban, Global Head of Enterprise Strategy at AWS.

"Going cloud-first allows organizations to focus on the apps that make the business run," says Orban. "Using hybrid computing, the burden of proof soon shifts to 'Why should we use cloud for more of IT?'"

As has been the case with legacy IT for decades, the better the overall management, the better the adoption success, productivity, and return on investment (ROI) for IT systems and the apps they support -- no matter the location or IT architecture. This same truth is now being applied to solve the cloud heterogeneity problem, just as it was to the legacy platform heterogeneity problem. The total-visibility solution may be even more powerful in this new architectural era.

Cloud-first is business-first

The stakes are now even higher. As you migrate to the cloud, one weak link in a complex hybrid cloud deployment can ruin the end-user experience, says Ali Siddiqui, General Manager, Agile Operations at CA. "By providing insight across the performance of all of an organization's IT resources in a single and unified view, CA UIM gives users the power to choose the right mix of modern cloud enablement technologies that can best support new endeavors that contribute to business growth."

CA UIM reduces complexity of hybrid infrastructures by providing visibility across on-premises, private-, and public-cloud infrastructures through a single console UI. Such insight enables users to adopt new technologies and expand monitoring configurations across existing and new IT resource elements. CA expects the solution to reduce the need for multiple monitoring tools. [Disclosure: CA is a sponsor of BriefingsDirect.]

"Keep your life simple from a monitoring and management perspective, regardless of your hybrid cloud [topology]," said Michael Morris, Senior Director of Product Management at CA Technologies, in a recent webcast.

To grease the skids to hybrid cloud adoption, CA UIM now supports advanced performance monitoring of Docker containers, Pure Storage arrays, Nutanix hyperconverged systems, and OpenStack cloud environments, along with additional capabilities for Amazon Web Services (AWS) cloud infrastructures, CA Technologies announced last week.

CA is putting its IT systems management muscle behind the problem of migrating from data centers to the cloud, and then better supporting hybrid models, says Siddiqui. The "single pane of glass" monitoring approach that CA is delivering allows measurement and enforcement of service-level agreements (SLAs) before and after cloud migration. This way, continuity of service and IT value-add can be preserved and measured, he added.

Managing a cloud ecosystem

"Using advanced monitoring and management can significantly cut costs of moving to cloud," says Siddiqui.

Indeed, CA is working with several prominent cloud and IT infrastructure partners to make the growing diversity of cloud implementations a positive, not a drawback. For example, "Virtualization tools are too constrained to specific hypervisors, so you need total cloud visibility," says Steve Kaplan, Vice President of Client Strategy at Nutanix, of CA's new offerings.

And it's not all performance monitoring. Enhancements to CA UIM's coverage of AWS cloud infrastructures include billing metrics and support for additional services that provide deeper actionable insights on cloud brokering.

CA UIM now also provides:

  • Service-centric and unified analytics capabilities that rapidly identify the root cause of performance issues, resulting in a faster time to repair and a better end-user experience

  • Out-of-the-box support for more than 140 on-premises and cloud technologies

  • Templates for easier configuration of monitors that can be applied to groups of disparate systems

What's more, to ensure the reliability of networks such as SDN/NFV that connect and scale hybrid environments, CA has also delivered CA Virtual Network Assurance, which provides a common view of dynamic changes across virtual and physical network stacks.


Friday, June 24, 2016

Here's how two part-time DBAs maintain mobile app ad platform Tapjoy’s massive data needs

The next BriefingsDirect Voice of the Customer big data case study discussion examines how mobile app advertising platform Tapjoy handles fast and massive data -- some two dozen terabytes per day -- with just two part-time database administrators (DBAs).

Examine how Tapjoy’s data-driven business of serving 500 million global mobile users -- or more than 1.5 million ad engagements per day, a data volume of 120 terabytes -- runs with extreme efficiency.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy.

To learn more about how high scale and complexity meet minimal labor for building user and advertiser loyalty, we're joined by David Abercrombie, Principal Data Analytics Engineer at Tapjoy in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Mobile advertising has really been a major growth area, perhaps more than any other type of advertising. We hear a lot about advertising waning, but not mobile app advertising. How does Tapjoy and its platform help contribute to the success of what we're seeing in the mobile app ad space?

Abercrombie: The key to Tapjoy’s success is engaging the users and rewarding them for engaging with an ad. Our advertising model is you engage with an ad and then you get typically some sort of reward: A virtual currency in the game you're playing or some sort of discount.

We actually have the kind of ads that lead users to seek us out to engage with the ads and get their rewards.

Gardner: So this is quite a bit different than a static presented ad. This is something that has a two-way street, maybe multiple directions of information coming and going. Why the analysis? Why is that so important? And why the speed of analysis?

Abercrombie: We have basically three types of customers. We have the app publishers who want to monetize and get money from displaying ads. We have the advertisers who need to get their message out and pay for that. Then, of course, we have the users who want to engage with the ads and get their rewards.

The key to Tapjoy’s success is being able to balance the needs of all of these disparate users. We can’t charge the advertisers too much for their ads, even though the monetizers would like that. It’s a delicate balancing act, and that can only be done through big-data analysis, careful optimization, and careful monitoring of the ad network assets and operation.

Gardner: Before we learn more about the analytics, tell us a bit more about what role Tapjoy plays specifically in what looks like an ecosystem play for placing, evaluating, and monetizing app ads? What is it specifically that you do in this bigger app ad function?

Ad engagement model

Abercrombie: Specifically what Tapjoy does is enable this rewarded ad engagement model, so that the advertisers know that people are going to be paying attention to their ads and so that the publishers know that the ads we're displaying are compatible with their app and are not going to produce a jarring experience. We want everybody to be happy -- the publishers, the advertisers, and the users. That’s a delicate compromise that’s Tapjoy’s strength.

Gardner: And when you get an end user to do something, to take an action, that’s very powerful, not only because you're getting them to do what you wanted, but you can evaluate what they did under what circumstances and so forth. Tell us about the model of the end user specifically. What is it about engaging with them that leads to the data -- which we will get to in a moment?
Abercrombie: In our model of the user, we talk about long-term value. So even though it may be a new user who has just started with us, maybe their first engagement, we like to look at them in terms of their long-term value, both to the publishers and the advertiser.

We don’t want people who are just engaging with the ad and going away, getting what they want and not really caring about it. Rather, we want good users who will continue their engagement and continue this process. Once again, that takes some fairly sophisticated machine-learning algorithms and very powerful inferences to be able to assess the long-term value.

As an example, we have our publishers who are also advertisers. They're advertising their app within our platform and for them the conversion event, what they are looking for, is a download. What we're trying to do is to offer them users who will not only download the game once to get that initial payoff reward, but will value the download and continue to use it again and again.

So all of our models are designed with that end in mind -- to look at the long-term value of the user, not just the immediate conversion at this instant in time.
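As a toy illustration of why long-term value dwarfs a single conversion, a simple geometric-retention model shows the gap between a user's first engagement and their value over time. This is an invented sketch, not Tapjoy's actual machine-learned model, and the numbers are made up:

```python
def expected_ltv(revenue_per_period, retention_rate, periods):
    """Toy long-term value estimate: per-period revenue discounted by the
    chance the user is still engaged in each period. Illustrative only --
    not Tapjoy's actual model, which relies on machine-learned inferences."""
    return sum(revenue_per_period * retention_rate ** t for t in range(periods))

# A user yielding $0.50 per week with 80% week-over-week retention is worth
# roughly five times what the first engagement alone suggests.
first_week = expected_ltv(0.50, 0.8, 1)   # 0.5
one_year = expected_ltv(0.50, 0.8, 52)    # approaches 2.5
```

The point of even a crude model like this is that ranking users by first-conversion revenue alone systematically undervalues the "good users who will continue their engagement" that Abercrombie describes.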

Gardner: So perhaps it’s a bit of a misnomer to talk about ads in apps. We're really talking about a value-add function in the app itself.

Abercrombie: Right. The people who are advertising don’t want people to just see their ads. They want people to follow up with whatever it is they're advertising. If it’s another app, they want good users for whom that app is relevant and useful.

That’s really the way we look at it. That’s the way to enhance the overall experience in the long-term. We're not just in it for the short-term. We're looking at developing a good solid user base, a good set of users who engage thoroughly.

Gardner: And as I said in my set-up, there's nothing hotter in all of advertising than mobile apps and how to do this right. It’s early innings, but clearly the stakes are very high.

A tough business

Abercrombie: And it’s a tough business. People are saturated. Many people don’t want ads. Some of the business models are difficult to master.

For instance, there may be a sequence of multiple ad units. There may be a video followed by another ad to download something. It becomes a very tricky thing to balance the financing here. If it was just a simple pass-through and we take a cut, that would be trivial, but that doesn't work in today's market. There are more sophisticated approaches, which do involve business risk.

If we reward the user, based on the fact that they're watching the video, but then they don't download the app, then we don't get money. So we have to look very carefully at the complexity of the whole interaction to make it as smooth and rewarding as possible, so that the thing works. That's difficult to do.

Gardner: So we're in a dynamic, fast-growing, fairly fresh, new industry. Knowing what's going to happen before it happens is always fun in almost any industry, but in this case, it seems with those high stakes and to make that monetization happen, it’s particularly important.
Tell me now about gathering such large amounts of data, being able to work with it, and then allowing analysis to happen very swiftly. How do you go about making that possible?

Abercrombie: Our data architecture is relatively standard for this type of clickstream operation. There is some data that can be put directly into a transactional database in real time, but typically, that's only when you get to the very bottom of the funnel, the conversion stuff. But all that clickstream stuff gets written as JSON-formatted log files, gets swept up by a queuing system, and then put into our data systems.

Our legacy system involved a homegrown queuing system, dumping data into HDFS. From there, we would extract and load CSVs into Vertica. As with so many other organizations, we're moving to more real-time operations. Our queuing system has evolved from a couple of different homegrown applications, and now we're implementing Apache Kafka.

We use Spark as part of our infrastructure, as sort of a hub, if you will, where data is farmed out to other systems, including a real-time, in-memory SQL database, which is fairly new to us this year. Then, we're still putting data in HDFS, and that's where the machine learning occurs. From there, we're bringing it into Vertica.
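The legacy flow described above -- JSON-formatted log files flattened into one-row-per-event records and loaded into Vertica as CSVs -- can be sketched roughly like this. The event field names here are hypothetical, not Tapjoy's actual schema:

```python
import csv
import io
import json

# Hypothetical event fields; Tapjoy's real schema is not public.
COLUMNS = ["ts_ms", "user_id", "ad_id", "event_type"]

def flatten_events(log_lines):
    """Parse JSON-formatted log lines into flat rows, one per event,
    keyed by a millisecond timestamp and the IDs of the entities involved."""
    rows = []
    for line in log_lines:
        event = json.loads(line)
        rows.append([event[col] for col in COLUMNS])
    return rows

def to_csv(rows):
    """Render rows as CSV, the format the legacy pipeline bulk-loaded into Vertica."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(COLUMNS)
    writer.writerows(rows)
    return buf.getvalue()

logs = [
    '{"ts_ms": 1466700000123, "user_id": "u1", "ad_id": "a9", "event_type": "view"}',
    '{"ts_ms": 1466700000456, "user_id": "u1", "ad_id": "a9", "event_type": "click"}',
]
print(to_csv(flatten_events(logs)))
```

In the newer architecture described above, a queuing system such as Kafka replaces the log sweep, but the flattening into one-row-per-event records for the operational data store is conceptually the same.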

In Vertica -- and our Vertica cluster has two main purposes -- there is the operational data store, which has the raw, flat tables that are one row for every event, with the millisecond timestamps and the IDs of all the different entities involved.

From that operational data store, we do a pure SQL ETL extract into kind of an old-school star schema within Vertica, the same database.

Pure SQL

So our business intelligence (BI) ETL is pure SQL and goes into a full-fledged snowflake schema, moderately denormalized with all the old-school bells and whistles, the type 1, type 2, slowly changing dimensions. With Vertica, we're able to denormalize that data warehouse to a large degree.
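The type 2 slowly changing dimension pattern mentioned above can be illustrated with a minimal sketch. The row layout and field names are invented for this example, not Tapjoy's warehouse design: when a tracked attribute changes, the current dimension row is expired and a new version is appended, preserving history.

```python
from datetime import date

def scd2_upsert(dim_rows, natural_key, new_attrs, today):
    """Type 2 slowly changing dimension update: close out the current row
    and append a new version when a tracked attribute changes."""
    for row in dim_rows:
        if row["key"] == natural_key and row["valid_to"] is None:
            if all(row.get(k) == v for k, v in new_attrs.items()):
                return dim_rows  # nothing changed; keep the current version
            row["valid_to"] = today  # expire the old version
            break
    dim_rows.append({"key": natural_key, **new_attrs,
                     "valid_from": today, "valid_to": None})
    return dim_rows

dim = [{"key": "app42", "category": "puzzle",
        "valid_from": date(2016, 1, 1), "valid_to": None}]
scd2_upsert(dim, "app42", {"category": "arcade"}, date(2016, 6, 24))
# dim now holds the expired 'puzzle' row plus a current 'arcade' row
```

In the warehouse itself this logic is expressed as pure SQL in the ETL, as Abercrombie notes; the sketch above just shows the bookkeeping a type 2 dimension performs.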

Sitting on top of that we have a BI tool. We use MicroStrategy, for which we have defined our various metrics and our various attributes, and it’s very adept at knowing exactly which fact table and which dimensions to join.

So we have sort of a hybrid architecture. I'd say that we have all the way from real-time, in-memory SQL, Hadoop and all of its machine learning and our algorithmic pipelines, and then we have kind of the old-school data warehouse with the operational data store and the star schema.

Gardner: So a complex, innovative, custom architectural approach to this and yet I'm astonished that you are running and using Vertica in multiple ways with two part-time DBAs. How is it possible that you have minimal labor, given this topology that you just described?

Abercrombie: Well, we found Vertica very easy to manage. It has been very well-behaved, very stable.

For instance, we don’t even really use the Management Console, because there is not enough to manage. Our cluster is about 120 terabytes. It’s only on eight nodes and it’s pretty much trouble free.

One of the part-time DBAs deals with operating-system-level work -- patches, cluster recovery, those sorts of issues. The other part-time DBA is me. I deal more with data structure design, SQL tuning, and Vertica training for our staff.

In terms of ad-hoc users of our Vertica database, we have well over 100 people who have the ability to run any query they want at any time into the Vertica database.

When we first started out, we tried running Vertica in Amazon EC2. Mind you, this was four or five years ago, and Amazon EC2 was not where it is today. It failed. It was very difficult to manage, and there were perplexing problems that we couldn't solve. So we moved our Vertica and essentially all of our big-data systems out of the cloud onto dedicated hardware, where they are much easier to manage and where it is much easier to bring the proper resources to bear.

Then, at one time in our history, when we built a dedicated hardware cluster for Vertica, we failed to heed properly the hardware planning guide and did not provision enough disk I/O bandwidth. In those situations, Vertica is unstable, and we had a lot of problems.

But once we got the proper disk I/O, it has been smooth sailing. I can’t even remember the last time we even had a node drop out. It has been rock solid. I was able to go on a vacation for three weeks recently and know that there would be no problem, and there was no problem.

Gardner: The ultimate key performance indicator (KPI), "I was able to go on vacation."

Fairly resilient

Abercrombie: Exactly. And with the proper hardware design, HPE Vertica is fairly resilient against out-of-control queries. There was a time when half my time was spent monitoring for slow queries, but again, with the proper hardware, it's smooth sailing. I don’t even bother with that stuff anymore.

Our MicroStrategy BI tool writes very good SQL. Part of the key to our success with this BI portion is designing the Vertica schema and the MicroStrategy metadata layer to take advantage of each other’s strengths and avoid each other’s weaknesses. So that really was key to the stable, exceptional performance we get. I basically get no complaints of slow queries from my BI tool. No problem.

Gardner: The right kind of problem to have.

Abercrombie: Yes.

Gardner: Okay, now that we have heard quite a bit about how you are doing this, I'd like to learn, if I could, about some of the paybacks when you do this properly, when it is running well, in terms of SQL queries, ETL load times reduction, the ability for you to monetize and help your customers create better advertising programs that are acceptable and popular. What are the paybacks technically and then in business terms?

Abercrombie: In order to get those paybacks, a key element was confidence in the data, the results that we were shipping out. The only way to get that confidence was by having highly accurate data and extensive quality control (QC) in the ETL.

What that also means is that as a product is under development and when it’s not ready yet, the instrumentation isn’t ready, that stuff doesn’t make it into our BI tool. You can only get that stuff from ad hoc.

So the benefit has been a very clear understanding of the day-to-day operations of our ad network, both for our internal monitoring to know when things are behaving properly, when the instrumentation is working as expected, and when the queues are running, but also for our customers.

Because of the flexibility that we can do from a traditional BI system with 500 metrics, over a couple of dozen dimensions, our customers, the publishers and the advertisers, get incredible detail, customized exactly the way they need for ingestion into their systems or to help them understand how Tapjoy is serving them. Again, that comes from confidence in the data.

Gardner: When you have more data and better analytics, you can create better products. Where might we look next to where you take this? I don’t expect you to pre-announce anything, but where can you now take these capabilities as a business and maybe even expand into other activities on a mobile endpoint?

Flexibility in algorithms

Abercrombie: As we expand our business and move into new areas, what we really need is flexibility in our algorithms and the way we deal with some of our real-time decision making.

So one area that’s new to us this year is the in-memory SQL database, like MemSQL. Some of our old real-time ad optimization was based on pre-calculating data and serving it up through HBase key-value lookups, but now we can do real-time aggregation queries using SQL that are easy to understand, easy to modify, very expressive, and very transparent. That gives us more flexibility in terms of fine-tuning our real-time decision-making algorithms, which is absolutely necessary.
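The shift from precomputed key-value counters to real-time SQL aggregation can be sketched with an in-memory database. Here, Python's sqlite3 stands in for an engine like MemSQL, and the table and column names are illustrative, not Tapjoy's:

```python
import sqlite3

# sqlite3 stands in for an in-memory SQL engine such as MemSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE engagements (ad_id TEXT, revenue_cents INTEGER)")
conn.executemany("INSERT INTO engagements VALUES (?, ?)",
                 [("a1", 10), ("a1", 30), ("a2", 25)])

# Real-time aggregation expressed as plain SQL -- easy to read and modify,
# unlike a precomputed key-value store of rolled-up counters that must be
# rebuilt whenever the optimization logic changes.
rows = conn.execute(
    "SELECT ad_id, COUNT(*), SUM(revenue_cents) "
    "FROM engagements GROUP BY ad_id ORDER BY ad_id").fetchall()
print(rows)  # [('a1', 2, 40), ('a2', 1, 25)]
```

The design point is the one Abercrombie makes: a SQL aggregation is transparent and quick to tune, whereas a key-value counter scheme hard-codes the aggregation at write time.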

As an example, we acquired a company in Korea called 5Rocks that does app tech and that tracks the users within the app, like what level they're on, or what activities they're doing and what they enjoy, with an eye towards in-app purchase optimization.
And so we're blending the in-app purchase optimization along with traditional ad network optimization, and the two have different rules and different constraints. So we really need the flexibility and expressiveness of our real-time decision making systems.

Gardner: One last question. You mentioned machine learning earlier. Do you see that becoming more prominent in what you do and how you're working with data scientists, and how might that expand in terms of where you employ it?

Abercrombie: Tapjoy started with machine learning. Our data scientists do machine learning. Our predictive algorithm team is about six times larger than our traditional Vertica BI team. Mostly what we do at Tapjoy is predictive analytics and various machine-learning things, so we wouldn't be alive without it. And we've expanded. We're not shifting in one direction or another. It's apples and oranges, and there's a place for both.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.


Tuesday, June 21, 2016

Expert panel explores the new reality for cloud security and trusted mobile apps delivery

The next BriefingsDirect thought leadership panel discussion focuses on the heightened role of security in the age of global cloud and mobile delivery of apps and data.

As enterprises and small to medium-sized businesses (SMBs) alike weigh the balance of apps and convenience with security -- a new dynamic is emerging. Security concerns increasingly dwarf other architecture considerations.

Yet advances in thin clients, desktop virtualization (VDI), cloud management services, and mobile delivery networks are allowing both increased security and edge applications performance gains.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about the new reality for end-to-end security for apps and data, please welcome our panel: Stan Black, Chief Security Officer at Citrix; Chad Wilson, Director of Information Security at Children's National Health System in Washington, DC; Whit Baker, IT Director at The Watershed in Delray Beach, Florida; Craig Patterson, CEO of Patterson and Associates in San Antonio, Texas, and Dan Kaminsky, Chief Scientist at White Ops in San Francisco. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Stan, a first major use case of VDI was the secure, stateless client. All the data and apps remain on the server, locked down, controlled. But now that data is increasingly mobile, and we're all mobile. So, how can we take security on the road, so to speak? How do we move past the safe state of VDI to full mobile, but not lose our security posture?

Black: Probably the largest challenge we all have is maintaining consistent connectivity. We're now able to keep data locally or make it highly extensible, whether it’s delivered through the cloud or a virtualized application. So, it’s a mix and a blend. But from a security lens, each one of those of service capabilities has a certain nuance that we need to be cognizant of while we're trying to protect data at rest, in use, and in motion.

Gardner: I've heard you speak about bring your own device (BYOD), and for you, BYOD devices have ended up being more secure than company-provided devices. Why do you think that is?

Caring for assets

Black: Well, if you own the car, you tend to take care of it. When you have a BYOD asset, you tend to take care of it, because ultimately, you're going to own that, whether it’s purchased for you with a retainer or what have you.

Often, corporate-issued assets are like a car rental. You might not bring it back the same way you took it. So it has really changed quite a bit. But the containerization gives us the ability to provide as much, if not more, control in that BYOD asset.

Gardner: This also I think points out the importance of behaviors and end-user culture and thinking about security, acting in certain ways. Let's go to you, Craig. How do we get that benefit of behavior and culture as we think more about mobility and security?

Patterson: When we look at mobile, we've had people who would have a mobile device out in the field. They're accustomed to being able to take an email, and that email may have, in our situation, private information -- Social Security numbers, certain client IDs -- on it, things that we really don't want out in the public space. The culture has been, take a picture of the screen and text it to someone else. Now, it’s in another space, and that private information is out there.

You go from working in a home environment, where you text everything back and forth, to having secure information that needs to be containerized, shrink-wrapped, and not go outside a certain control parameter for security. Now, you're having a culture fight [over] utilization. People are accustomed to using their devices in one way and now, they have to learn a different way of using devices with a secure environment and wrapping. That’s what we're running into.

Gardner: We've also heard at the recent Citrix Synergy 2016 in Las Vegas that IT should be able to increasingly say "Yes," that it's an important part of getting to better business productivity.
Dan, how do we get people to behave well in secure terms, but not say "No"? Is there a carrot approach to this?

Kaminsky: Absolutely. At the end of the day, our users are going to go ahead and do stuff they need to get their jobs done. I always laugh when people say, "I can’t believe that person opened a PDF from the Internet." They work in HR. Their job is to open resumes. If they don’t open resumes, they're going to lose their job and be replaced by someone else.

The thing I see a lot is that these software-as-a-service (SaaS) providers are being pressed into service to provide the things that people need. It’s kind of like a rogue IT or an outsourced IT, with or without permission.

The unusual realization that I had is that all these random partners we're getting have random policies and are storing data. We hear a lot of stuff about the Internet of Things (IoT), but I don't know any toasters that have my Social Security number. I know lots of these DocuSign, HelloSign systems that are storing really sensitive documents.

Maybe the solution, if we want people to implement our security technologies, or at least our security policies, is to pay them. Tell them, "If you actually have attracted our users, follow these policies, and we'll give you this amount of money per day, per user, automatically through our authentication layer." It sounds ridiculous, but you have to look at the status quo. The status quo is on fire, and maybe we can pay people to put out their fires.

Quid pro quo

Gardner: Or perhaps there are other quid pro quos that don't involve money? Chad, you work at a large hospital organization and you mentioned that you're 100 percent digital. How did you encourage people with the carrot to adhere to the right policies in a challenging environment like a hospital?

Wilson: We threw out the carrot-and-stick philosophy and just built a new highway. If you're driving on a two-lane highway, and it's always congested, and you want somebody to get there faster, then build a new highway that can handle the capacity and the security. Build the right on- and off-ramps to it and then cut over.

We've had an electronic medical record (EMR) implementation for a while. We just finished up rolling out to all of our ambulatory spaces for electronic medical record. It's all delivered through virtualization on that highway that we built. So, they have access to it wherever they need it.

Gardner: It almost sounds like you're looking at the beginning bowler’s approach, where you put rails up on the gutters, so you can't go too far afield, whether you wish to or not. Whit Baker, tell us a little bit about The Watershed and how you view security behavior. Is it rails on the gutters, carrots or sticks, how does it go?

Baker: I would say rails on the gutters for us. We've completely converted everything to a VDI environment. Whether they're connecting with a laptop, with broadband, or their own home computer or mobile device, that session is completely bifurcated from their own operating system.

So, we're not really worried. Your desktop machine can be completely loaded with malware and whatnot, but when you open that session, you're inside of our system. That's basically how we handle the security. It almost doesn't require the users to be conscious of security.

At the same time, we're still afraid of attachments and things like that. So, we do educational type things. When we see some phishing emails come in, I'll send out scam alerts and things like that to our employees, and they're starting to become self-aware. They are starting to ask, "Should I even open this?" -- those sort of things.

So, it's a little bit of containerization, giving them some rails that they can bounce off of, and education.

Gardner: Stan, thinking about other ways that we can encourage good security posture in the mobility era, authentication certainly comes to mind, multi-factor authentication (MFA). How does that play into this keeping people safe?

Behavior elements

Black: It’s a mix of how we're going to deliver the services, but it's also a mix of the behavior elements and the fact that now technology has progressed so much that you can provide a user an entire experience that they actually enjoy. It gives them what they need, inside of a secure session, inside of a secure socket layer, with the inability to go outside of those bowling lanes, if they're not authorized to do so.

Additionally, authentication technologies have come a long way from hard tokens that we used to wear. I've seen people with four, five, or six of them, all in one necklace. I think I might have been one of them.

Multi-factor authentication and the user interface draw on pieces of information that aren't tied to the person's privacy, like their Social Security Number; it's their user experience that enables them to connect seamlessly. Often, when you have a help-desk environment, as an example, you put a time-out on their system. They go from one phone call to another phone call, and then they have to log back in.

The interfaces that we have now and the MFA -- the simple authentication, the simplified sign-on -- enable a person, depending upon what their role is, to connect into the environment they need to do their job quickly and easily.

Gardner: You mentioned user experience, and maybe that’s the quid pro quo. You get more user experience benefits if you take more precautions with how you behave using your devices.

Dan, any thoughts on where we go with authentication and being able to say, Yes, and encourage people to do the right thing?
Kaminsky: I cannot emphasize how important usability is in getting security wins. We've had some major ones. We moved people from Telnet to SSH. Telnet was unencrypted and was a disaster. SSH is encrypted. It is actually the thing people use now, because if you jump through a few hoops, you stopped having to type in a password.

You know what VPNs meant? VPNs meant you didn't have to drive into the office on a Sunday. You could be at home and fix the problem, and hours became minutes or seconds. Everything that we do that really works involves making things more useable and enabling people. Security is giving you permission to do this thing that used to be dangerous.

I actually have a lot of hope in the mobility space, because a lot of these mobile environments and operating systems are really quite secure. You hand someone an iPad, and in a year, that iPad is still going to work. There are other systems where you hand someone a device and that device is not doing so well a year from now.

So there are a lot more controls and stability from some of these mobile things that people actually like to use more, and they turn out to also be significantly more secure.

Gardner: Craig, as we're also thinking about ways of keeping people on the straight and narrow path, we're getting more intelligent networks. We're starting to get more data and analytics from those devices and we're able to see what goes on in that network in high detail.

Tell us about the ways in which we can segment and then make zones for certain purposes that may come and go based on policies. Basically, how are intelligent networks helping us provide that usability and security?

Access to data

Patterson: The example that comes to my mind is that in many of the industries, we have partners who come on site for a short period of time. They need access to data. They might be doing inspections for us and they'll be going into a private area, but we don't want them to take certain photos, documents and other information off site after a period of time.

Containerizing data and having zones allows a person to have access while they're on premises, within a certain "electronic wire fence," if you will, or electronic guardrails. Once they go outside of that area, that data is no longer accessible or they've been logged off the system and they no longer have access to those documents.

We had kind of an old-fashioned example where people thought they were more secure, because they didn't know what they were losing. We had people with file cabinets that were locked, and they had the key around their neck. They said, "Why should we go to an electronic documents system where you can see when I viewed a document, when I downloaded it, and where I moved it?" That kind of scared some people.

Then, I walked in with half their file cabinet and I said, "You didn’t even know these were gone, but you felt secure the whole time. Wouldn’t you rather know that it was gone and have been able to institute some security protocols behind it?"

A lot of it goes to usability. We want to make things usable and we have to have access to it, but at the same time, those guardrails include not only where we can access it and at what time, but for how long and for what purposes.

We have mobile devices for which we need to be able to turn the camera functions off in certain parts of our facility. For mobile device management, that's helpful. For BYOD, that becomes a different challenge, and that's when we have to handle giving them a device that we can control, as opposed to BYOD.

Gardner: Stan, another major trend these days is the borderless enterprise. We have supply chains, alliances, ecosystems that provide solutions, an API-first mentality, and that requires us to be able to move outside and allow others to cross over. How does the network-intelligence factor play into making that possible so that we can say, Yes, and get a strong user experience regardless of which company we're actually dealing with?

Black: I agree with the borderless concept. The interesting part of it, though, is with networks knowing where they're connecting to physically. The mobile device has over 20 sensors in it. When you take all of that information and bring it together with whatever APIs are enabled in the applications, you start to have a very interesting set of capabilities that we never had before.

A simple example is, if you're a database administrator and you're administering something inside the European Union (EU), there are very stringent privacy laws that make it so you're not allowed to do that.

We don't have to rely on training the person or make things more difficult for them; we simply disable the capability through geofencing. When one application is talking securely through a socket from a mobile device all the way into the data center, you have pretty darn good control. You can also separate duties: system administration is one function, whereas database administration is a very different thing. One set of roles doesn't see the private data; the other has very clear access to it.
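
A minimal sketch of the kind of policy check Black describes, combining separation of duties with a geofence. The role names, capabilities, and abbreviated country list are invented for illustration, not any product's API:

```python
# Hypothetical policy check: separation of duties plus geofencing.
# Roles, capabilities, and the country list are illustrative only.

EU_COUNTRIES = {"DE", "FR", "IE", "NL"}  # abbreviated example list

ROLE_CAPABILITIES = {
    "sysadmin": {"restart_service", "view_logs"},
    "dba": {"query_private_data", "run_migrations"},
}

def is_allowed(role: str, capability: str, device_country: str) -> bool:
    """Silently disable a capability rather than train users around it."""
    if capability not in ROLE_CAPABILITIES.get(role, set()):
        return False  # separation of duties: sysadmins never see private data
    if capability == "query_private_data" and device_country not in EU_COUNTRIES:
        return False  # geofence: EU-resident data administered only from the EU
    return True

assert is_allowed("dba", "query_private_data", "DE")
assert not is_allowed("dba", "query_private_data", "US")  # geofence kicks in
assert not is_allowed("sysadmin", "query_private_data", "DE")
```

The point is that the user never sees a training document or an error dialog; the capability simply isn't offered outside the fence.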

Getting visibility

Gardner: Chad, you mentioned how visibility is super important for you and your organization. Tell me a bit about moving beyond the user implications. What about the operators? How do you get that visibility and keep it, and how important is that to maintaining your security posture?

Wilson: If you can't see it, you can’t protect it. No matter how much visibility we get into the back end, if the end user doesn't adopt the application or the virtualization that we've put in place or the highway that we've built, then we're not going to see the end-to-end session. They're going to continue to do workarounds.

So, usability is very important to end-user adoption of the new technologies and the new platforms. Systems have to be easy for users to access and to use. On the back end, for the visibility piece, we look at adopting technology strategically to achieve interoperability, rather than bolting on point products here and there.

Strategic innovation and strategic procurement around technology and partnership, like we have with Citrix, allow us to have consistent delivery of the application and the end-user experience, no matter what device users go to and where in the world they access from. On the back side, that helps us, because we can have that end-to-end visibility of where our data is heading, the authentication right up front, as well as all the pieces and parts of the network that come into play to deliver that experience.

So, instead of thinking about things from a device-to-device-to-device perspective, we're thinking about one holistic service-delivery platform, and that's the new highway that provides that visibility.

Gardner: Whit, we've heard a lot about the mentality that you should always assume someone unwanted is in your network. Monitoring and response is one way of limiting that. How does your organization acknowledge that bad things can happen, but that you can limit that, and how important is monitoring and response for you in reducing damage?

Baker: In our case, we have several layers of user experience. Through policy, we only allow certain users to do certain things. We're a healthcare system with various medical personnel: doctors, nurses, and therapists, versus people in our corporate billing area and our call center. All of those different roles are looking only at the data that they need to be accessing, and through policy, it's fairly easy to do.
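
The role-based policy Baker describes reduces to a simple lookup. The roles and data categories below are invented stand-ins, not his organization's actual scheme:

```python
# Hypothetical role-to-data-category policy for a healthcare system.

ROLE_DATA_ACCESS = {
    "physician": {"clinical_records", "lab_results"},
    "nurse": {"clinical_records"},
    "billing": {"billing_records"},
    "call_center": {"appointment_schedule"},
}

def can_view(role: str, category: str) -> bool:
    """A role sees only the data categories it needs; anything else is denied."""
    return category in ROLE_DATA_ACCESS.get(role, set())

assert can_view("physician", "lab_results")
assert not can_view("billing", "clinical_records")
```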

Gardner: Stan, on the same subject, monitoring and response, assuming that people are in, what is Citrix seeing in the field, and how are you giving that response time as low a latency as possible?

Standard protocol

Black: The standard incident-response protocol is identify, contain, control, and communicate. We're able to shrink what we need to identify. We're able to connect from end-to-end, so we're able to communicate effectively, and we've changed how much data we gather regarding transmissions and communications.

If you think about it, we've shrunk our attack surface; we've shrunk the vulnerable areas, methods, and vectors by which people can enter. At the same time, we've gained incredibly high visibility and fidelity into what is supposed to be going over a wire or wireless, and what is not.

We're now able to shrink the identify, contain, control, and communicate spectrum to a much shorter cycle and focus our efforts with really smart threat-intelligence and incident-response people, versus everyone in the IT organization and everyone in security. Everyone used to be looking for the needle in the haystack; now we have a much smaller haystack.

Patterson: I had a thought on that, because as we looked at a cloud-first strategy, one of the issues that we looked at was, "We have a voice-over-IP system in the cloud, we have Azure, we have Citrix, we have our NetScaler. What about our firewalls now, and how do we actually monitor intrusion?"

We have file attachments and emails coming through in ways that aren’t on our on-premises firewall and not with all our malware detection. So, those are questions that I think all of us are trying to answer, because now we're creating known unknowns and really unknown unknowns. When it happens, we're going to say, "We didn’t know that that part could happen."

That’s where part of the industry is, too. Citrix and Microsoft are helping us with that in our environments, but those are still open questions for us. We're not entirely satisfied with the answers yet.

Gardner: Dan, one of the other ways that we want to be able to say, Yes, to our users and increase their experiences as workers is to recognize the heterogeneity -- any cloud, any device, multiple browser types, multiple device types. How do you see the ability to say, Yes, to vast heterogeneity, perhaps at a scale we've never seen before, but at the same time, preserve that security and keep those users happy?

Kaminsky: The reason we have different departments and multiple teams is because different groups have different requirements. They have different needs that are satisfied in ways that we don't necessarily understand. It’s not the heterogeneity that bothers us; it’s the fact that a lot of systems have different risks. We can merge the risks, or simultaneously address them with consistent technologies, like containerization and virtualization, like the sort of centralization solutions out there.

People are sometimes afraid of putting all their eggs in one basket. I'll take one really well-built basket over 50,000 totally broken ones. What I see is, create environments in which users can use whatever makes their job work best, and go ahead and realize that it's not actually the fact that the risks are that distinct, that they are that unique. The risk patterns of the underlying software are less diverse than the software itself.
Gardner: Stan, most organizations that we speak to say they have at least six, perhaps more, clouds. They're using all sorts of new devices. Citrix has recently come out with support for the Raspberry Pi, at less than $100, as a viable Windows 10 endpoint. How do we move forward and keep the options open for any cloud and any device?

Multitude of clouds

Black: When you look at the cloud, there is a multitude of public clouds. Many companies have internal clouds. We've seen all of this hyperconvergence, but what has blurred over time are the controls between whether it’s a cloud, whether it’s the enterprise, and whether it’s mobile.

Again, some of what you've seen has been how certain technologies can fulfill controls between the enterprise and the cloud, because cloud is nimble, it’s fast, and it's great.

At the same time, if you don't control it, don’t manage it, or don't know what you have in the cloud, which many companies struggle with, your risk starts to sprawl and you don't even know it's happened.

So it's not adding difficult controls, what I would call classic gates, but transparency, visibility, and thresholds. You're allowed to do this between here and here. An end user doesn't know those things are happening.

Also, weaving analytics into every connection, knowing what that wire is supposed to look like, what that packet is supposed to look like gives you a heck of a lot more control than we've had for decades.

Gardner: Chad, for you and your organization, how would you like to get security visibility in terms of an analytic dashboard, visualization, and alerts? What would you like to see happen in terms of that analytics benefit?

Wilson: It starts with population health and the concept behind it. Population health takes in all the healthcare data, puts it into a data warehouse, and leverages analytics to be able to show trends with, say, kids presenting with asthma or patients presenting with asthma across their lifespan and other triggers. That goes to quality of care.

The same concept should be applied to security. When we bring that data together, all the various logs and all the various threat vectors, we're able to identify not just signatures but trends, and see how the bad guys are operating. Are the bad guys single-vectored, or have they learned the concept of combined arms, like our militaries have? Are they able to put things together to have better impact? And where do we need to put things together to have better protection?

We need to change the paradigm, so when they show their hand once, it doesn't work anymore. The only way that we can do that is by being able to detect that one time when they show their hand. It's getting them to do one thing to show how they are going to attack us. To do that, we have to pull together all the logs, all of the data, and provide analytics and get down to behavior; what is good behavior, what is bad behavior.

That's not a signature that you're detecting for malware; that is a behavior pattern. Today I can do one thing, and tomorrow I can do it differently. That's what we need to be able to get to.
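
As a rough illustration of behavior-based detection, as opposed to signature matching, a monitor can baseline each user's own activity and flag large deviations. The features and thresholds here are invented for the sketch:

```python
# Sketch: flag activity that deviates sharply from a user's own baseline.
from collections import defaultdict
from statistics import mean, pstdev

class BehaviorMonitor:
    def __init__(self, sigma: float = 3.0):
        self.history = defaultdict(list)  # user -> past daily event counts
        self.sigma = sigma

    def record_day(self, user: str, event_count: int) -> bool:
        """Return True if today's count looks anomalous for this user."""
        past = self.history[user]
        anomalous = False
        if len(past) >= 5:  # need some baseline before judging
            mu, sd = mean(past), pstdev(past)
            anomalous = abs(event_count - mu) > self.sigma * max(sd, 1)
        past.append(event_count)
        return anomalous

m = BehaviorMonitor()
for count in [20, 22, 19, 21, 20, 23]:  # typical daily file accesses
    m.record_day("alice", count)
assert m.record_day("alice", 500)  # sudden bulk access stands out
```

A signature has to know the attack in advance; the baseline only has to know the user, which is why the attacker's hand showing once can be enough.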

Getting information

Patterson: I like the illustration that was just used. What we're hoping for with the cloud strategy is that, when there's an attack on one part of the cloud, even if it's someone else that’s in Citrix or another cloud provider, then that is shared, whereas before we have had all these silos that need to be independently secured.

Now, the windows that are open in these clouds that we're sharing are going to be ways that we can protect each one from the other. So, when one person attacks Citrix a certain way, Azure a certain way, or AWS a certain way, we can collectively close those windows.

What I like to see in terms of analytics is, and I'll use kind of a mechanical engineering approach, I want to know where the windows are open and where the heat loss went or where there was air intrusion. I would like to see, whether it went to an endpoint that wasn't secured or that I didn't know about. I'd like to know more about what I don't know in my analytics. That’s really what I want analytics for, because the things that I know I know well, but I want my analytics to tell me what I don't know yet.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Citrix.


Thursday, June 16, 2016

451 analyst Berkholz on how DevOps, automation and orchestration combine for continuous apps delivery

The next BriefingsDirect Voice of the Customer thought leadership discussion focuses on the burgeoning trends around DevOps and how that’s translating into new types of IT infrastructure that both developers and operators can take advantage of.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. 

To learn more about trends and developments in DevOps, microservices, containers, and the new direction for composable infrastructure, we're joined by Donnie Berkholz, Research Director at 451 Research, based in Minneapolis. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: Why are things changing so much for apps deployment infrastructure? Why is DevOps newly key for software development? And why are we looking for “composable infrastructure?”

Berkholz: It’s a good question. There are a couple of big drivers behind it. One of them is cloud, probably the biggest one, because of the scale and transience that we have to deal with now, with virtual machines (VMs) appearing and disappearing on such a rapid basis.

We have to have software, processes, and cultures that support that kind of new approach, and IT is getting more and more demands from the line of business to scale and to do more. They're not getting more money or people, and they have to figure out the right approach to deal with this. How can we scale, how can we do more, and how can we be more agile?

DevOps is the approach that’s been settled on. One of the big reasons behind that is the automation. That’s one of what I think of as the three pillars of DevOps, which are culture, automation, and measurement.

Automation is what lets you move from this metaphor of cattle versus pets: moving from the pet side, where you carefully name and handcraft each server, to a cattle mindset, where you're thinking about fleets of servers and about services rather than individual servers, VMs, or containers. You can have systems administrators maintaining 10,000 VMs, rather than 100 or 150 servers by hand. That's what automation gets you.

More with less

So you're doing more with less. Then, as I said, they're also getting demands from the business to be more agile and deliver it faster, because the businesses all want to compete with companies like Netflix or Zenefits, the Teslas of the world, the software-defined organizations. How can they be more agile, how can they become competitive, if they're a big insurance company or a big bank?

DevOps is one of the key approaches behind that. You get the automation, not just on the server side, but on the application-delivery pipeline, which is a critical aspect of it. You're moving toward a continuous-delivery approach, and being able to take agile a step further, all the way through to production, and to deploy software perhaps even on every commit, which is the far end of DevOps. A lot of organizations aren't there yet, but they're taking steps toward that, moving from deployments every three or six months to every few weeks.
Learn More about DevOps
Solutions that Unify Development and Operations
To Accelerate Business
Gardner: So the vision of having that constant iterative process, continuous development, continuous test, continuous deployment -- at the same time being able to take advantage of these new cloud models -- it’s still kind of a tricky equation for people to work out.

What is it that we need to put in place that allows us to be agile as a development organization and to be automated and orchestrated as an operations organization? How can we make that happen practically?

Berkholz: It always goes back to three things -- people, process, and technology. From the people perspective, what I have run into is that there are a lot of organizations that have either development or operational groups, where some of them just can't make this transition.

They can't start thinking about the business impacts of what they're doing. They're focused on keeping the lights on, maintaining the servers, and writing the code. Being able to make the transition to focusing on what the business needs, to asking how am I helping the company, is the critical step, at an individual level but also at an organizational level.

IT is going through this kind of existential crisis, moving from being a cost center to fighting shadow IT and bring your own device (BYOD), trying to figure out how to bring that all into the fold. The way we think about how they do so is a transition toward IT as a service: IT becoming more like a service provider in its own right, pulling in all these external services and providing a better experience in house.

If you think about shadow IT, for example, you think about developers using a credit card to sign up for some public cloud or another. That's all well and good, but wouldn't it be even nicer if they didn't have to worry about the billing, the expensing, the payments, and all that, because IT already provided it for them? That's where things are going, because that's the IT-as-a-service provider model.

Gardner: People, process, technology, and existential issues. The vendors are also facing existential issues, because things are changing so fast. They provide the technology, while the people and the process are up to the enterprise to figure out. What's happening on the technology side, and how are the vendors reacting to allow enterprises to employ the people and put in place the processes that will bring us to this better, automated DevOps reality? What can we put in place technically to make this possible?

Two approaches

Berkholz: It goes back to two approaches -- one coming in from the development side and one coming in from the operational side.

From a development side, we're talking about things like continuous-delivery pipelines: what does the application-delivery process look like? Typically, you'd start with something like continuous integration (CI).

Just moving toward an automated testing environment means that with every commit you make, you're testing the code base one way or another. This is a big transition for people to make, especially as you think about moving to the next step, continuous delivery, which is not just testing the code base, but testing the full environment and being ready to deploy it to production with every commit, or perhaps on a daily basis.

So that's a continuous-integration, continuous-delivery approach using CI servers. There's a pretty well-known open-source one called Jenkins, and there are many other options, both as-a-service and on-premises. That tends to be step one if you're coming in from the development side.
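
The CI loop itself is simple in outline: on each commit, run a pipeline of commands and fail fast. This is a sketch of the idea, not Jenkins' actual model, and the checkout and test commands are illustrative placeholders:

```python
# Minimal fail-fast build pipeline: any nonzero exit marks the commit red.
import subprocess

def run_pipeline(steps) -> bool:
    """Run each step in order; stop at the first failure."""
    for cmd in steps:
        if subprocess.run(cmd, capture_output=True).returncode != 0:
            return False  # fail fast; the commit is marked broken
    return True

# Illustrative pipeline a CI server might run on every commit:
commit_pipeline = [
    ["git", "checkout", "HEAD"],  # check out the revision under test
    ["python", "-m", "pytest"],   # run the automated test suite
]
```

A real CI server adds queueing, isolated workspaces, and notifications on top of this loop, but the fail-fast core is the same.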

Now, on the operational side, automation is much more about infrastructure as code. That's really the core tenet, and it's embodied by configuration-management software like Puppet, Chef, Ansible, Salt, and maybe CFEngine: approaches that define server configuration as code and maintain it in version control, just like you would maintain the software that you're building. You can scale easily because you know exactly how a server is created.

Whether that's one mail server or 20 doesn't really matter. I'm just running the same code again to deploy a new VM, to deploy onto a bare-metal environment, or to deploy a new container. It's all about that infrastructure-as-code approach using configuration-management tools. When you bring those two things together, that's what enables you to really do continuous delivery.
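
As a toy illustration of the infrastructure-as-code idea (plain Python, not real Puppet or Chef syntax), desired state is declared as data and applied idempotently, so the same description converges one server or twenty:

```python
# Declare desired state as data; converge any number of servers toward it.
desired_state = {
    "packages": ["postfix"],
    "services": {"postfix": "running"},
}

def apply_state(server, state):
    """Idempotent convergence: running this twice changes nothing."""
    for pkg in state["packages"]:
        if pkg not in server["installed"]:
            server["installed"].add(pkg)      # stand-in for installing a package
    for svc, wanted in state["services"].items():
        if server["services"].get(svc) != wanted:
            server["services"][svc] = wanted  # stand-in for starting a service
    return server

fleet = [{"installed": set(), "services": {}} for _ in range(20)]
for server in fleet:
    apply_state(server, desired_state)        # same code for 1 server or 20
assert all(s["services"]["postfix"] == "running" for s in fleet)
```

Because the description lives in version control, you know exactly how every server was created, which is what makes the cattle mindset workable.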

You’ve got the automated application delivery pipeline on the top and you've got the automated server environment on the bottom. Then, in the middle, you’ve got things like service virtualization, data virtualization, and continuous-integration servers all letting you have an extremely reliable and reproducible and scalable environment that is the same all the way from development to production.
Gardner: And when we go to infrastructure as code, when we go to software-based everything, there's a challenge getting there, but there are also some major paybacks. When you can analyze your software, when you can replicate things rapidly, and when you can deploy to a cloud model that works for your economic or security requirements, you get a lot of benefits.

Are we seeing those yet, Donnie?

Berkholz: One of the challenges is that we know there are benefits, but they're very challenging to quantify. When you talk about the benefit of delivering a solution to market faster than your competitors, the benefit is that you're still in business. The benefit is that you’re Netflix and you're not Blockbuster. The benefit is that you’re Tesla and you’re not one of the big-three car manufacturers. Tesla, for example, can ship an update to its cars that let them self-drive on-the-fly for people who already purchased the car.

You can't really quantify the value of that easily. What you can quantify is natural selection in action. There's no mandatory requirement that any company survive or that any company make the transition to software-defined. But if you want to survive, you're going to have to take this DevOps mindset, so that you can be more agile, not just as a software group, but as a business.

Gardner: Perhaps one of the ways we can measure this is that we used to look at IT spend as a percentage of capital spend for an enterprise. Many organizations, over the past 20 or 30 years, found themselves spending 50 percent or more of their capital expenditures on IT.

I think they'd like to ratchet back. If we go to IT as a service, if we pay for things at an operations level, if we only pay for what we use, shouldn't we start to see a fairly significant decrease in the total IT spend, versus revenue or profit for most organizations?

Berkholz: The one underlying factor is how important software is to your company. If that importance is growing, you're probably going to spend more as a percentage. But you're going to be generating more margin as a result of that. That's one of the big transitions that are happening, the move from IT as a cost center to IT as a collaborator with the business.

The move is away from your traditional old CIO view of we're going to keep the lights on. A lot of companies are bringing in Chief Digital Officers, for example, because the CIO wasn't taking this collaborative business view. They're either making the transition or they're getting left behind.

Spending increase

I think we'll see IT spend increase as a percentage, because companies are all realizing that, in actuality, they're software companies or they're becoming software companies. But as I said, they are going to be generating a lot more value on top of that spend.

To your point about OPEX and buying things as a service, the piece of advice I always give to companies is to ask: "How many of the things you're doing are significant differentiators for your company?" Is it really a differentiator for your company to be an expert at automating a delivery pipeline, at automating your servers, at setting up file sharing, or at setting up an internal chat server? None of those, right?

Why not outsource those to people who are experts, people for whom that is the core differentiator and core value creator, and focus on the things that your business cares about?

Gardner: Let's get back to this infrastructure equation. We're hearing about composable infrastructure, the software-defined data center (SDDC), microservices, containers and, of course, hybrid cloud or hybrid computing. If I'm looking to improve my business agility, where do I look in terms of understanding my future infrastructure partners? Is my IT organization just a broker, and is it going to work with other brokers? Are we looking at a hierarchy of brokering with some sort of baseline commoditized set of services underneath?

So, where do we go in terms of knowing who the preferred vendors are? I guess we're looking back at a time when no one got fired for buying IBM, for example. Everyone is saying Amazon is going to take over the world, but I've heard that about other vendors in the past, and it didn't pan out. This is a roundabout way of asking: when you want to compose infrastructure, how do you keep choice, how do you keep from getting locked in, and how do you stay in a market at all times?

Berkholz: Composability is really key. We see a lot of IT organizations that, as you said, used to just buy Big Blue; they were IBM shops. That's no longer a thing in the way that it used to be. There's a lot more fragmentation in terms of technology: programming languages, hardware, JavaScript toolkits, and databases.

Everything is becoming polyglot or heterogeneous, and the only way to cope with that is to really focus on composability. Focus on multi-vendor solutions and on openness; open APIs and open source are incredibly important in this composable world, because everything has to be able to piece together.

But the problem is that when you give traditional enterprises a bunch of pieces, it's like having kids create a huge mess on the floor. Where do you even get started? That's one of the challenges they face. The way I always think about it is: what are enterprises looking for? They're looking for a Lego castle, right? They don't want just the Lego pieces, and they don't want that scene in The Lego Movie where the father glues all the blocks together. They don't want to be stuck. That's the old monolithic world.

The new composable world is where you get that castle, and you can take off the tower and put on a new tower if you want to. But you're given not just the pieces, not just something that is composable, but something that is pre-composed for you, for your use case. That generates value, and it looks like what we used to think of as reference architectures, which were something sitting on PowerPoint slides with a fancy diagram.

It’s moving more toward reference architectures in the form of code, where it’s saying, "Here's a piece of code that’s ready to deploy and that’s enabled through things like infrastructure as code."

Gardner: Or a set of APIs.

Ready to go

Berkholz: Exactly. It’s enabled by having all of that stuff ready to go, ready to build in a way that wasn’t possible before. The best-case scenario before was, "Here’s a virtual appliance; have fun with that." Now, you can distribute the code and they can roll that up, customize it, take a piece out, put a piece in, however they want to.

Gardner: Before we close out, Donnie, any words of advice for organizations back to that cultural issue -- probably the more difficult one really? You have a lot of choices of technology, but how you actually change the way people think and behave among each other is always difficult. DevOps, leading to composable infrastructure, leading to this sort of services brokering economy, for lack of a better word, or marketplace perhaps.

What are you telling people about how to make that cultural shift? How do organizations change while still keeping the airplane flying, so to speak?

Berkholz: You can’t do it as a big bang. That's absolutely the worst possible way to go about it. If you think about change management, it’s a pretty well-studied discipline at this point. There's an approach I prefer from a guy named John Kotter who has written books about change management. He lays out an eight- or nine-step process of how to make these changes happen. The funny thing about it is that actually doing the change is one of the last steps.

So much of it is about building buy-in, about generating small wins, about starting with an independent team and saying, "We're going to take the mobile apps team and we're going to try a continuous delivery over there. We're not going to stop doing everything for six months as we are trying to roll this out across the organization, because the business isn’t going to stand for that."
They're going to say, "What are you doing over there? You're not even shipping anything. What are you messing around with?" So, you’ve got to go piece by piece. Say you start by rolling out continuous integration and slowly adding more and more automated tests to it, while keeping the manual testers alongside, so that you're not dropping any of the quality that you had before. You're actually adding more quality by adding the automation and slowly converting those manual testers into engineers in test.

That’s the key to it. Generate small wins, start small, and then gradually work your way up as you are able to prove the value to the organization. Make sure while you're doing so that you have executive buy-in. The tool side of things you can start at a pretty small level, but thinking about reorganization and cultural change, if you don’t have executive buy-in, is never going to fly.

Listen to the podcast. Find it on iTunes. Get the mobile app. Read a full transcript or download a copy. Sponsor: Hewlett Packard Enterprise.
