
Wednesday, October 1, 2014

Cloud services brokerages add needed elements of trust and oversight to complex cloud deals

Our BriefingsDirect discussion today focuses on an essential aspect of helping businesses make the best use of cloud computing.

We're examining the role and value of cloud services brokers, with an emphasis on small to medium-sized businesses (SMBs), regional businesses, and government, and at how to attain the best results from a specialist cloud services brokerage within these different types of organizations.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

No two businesses have identical needs, and so specialized requirements need to be factored into the use of often commodity-type cloud services. An intermediary brokerage can help companies and government agencies make the best use of commodity and targeted IaaS clouds, and not fall prey to replacing an on-premises integration problem with a cloud complexity problem.

To learn more about the role and value of the specialist cloud services brokerage, we're joined by Todd Lyle, President of Duncan, LLC, a cloud services brokerage in Ohio, and Kevin Jackson, the Founder and CEO of GovCloud Network in Northern Virginia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:

Gardner: How do we get regular companies to effectively start using these new cloud services?

Lyle: Through education. That’s our first step. The technology is clearly here, the three of us will agree. It's been here for quite some time now. The beauty of it is that we're able to extract bits and pieces for bundles, much like you get from your cell phone or your cable TV folks. You can pull those together through a cloud services brokerage.

So brokerage firms will go out and deal with the cloud services providers like Amazon, Rackspace, Dell, and those types of organizations. They bring the strengths of each of those organizations together and bundle them. Then, the consumer gets that on a monthly basis. It's non-CAPEX, meaning there is no capital expenditure.

You're renting these services, so you can expand and contract as necessary. To liken this to a utility environment: as with the companies that provide electricity and water, you flip the switch or turn the faucet on and off. It's a metered service.
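The metered, utility-style billing Lyle describes can be sketched in a few lines; the rates and usage figures below are hypothetical, for illustration only.

```python
# Metered cloud billing: pay only for what you use, like an electric meter.
# Rates and usage figures below are hypothetical, for illustration only.

RATES = {
    "compute_hours": 0.12,   # $ per instance-hour
    "storage_gb": 0.05,      # $ per GB-month
    "bandwidth_gb": 0.09,    # $ per GB transferred
}

def monthly_bill(usage):
    """Sum metered charges for one month of usage."""
    return round(sum(RATES[item] * qty for item, qty in usage.items()), 2)

# A business that scales down over a slow month simply pays less:
normal = {"compute_hours": 720, "storage_gb": 200, "bandwidth_gb": 50}
reduced = {"compute_hours": 360, "storage_gb": 200, "bandwidth_gb": 20}

print(monthly_bill(normal))   # 100.9
print(monthly_bill(reduced))  # 55.0
```

Because nothing is capitalized up front, contracting back to the reduced usage immediately lowers the bill, which is the expand-and-contract point made above.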
Learn more about Todd D. Lyle's book, 
Grounding the Cloud: Basics and Brokerages, 
at groundingthecloud.org.
That's where you're going to get the largest return on your collective investment when you switch from a traditional IT environment on-premises, or even a private cloud, to the public cloud and the utility that this brings.

Government agencies

Gardner: Kevin, you're involved more with government agencies. They've been using IT for an awfully long time. How is the adjustment to cloud models for them? Is it easier, is it better, or is it just a different type of approach, and therefore requires only adjustment?

Jackson: Thank you for bringing that up. Yes, I've been focused on providing advanced IT to the federal market and Fortune 500 businesses for quite a while. The advent of cloud computing and cloud services brokerages is a double-edged sword. At once, it provides a much greater agility with respect to the ability to leverage information technology.

But, at the same time, it brings a much greater amount of responsibility, because cloud service providers have a broad range of capabilities. That broad range has to be matched against the range of requirements within an enterprise, and that drives a change in the management style of IT professionals.

You're going more from your implementation skills to a management of IT skills. This is a great transition across IT, and is something that cloud services brokerages can really aid. [See Jackson's recent blog on brokerages.]

Gardner: Todd, it sounds as if we're moving this from an implementation and technology skill set into more of a focus on procurement, governance, contracts, and creating the right service-level agreements (SLAs). These are, I think, new skills for many businesses. How is that coaching aspect of a cloud services brokerage coming out in the market? Is that something you are seeing a lot of demand for?

Lyle: It’s customer service, plain and simple. We hear about it all the time, but we also pass it off all the time. You have to be accessible. If you're a 69-year-old business owner embracing this technology, the approach you take is going to be different than if you're 23 years old.

As we all get more tenured, we'll see more adaptability to new technologies in a workplace, but that’s a while out. That's the 35-and-younger crowd. If you go to 35-and-above, it's what Kevin mentioned -- changing the culture, changing the way things are procured within those cultures, and also centralizing command. That’s where the brokerage or the exchange comes into place for this. [See Lyle's video on cloud brokerages.]

Gardner: One of the things that’s interesting to me is that a lot of companies are now looking at this as not just as a way of switching from one type of IT, say a server under a desk, to another type of IT, a server in a cloud.

It’s forcing companies to reevaluate how they do business and think of themselves as a new process-management function, regardless of where the services reside. This also requires more than just how to write a contract. It's really how to do business transformation.

Does that play into the cloud services brokerage? Do you find yourselves coaching companies on business management?

Jackson: Absolutely. One of the things cloud services is bringing to the forefront is the rapidity of change. We're going from an environment where organizations expect a homogenous IT platform to where hybrid IT is really the norm. Change management is a key aspect of being able to have an organization take on change as a normal aspect of their business.

This is also driving business models. The more effective business models today are taking advantage of the parallel and global nature of cloud computing. This requires experience, and cloud services brokerages have the experience of dealing with different providers, different technologies, and different business models. This is where they provide a tremendous amount of value.

Different types of services

Gardner: Todd, this notion of being a change agent also raises the notion that we're not just talking about one type of cloud service. We're talking about software as a service (SaaS), bringing communications applications like e-mail and calendar into a web or mobile environment. We're talking about platform as a service (PaaS), if you're doing development and DevOps. We're talking about even some analytics nowadays, as people try to think about how to use big data and business intelligence (BI) in the cloud.

Tell me a bit more about why being a change agent across these different models -- and not just a cloud implementer or integrator -- raises the value of this cloud service brokerage role?

Lyle: It’s a holistic approach. I've been talking to my team lately about being the Dale Carnegie of the cloud, hence the specialist cloud services brokerage, because it really does come down to personalities.

In a book that I've recently written called Grounding the Cloud, Basics and Brokerages, I talk about the human element. That's the personalities, expectations, and abilities of your workforce, not only your present workforce but your future workforce, which we discussed just a moment ago, as far as demographics were concerned.

It's constant change. Kevin said it, using a different term, but that's the world we live in. Some schools are doing this, where they're adding this to their MBA programs. It is a common set of skills that you must have, and it's managing personalities more than you're managing technology, in my opinion.

Gardner: Tell me a bit more about this book, Todd, it’s called Grounding the Cloud. When is it available and how can people learn more about it?

Lyle: It’s available now on Amazon, and they can find out more at www.groundingthecloud.org. This is a layman’s introduction to cloud computing, and so it helps business men and women get a better understanding of the cloud -- and how they can best maximize their time and their money as it relates to their IT needs.

Gardner: Does the book get into this concept of the specialist cloud services brokerage (SCSB), as opposed to just a general brokerage, and getting at what's the difference?

Lyle: That’s an excellent question, Dana. There are a lot of perceptions, you have one as well, of what a cloud services brokerage is. But, at the end of the day -- and we've been talking about this in the entire discussion -- it's about the human element, our personalities, and how to make these changes so that the companies actually can speed up.

We discuss it here in the "flyover country," in Ohio. We meet in the book with Cleveland State University. We meet with Allen Black Enterprises, and then even with a small landscaping company to demonstrate how the cloud is being applied from six and seven users, all the way up to 25,000 users. And we're doing it here in the Midwest, where things tend to take a couple of years to change.

User advocate

Gardner: How is a cloud services brokerage different from a systems integrator? It seems there's some commonality. But you are not just a channel, or reseller, you are really as much an advocate for the user.

Lyle: A specialist cloud services brokerage is going to be more like Underwriters Laboratories (UL). It’s going to go out, fielding all the different cloud flavors that are available, pick what they feel is best, and bring it together in a bundle. Then, the SCSB works with the entity to adapt to the culture and the change that's going to have to occur and the education within their particular businesses, as opposed to a very high-level vertical, where some things are just pushed out at an enterprise level.

Jackson: I see this cloud services brokerage and specialist cloud services brokerage as the new-age system integrator, because there are additional capabilities that are offered.

For example, you need a trusted third-party to monitor and report on adherence to SLAs. The provider is not going to do that. That’s a role for your cloud services brokerage. Also you need to maintain viable options for alternative cloud-service providers. The cloud services brokerage will identify your options and give you choices, should you need the change. A specialist cloud services brokerage also helps to ensure portability of your business process and data from one cloud service provider to another.
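The trusted third-party SLA monitoring Jackson describes can be sketched as a simple adherence check; the provider name, uptime figures, and 99.9 percent target below are all hypothetical.

```python
# Third-party SLA monitoring: compare measured uptime against the
# contractual target and flag breaches. All figures are hypothetical.

def sla_report(provider, minutes_up, minutes_total, target=0.999):
    """Return a simple adherence report for one billing period."""
    availability = minutes_up / minutes_total
    return {
        "provider": provider,
        "availability": round(availability, 5),
        "target": target,
        "breach": availability < target,
    }

# 43,200 minutes in a 30-day month; 50 minutes of downtime breaches 99.9%.
report = sla_report("ExampleCloud", 43_150, 43_200)
print(report["breach"])  # True
```

The point of having the brokerage, not the provider, run this check is exactly the one made above: the party being measured should not be the party doing the measuring.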

Management of change is more than a single aspect within the organization. It’s how to adapt to constant change and make sure that your enterprise has options and doesn't get locked into a single vendor.

Lyle: It comes to the point, Kevin, of building for constant change. You're exactly right.
Gardner: You raise an interesting point too, Kevin, that one shouldn’t get lulled into thinking that they can just make a move to the cloud, and it will all be done. This is going to be a constant set of moves, a journey, and you're going to want to avail yourself of the cloud services marketplace that’s emerging.

We're seeing prices driven down. We're seeing competition among commodity-level cloud services. I expect we'll see other kinds of market forces at work. You want to be agile and be able to take advantage of that in your total cost of computing.

Jackson: There's a broad range of providers in the marketplace, and that range expands daily. Similarly, there's a large range of requirements within any enterprise of any size. Brokers act as matchmakers, avoiding common mistakes, and also help the organizations, the SMBs in particular, implement best practices in their adoption of this new model.

Gardner: Also, when you have a brokerage as your advocate, they're keeping their eye on the cloud marketplace, so that you can keep your eye on your business and your vertical, too. Therefore, you're going to have somebody to tip you off when things change, and they will be on the vanguard for deals. Is that something that comes up in your book, Todd, of the cloud services brokerage being an educated expert in a field where the business really wants to stick to its knitting?

Primary goal

Lyle: Absolutely. That’s the primary goal, both at a strategic level, when you're deciding what products to use -- the Rackspaces, the Microsofts, the RightSignatures, etc. -- all the way down to the tactical one of the daily operation. When I leave the company, how soon can we lock Todd out? How soon can we lock him down or lock him out? It becomes a security issue at a very granular level. Because it's metered, you turn it off, you turn Todd off, you save his data, and put it someplace else.
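The "lock Todd out" scenario can be sketched as a small offboarding routine: revoke access first, then archive the data someplace else. The directory structure and function names here are hypothetical, not any particular provider's API.

```python
# Offboarding a departing user in a metered cloud: revoke access first,
# then archive the data. The directory layout is hypothetical.

def offboard_user(directory, archive, username):
    """Disable the account immediately, then move its data to cold storage."""
    account = directory.get(username)
    if account is None:
        return False
    account["active"] = False                    # lock the user out right away
    archive[username] = account.pop("data", [])  # save the data someplace else
    return True

directory = {"tlyle": {"active": True, "data": ["q3-report.xlsx"]}}
archive = {}

offboard_user(directory, archive, "tlyle")
print(directory["tlyle"]["active"])  # False
print(archive["tlyle"])              # ['q3-report.xlsx']
```

Because the service is metered, disabling the account also stops the charges, which is the granular security-plus-cost point being made.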

That’s a role that requires command and control and oversight, and that's a responsibility. You're part butler. You're looking out for the day-to-day, the minute issues. Then you get up to a very high level. You're like UL. You're keeping an eye on everything that’s occurring. UL comes to mind because they cover both things that are tactile and things that you can't touch, and the cloud is definitely something you can’t touch.

Jackson: Actually, I believe it represents the embracing of a cooperative model by consumers of this information technology, but embracing with open eyes. This is particularly of interest within the federal marketplace, because federal procurement executives have to stop their adversarial attitude toward industry. Cloud services brokerages and specialist cloud services brokerages sit at the same table with these consumers.

Lyle: Kevin, your point is very well taken. I'll go one step further. We were talking up and down the scales, strategic down to the daily operations. One of the challenges that we have to overcome is the signatories, the senior executives, that make these decisions. They're in a different age group and they're used to doing things a certain way.

That being said, getting legislation to be changed at the federal level, directives being pushed down, will make the difference, because they do know how to take orders. I know I'm speaking frankly, but what's going to have to occur for us to see some significant change within the next five years is being told how the procurement process is going to happen.

You're taking the feather; I'm taking the stick, but it’s going to take both of those to accomplish that task at the federal level.

Gardner: We know that Duncan, LLC is a specialized cloud services brokerage. Kevin, tell us a little bit about the GovCloud Network. What is your organization, and how do you align with cloud brokerages?

Jackson: GovCloud Network is a specialty consultancy that helps organizations modify or change their mission and business processes in order to take advantage of this new style of system integrator.

Earlier, I said that the key to transition in a cloud is adopting and adapting to the parallel nature and a global nature of cloud computing. This requires a second look at your existing business processes and your existing mission processes to do things in different ways. That's what GovCloud Network allows. It helps you redesign your business and mission processes for this constant change and this new model.

Notion of governance

Gardner: I'd like to go back to this notion of governance. It seems to me, Todd, that when you have different parts of your company procuring cloud services, sometimes this is referred to as shadow IT. They're not doing it in concert, through a gatekeeper like a cloud broker. Not only is there a potential redundancy of efforts in labor and work in process, but there is this governance and security risk, because one hand doesn’t know what the other hand is doing.

Let's address this issue about better security from better governance by having a common brokerage gatekeeper, rather than having different aspects of your company out buying and using cloud services independently.

Lyle: We're your trusted adviser. We’re also very much a trusted member of your team when you bring us into the fold. We provide oversight. We're big brother, if you want to look at it that way, but big brother is important when you are dealing with your business and your business resources. You don’t want to leave a window open at night. You certainly don't want to leave your network open.

There's a lot going on in today's world, a lot of transition, the NSA and everything we worry about. It's important to have somebody providing command and control. We don’t sit there and stare at a monitor all day. We use systems that watch this, but we can tell when there's an increase or decrease out of the norm of activities within your organization.

It really doesn't matter how big or how small, there are systems that allow us to monitor this and give a heads up. If you're part of a leadership team, you’d be notified that again Todd Lyle has left an open window. But if you don't know that Todd even has the window, then that’s even a bigger concern. That comes down to the leadership again -- how you want to manage your entity.
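The "out of the norm" monitoring Lyle describes can be sketched as a baseline check: flag activity that strays too far from historical levels. The activity figures and the three-sigma tolerance below are hypothetical.

```python
# Flag activity that deviates from the norm, as a brokerage's monitoring
# systems might. The baseline figures and tolerance are hypothetical.

from statistics import mean, stdev

def out_of_norm(history, current, tolerance=3.0):
    """True if current activity is more than `tolerance` standard
    deviations above or below the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return abs(current - mu) > tolerance * sigma

logins_per_hour = [40, 42, 38, 41, 39, 43, 40]  # a week of normal activity
print(out_of_norm(logins_per_hour, 41))   # False: within the norm
print(out_of_norm(logins_per_hour, 120))  # True: give leadership a heads up
```

Nobody stares at a monitor all day; a check like this runs continuously and only surfaces the open windows.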

We all want to feel free to make decisions, but there are too many benefits available to us, transparent benefits, as Kevin put it, to using the cloud and hiding in plain sight, maximizing e-mail at 100,000 plus users. Those are all good things but they require oversight.

It's almost like an aviation model, where you have your ground control and your flight crew. Everybody on that team is providing oversight to the other. Ultimately, you have your control tower that's watching that, and the control tower, both in the air and on the ground, is your cloud services brokerage.

Jackson: It’s important to understand that cloud computing is the industrialization of information technology. You're going from an age where the IT infrastructure is a hand-designed and built work of art to where your IT infrastructure is a highly automated assembly-line platform that requires real-time monitoring and metering. Your specialist cloud services brokerage actually helps you in that transition and operations within this highly automated environment.

Gardner: Todd, we spoke earlier about how we're moving from implementation to procurement. We've also talked about governance being important, SLAs, and managing a contract across variety of different organizations that are providing cloud type services. It seems to me that we're talking about financial types of relations.

How does the cloud services brokerage help the financial people in a company? Maybe it's an individual who wears many hats, but you could think of them as akin to a chief financial officer, even though that might not be their title.

What is it that we are doing with the cloud services brokerage that is of a special interest and value to the financial people? Is it unified billing or is it one throat to choke? How does that work?

Lyle: Both, and then some. Ultimately it's unified billing and unified management from daily operations. It's helping people understand that we're moving from a capitalized expense, the server, the software, things that are tactile that we are used to touching. We're used to being able to count them and we like to see our stuff.

So it's transitioning and letting go, especially for the people who watch the money. We have a fiduciary responsibility to the organizations that we work for. Part of that is communicating, educating, and helping the CFO-type person understand the transition not only from the CAPEX to the OPEX, because they get that, but also how you're going to correlate it to productivity.

It's letting them know to be patient. It's going to take a couple months for your metering to level up. We have some statistics and we can read into that. It's holding their hand, helping them out. That's a very big deal as far as that's concerned.
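The CAPEX-to-OPEX comparison the CFO-type person needs can be sketched as simple arithmetic: a one-time server purchase amortized over its life versus a monthly subscription. Every dollar figure here is hypothetical.

```python
# CAPEX vs. OPEX: a one-time server purchase amortized over its lifespan,
# compared with a flat monthly cloud subscription. All dollar figures
# are hypothetical, for illustration only.

def monthly_capex(purchase_price, lifespan_months, monthly_upkeep):
    """Effective monthly cost of owned, on-premises hardware."""
    return purchase_price / lifespan_months + monthly_upkeep

owned = monthly_capex(12_000, 36, 150)   # server plus power, cooling, admin
cloud = 400                              # metered subscription, no CAPEX

print(round(owned, 2))   # 483.33
print(owned > cloud)     # True: renting wins in this scenario
```

A comparison like this is also where the "be patient" advice applies: the metered figure only becomes trustworthy after a few months of real usage data.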

Gardner: Let's start to think about how to get started. Obviously, every company is different. They're going to be at a different place in terms of maturity in their own IT, never mind the transition to cloud types of activities. Would you recommend the book as a starting point? Do you have some other materials or references? How do you help that education process get going? I'm thinking about organizations that are really at the very beginning.

Gateway cloud

Lyle: We've created a gateway cloud in our book, not to confuse the cloud story. Ultimately, we have to take into consideration our economy, the world economy today. We're still very slow to move forward.

There are some activities occurring that are forcing us to make change. Our contracts may be running out. Software like Windows XP is no longer supported. So we may be forced into making a change. That's when it's time to engage a cloud services brokerage or a specialist cloud services brokerage.

Go out and buy the book. It's available on Amazon. It gives you a breakdown, and you can do an assessment of your organization as it currently is and it will help you map your network. Then, it will help you reach out to a cloud services brokerage, if you are so inclined, through points of interest for request for proposal or request for information.

The fun part is, it gives you a recipe using Rackspace, Jungle Disk, and gotomeeting.com, where you get to build a baby cloud. Then, you can go out and play with it.

You want to begin with three points: file sharing, remote access, and email. You can be a lighthouse or a dry cleaner, but every organization needs file sharing, remote access, and email. We open-sourced this recipe, or what we call the industrial bundle, for small businesses.
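The three-point industrial bundle can be sketched as a simple service map. Which provider fills each role is an illustrative assumption based on the recipe mentioned earlier (Rackspace, Jungle Disk, GoToMeeting), not a recommendation.

```python
# The three essentials every small business needs, mapped to providers.
# The provider assignments are illustrative assumptions only.

INDUSTRIAL_BUNDLE = {
    "file_sharing": "Jungle Disk",
    "remote_access": "GoToMeeting",
    "email": "Rackspace Email",
}

def missing_services(current_services):
    """Which of the three essentials does an organization still lack?"""
    return sorted(set(INDUSTRIAL_BUNDLE) - set(current_services))

# An organization with only hosted email still has two gaps to fill:
print(missing_services(["email"]))  # ['file_sharing', 'remote_access']
```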

It's not daunting. We’ve got some time yet, but I would encourage you to get a handle on where your infrastructure is today, digest that information, go out and play with the gateway cloud that we've created, and reach out to us if you are so inclined.
We’d love for you to use one of our organizations, but ultimately know that there are people out there to help you. This book was written for us, not for the technical person. It is not in geek speak. This is written for the layperson. I've been told it’s entertaining, which is the most important part, because you’re going to read it then.

Jackson: I would urge SMBs to take the plunge. Cloud can be scary to some, but there is very little risk and there is much to gain for any SMB. Leveraging the cloud gateway that Todd mentioned is a very good, low-risk, high-reward path to the cloud.

Gardner: I would agree with what you both said -- the notion of a proof of concept and dipping your toe in. You don't have to buy it all at once, but find an area of your company where you're going to be forced to make a change anyway and then, to your point, Kevin, do it now. Take the plunge earlier rather than later.

Jackson: Before you're forced.

Large changes

Gardner: Before you’re forced. But you want to look at a tactical benefit and work toward strategic benefit, because there are going to be some really large changes happening in what these cloud providers can do in a fairly short amount of time.

We're moving from discrete apps into the entire desktop, so a full PC experience as a service. That’s going to be very attractive to people. They're going to need to make some changes to get there. But rather than thinking about services discreetly, more and more of what they're looking for is going to be coming as the entire IT services experience, and more analytics capabilities mixed into that. So I am glad to hear you both explaining how to do it, managed at a proof-of-concept level. But I would say do it sooner rather than later.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Duncan, LLC.

You may also be interested in:

Tuesday, February 4, 2014

Network virtualization eases developer and operations snafus in the mobile and cloud era

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.

As developers are pressured to produce mobile and distributed cloud apps ever faster and with more network unknowns, the older methods of software quality control can lack sufficient predictability.

And as Agile development means faster iterations and a constant stream of updates, newer means of automated testing of the apps in near-production realism prove increasingly valuable.

Fortunately, a tag-team of service and network virtualization for testing has emerged just as the mobile and cloud era requires unprecedented focus on DevOps benefits and rapid quality assurance.

BriefingsDirect had an opportunity to learn first-hand how Shunra Software and HP have joined forces to extend the capabilities of service virtualization for testing at the recent HP Discover 2013 Conference in Barcelona.

Learn here how Shunra Software uses service virtualization to help its developer users improve the distribution, creation, and lifecycle of software applications from Todd DeCapua, Vice President of Channel Operations and Services at Shunra Software, based in Philadelphia. The discussion is moderated by me, Dana Gardner, Principal Analyst at Interarbor Solutions.

Here are some excerpts:
Gardner: There are a lot of trends affecting software developers. They have mobile on their minds. They have time constraints issues. They have to be faster, better, and cheaper along the apps lifecycle way. What among the trends is most important for developers?

DeCapua: One of the biggest ones -- especially around innovation and thinking about results, specifically business results -- is Agile. Agile development is something that, fortunately, we've had an opportunity to work with quite a bit. Our capabilities are all structured around not only what you talked about with cloud and mobile, but we look at things like the speed, the quality, and ultimately the value to the customers.

We’re really focusing on these business results, which sometimes get lost, but I try to always go back to them. We need to focus on what's important to the business, what's important to the customer, and then maybe what's important to IT. How does all that circle around to value?

Gardner: With mobile we have many more networks, and people are grasping at how to attain quality before actually getting into production. How does service virtualization come to bear on that?

Distributed devices

DeCapua: As you look at almost every organization today, something is distributed. Their customers might be on mobile devices out in the real world, and so are distributed. They might be working remotely from home. They might have a distribution center or a truck that has a mobile device on it.

There are all these different pieces. You’re right. Network is a significant part that unfortunately many organizations have failed to notice and failed to consider, as they do any type of testing.

Network virtualization gives you that capability. Where service virtualization comes into play is looking at things like speed and quality. What if the services are not available? Service virtualization allows you to then make them available to your developers.

In the early stage, where Shunra has been able to really play a huge difference in these organizations is by bringing network virtualization in with service virtualization. We’re able to recreate their production environments with 100 percent scale -- all prior to production.
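What combining service and network virtualization means in practice can be sketched as follows: stub the unavailable backend service, then inject production-like latency on the call path. This is a minimal illustration under stated assumptions, not how Shunra's or HP's tools actually work.

```python
# Service virtualization stands in for an unavailable dependency;
# network virtualization layers production-like conditions (here,
# latency) on top. A minimal sketch; real tools record and replay
# far richer network profiles.

import time

class VirtualService:
    """A stub for a real backend the developers can't reach yet."""
    def __init__(self, canned_response, latency_s=0.0):
        self.canned_response = canned_response
        self.latency_s = latency_s  # emulated network round-trip

    def call(self):
        time.sleep(self.latency_s)  # inject the production network delay
        return self.canned_response

# Test against emulated mobile-network conditions, not the lab's fast LAN:
inventory = VirtualService({"sku": "A-100", "in_stock": 3}, latency_s=0.3)

start = time.monotonic()
response = inventory.call()
elapsed = time.monotonic() - start
print(response["in_stock"])   # 3
print(elapsed >= 0.3)         # True: the delay was actually applied
```

The value is that functional tests now run under the same network conditions the distributed users will see, prior to production.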

When we think about the value to the business, now you’re able to deliver the product working. So, it is about the speed to market, quality of product, and ultimately value to your customer and to your business.

Gardner: And another constituency that we should keep in mind are those all-important operators. They’re also dealing with a lot of moving parts these days -- transformation, modernization, and picking and choosing different ways to host their data centers. How do they fit into this and how does service virtualization cut across that continuum to improve the lives of operators?

DeCapua: You’re right, because as the delivery has sped up through things like Agile, it's your operations team that is sitting there and ultimately has to be the owners of these applications. Service virtualization and network virtualization can benefit them by being able to recreate these in-production scenarios.

Unfortunately, there are still some reactive actions required in production today, so you’re going to have a production incident. But, you can now understand the network in production, capture those conditions, and recreate that in the test environment. You can also do the same for the services.

We now have the ability to quickly and easily recreate a production incident in a prior-to-production environment. The operations team can be part of the team that's fixing it, because again, the ultimate question from CIOs is, “How can you make sure this never happens again?”

We now have the means to quickly and confidently recreate incidents and fix them the first time, without having to change code in production on the fly. That is one of the scariest things I've had to watch happen, whether at a customer site or as an employee.

Agile iterations

Gardner: As you mentioned earlier, with Agile we’re seeing many more iterations on applications as they need to be rapidly improved or changed. How does service and network virtualization aid in being able to produce many more iterations of an application, but still maintain that high quality?

DeCapua: One of our customers actually told us that -- prior to leveraging network virtualization with service virtualization -- he was doing 80 percent of his testing in production, simply because he knew the shortcomings, and he needed to test them, but he had no way of recreating them. Now, let's think about Agile. Let's think about how we shift and get the proven enterprise tools in the developer’s hands sooner, more often, so that we can drive quality early in the process.

That's where these two components play a critical role. As you look at it more specifically and go just a hair deeper: how, in integrated environments, can you provide continuous development and continuous deployment? And with all the automated testing that you’re already doing, how can you incorporate performance into that? Or, as I call it, how do you “build performance in” from the beginning?

As a business person, a developer, a business analyst, or a Scrum Master, how is it that you’re building performance into your user scenarios today? How is it that you’re setting them up for understanding how that feature or function is going to perform? Let's think about it as we’re creating, not once we get two or three sprints into use and we have our hardening sprint, where we’re going to run our performance scenario. Let's do it early, and let's do it often.
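One concrete way to "build performance in" from the first sprint is to give each user scenario a response-time budget and assert it in the same automated suite that checks functionality. A minimal sketch of the idea (the helper name and budget values are invented for illustration, not from any tool mentioned here):

```python
import time

def assert_within_budget(scenario, budget_ms):
    """Run a user scenario and fail the test if it exceeds its budget.

    scenario: a zero-argument callable exercising one user story.
    budget_ms: the response-time budget agreed on when the story was written.
    """
    start = time.perf_counter()
    scenario()
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    assert elapsed_ms <= budget_ms, (
        f"scenario took {elapsed_ms:.0f} ms, over the {budget_ms} ms budget"
    )
```

Wired into a continuous-integration run, a check like this turns "how will this feature perform?" into a question answered on every build rather than in a late hardening sprint.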

Gardner: If we’re really lucky, we can control the world and the environment that we live in, but more often than not these days, we’re dealing with third-party application programming interfaces (APIs). We’re dealing with outside web services. We have organizational boundaries that are being crossed, but things are happening across that boundary that we can't control.

So, is there a benefit here, too, when we're dealing with composite applications, where elements of that mixed service environment are not available for your insight, but you need to anticipate them and react quickly should a change occur?

DeCapua: I can't agree with you more. It’s funny, I am kind of laughing here, Dana, because this morning I was riding the metro in Barcelona and before I got to the stop here, I looked down to my phone, because I was expecting a critical email to come in. Lo and behold, my phone pops up a message and says, “We’re sorry, service is unavailable.”

I could clearly see that I had one out of five bars on the Orange network, and I was on the EDGE network. So, it was about a 2.5G connection. I should still have been able to get data, but my phone simply popped up and said, “Sorry, cannot retrieve email because of a poor data connection.”

I started thinking about it some more, and as I was engaging with other folks today at the show, I asked them: why did the developer of the application find it necessary to alert me three times in a row that it couldn't get my email because of a poor data connection? Why didn't it just wait 30 seconds, 60 seconds, 90 seconds, then reach out, query again, and pull the data down?
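The behavior described here, retrying quietly at increasing intervals instead of repeatedly alerting the user, is a standard resilience pattern. A minimal sketch in Python (the function name and intervals are illustrative, not from any product discussed here):

```python
import time

def fetch_with_backoff(fetch, delays=(30, 60, 90)):
    """Try fetch() immediately, then retry after each delay in seconds.

    Returns the fetched value, or None if every attempt fails,
    without surfacing an alert to the user on each failure.
    """
    for delay in (0, *delays):
        time.sleep(delay)
        try:
            return fetch()
        except ConnectionError:
            continue  # poor data connection; wait and try again
    return None
```

Testing an app against this kind of logic is exactly where virtualized network conditions earn their keep: you can only verify the retry path if you can recreate the flaky connection on demand.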

Changing conditions

This is just one very simple example that I had this morning. And you’re right, there are constantly changing conditions in the world. Bandwidth, latency, packet loss and jitter are those conditions that we’re all exposed to every day. If you’re in a BMW driving down the road at 100 miles per hour, that car is now a mobile phone or a mobile device on wheels, constantly in communication. Or if you’re riding the metro or the tube and you have your mobile device on your hands, there are constantly changing conditions.

Network virtualization and service virtualization give you the ability to recreate those scenarios so that you can build that type of resiliency into your applications and, ultimately, the customers have the experience that you want them to have.
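To make the idea concrete, here is a toy stand-in for network virtualization: a wrapper that makes a call behave as if it crossed a degraded link, with latency, jitter, and packet loss. This is a hand-rolled illustration of the concept only, not Shunra's or HP's actual API; real products recreate measured production conditions rather than synthetic ones:

```python
import random
import time

def with_network_conditions(call, latency_ms=200, jitter_ms=50, loss_pct=1.0):
    """Invoke call() as if its request crossed a degraded network link.

    latency_ms / jitter_ms: mean delay and its variation, in milliseconds.
    loss_pct: chance (0-100) that the 'packet' is dropped entirely.
    """
    if random.uniform(0, 100) < loss_pct:
        raise ConnectionError("simulated packet loss")
    delay_s = max(0.0, random.gauss(latency_ms, jitter_ms)) / 1000.0
    time.sleep(delay_s)
    return call()
```

Wrapping test traffic this way lets a suite exercise the BMW-at-100-miles-per-hour and metro-tunnel scenarios repeatably, instead of waiting to discover them in production.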

Gardner: Todd, tell us about so-called application performance engineering solutions.

DeCapua: So, application performance engineering (APE) is something that was created within the industry over a number of years. It's meant to be a methodology and an approach. Shunra plays a role in that.

A lot of people had thought about it as testing. Then people thought about it as performance testing. At the next level, many of us in the industry have defined it as application performance engineering. It's a lot more than just testing, because you need to dive behind the application and understand the ins and outs. How does everything tie together?

You’d mentioned some of the composite applications and the complexities there -- and I’m including the endpoints or the devices or mobile devices connecting through it. Now, you introduce cloud into the equation, and it gets 10 times worse.

Thinking about APE, it's more of an art and a skill. There is a science behind it. However, having that APE background knowledge and experience gives you the ability to go into these composite apps, go into these cloud deployments, and leverage the right tools and the right process to be able to quickly understand and optimize the solutions.

Gardner: Why aren’t the older scripting and test-bed approaches to quality control good enough? Why can't we keep doing what we've been doing?

DeCapua: In the United States recently, October 1 of 2013, there was a large healthcare system being rolled out across the country. Unfortunately, they used the old testing methodologies and have had some significant challenges. HP and Shunra were both engaged on October 2 to assist.

Understanding APE will help you to reduce those types of production incidents. Due to inaccurate results in test environments built with the old methodologies, about 50 percent of our customers come to us in crisis mode. They say, "We just had this issue. I know you told us this was going to happen, but we really need your help now."

They're also thinking about how to shift and build performance into all these components -- have it built in, have it be automatic, and get results that are accurate.

Coming together

Gardner: Of course, HP has service virtualization and you have network virtualization. How are they coming together? Explain the relationship and how Shunra and HP work together.

DeCapua: To many people's surprise, this relationship is more than a decade old. Shunra's network-virtualization capability has long been built into HP LoadRunner, and it is now being built into HP Performance Center as well.

Other capabilities of ours are built into HP's Unified Functional Testing (UFT) products, and we're now building network virtualization into HP Service Virtualization as well. Whenever anything involves some sort of distribution or a network, network virtualization needs to come into play.

Some people have a hard time initially understanding the service virtualization need, but a very simple example I often use is an organization like a bank. They’ll have a credit check as you’re applying for a loan. That credit check is not going to be a service that the bank creates. They’re going to outsource it to one of the many credit-check services. There is a network involved there.

In your test environment, you need to recreate that and take that into consideration as a part of your end-to-end testing, whether it's functional, performance, or load. It doesn’t matter.
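The credit-check example is the classic case for service virtualization: the dependency is outside your control, so in test you stand up a virtual service that returns canned responses with realistic response times. A minimal hand-rolled sketch (class and field names are invented; real service-virtualization tools record and replay actual production traffic rather than using hard-coded data):

```python
import time

class VirtualCreditCheck:
    """Stand-in for a third-party credit-check service in a test environment."""

    def __init__(self, canned_scores, response_time_ms=350):
        self.canned_scores = canned_scores        # applicant id -> credit score
        self.response_time_ms = response_time_ms  # latency observed in production

    def check(self, applicant_id):
        time.sleep(self.response_time_ms / 1000.0)  # recreate the network delay
        return self.canned_scores.get(applicant_id, 600)  # default score
```

Because the stub also reproduces the delay, not just the data, functional, performance, and load tests all see end-to-end behavior closer to what production will deliver.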

As we think about Shunra network virtualization, the very tight partnership we've had with HP for service virtualization, and HP's ability to virtualize the users, it's been an OEM relationship. Our R&D teams sit together during development, so this is a seamless product for the HP customer, delivering benefit and value for their business and their customers.

Gardner: Let's talk a little bit about what you get when you do this right. It seems to me the obvious point is getting to the problem sooner, before you’re in production, extending across network variables, across other composite application-type variables. But, I’m going to guess that there are some other benefits that we haven't yet hit on.

So, when you've set up your testing and you have virtualization as your tool, what happens in terms of paybacks?

DeCapua: There are many benefits, some of which we've already covered, and dozens more we could get into. One that I would highlight, from pulling together all the pieces we've been talking about, is shorter release times.

TechValidate did a survey in February of 2013, and the findings were compelling: a global bank was able to speed up its application delivery by 30 to 40 percent. What does that mean for that organization compared to its competitors? Getting to market 30 to 40 percent faster means millions or billions of dollars over time, and in terms of customers and brand, it's a significant play.

Rapid deployment

There are other things like rapid deployment. As we think about Agile and mobile, it's all about how fast we get this feature function out, leveraging service virtualization in a greater way, and reducing associated costs.

In the example that I shared, the customer was able to virtualize the users, virtualize the network, and virtualize the services. Prior to that, he would never have been able to justify the cost of rebuilding a production environment for test. Through user virtualization, network virtualization, and service virtualization, he was able to get to 100 percent at a fraction of the cost.

Time and time again we mention automation. This is a key piece of how you can test early, test often, ultimately driving these accurate results and getting to the automated optimization recommendations.

Gardner: What comes next in terms of software productivity? What should organizations be thinking in terms of vision?

Slow down

DeCapua: I see Agile, mobile, and cloud. There are some significant risks out in the marketplace today. As organizations look to leverage these capabilities to benefit their business and the customers, maybe they need to just slow down for a moment and not create this huge strategy, but go after “How can I increase my revenue stream by 20 percent in the next 90 days?” Another one that I've had great success with is, “What is that highest visibility, highest risk project that you have in your organization today?”

As I look at The Wall Street Journal and read the headlines every day, it's scary. What's coming in the future? We can all look into our crystal balls and guess. Why not focus on one or two small things we have now, and think about how we're mitigating our risk, instead of following the larger organizations that are making commitments to migrate critical applications into the cloud?

You're biting off a fairly significant risk, because there isn't a lot there to catch you when you do it wrong, and, quite frankly, nearly everybody is doing it wrong. What if we start small and find a way to leverage some of these new capabilities? We can actually do it right, and then start to realize some of the benefits from cloud, mobile, and the other channels your organization is looking to.

Gardner: The role of software keeps increasing in many organizations. It's becoming the business itself and, as a fundamental part of the business, requires lots of tender loving care.

DeCapua: You got it. The only other bit I would add is that the World Quality Report presented this morning by HP, Capgemini, and Sogeti highlighted an increased share of IT budgets being spent on testing, a rather significant increase over last year.

It's exactly what you're saying. Organizations didn't enter the market thinking of themselves as software houses. But time and time again, we're seeing that those who treat what they do as a software house ultimately improve life not only for their internal customers, but also for their external customers.

So I think you’re right. The more that we can think about that and tune ourselves and make ourselves lean and focused on delivering better quality software products, we’re going to be in the winning circle more often.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy.
Sponsor: HP.

You may also be interested in:

Monday, March 11, 2013

Fighting in the cloud service orchestration wars

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Combine the supercharged cloud computing marketplace with the ubergeek cred of the open source movement, and you’re bound to have some Mentos-in-Diet-Coke moments. Such is the case with today’s cloud service orchestration (CSO) platforms. At this moment in time, the leading CSO platform is OpenStack. Dozens of vendors and cloud service providers (CSPs) have piled on this effort, from Rackspace to HP to Dell, and most recently, IBM has announced that they’re going all in as well. Fizzy to be sure, but all Coke, no Mentos.

Then there are CloudStack, Eucalyptus, and a few other OpenStack competitors. With all the momentum of OpenStack, it might seem that these open source alternatives are little more than also-rans, doomed to drop further and further behind the burgeoning leader. But there’s more to this story. This is no techie my-open-source-is-better-than-your-open-source battle of principle, of interest only to the cognoscenti. On the contrary: big players are now involved, and they’re placing increasingly large bets. Add a good healthy dose of Mentos – only this time, the Mentos are money.

Understanding the CSO Marketplace

Look around the infrastructure-as-a-service (IaaS) market. Notice that elephant in the corner? That’s Amazon Web Services (AWS). The IaaS market simply doesn’t make sense unless you realize that AWS essentially invented IaaS. And by invented, we mean actually got it to work. Which if you think about it, is rather atypical for most technology vendors. Your average software vendor will identify a new market opportunity, take some old stuff they’ve been struggling to sell, give it a nice new coat of PowerPoint, and shoehorn it into the new market. If customers bite, then the vendor will devote resources into making the product actually do what it’s supposed to do. Eventually. We hope.

Bloomberg
But AWS is different. Amazon.com is an online reseller, not a software vendor. They think more like Wal-Mart than IBM. They figured out elasticity at scale, added customer self-service, and christened it IaaS. Then they grew it exponentially, defining what cloud computing really means. Today, they leverage their market dominance and economies of scale to continually lower prices, squeezing their competitors’ margins to nothing. It worked for Rockefeller’s Standard Oil, and it works for Wal-Mart. Now it’s working for Amazon.

But as with any market, there are always competitors looking to carve off a bit of opportunity for themselves. Given AWS’s dominance, however, there are two basic approaches to competing with Amazon: do what AWS is doing but try to do it a bit better (say, with Rackspace’s promise of better customer service), or do something similar to AWS but different enough to interest some segment of the market (leading in particular to the enterprise public cloud space populated by the likes of Verizon Terremark and Savvis, to name a few).

And then there are the big vendors like HP and IBM, who not only offer a range of enterprise software products, but who also offer enterprise data center managed services and associated consulting. Such vendors want to play two sides of this market: they want to be public cloud providers in their own right, and also offer “turnkey” cloud gear to customers who want to build their own private clouds.

Enter OpenStack. Both of the aforementioned vendors as well as the smaller players realize that piecing together their own cloud offerings will never enable them to catch up to AWS. Instead, they’re joining forces to build out a common cloud infrastructure platform that supports the primary capabilities of IaaS (compute, storage, database, and network), as well as providing the infrastructure platform for platform-as-a-service (PaaS) and Software-as-a-Service (SaaS) capabilities down the road. The open source model is perfect for such collaboration, as the Apache license allows contributors to take the shared codebase and build out whatever proprietary add-ons they like.

Most challenging benefits

Perhaps the most touted, and yet most challenging, benefit of the promised all-OpenStack world is the holy grail of workload portability. In theory, if you’re running your workloads on one OpenStack-based cloud, you should be able to move them lock, stock, and barrel to any other OpenStack-based cloud, even if it belongs to a different CSP. Workload portability is the key to cloud-based failover and disaster recovery, cloud bursting, and multi-cloud deployments. Today, workload portability requires a single proprietary platform, and only VMware offers such portability. AWS offers a measure of portability within its cloud, but will face challenges supporting portability between itself and other providers. As a result, if OpenStack can get portability to work properly, participating CSPs will have a competitive lever against Amazon.

Achieving a strong competitive position against AWS with OpenStack is easier said than done, however. OpenStack is a work in progress, and many bits and pieces are still missing. Open source efforts take time to mature, and meanwhile, AWS keeps growing. In response, the players in this space are taking different tacks to build mature offerings that have a hope of carving off a viable chunk of the IaaS marketplace:
  • Rackspace is trying to capitalize on its OpenStack leadership position and the aforementioned customer service to provide a viable alternative to AWS. They are also touting the workload portability benefits of OpenStack. But downward pricing pressure combined with the holes in OpenStack capabilities are pounding on Rackspace’s stock price.

  • Faced with the demise of its traditional PC business, Dell is focusing on its Boomi B2B integration product, recently rechristened as cloud integration. Cloud integration is a critical enabler of hybrid clouds, but doesn’t address the workload portability challenge. As a result, Dell’s cloud marketing efforts are focused on the benefits of integration over portability. Dell’s recent acquisition of Quest Software also hints at a Microsoft application migration strategy for Dell Cloud.
  • HP wants to rush its enterprise public cloud offering to market, and it doesn’t want to wait for OpenStack to mature. Instead, it’s hammering out its own version of OpenStack, essentially forking the OpenStack codebase to its own ends. Such a move may pay off for HP, but increases the risk that the HP add-ons to OpenStack will have quality issues.
  • IBM recently announced that they are “all in” with OpenStack with the rollout of  IBM SmartCloud Orchestrator built on the platform.  But IBM has a problem: the rest of their SmartCloud suite isn’t built on OpenStack, leaving them to scramble to rewrite a number of existing products leveraging OpenStack’s incomplete codebase, while in the meantime, integrating the mishmash of SmartCloud components at the PowerPoint layer.
  • Red Hat is making good progress hammering out what they consider an “enterprise” deployment of OpenStack. As perhaps the leading enterprise open source vendor, they are well-positioned to lead this segment of the market, but it still remains to be seen whether enterprise customers will want to  build all open source private clouds in the near term, as the products gradually mature. On the other hand, IBM has a history of leveraging Red Hat’s open source products, so an IBM/Red Hat partnership may move SmartCloud forward more quickly than IBM might be able to accomplish on its own.

CSO Wild Card: CloudStack

There are several more players in this story, but one more warrants discussion: Citrix. The desktop virtualization leader had been one face in the OpenStack crowd, but suddenly decided to switch horses and take a contrarian strategy. They ditched OpenStack and open-sourced the code from their 2011 Cloud.com acquisition as CloudStack. Then they switched CloudStack’s licensing model from the GPL (derivative products must also be licensed under the GPL) to Apache (it's OK to build proprietary offerings on top of the open source codebase), and subsequently passed the entire CloudStack effort along to the Apache Foundation, where it’s now in incubation.

There are far fewer players on the CloudStack team than on OpenStack’s, and its core value proposition is quite similar to OpenStack’s, so at first glance, Citrix’s move raises eyebrows. After all, why bail on the market leader to join the underdog? But look more closely, and it seems that Citrix may be onto something.

First, Citrix’s open source cloud strategy is not all about CloudStack. They’re also heavily invested in Xen. Xen is one of the two leading open source virtualization platforms, and provides the underpinnings to many commercial virtualization products on the market today. Citrix’s 2007 acquisition of XenSource positioned them as a Xen leader, and they’ve been driving development of the Xen codebase ever since.

Citrix’s heavy investment in Xen bucks the conventional virtualization wisdom: since Xen’s primary competitor, KVM (Kernel-based Virtual Machine), is distributed as part of standard Linux distros, KVM is the no-brainer choice for the virtualization component of open source CSOs. After all, it’s essentially part of Linux, so any CSP (save those focusing on Windows-centric IaaS) doesn’t have to lift a finger to build its offerings on KVM. Citrix, however, picked up on a critical fact: KVM is simply not as good as Xen. And now that Citrix has been pushing Xen to mature for half a dozen years, Xen is a far better choice for building turnkey cloud solutions than KVM. So Citrix combined Xen and CloudStack into a single cloud architecture they dubbed Windsor, which forms the basis of their CloudPlatform offering.

And therein lies the key to Citrix’s contrarian strategy: CloudPlatform is a turnkey cloud solution for customers who want to deploy private clouds – or as close to turnkey as today’s still nascent cloud market can offer. Citrix is passing on the opportunity to be their own CSP (at least for now), instead focusing on driving CloudStack and Xen maturity to the point that they can put together a complete cloud infrastructure software offering. In other words, they are focusing on a niche and giving it all they’ve got.

The ZapThink Take

If this ZapFlash makes comprehending the IaaS marketplace look like herding cats, you’re right. AWS has gotten so big, so fast, and their products are so good, that everyone else is scrambling to put something together that will carve off a piece of what promises to be an immense market. But customers are holding the cards, because everyone knows how AWS works, which means that everyone knows how IaaS is supposed to work. If a vendor or CSP brings an offering to market that doesn’t compare with AWS on quality, functionality, or cost, then customers will steer clear, no matter how good the contenders’ PowerPoints are.

But as with feline wrangling, it’s anybody’s guess where this tabby or that calico is heading next. If anyone truly challenges Amazon’s dominance, who will it be? Rackspace? IBM? Dell? Or any of the dozens of other four-legged critters just looking for a warm spot in the sun? And then there’s the turnkey cloud solution angle. Today, building out your own private cloud is difficult, expensive, and fraught with peril. But if tomorrow brings simple, low cost, low risk private clouds to the enterprise, how will that impact the public CSP marketplace? You pays your money, you takes your chances. But today, the safe IaaS choice is AWS, unless you have a really good reason for selecting an alternative.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

You may also be interested in:

Tuesday, February 5, 2013

US Department of Energy: Proving the cloud service broker model

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Emerging markets don’t generally follow smooth, predictable paths. Rather, they struggle and jerk unexpectedly, much like an eaglet escaping from its shell. Vendors, analysts, and pundits may seek to define such markets, but typically fall short. After all, vendors don’t establish markets. Customers do.

Today, cloud computing is still in its birth throes. Yes, many organizations are now achieving value in the cloud, but many more still struggle to understand its true value proposition as cloud service providers (CSPs) and vendors mature their offerings in the space. One problem: cloud computing is not a single market. It is in fact many interrelated markets, as its core service models, infrastructure-, platform-, and software as a service (SaaS), fragment as though they were so many pieces of eggshell.

Bloomberg
To bring order to this chaos, a new sub-market of the broader cloud-computing market has emerged: the cloud service broker (CSB). Envision some kind of cloud middleman, helping to cut through the plethora of cloud options and services by offering…well, just what a CSB offers isn’t quite clear. And that’s the problem with the whole notion of a CSB. The market has yet to fully define it.

Not that there aren’t plenty of perspectives on just what a CSB should actually do, mind you. If anything, there are too many opinions, prompting arguments among bloggers and confusion among customers.

Gartner claims CSBs should offer aggregation, integration, and customization, while Forrester delineates simple cloud brokers, full infrastructure brokers, and SaaS brokers – at least initially. And then there’s the National Institute of Standards and Technology (NIST), which calls for CSBs to provide aggregation, intermediation, and arbitrage, specifically for brokers that would serve the US federal government.

But poke around the blogosphere, and many other CSB features come to light. Management is a huge requirement -- or two requirements, actually, as some organizations have needs that focus on business management, while others focus more on the technical aspects of management.

And what about assessments? Shouldn’t your broker assess CSPs who wish to join the CSB, providing some kind of thumbs-up before providers can participate? Then there are the questions about the nature and configuration of the CSB itself. Is it internal to the organization, or a third party much like a real-estate broker might be? And finally, is the broker essentially a software solution, or is it an organization or team in its own right, where software plays a support role to what are essentially a set of brokering business processes?

There’s only one way to cut through this confusion: talk to an organization who not only figured out what they wanted from a CSB, but also built one themselves. The organization in question: the National Nuclear Security Administration (NNSA), an agency of the United States Department of Energy (DOE).

Management and security

According to its Web site, NNSA is responsible for the management and security of the nation’s nuclear weapons, nuclear nonproliferation, naval reactor programs, and related activities. Under the auspices of Deputy Chief Technology Officer Anil Karmel, NNSA and the Los Alamos National Lab (LANL) implemented a CSB they call YOURcloud, in collaboration with partners in the contractor community.

According to Karmel, YOURcloud both leverages and supports the DOE’s Information on Demand (IoD) strategy. It provides a self-service portal for infrastructure-as-a-service (IaaS) offerings across multiple CSPs, including on-premise, community, and public cloud services like Amazon’s Elastic Compute Cloud (EC2). YOURcloud balances a diversity of choices among IaaS providers for various DOE programs while allowing those programs to maintain full autonomy of their cloud workloads.

YOURcloud users include DOE users, laboratory and plant users, other government agency users, support contractors, and members of the public. DOE business use cases for the CSB include rapid deployment of servers to scientists, security controls based on data sensitivity, calculating energy savings, disaster recovery, and capital expenditure reduction. And of course, security is a paramount concern.

Karmel describes YOURcloud as a “Cloud of Clouds.” In other words, it’s a secure hybrid CSB that incorporates both on-premise and public cloud offerings. This approach gives them a unified management control plane for IaaS and IoD, and in fact, this technical management capability is central to the role of the CSB at NNSA.

The central problem that led NNSA to build YOURcloud was their desire to deploy cloud services rapidly. Before the debut of the broker, cloud deployments had taken 70 days or more, according to Karmel.

NNSA also required a comprehensive security plan that was more sophisticated than the security capabilities other CSBs, both in production as well as on the drawing board, might offer. To this end, YOURcloud delivers software-defined security covering network, storage, and compute resources. It provides adaptive security that covers both NNSA’s virtual desktop infrastructure (VDI) as well as service enclaves.

In fact, the notion of service enclaves is central to how YOURcloud deals with security. It’s possible to partition enclaves so that an organization can use one cloud, while protecting sensitive data from users who lack the credentials to access the information in that cloud.

In essence, enclaves provide a container for both workloads and configurations. After a program creates an enclave, it establishes role-based access control (RBAC) by assigning permissions to the organization’s technical staff. In the future, YOURcloud will also provide a shared services enclave that will provide the foundation for enterprise “app store” functionality for the DOE broadly and NNSA in particular.
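The enclave-plus-RBAC model described above can be pictured with a small sketch. The roles and permissions below are invented for illustration and match only the contact types mentioned in this article; YOURcloud's actual permission model is not public:

```python
class Enclave:
    """A container for workloads and configurations with role-based access control."""

    # Hypothetical roles, loosely mirroring the contacts described in the article.
    ROLE_PERMISSIONS = {
        "technical": {"select_provider", "create_enclave", "grant_permission",
                      "manage_config"},
        "security":  {"manage_firewall"},
        "billing":   {"view_statements"},
    }

    def __init__(self, name):
        self.name = name
        self.assignments = {}  # user -> role

    def assign(self, user, role):
        if role not in self.ROLE_PERMISSIONS:
            raise ValueError(f"unknown role: {role}")
        self.assignments[user] = role

    def allowed(self, user, permission):
        role = self.assignments.get(user)
        return role is not None and permission in self.ROLE_PERMISSIONS[role]
```

The point of the pattern is that access follows the enclave, not the cloud: two programs can share a provider while each enclave enforces its own assignments.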

Critical function

Organization-centric user registration is also a critical function of the CSB. NNSA requires that YOURcloud identify each participating organization's top-level contacts, in part to prevent unnecessary organizational overlap. Users include technical contacts who select providers, create enclaves, grant permissions, and manage configurations. Security contacts provide organizational firewall control, while billing contacts handle billing statement controls.

Cost reduction is one of the most trumpeted benefits of cloud computing, but the government procurement context complicates the ability of departments to leverage the cloud’s utility model. It’s essential, therefore, for YOURcloud to define the cost structure for IaaS, including the duration of the infrastructure services as well as the mechanism for payment.

Simple pay-as-you-go pricing, however, won’t work for the DOE. The risk with such pricing, of course, is the possibility of an unexpectedly large bill. Such unpredictability is inconsistent with normal government procurement processes. Instead, agencies require full allocation, meaning a fixed price for a maximum level of consumption of cloud services. YOURcloud facilitates this full allocation pricing model, and also enables programs to turn off cloud services and hold them for future use. In effect, delivery of the CSB enables the DOE to save money while simultaneously providing an agnostic platform for innovation.
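The difference between the two pricing models can be made concrete with a toy billing function. The names and the shape of the calculation are invented for illustration; they are not YOURcloud's actual pricing logic:

```python
def monthly_bill(unit_price, units_used, max_units=None):
    """Price cloud usage under two models.

    With max_units=None, bill pay-as-you-go: the total tracks actual
    usage and is unpredictable up front. With max_units set, bill full
    allocation: a fixed price for consumption up to max_units, known at
    procurement time, which fits government budgeting.
    """
    if max_units is None:
        return unit_price * units_used            # pay-as-you-go
    if units_used > max_units:
        raise ValueError("consumption above the allocated maximum")
    return unit_price * max_units                 # fixed, predictable total
```

Under full allocation the agency may pay for headroom it never uses, but it can never receive the surprise bill that pay-as-you-go makes possible.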

Since NNSA is a government agency, it's no surprise that YOURcloud follows NIST's definition of a CSB more closely than Gartner's or Forrester's. In fact, YOURcloud exhibits all three of NIST's CSB capabilities: aggregation, intermediation, and arbitrage. Not only does YOURcloud aggregate pre-approved CSPs, it provides both business and technical intermediation.

The current version of YOURcloud also has limited arbitrage capabilities in the form of a dynamic cost calculator, as well as chargeback and showback functionality (showback refers to providing management with an analysis of the IT costs due to each department, without actually charging those costs back to the departments).
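A showback report, as the parenthetical defines it, is simply a per-department attribution of costs with no invoice attached. A minimal sketch, with sample data and function names that are assumptions rather than YOURcloud features:

```python
from collections import defaultdict

# Sample usage records: (department, service, cost) -- invented data.
usage_records = [
    ("physics", "compute", 420.0),
    ("physics", "storage", 80.0),
    ("engineering", "compute", 310.0),
]

def showback(records) -> dict:
    """Attribute IT costs to each department for management visibility,
    without actually charging those costs back."""
    totals = defaultdict(float)
    for dept, _service, cost in records:
        totals[dept] += cost
    return dict(totals)

print(showback(usage_records))  # {'physics': 500.0, 'engineering': 310.0}
```

Chargeback would use the same aggregation but feed the totals into actual billing, which is why brokers typically implement the two together.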

Perhaps the most important asset YOURcloud brings to the table for DOE is how well it supports program autonomy. YOURcloud allows programs within the DOE to maintain full control over their workloads within the context of a common security baseline. Karmel’s cloud-of-clouds approach enables YOURcloud to broker any organization, through any device, to any service. This respect for program autonomy addresses the “not invented here” problem: program managers can leverage the capabilities of YOURcloud without feeling like the broker is pushing them to select services or follow policies that are not in line with their requirements.

It's not yet clear whether YOURcloud will define the characteristics of CSBs across the entire cloud-computing market, but NNSA's efforts have not gone unnoticed within the federal government. CSBs are a hot topic across both civilian and military agencies, with the General Services Administration (GSA) and the Defense Information Systems Agency (DISA) both fleshing out their respective CSB strategies.

That being said, there is no better way to prove a model than by implementing a working, successful example. By implementing a CSB that supports secure, hybrid cloud environments, NNSA and the DOE have set the bar for the next generation of cloud services brokers.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

You may also be interested in: