Wednesday, October 19, 2011

Top 10 pitfalls of P2P integration to avoid in the cloud

This guest post comes courtesy of Ross Mason, CTO and founder of MuleSoft. Disclosure: MuleSoft is a sponsor of BriefingDirect podcasts.

By Ross Mason

While integration isn’t necessarily a new problem, the unique challenges of integrating in the cloud require a new approach. Many enterprises, however, are still using point-to-point (P2P) solutions to address their cloud integration needs.

In order to tackle cloud integration successfully, we need to move beyond P2P integration and avoid repeating the same mistakes. To aid in that effort, here is a list (in no particular order) of the top 10 pitfalls of P2P integration to avoid repeating in the cloud:

1. Building vs. buying: If you have developers with integration experience in your IT department, you can have them build a custom P2P integration in house, rather than buy a packaged solution. Building your own integration, however, typically means that you will also have to manage and maintain a codebase that isn’t central to your business and is difficult to change.

2. Quickfire integrations: Let’s say you need to integrate two systems quickly and hire a developer to work on the project over a couple of days. You notice an improvement in business efficiency and see an opportunity to integrate additional systems. You hire the same developer and expect the same quickfire integrations, but the complexity of the project has increased exponentially. The takeaway? It’s always a good idea to approach integration systematically and establish a plan up front, rather than integrate your systems in an ad hoc P2P fashion.

3. Embedding integrations in your application: Although it might be tempting to embed P2P integrations in your web application, you should be cautious about this approach. It may be fine for really simple integrations, but over time, your integration logic becomes scattered in different web apps. Instead, you should think of integration as a separate tier of your application architecture and centralize this logic.
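The idea of a separate integration tier can be sketched in a few lines. This is a minimal, hypothetical illustration (the class and client names are invented, not from any particular product): every web app calls one shared tier, so the mapping and routing logic lives in a single codebase instead of being copy-pasted into each application.

```python
# Hypothetical sketch: instead of each web app talking to the CRM and
# billing systems directly, they all go through one integration tier.

class IntegrationTier:
    """Central place for cross-system logic: mapping, routing, error handling."""

    def __init__(self, crm_client, billing_client):
        self.crm = crm_client
        self.billing = billing_client

    def sync_customer(self, customer):
        # One canonical mapping, maintained in one place, instead of
        # being scattered across every web app that needs it.
        record = {"name": customer["name"], "email": customer["email"]}
        self.crm.upsert(record)
        self.billing.upsert(record)


# Stand-in backends for the demo; real clients would wrap the actual systems.
class FakeClient:
    def __init__(self):
        self.records = []

    def upsert(self, record):
        self.records.append(record)


crm, billing = FakeClient(), FakeClient()
tier = IntegrationTier(crm, billing)
tier.sync_customer({"name": "Ada", "email": "ada@example.com"})
```

Web apps then depend only on the tier's interface, not on each backend system, which is what makes the logic centralized and changeable in one place.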

4. Creating dependencies between applications: When you integrate applications in a P2P fashion, you create a dependency between them. For example, let’s say you’ve integrated App A and App B. When App A is modified or updated, you will need to change the integration that connects it to App B. You also need to re-test the integration to make sure it works properly. If you add App C to the mix, your workload can increase exponentially.
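The arithmetic behind that growth is easy to check: with point-to-point wiring, every pair of systems needs its own connection, so the count grows quadratically, while a central hub needs only one connection per system. A quick sketch:

```python
def p2p_links(n):
    """Connections needed when every system integrates directly with every other."""
    return n * (n - 1) // 2

def hub_links(n):
    """Connections needed when every system integrates once with a central hub."""
    return n

# Two or three systems look harmless; ten do not.
for n in (2, 3, 10):
    print(n, p2p_links(n), hub_links(n))
# 10 systems: 45 point-to-point integrations to build, test, and
# re-test on every change, versus 10 through a hub.
```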

5. Assuming everything always works: One of the consistently recurring mistakes of doing quick P2P integrations is assuming that things will not break. The reality is that integrations don’t always work as planned. As you integrate systems, you need to design for errors and establish a course of action for troubleshooting different kinds of errors. Error handling is particularly troublesome when integrating software-as-a-service (SaaS) applications, because you have limited visibility and control over the changes that SaaS vendors make to them.
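As one illustration of designing for errors, the sketch below wraps a call to a hypothetical SaaS endpoint (`fetch_record` is a stand-in, not a real API) in retries with backoff and a clear failure path, rather than assuming the call always succeeds:

```python
import time

def call_with_retries(fn, attempts=3, delay=0.1):
    """Retry a flaky integration call, raising only after all attempts fail."""
    last_error = None
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError as err:  # the kind of transient fault SaaS calls hit
            last_error = err
            time.sleep(delay * (2 ** i))  # simple exponential backoff
    # Surface a clear error for troubleshooting instead of failing silently.
    raise RuntimeError(f"integration failed after {attempts} attempts") from last_error


# Demo with a stand-in endpoint that fails twice, then succeeds.
calls = {"count": 0}

def fetch_record():
    calls["count"] += 1
    if calls["count"] < 3:
        raise ConnectionError("transient SaaS outage")
    return {"id": 42}

print(call_with_retries(fetch_record))  # succeeds on the third attempt
```

The point is not the specific backoff policy but that the error path is explicit: the caller gets either a result or a diagnosable failure, never a silent hang.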

Test each integration

6. It worked yesterday: Just because P2P integration worked for one project does not mean it will work for another. The key is to test each integration you build. Unfortunately, P2P integrations are often built and deployed quickly without sufficient planning or proper testing, increasing the chances for errors. Although it can be difficult and does require a decent amount of effort, testing integrations is absolutely critical.

7. Using independent consultants: Many companies are not staffed with developers who have enough integration expertise and hire consultants to resolve their integration issues. The problem with this approach is that you often have limited visibility into whatever the consultant delivers. If you need to make changes, you typically need to work with the same consultant, which is not always possible.

8. Creating single points of failure: As your P2P integration architecture grows in size and complexity, its chances of becoming a single point of failure in your entire network increase as well. Minimizing the potential for single points of failure should be a priority when it comes to integration, but the lack of decoupling in a P2P approach makes it hard to eliminate bottlenecks in your system.

9. Black-box solutions: Custom-built P2P solutions are usually black box in nature. In other words, they lack reporting capabilities that tell you what is happening between systems. This makes it very hard to debug problems, measure performance, or find out if things are working properly.

10. Creating a monster: Quick P2P integrations are relatively manageable when you have two or three systems to connect, but when you start adding other systems, your architecture quickly becomes a complicated mess. And because no two P2P integrations are exactly the same, managing your integrations becomes a major pain. If you invest in some design work up front, however, you will save yourself from having to throw away a tangled P2P architecture and start from scratch, under pressure, to find a new solution. With a well-thought-out design and a simple architecture, you can reduce the management burdens and costs associated with integration.

Ross Mason is the CTO and founder of MuleSoft. He founded the open source Mule project in 2003. Frustrated by integration "donkey work," he started the Mule project to bring a modern approach, one of assembly, rather than repetitive coding, to developers worldwide. Now, with the MuleSoft team, Ross is taking these founding principles of dead-simple integration to the cloud with Mule iON, an integration platform as a service (iPaaS).


Monday, October 17, 2011

VMworld case study: City of Fairfield uses virtualization to more efficiently deliver crucial city services

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.

Our next VMware case study interview focuses on the City of Fairfield, California, and how the IT organization there has leveraged virtualization and cloud-delivered applications to provide new levels of service in an increasingly efficient manner.

We’ll see how Fairfield, a mid-sized city of 110,000 in Northern California, has taken the do-more-with-less adage to its fullest, beginning interestingly with core and mission-critical city services applications.

This story comes as part of a special BriefingsDirect podcast series from the VMworld 2011 Conference. The series explores the latest in cloud computing and virtualization infrastructure developments.

Here to share more detail on how virtualization is making the public sector more responsive at lower costs is Eudora Sindicic, Senior IT Analyst Over Operations in Fairfield. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why virtualize mission-critical applications, things like police and fire support, first?

Sindicic: First of all, it’s always been challenging in disaster recovery and business continuity. Keeping those things in mind, our CAD/RMS systems for the police center and also our fire staffing system were high on the list for protecting. Those are Tier 1 applications that we want to be able to recover very quickly.

We thought the best way to do that was to virtualize them and set us up for future business continuity and true failover and disaster recovery.

So I put it to my CIO, and he okayed it. We went forward with VMware, because we saw they had the best, most robust, and mature applications to support us. Seeing that our back-end was SQL for those two systems, and seeing that we were just going to embark on a brand-new upgrading of our CAD/RMS system, this was a prime time to jump on the bandwagon and do it.

Also, with our back-end storage being NetApp, and NetApp having such an intimate relationship with VMware, we decided to go with VMware.

Gardner: So you were able to accomplish your virtualization and also gain the disaster recovery and business continuity benefits, but you pointed out that time was of the essence. How long did it take you?

Sindicic: Back in early fiscal year 2010, I started doing all the research. I probably did a good nine months of research before even bringing this option to my CIO. Once I brought the option up, I worked with my vendors, VMware and NetApp, to obtain best pricing for the solution that I wanted.

I started implementation in October and completed the process in March. So it took some time. Then we went live with our CAD/RMS system on May 10, and it has been very robust and running beautifully ever since.

Gardner: Tell me about your IT operations.

Sindicic: I have our finance system, an Oracle-based system, which consists of an Oracle database server and Apache applications server, and another reporting server that runs on a different platform. Those will all be virtual OSs sitting in one of my two clusters.

For the police systems, I have a separate cluster just for police and fire. Then, for regular day-to-day business, like finance and the other applications the city uses, I have a campus cluster, to keep those things separated and also to reduce maintenance downtime. That way, everything doesn't have to be affected if I'm moving virtual servers among systems, patching, and doing updates.

Other applications

We're also going to be virtualizing several other applications, such as a citizen complaint application called Coplogic. We're going to put that into the PD cluster as well.

We're using VMware 4.1 with ESXi servers. On the PD cluster, I have two ESXi servers, and on my campus cluster, I have three. I'm using vSphere 4, and it's been really wonderful having a good handle on that control.

Also, within my vSphere, vCenter server, I've installed a bunch of NetApp storage control solutions that allow me to have centralized control over one level snapshotting and replication. So I can control it all from there. Then vSphere gives me that beautiful centralized view of all my VMs and resources being consumed.

It's been really wonderful to have that level of view into my infrastructure, whereas when things were distributed, I didn't have the view I needed. I had to connect one by one to each of my systems to get that level of detail.

Also, there are some things that we've learned during this whole process. I went from two VLANs to four VLANs. When looking at your traffic and the type of traffic that's going to traverse the VLANs, you want to segregate that out big time, and you'll see a huge increase in your performance.

The other thing is making sure that you have the correct type of drives in your storage. I knew right off the bat that IOPS were going to be an issue, and then, of course, connectivity. We're using Brocade switches to connect to the back-end Fibre Channel drives for the server VMs, and for lower-end storage, we're using iSCSI.

Gardner: And how have the virtualization efforts within all of that worked out?

Sindicic: It’s been wonderful. We’ve had wonderful disaster recovery capabilities. We have snapshotting abilities. I'm snapshotting the primary database server and application server, which allows for snapshots up to three weeks in primary storage and six months on secondary storage, which is really nice, and it has served us well.

We already had a fire drill, where one report was accidentally deleted out of a database due to someone doing something -- and I'll leave it at that. Within 10 minutes, I was able to bring up the snapshot of the records management system of that database.

The user was able to go into the test database, retrieve his document, and then he was able to print it. I was able to export that document and then re-import it into the production system. So there was no downtime. It literally took 10 minutes, and everybody was happy.

... We are seeing cost benefits now. I don’t have all the metrics, but we’ve spun up six additional VMs. If you figure out the cost of the Dells, because we are a Dell shop, it would cost anywhere between $5,000 and $11,000 per server. On top of that, you're talking about the cost of the Microsoft Software Assurance for that operating system. That has saved a lot of money right there in some of the projects that we’re currently embarking on, and for the future.

We have several more systems that I know are going to be coming online and we're going to save in cost. We’re going to save in power. Power consumption, I'm projecting, will slowly go down over time as we add to our VM environment.

As it grows and it becomes more robust, and it will, I'm looking forward to a large cost savings over a 5- to 10-year period.

Better insight

Gardner: Was there anything that surprised you that you didn’t expect, when you moved from the physical to the virtualized environment?

Sindicic: I was pleasantly surprised by the depth of reporting I could actually see -- the graphs, the real metrics -- as we went along. As our CAD system came online into production, I could actually see utilization go up, and to what level.

I was also pleasantly surprised to be able to see when the backups would occur and how they would affect the system and the users on it. Because of that, we were able to time them for the least-used hours, and to learn what those hours were. I could actually tell from the system when it was least used.

It was real time and it was just really wonderful to be able to easily do that, without having to manually create all the different tracking ends that you have to do within Microsoft Monitor or anything like that. I could do that completely independently of the OS.

We're going to have some compliance issues, and it’s mostly around encryption and data control, which I really don’t foresee being a problem with VMware.



Gardner: We're hearing a lot here at VMworld about desktop virtualization as well. I don’t know whether you’ve looked at that, but it seems like you've set yourself up for moving in that direction. Any thoughts about mobile or virtualized desktops as a future direction for you?

On the horizon

Sindicic: I see that most definitely on the horizon. Right now, the only thing that's hindering us is cost and storage. But as storage goes down, and as more robust technologies come out around storage, such as solid state, and as the price comes down on that, I foresee that something definitely coming into our environment.

Even here at the conference I'm taking a bunch of VDI and VMware View sessions, and I'm looking forward to hopefully starting a new project with virtualizing at the desktop level.

This will give us much more granular control over not only what's on the user's desktop, but also patch management and malware and virus protection, doing it at the host level instead of at the PC level, which would be wonderful. It would give us really great control and, hopefully, decreased cost. We'd probably be using a different product than what we're using right now.

If you're doing virus protection at the host level, you're going to get a lot of bang for your buck, and you won't have any impact on the PC-over-IP. That's probably the way we'll go, with PC-over-IP.

Right now, storage, VLANing -- all of that has to happen before we can even embark on something like that. So there's still a lot of research going on on my part, as well as finding ways to mitigate costs, maybe trading in one thing to gain something else. There are things you can do to help make something like this happen.

... In city government, our IT infrastructure continues to grow as people are laid off and departments want to automate more and more processes, which is the right way to go. The IT staff remains the same, but the infrastructure, the data, and the support continues to grow. So I'm trying to implement infrastructure that grows smarter, so we don’t have to work harder, but work smarter, so that we can do a lot more with less.

VMware certainly allows that, with centralized control and management, and with the ability to dynamically update virtual desktops and virtual servers, along with patch management and the automation of that. You can take it to whatever level of automation you want, or anywhere in between, so that you can do a bit of checks and balances with your own eyes before the system goes off and does something itself.

Also, with the high availability and fault tolerance that VMware allows, it's been invaluable. If one of my systems goes down, my VMs automatically will be migrated over, which is a wonderful thing. We’re looking to implement as much virtualization as we can as budget will allow.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Sponsor: VMware.


Wednesday, October 12, 2011

As cloud and mobile trends drive user expectations higher, networks must now deliver applications faster, safer, cheaper

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: Akamai Technologies.

We hear about the post-PC era, but rarely does anyone talk about the post-LAN or even the post-WAN era. Yet the major IT trends of the day -- from mobile to cloud to app stores -- are changing the expectations we all have from our blended networks.

How are the campus networks of yesterday going to support the Internet-borne applications and media delivery requirements of tomorrow?

It’s increasingly clear that more users will be using more devices to access more types of web content and services. They want coordination among those devices for that content. They want it done securely with privacy, and they want their IT departments to support all of their devices for all of their work applications and data too.

From the IT managers' perspective, they want to be able to deliver all kinds of applications using all sorts of models -- from smartphones to tablets to zero clients, to HD web streaming, fat-client downloads, and website delivery -- across multiple public and private networks, with control and with ease.

This is all a very tall order, and networks will need to adjust rapidly or the latency and hassle of access and performance issues will get in the way of users, their new expectations, and their behaviors -- for both work and play.

The latest BriefingsDirect IT discussion is with an executive from Akamai Technologies, to delve into the rapidly evolving trends, and subsequently heightened expectations, that we're all developing around our networks. We are going to look at how those networks might actually rise to the task with Neil Cohen, Vice President of Product Marketing at Akamai Technologies. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Akamai is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Given the heightened expectations -- this always-on, hyper connectivity mode -- how are networks going to rise to these needs?

Cohen: Nobody wants the network to be the weak link, but changes definitely need to happen. Look at what’s going on in the enterprise and the way applications are being deployed. It’s changing to where they're moving out to the cloud. Applications that used to reside in your own infrastructure are moving out to other infrastructure, and in some cases, you don’t have the ability to place any sort of technology to optimize the WAN out in the cloud.

Mobile device usage is exploding. Things like smartphones and tablets are all becoming intertwined with the way people want to access their applications. Obviously, when you start opening up more applications through access to the internet, you have a new level of security that you have to worry about when things move outside of your firewall that used to be within it.

Gardner: How do you know where the weak link is when there is a problem?

Cohen: The first step is to understand just what many networks actually mean, because even that has a lot of different dimensions to it. The fact that things are moving out to public clouds means that users are getting access, usually over the internet. We all know that the internet is very different than your private network. Nobody is going to give you a service-level agreement (SLA) on the internet.

Something like mobile is different, where mobile networks have different attributes, different levels of oversubscription, and different bottlenecks that need to be solved. This really starts driving the need, first, to bring control to the internet itself, as well as to the mobile networks.

It also drives the need for performance analytics from a real end-user perspective. It becomes important to look at all the different choke points where latency can occur, and to bring them into one holistic view, so that you can troubleshoot and understand where your problems are.

There are a lot of different things that people are looking at to try to solve application delivery outside of the corporate network. Something we’ve been doing at Akamai for a long time is deploying our own optimization protocols into the internet that give you the control, the SLA, the types of quality of service that you normally associate with your private network.

And there are lots of optimization tricks that are being done for mobile devices, where you can optimize the network. You can optimize the web content and you can actually develop different formats and different content for mobile devices than for regular desktop devices. All of those are different ways to try to deal with the performance challenges off the traditional WAN.

Gardner: Are the carriers stepping up to the plate and saying, "We’re going to take over more of this network performance issue?"

Cohen: I think they're looking at it and saying, "Look, I have a problem. My network is evolving. It's spanning in lots of different ways, whether it's on my private network or out on the internet or mobile devices," and they need to solve that problem. One way of solving it is to build hardware and do lots of different do-it-yourself approaches to try to solve that.

Unwieldy approach

That’s a very unwieldy approach. It requires a lot of dollars and arguably doesn’t solve the problem very well, which is why companies look for managed services and ways to outsource those types of problems, when things move off of their WAN.

But at the same time, even though they're outsourcing it, they still want control. It's important for an IT department to actually see what traffic and what applications are being accessed by the users, so that they understand the traffic and they can react to it.

Gardner: I'm seeing a rather impressive adoption pattern around virtualized desktops, and there are a variety of ways of doing this. We've seen solutions from folks like Citrix, VMware, and Microsoft, and we're seeing streaming, zero-client, thin-client, and virtual desktop infrastructure activities in the data center -- a pure delivery of the full desktop and applications as a service.

Cohen: There are different unique challenges with the virtual desktop models, but it also ties into that same hyper-connected theme. In order to really unleash the potential of virtual desktops, you don’t only want to be able to access it on your corporate network, but you want to be able to get a local experience by taking that virtual desktop anywhere with you just like you do with a regular machine. You’re also seeing products being offered out in the market that allow you to extend virtual desktops onto your mobile tablets.

You have the same kind of issues again. Not only do you have different protocols to optimize for virtual desktops, but you also have to deal with the challenges of delivering across that entire ecosystem of devices and networks. That's an area we're investing in heavily as it relates to unlocking the potential of VDI, so that people have universal access and can take their desktops wherever they want to go.

Gardner: And is there some common thread to what we would think of in the past as acceleration services for things like websites, streaming, or downloads? Are we talking about an entirely new kind of infrastructure or is this some sort of a natural progression of what folks like Akamai have been doing for quite some time?

Cohen: It's a very logical extension of the technology we’ve built for more than a decade. If you look a decade ago, we had to solve the problem of delivering streaming video, real-time over the web, which is very sensitive to things like latency, packet loss, and jitter and that’s no different for virtual desktops. In order to give that local experience for virtual users, you have to solve the challenges of real-time communication back and forth between the client and the server.

Gardner: If I were an architect in the enterprise, it seems to me that many of my long-term cost-performance improvement activities of major strategic initiatives are all hinging on solving this network problem.

Business transformation

Cohen: What I'm hearing is more of a business transformation example, where the business comes down and puts pressure on the network to be able to access applications anywhere, to be able to outsource, to be able to offshore, and to be able to modernize their applications. That’s really mandating a lot of the changes in the network itself.

The pressure is really coming from the business, which is, "How do I react more quickly to the changing needs of the business without having IT in a position where they say, 'I can't.' " The internet is the pervasive platform that allows you to get anywhere. What you need is the quality of service guarantees that should come with it.

If you can help transform a business and you can do it in a way that is operationally more efficient at a lower cost, you’ve got the winning combination.

... Akamai continues to offer the consumer-based services as it relates to improving websites and rich media on the web. But now we have a full suite of services that provide application acceleration over the internet. We allow you to reach users globally while consolidating your infrastructure and getting the same kind of benefits you realize with WAN optimization on your private network, but out over the internet.

Security services

And as those applications move outside of the firewall, we’ve got a suite of security services that address the new types of security threats you deal with when you’re out on the web.

Gardner: Is there an analysis, a business intelligence benefit from doing this as well?

Cohen: What’s important is not only that you improve the delivery of an application, but that you have the appropriate insight in terms of how the application is performing and how people are using the application so that you can take action and react accordingly.

Just because something has moved out into the cloud or out on the Internet, it doesn’t mean that you can’t have the same kind of real-time personalized analytics that you expect on your private network. That’s an area we’ve invested in, both in our own technology investment, but also with some partnerships that provide real-time reporting and business intelligence in terms of our critical websites and applications.

Gardner: Not only are the types of applications changing, but is there a need to design and build these applications differently, in such a way that they are cloud-ready or hybrid-ready or mobile-ready?

Cohen: If I were to go back to the developers, I'd ask, "Do you really need to build different websites or separate apps for all these different form factors, or is there a better way to build one common source of code and then adapt it, using different techniques in the network and in the cloud, so you can reuse that investment over and over again?"

What I expect to see is more adoption of standard web languages. It means that you need to use good semantic design principles, as it relates to the way you design your applications. But in terms of optimizing content and building for mobile devices and mobile specific sites, a lot of that is going to be using standard web languages that people are familiar with and that are just evolving and getting better.

Websites are based on HTML, and with HTML5 the web is getting richer and more immersive, starting to approach the same kind of experience you get on your desktop.

Can we go back to the developers and get them to build on a standard set of tools that allow them to deal with the different types of connected devices out in the market? If you build one code base on HTML, for example, could you take that website you've built and render it differently in the cloud, letting it adapt on the fly for something like an iPhone, an Android, a BlackBerry, a 7-inch tablet, or a 9-inch tablet?
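The adaptation Cohen describes can be sketched in a few lines. This is a toy illustration only -- the device rules are invented for the example, and real edge services use far richer detection -- but it shows the shape of the idea: one code base, with a rendering variant selected per request by inspecting the User-Agent.

```python
def pick_variant(user_agent):
    """Choose a rendering variant for one HTML code base (illustrative rules only)."""
    ua = user_agent.lower()
    # Crude, example-only detection; production systems use full device databases.
    if "iphone" in ua or "android" in ua or "blackberry" in ua:
        return "phone"
    if "ipad" in ua or "tablet" in ua:
        return "tablet"
    return "desktop"

# One site, adapted per device, instead of one app per form factor.
print(pick_variant("Mozilla/5.0 (iPhone; CPU iPhone OS 5_0)"))  # phone
print(pick_variant("Mozilla/5.0 (Windows NT 6.1)"))             # desktop
```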

Gardner: So part of the solution to the many screens problem isn’t more application interface designs, but perhaps a more common basis for the application and services, and let the network take care of those issues on a screen to screen basis. Is that closer?

Cohen: That’s exactly right. More and more of the intelligence is actually moving out to the cloud. We’ve already seen this on the video side. In the past people had to use lots of different formats and bit rates. Now what they’re doing is taking that stuff and saying, "Give me one high quality source." Then all of the adaptation capabilities that are going to be done in the network, in the cloud, just simplify that work from the customer.

I expect exactly the same thing to happen in the enterprise, where the enterprise is one common source of code and a lot of the adaptation capabilities are done, again, that intelligent function inside of the network.

These are all hot topics. The WAN is becoming everything, but you really need to change your views as it relates to not just thinking about what happens inside of your corporate network, but with the movement of cloud, all of the connected devices, all of this quickly becoming the network.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: Akamai Technologies.


Tuesday, October 11, 2011

Complex IT security risks can only be treated with comprehensive response, not point products

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: HP.

This latest BriefingsDirect discussion takes on the rapidly increasing threat that enterprises face from complex IT security breaches.

In just the past year, the number of attacks is up, the costs associated with them are higher and more visible, and the risks of not securing systems and processes are therefore much greater. Some people have even called the rate of attacks a pandemic.

The path to reducing these risks, even as the threats escalate, is to confront security at the framework and strategic level, and to harness the point solutions approach into a managed and ongoing security enhancement lifecycle.

As part of the series of recent news announcements from HP, this discussion examines how such a framework process can unfold, from workshops that allow a frank assessment of an organization’s vulnerabilities, to tailored framework-level approaches that can transform a company based on its own specific needs.

Here to describe how a "fabric of technology," a "framework of processes," and a "lifecycle of preparedness" can all work together to help organizations become more secure -- and keep them secure -- is Rebecca Lawson, Director of Worldwide Security Initiatives at HP. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why has the security vulnerability issue come to a head?

Lawson: Open up the newspaper and you see another company getting hit almost every day. As an industry, we've hit a tipping point with so many different security-related issues -- for example, cyber crime, hacktivism, and nation-state attacks. When you couple that with the diversity of devices that we use, and the wide range of apps and data we access every day, you can see how these dynamics create a very porous environment for an enterprise.

So we are hearing from our customers that they want to step back and think more strategically about how they're going to handle security, not just for the short term, when threats are near and present, but also from a longer term point of view.

Gardner: What do you think are some of the trends that are supporting this vulnerability?

For more detail on the extent of security breaches, read the
Second Annual Cost of Cyber Crime Study.

Lawson: In HP’s recent research, we've found that 30 percent of people know that they've had a security breach caused by unauthorized internal access, and over 20 percent have experienced an external breach. So breaches happen both internally and externally, and they happen for different reasons. Sometimes a breach is caused by a disgruntled customer or employee. Sometimes, there is a political motive. Sometimes, it's just an honest error ... Maybe someone grabs some paper off a printer that has proprietary information on it, and then it gets into the wrong hands.

There are so many different points at which security incidents can occur; the real trick is getting your arms around all of them and focusing your attention on those that are most likely to cause reputation damage or financial damage or operational damage.

We also noticed in our research that the number of attacks, particularly on web applications, is just skyrocketing. One of the key areas of focus for HP is helping our customers understand why that’s happening, and what they can do about it.

Gardner: It also seems to me that, in the past, a lot of organizations could put up a walled garden, and say, "We're not going to do a lot of web stuff. We're not going to do mobile. We're going to keep our networks under our control." But nowadays that’s really just not possible.

If you're not doing mobile, not looking seriously at cloud, not making your workers able to access your assets regardless of where they are, you're really at a disadvantage competitively. So it seems to me that this is not an option, and that the old defensive posture just doesn’t work anymore.

Lawson: That is exactly right. In the good old days, we did have a walled garden, and it was easy for IT or the security office to just say “no” to newfangled approaches to accessing the web or building web apps. Of course, today they can still say no, but IT and security offices realize that they can't thwart the technology-related innovation that helps drive growth.

Our customers are keenly aware that their information assets are the most important assets now. That’s where the focus is, because that’s where the value is. The problem is that all the data and information moves around so freely now. You can send data in the blink of an eye to China and back, through multiple applications, where it’s used in different contexts. The context can change so rapidly that you have to really think differently about what it is you're protecting and how you're going to go about protecting it. So it's a different game now.

Gardner: And as we confront this "new game," it also appears that our former organizational approach is wanting. If we've had a variety of different security approaches under the authority of different people -- not really coordinated, not talking to each other, not knowing what the right hand and left hand are doing -- that’s become a problem.

So how do we now elevate this to a strategic level, getting a framework, getting a comprehensive plan? It sounds like that’s what a lot of the news you've been making these days is involved with.

No silver bullet

Lawson: You're exactly right. Our customers are realizing that there is no one silver bullet. You have to think across functional areas, lines of business, and silos.

Job number one is to bring the right people together to assess the situation. The people are going to come from all over the organization -- IT, security and risk, AppDev, legal, accounting, supply chain. Everyone should not only be aware of where vulnerabilities might be, or where the most costly vulnerabilities might be, but also look ahead and say, "Here is how our enterprise is innovating with technology -- let's make sure we build security into those innovations from the get-go."

There are two takeaways from this. A structured, methodical framework approach helps our customers get everyone on the same page and get the processes well-structured from the top down, so that everyone is aware of how the different security processes work and how they benefit the organization so that it can innovate.

One of the other elements is that every enterprise has to deal with a lot of short-term fixes. [But] it's also about long-term thinking, about building security in from the get-go; this is where companies can start to turn the corner. I'll go back again to web apps: building security into the very requirements, and making sure that all the way through architecture design, testing, and production you are constantly testing for security.
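The "build security in from the get-go" lifecycle can be pictured as a gate at each stage. The sketch below is a minimal, invented illustration (the finding format and stage names are assumptions, not any HP product behavior): promotion is blocked while any high-severity finding remains open.

```python
# Minimal sketch of a build-security-in release gate: promotion from
# one lifecycle stage to the next is blocked while any high-severity
# finding is open. Stage names and the finding format are invented.
STAGES = ["design", "test", "production"]

def may_promote(findings: list) -> bool:
    """Allow promotion only when no high-severity findings remain open."""
    return not any(f["severity"] == "high" and f["open"] for f in findings)

findings = [{"id": "XSS-1", "severity": "high", "open": True}]
blocked = not may_promote(findings)   # promotion blocked while XSS-1 is open
findings[0]["open"] = False           # finding remediated
allowed = may_promote(findings)       # promotion now allowed
```

The point of the sketch is the placement of the check: the same gate runs at every stage, so security testing is continuous rather than a one-time pre-release step.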

Gardner: What are the high-level building blocks to the framework approach?

Read more on HP's security framework
Rethinking Your Enterprise Security:
Critical Priorities to Consider

Lawson: The framework that I just mentioned is our way of looking at what you have to do across securing data, managing suppliers, and ensuring the security of physical assets, but our approach to executing on that framework is a four-point approach.

We help our customers first assess the situation, which is really important just to have all eyes on what's currently happening and where your current vulnerabilities may lie. Then, we help them to transform their security practices from where they are today to where they need to be.

Then, we provide technologies and services to help them manage that on an ongoing basis, so that more and more of the security controls can be automated. And then, we help them optimize, because security just doesn't stand still. So we have tools and services that help our customers keep their eye on the right ball as new threats evolve or new compliance requirements come down the pike.

Gardner: What is HP Secure Boardroom, and why is it an important part of this organizational shift?

Get more information on the executive dashboard:
Introducing the HP Secure Boardroom.

Lawson: The Secure Boardroom combines dashboard technology with a good dose of intellectual property we have developed that helps us generate the APIs into different data sources within an organization.

The result is that a CISO can look at a dashboard and instantly see what's going on all across the organization. What are the threats that are happening? What's the rate of incidents? What's going on across your planning spectrum?

Having visibility into disparate systems is step one. Over the several years we've been working on this, we've codified it into a system that any enterprise can now use to pull together a consistent C-level view, so that you have the right kind of transparency.

Half the battle is just seeing what's going on every day in a consistent manner, so that you are focused on the right issues, while discovering where you might need better visibility or where you might need to change process. The Secure Boardroom helps you to continually be focused on the right processes, the right elements, and the right information to better protect financial, operational, and reputation-related assets.

... Because we've been in the systems management and business service management business for so long, I would elevate this up to the level of business service management.

We already have a head start with our customers, because they can already see the forest for the trees with regard to any one particular service. Let's just say it's a service in the supply chain, and that service might comprise network elements and systems and software and applications and all kinds of data going through it. We're able to tie the management of that, through traditional management tools like what we had with OpenView and what we have with our business service management, to the view of security.

When you think about vulnerabilities, threats, and attacks, the first thing you have to do is have the right visibility. The technology in our security organization helps us see and find vulnerabilities really quickly.

Integration with operations

Because we have our security technology tied with IT operations, there is an integration between them. When the security technology detects something, it can automatically issue an alert that is picked up by our incident management system, which might then invoke our change management system, which might then invoke a prescribed operations change, and we can do that through HP Operations Orchestration.

It really is a triad -- security, applications, operations. At HP, we’re making them work together. And because we have such a focus now on data correlation, on Big Data, we're able to bring in all the various sources of data and turn that into actionable information, and then execute it through our automation engine.
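The flow just described (a detection event opens an incident, a high-severity incident raises a change request, and an orchestration step runs the fix) can be sketched as below. The function names, event fields, and action strings are illustrative assumptions, not HP Operations Orchestration's actual API.

```python
# Illustrative sketch (not HP's actual APIs) of the alert-to-remediation
# flow: a detection event opens an incident, a high-severity incident
# raises a change request, and an orchestration step applies the fix.
from dataclasses import dataclass, field

@dataclass
class Incident:
    source: str
    severity: str
    actions: list = field(default_factory=list)

def handle_detection(event: dict) -> Incident:
    """Turn a security detection event into an incident with next actions."""
    incident = Incident(source=event["detector"], severity=event["severity"])
    if event["severity"] == "high":
        incident.actions.append("open-change-request")       # change management
        incident.actions.append("run-remediation-workflow")  # orchestration engine
    else:
        incident.actions.append("queue-for-analyst-review")
    return incident

inc = handle_detection({"detector": "ips", "severity": "high"})
```

The design choice the sketch captures is that remediation is driven by the incident record itself, so the security, change, and operations systems stay synchronized automatically rather than by hand-offs.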

... For example, we have a technology that lets you scan software and look for vulnerabilities, both dynamic and static testing. We have ways of finding vulnerabilities in third-party applications. We do that through our research organization which is called DVLabs. DV stands for Digital Vaccine. We pull data in from them every day as to new vulnerabilities and we make that available to the other technologies so we can blend that into the picture.

Focused technology

The right kind of security fabric has to be composed of different technologies that are very focused on certain areas. For example, our intrusion protection technology does the packet inspection and can identify bad IP addresses. It can identify that certain vulnerabilities are associated with a transaction, and it can stop a lot of traffic right at the gate before it gets in.

The reason we can do that so well is that we've already woven in information from our applications group and information from our researchers out in the market. So we've been able to pull these together and make more value out of them working as one.
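The gate-keeping described here can be reduced to a toy filter: traffic from known-bad sources is dropped before it reaches applications. The addresses below come from documentation-reserved ranges, and the idea of a daily research feed populating the blocklist is an assumption drawn from the surrounding discussion of DVLabs.

```python
# Toy illustration of dropping traffic from known-bad IP addresses at
# the gate. In practice the blocklist would be populated by a research
# feed; these addresses are from documentation-reserved ranges.
BAD_IPS = {"203.0.113.9", "198.51.100.7"}

def admit(packet: dict) -> bool:
    """Return True if the packet's source is not on the blocklist."""
    return packet["src_ip"] not in BAD_IPS

packets = [{"src_ip": "203.0.113.9"}, {"src_ip": "192.0.2.1"}]
allowed = [p["src_ip"] for p in packets if admit(p)]
```

Real intrusion prevention systems inspect far more than the source address, but the economics are the same: the cheaper the check at the gate, the less bad traffic the applications behind it ever see.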

Gardner: Is there a path now toward security as a service, or some sort of a managed service, hybrid model?

Lawson: A lot of people think that when the words cloud and security are next to each other, bad things happen, but in fact, that’s not always the case.

Once an enterprise has the right plan and strategy in place, they start to prioritize which parts of their security are best handled in-house, with their own expertise, and which parts of the security picture they can or should hand off to another party. In fact, one of our announcements this week is that we have a service for endpoint threat management.

If you're not centrally managing your endpoint devices, a lot of incidents can happen and slip through the cracks -- everything from an employee just losing a phone to an employee downloading an application that may have vulnerabilities.

So managing your endpoint devices in general, as well as the security associated with those endpoints, makes a lot of sense. And it’s a discrete area where you might consider handing the job to a managed services provider, who has more expertise as well as better economic incentives.

Application testing

Another great example of using a cloud service for security is application testing. We are finding that a lot of the web apps out in the market aren't necessarily developed by application developers who understand that there's a whole lifecycle approach involved.

In fact, I've been hearing interesting statistics about the number of web apps that are written by people formerly known as webmasters. These folks may be great at designing apps, but if you're not following a full application lifecycle management practice, which invokes security as one of the base principles of designing an app, then you're going to have problems.

What we found is that this explosion of web apps has not been followed closely enough by testing. Our customers are starting to realize this and now they're asking for HP to help, because in fact there are a lot of app vulnerabilities that can be very easily avoided. Maybe not all of them, but a lot of them, and we can help customers do that.

So testing as a service, whether as a cloud service or as a hosted or managed service, is a good idea, because you can do it immediately. You don't incur the time and money to spin up a testing center of excellence; you can use the one that HP makes available through our SaaS model.

Gardner: As part of your recent announcements, you're moving more toward a managed services provider role.

Lawson: One of the great things about many of the technologies that we've purchased and built in the last few years is that we're able to use them in our managed services offerings.

I'll give you an example. Our ArcSight product for Security Information and Event Management is now offered as a service. That's a service that really gets better the more expertise you have and the more focused you are on that type of event correlation and analysis. A lot of companies just don't want to invest in developing that expertise, so they can use it as a service.

We have other offerings, across testing, network security, and endpoint security, that are all offered as a service. So we have a broad spectrum of delivery-model choices for our customers. We think that’s the way to go, because we know that most enterprises want a strategic partner in security. They want a trusted partner, but they're probably not going to get all of their security from one vendor, of course, because they're already invested.

We like to come in and look first at establishing the right strategy, putting together the right roadmap, and making sure it's focused on helping our customer innovate for the future, as well as putting in some stopgap measures to thwart the cyber threats that are a near and present danger. And then we give them the choice to decide what's best for their company, given their industry, their compliance requirements, their time to market, and their financial posture.

There are certain areas where you're going to want to do things yourself, certain areas where you are going to want to outsource to a managed service. And there are certain technologies already at play that are probably just great in a point solution context, but they need to be integrated.

Integrative approach

Most of our customers already have lots of good things going on, but they just don't all come together. That's really the bottom line here. It has to be an integrative approach. It has to be a comprehensive approach. And the reason the bad guys are so successful at causing havoc is that they know that all of this is disconnected. They know that security technologies tend to be fragmented, and they're going to take advantage of that.

I'd definitely suggest going to hp.com/go/enterprisesecurity. In particular, there is a report that you can download and read today called the "HP DVLabs’ Cyber Security Risks Report." It’s a report that we generate twice a year, and it has some really startling information in it. And it’s based not on theoretical stuff, but on things that we see; we have aggregated data from different parts of the industry, as well as data from our customers, that show the rate of attacks and where the vulnerabilities are typically located. It’s a real eye-opener.

So I would just suggest that you search for the DVLabs’ Cyber Security Risks Report and read it, and then pass it on to other people in your company, so that they can become aware of what the situation really is. It’s a little startling, when you start to look at some of the facts about the costs associated with application breaches or the nature of complex persistent attacks. So awareness is the right place to start.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: HP.


Tuesday, October 4, 2011

Take a deep dive with Embarcadero on how enterprise app stores help drive productivity

Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: Embarcadero Technologies.

The popularity of mobile devices like smartphones and tablets has energized users on the one hand, but on the other hand it’s caused IT and business leaders to scramble to adjust to new models of applications delivery.

That's why enterprise app stores are quickly creating productivity and speed-to-value benefits for PC users and IT departments alike as they grapple with the new models around the consumerization of IT. The author of a recent Ovum white paper on why app stores matter says they are increasingly important for enterprises as they consider ways to better track, manage, and distribute all of their applications.

Join this podcast discussion then as we examine the steps businesses can now take to build and develop their own enterprise app stores. We'll further see what rapid and easy access to self-service apps on PCs and notebook computers through such app stores is doing for businesses.

And we’ll learn how app stores are part of the equation for improved work and process success on and off the job. Furthermore, we uncover how Embarcadero’s AppWave solution brings the mobile apps experience to millions of PC users in the enterprise workplace.

The panel consists of Tony Baer, Principal Analyst at Ovum; Michael Swindell, Senior Vice President of Products and Marketing at Embarcadero Technologies, and Richard Copland, Principal Innovation Consultant at Logica. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Embarcadero is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Richard, in your looking over the landscape for IT innovations, is there something about the app store model that you think will encourage users to adopt new technologies and new applications faster?

Copland: Undoubtedly. The whole socialization trend, which I see as probably the biggest driver behind this, is about the way in which people use software and the way in which people comment on software.

The organization will cluster around the toolkits for which the feedback from the users is positive. I can think of one large global financial organization here that has 5,000 apps within their world. They would look to simplify their landscape by over 60 percent, because they recognize that they've got so many kinds of individual pockets of activity going on in the organization.

And you need to support those individual pockets of activity in the long tail, alongside the mainstream enterprise apps, such as Windows-based or Office-based tools, which the majority will use. But if you can tap into an environment in which you are giving people what they want, then the return on investment (ROI) from that is going to come a lot faster.

My role as a Principal Innovation Consultant is effectively twofold. It's to find new things and introduce new things to our clients. Something innovative to me is something that's new to you and provides a benefit. This can be cash, people, or green ideas. I spend my day looking at cool new stuff, which means ways of working, technologies, partners, and even wacky research coming out of the various universities here in Europe.
At Logica, we're a business and technology service company. We provide business consulting, system integration, and outsourcing to our clients around the world including many of Europe’s largest businesses.

Generation Next

For me, these app stores are also about the whole Generation Next piece, which is about a whole new generation that is educated and tech-savvy. They're multitasking all the time. They behave as consumers, purchasing products and customizing them to their needs in terms of their lifestyles. And they’re regularly sharing insight and comment on things that work well for them.

That’s playing out in terms of lifestyle and that's being brought into the business scenario, whereby the formal and informal hierarchies of organizations are blurring.

Gardner: Tony, this sounds like it’s something quite new.

Baer: From the end-user standpoint, there certainly is quite a win to this. But we also have to look at the fact that this is going to change the way IT serves the organization. At least in this aspect, IT is really going to become more of a service provider. And there are a lot of implications for that.

For one thing, IT has to be more responsive, but they also have to work on a shorter fuse, almost like a just-in-time model.

... I was a little bit surprised because there is certainly a concept leap from a $1.99 little applet that you pull down from the iPhone app store or from the Android marketplace to a full-blown enterprise desktop application.

That being said, it’s not surprising, given that there’s been a huge demand from the bottom-up, from the people in the workplace. So it’s a phenomenon that’s probably better known as the consumerization of IT -- "I have these sophisticated mobile devices and tablets. Why can’t I get that easy to use experience on my regular machine for my day job?"

Therefore, the demand for the comfort and convenience of that was inevitably bound to spread into the enterprise environment. You've seen that manifested in a number of ways. For example, companies have basically embraced more social collaboration. And you’re also starting to see some use of many of these new form factors.

So again, what Embarcadero has been starting to introduce is symbolic in a way that’s really not surprising.

But there's no free lunch in all this; it still requires management. For example, we still need to worry about security governance, managing consumption, and making sure that you lock down, or secure, the licensing issues. As I said, there’s no free lunch, but compare that to the overhead of the traditional application distribution and deployment process.

So again, from the end user standpoint, it should be a win-win, but from the IT standpoint, it's going to mean a number of changes. Also, this is breaking new ground with a number of the vendors. What they need to do is check on things such as licensing issues, because what you're really talking about is a more flexible deployment policy.

Gardner: Michael Swindell, tell me a little bit about AppWave and what it takes for an IT organization to make the transition from that long process that Tony outlined to a more streamlined app-store approach.

Swindell: The best way to describe AppWave is as a pretty simple three-step process. The first step is taking traditional software, which is traditionally complex for end users and for organizations to manage. This includes things like installations, un-installations, and considerations about how applications affect the user's environment.

Then, we convert those traditional software applications into apps that are self-contained, don’t require installation, and can be streamed and run for a user wherever they are, bringing the mobile-like experience of mobile software to more complex traditional desktop PC software.

AppWave has tooling that allows users to take their applications and convert them into apps. And that’s any type of application -- commercial application or internally developed.

That's the first step. The second is to centralize those apps in an app store, where users can get to them, and where organizations can have visibility into their usage, manage access to them, etc. So the second step is simply centralizing those apps.

The third is the user experience. One of the key drivers behind the success of apps in the mobile space has been the visibility that users have into application availability. It’s very easy for users to search and find an app as they need it.

Think about how a user uses a mobile phone to come up with an app. Maybe they’re walking down the street, they see a business, and they have an idea, or they want directions to something. They can simply search in an app store on their mobile device and immediately get an app to solve that problem.

If you look in the business space and inside the workplace, when a user has a problem, they don’t really have a mechanism to sit down and search to solve a problem and then get an application to solve it immediately.

As we talked about earlier, and as Tony described really well, the process, once they identify an application to solve a problem, can take weeks or months to roll out. So you don’t have that instant feedback.

Instantaneous experience

The user experience has to be instantaneous. An area that we focused on very heavily with AppWave is to provide the users an ability to search, find apps based on the problems that they’re trying to solve, and instantly run those apps, rather than having to go through a long process.
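The search-and-run experience described above can be sketched as a keyword match over a centralized catalog. Everything below is invented for illustration: the app names, the tags, and the "streaming" launch are assumptions standing in for AppWave's actual behavior.

```python
# Hypothetical sketch of an enterprise app store's search-and-run flow:
# a centralized catalog maps problem keywords to self-contained apps
# that launch with no install step. App names and tags are invented.
CATALOG = [
    {"name": "ER Modeler", "tags": {"database", "diagram"}},
    {"name": "Quick Charts", "tags": {"report", "chart"}},
]

def find_apps(keywords: set) -> list:
    """Return names of catalog apps whose tags overlap the keywords."""
    return [a["name"] for a in CATALOG if a["tags"] & keywords]

def run(app_name: str) -> str:
    """Simulate an instant, installation-free (streamed) launch."""
    return f"streaming {app_name}"

hits = find_apps({"chart", "dashboard"})
```

The contrast with the traditional model is the absence of any install or rollout step between `find_apps` and `run`: the user searches on a problem and the app is available immediately.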

Gardner: Can we perhaps make the association that app stores can fundamentally change the way workers behave in an innovation sense?

Copland: Absolutely. You’re on the money. We talked a little bit about looking at the mobile aspects of it and moving to this on-demand usage and the challenges for the organization to do that.

Certainly, the components within the AppWave solution give you the opportunity to move to more of what I would describe as smart working or remote working, where the user doesn't necessarily have to come into the office to access the tools that are traditionally provided to them at their desk.

If you start remote working or are given a broader range of remote access, then you can maintain a much stronger work-life balance. So if you're in a situation where you’ve got a young family and you need to take the kids to school, you can come on and go off the company network and use the tools provided to you in a much more user-friendly, flexible environment. That's certainly the user's perspective.

From the business’s perspective, I start moving to a scenario where I don't necessarily need to maintain real estate where, if I’ve got 5,000 users, I need to have 5,000 desks. That certainly becomes quite empowering across the rest of the organization, and other stakeholders -- facilities officers, business managers -- start taking real notice of those types of savings and of how work is achieved.

Gardner: How far can the app store model be taken in terms of legacy, the installed base of apps?

Swindell: Our vision is that any type of application in the organization will eventually be supported by AppWave. The initial support is for PC apps in organizations, which cover the vast majority of productivity applications that end users need. It's also where the largest problem set is, both from an end-user perspective and from an organization's perspective.

So we're tackling the hardest problem first, and then our plan is to roll in other types of apps, web apps, and applications that you might be using in an organization, using other types of delivery technologies.

But the idea is to take any of these applications and present them as an app inside the AppWave ecosystem. So a user can have a centralized way to search for any type of app, whether it’s a corporate HR app, a web application, a hosted software-as-a-service (SaaS) application, or a PC application. Certainly, mobile would be an obvious direction as well.

There are really two sides to the benefit of using the app store methodology. There's the organizational side: understanding application usage, as you said, maybe sunsetting applications, and understanding how applications are used within the organization, so that they can make good decisions.

Then we have the user side, where users have a lot more information that they can provide that’s very useful for both the organization and other users.

The app store metaphor works very well in sharing that type of information. It gives the organization usage information and statistics, and the demand information that's valuable for the organization to plan and understand their application usage. It also provides information to other users on the applicability of applications for certain scenarios, whether applications are good or bad for a particular scenario.

This has worked well in the mobile space with public app stores, and we see that there's a lot of applicability inside the firewall, inside organizations, to be able to use this information and create more value out of their applications and to help users get more value and understanding about their applications.

One of the things that AppWave and the app store concept can do is to help create a centralized app view of the different types of applications and even the different types of services in your organization, and to be able to understand what’s available.

Common presentation

There are also opportunities for the same types of socialization and sharing of information and knowledge about services using the app store concept, as there is with apps.

The important thing is to take these different types of applications and present them in a common way in the same place, so that it really doesn’t matter whether the app is a web app or it’s a PC app. Users can find them, run them, and share information about them at the same place.

Gardner: Tony, back to your Ovum white paper, what do you see as the efficiency aspects to this?

Baer: Compare this model to the traditional application deployment model ... Number one, it's much more of a long-fused process. There is elaborate planning of the rollout. You're trying to figure out all the different client targets that you're trying to address, and even if you do have locked-down machines, you're still going to have issues. Then, you package the release. Then, you regression test it to death. Then comes distribution, and you actually get the thing installed, hopefully during some off hour, let's say, at 3 a.m. Then, you prepare for all the support calls.

That's a pretty involved process, and it consumes a lot of time for the end user, who is waiting for the functionality that he or she may want -- or not. It's also, of course, a considerable overhead for the IT organization.

If you take that all away into a more modular model, more like a radio broadcast model, essentially it becomes a lot more efficient. You lose all this lead time, and as Michael was talking about, you then get all the visibility for all these apps being consumed. End users have more sway. As long as they are authorized to use these apps, they have this choice.

So it's not that all of a sudden they have a whole number of apps that are loaded on their machine, whether they like it or not. We haven't done anything to quantify this, because trying to quantify productivity is like asking “what's the cost of downtime?” And in a lot of sectors that can be a very subjective number. But intuitively, this model, if it scales out, should basically provide a much lower cost of ownership and much greater satisfaction.

Gardner: Richard Copland, as someone who is out there hunting down innovations that they can bring to their user organization and their clients, was there anything about AppWave or app stores in general for enterprise use that was interesting and attractive to you that we perhaps haven’t hit on yet?

Copland: We have a global innovation venture partner program, and AppWave and the Embarcadero team were our recent winner. They went up against competition from around the world. We believe the app store concept has a great deal within it in terms of the user experience, the socialization aspects, and the collaboration aspects.

Bridging point

The area which we haven't touched on so much is that it's a bridging point between your legacy systems and your more visionary cloud-type solutions where you really are SaaS, on-demand and pay-per-click.

The thing that will kill innovation is operating slowly. One of the biggest blockers that organizations face with regard to innovation is how they are set up and the speed at which they react to their own internal ideas.

Swindell: You can look at this as, in a way, a cultural preparation for the transition to the cloud, if indeed the cloud is suitable for specific parts of your application portfolio.

... Having an on-premises private app store that runs within your organization really addresses a lot of those concerns, and it uses the cloud simply to deliver new applications and apps from ISVs and from other vendors.

Once they are inside your organization, they're operating within your security and governance environment. So you don't really have to worry about those concerns, but it still delivers a lot of the benefits of the cloud user experience and its on-demand nature.

Gardner: I know this is going a little bit out further into the future and perhaps into the hypothetical. It sounds as if you can effectively use this app store model and technology and approach like AppWave to be a gateway for your internal PC apps, but that same gateway might then be applicable for all these other services.

Driven by demand

Swindell: The foundation is there, and I think it will be driven by user demand. Every time we talk to a customer about AppWave, the list of possibilities and of places customers want to take the environment is exciting, and the list of long-term uses continues to grow.

So we're building facilities today to connect private AppWaves into our cloud infrastructure, so that we can certainly deliver apps, but other types of services could connect into that as well.

Gardner: Okay, and just to be clear, AppWave is available now. I believe there's a 30-day free trial, is that correct?

Swindell: Yes, there is a free trial, and we also offer a free version of AppWave that organizations can download and use today with free apps. There's an entire catalog of free apps that are included and streamed down from our cloud.

So you can get set up and started with AppWave, using free apps, in your organization. You can then add your own internal custom apps or commercial licenses that the organization already holds. So if you have hundreds of commercial licenses, you can add those in, or add your own internally developed apps. You can go to www.embarcadero.com/appwave and try it for free.
Listen to the podcast. Find it on iTunes/iPod. Read a full transcript or download a copy. Learn more. Sponsor: Embarcadero Technologies.

You may also be interested in: