Tuesday, January 15, 2013

The Networked Economy forces new directions for collaboration in business and commerce, says author Zach Tumin

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.

New levels of collaboration have emerged from an increasingly networked world, and business leaders and academic researchers alike are now sorting out what the new capabilities mean for both commerce and society at large.

To learn more about how rapid trends in collaboration and business networking are driving new innovation and social interactions, BriefingsDirect invited a Harvard Kennedy School researcher and a chief strategist at global business network provider Ariba to a panel discussion.

We were joined by Zach Tumin, Senior Researcher at the Science, Technology, and Public Policy Program at the Harvard Kennedy School, and Tim Minahan, Senior Vice-President of Global Network Strategy and Chief Marketing Officer at Ariba, an SAP company.

Tumin is co-author with William Bratton of 2012’s Collaborate or Perish: Reaching Across Boundaries in a Networked World, published by Random House. Minahan, at Ariba, is exploring how digital communities are redefining and extending new types of business and collaboration for advanced commerce. 

The chat is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Zach, in your book "Collaborate or Perish," you're exploring collaboration and you show what it can do when it's fully leveraged. And, Tim, at Ariba you've been showing how a more networked economy is producing efficiencies for business and even extending the bounds of what we would consider commerce to be. I'd like to start with looking at how these come together.

Tumin: The opportunities for collaboration are expanding even as we speak. The networks around the world are volatile. They're moving fast. The speed of change is coming at managers and executives at a terrific pace. There is an incredible variety of choice, and people are empowered with these great digital devices that we all have in our pockets.

That creates a new world, where the possibilities are tremendous for joining forces, whether politically, economically, or socially. Yet it's also a difficult world, where we don't have authority, if we have to go outside of our organizations -- but where we don't have all the power that we need, if we stay within the boundaries of our charters.

So, we're always reaching across boundaries to find people who we can partner with. The key is how we do that. How do we move people to act with us, where we don't have the authority over them? How do we make it pay for people to collaborate?

A lot of change

Minahan: Collaboration certainly is the new business imperative. Companies have leaned out their operations over the past couple of years, after spending the previous 30 years focusing on their internal operations and efficiencies, driving greater performance, and getting greater insights.

When they look outside their enterprise today, it's still a mess. Most of the transactions still occur offline or through semi-automated processes. They lack transparency into those processes and efficiency in executing them. As a result, that means lots of paper and lots of people and lots of missed opportunities, whether it's in capitalizing on getting a new product to market or achieving new sales with new potential customers.

What business networks and this new level of collaboration bring is four things. It brings the transparency that's currently lacking into the process. So you know where your opportunities are. You know where your orders are. You know where your invoices are and what your exposure to payables is.

It brings new levels of efficiency in executing against those processes, much faster than you ever could before, through a mostly automated process. It brings new types of collaboration, which I am sure we will get into later in this segment.

The last part, which I think is most intriguing, is that it brings new levels of insights. We're no longer making decisions blindly. We no longer need to double order, because we don’t know if that shipment is coming in and we need to stockpile, because we can't let the refinery go down. So it brings new levels of insight to make more informed decisions in real time.

Gardner: Zach, in your book you're basically describing a new workforce, and some companies and organizations are recognizing that and embracing it. What’s driving this? What has happened that is basically redefining the workforce?

It's the demographics

Tumin: It's in the demographics, Dana. Young people are accustomed to doing things today that were not possible 10 years ago, with digital power in everyone's pocket or pocketbook and digital wallets in markets that are ready, willing, and able to deal with them and to welcome them. That means there's pressure on organizations to integrate and take advantage of the power that individuals have in the marketplace and that they bring into the workforce.

Everyone can see what's going on around the world. We're moving to a situation where young people are feeling pretty powerful. They're able to search, find, discover, and become experts all on their own through the use of technologies that 10 years ago weren’t available.

So a lot of the traditional ways of thinking about power, status, and prestige in the workforce are changing as a result, and the organizations that can adapt and adopt these kinds of technologies and turn them to their advantage are the ones that are going to prevail.

Gardner: Tim, with that said, there's this demographic shift, the shift in mentality toward self-started discovery, recognizing that the information you want is out there and that it's simply a matter of applying your need to the right data and then executing on some action as a result. Your network seems ready-made for that.


Minahan: The reality of the community is that it is organic. It takes time to grow. At Ariba we have more than 15 years of transactional history, relationship history, and community-generated content that we've amassed. In fact, over the past 12 months alone, nearly a million connected companies have executed more than $400 billion in purchase, sales, invoice, and payment transactions over the Ariba Network.

Aggregate that over 15 years, and you have some great insights beyond just trading efficiencies for those companies participating there. You can deliver insights to them so that they can make more informed decisions, whether that’s in selecting a new trading partner or determining when or how to pay.

Should I take an early-payment discount in order to reduce my cost basis? From a sales standpoint, or seller's standpoint, should I offer an early-payment discount in order to accelerate my cash flow? There are actually a host of examples where companies are taking advantage of this today, and it's not just for the large companies. Let me give you two examples.

From the buyer side, there was a company called Plaid Enterprises. Plaid is a company that you're very familiar with if, like me, you have daughters who are interested in hobbies and crafts. They're one of the leading providers of the do-it-yourself crafts that you would get at your craft store.

Like many other manufacturers, they were a midsized company, but they decided a couple of years ago to offshore their supply. So they went to the low-cost region of China. A few years into it, they realized that labor wages were rising, their quality was declining and, worse than that, it was sometimes taking them five months to get their shipments.

New sources of supply

So they went to the Ariba Network to find new sources of supply. Like many other manufacturers, they thought, "Let’s look in other low cost regions like Vietnam." They certainly found suppliers there, but what they also found were suppliers here in North America.

They went through a bidding process with the suppliers they found there, with qualifying information on who was doing business with whom and how they had performed in the past, and they wound up selecting a supplier that was 30 miles down the road. They wound up getting a 40 percent cost reduction from what they had previously paid in China, and their lead times were cut from more than 120 days down to 30.

That's from the buy side. From the sell side, the inverse is true. I'll use the example of a company called Mediafly. It's a fast-growing company that provides mobile marketing services to some of the largest companies in the world: large entertainment companies and large consumer products companies.

They were asked to join the Ariba Network to automate their invoicing, and they have gotten some great efficiencies from that. They've gotten the transparency to know when their invoices are paid, but one other thing was really interesting.

Once they were in the networked environment and had automated those processes, they were able to do what we call dynamic discounting. That means that when they want their cash, they can make offers to the customers they're connected to on the Ariba Network and accelerate their cash.

So they were able not only to shrink their quote-to-settle cycle by 84 percent, but they gained access to new financing and capital through the Ariba network. So they could go out and hire that new developer to take on that new project and they were even able to defer a next round of funding, because they have greater control over their cash flow.
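To make those early-payment economics concrete, here is a minimal sketch of the standard discount arithmetic; the "2/10 net 30" terms and all figures are illustrative assumptions, not numbers from Ariba or Mediafly.

```python
# Illustrative arithmetic behind an early-payment discount decision.
# Terms like "2/10 net 30" mean: take a 2% discount if you pay by
# day 10; otherwise the full amount is due on day 30.

def annualized_discount_return(discount: float,
                               discount_days: int,
                               net_days: int) -> float:
    """Annualized return earned by paying early to capture the discount."""
    # Paying early saves the discount on the face value, in exchange
    # for parting with cash (net_days - discount_days) sooner.
    period_return = discount / (1.0 - discount)
    periods_per_year = 365.0 / (net_days - discount_days)
    return period_return * periods_per_year

if __name__ == "__main__":
    rate = annualized_discount_return(0.02, 10, 30)
    print(f"Taking 2/10 net 30 is worth ~{rate:.1%} annualized")
    # ~37.2% -- typically far above a buyer's cost of capital, which is
    # why dynamic discounting can pay off for both buyer and seller.
```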

Gardner: Zach, in listening to Tim, particularly that discovery process, we're really going back to some principles that define being human -- collaboration, word of mouth, sharing information about what you know. It just seems that we have a much greater scale that we can deploy this. How is that fundamentally changing how people are relating in business and society?

Tumin: The scaling means that things can get big in a hurry and they can get fast in a hurry. So you get a lot of volume, things go viral, and you have a velocity of change here. New technologies are introducing themselves to the market. You have extraordinary volatility on your network and that can rumble all the way through, so that you feel it seconds after something halfway around the world has put a glitch in your supply chain. You have enormous variability. You're dealing with many different languages, both computer languages and human languages.

That means that the potential for collaboration really requires coming together in ways that help people see very quickly why they should work together, rather than go it alone. They may not have a choice, but people are still status quo animals. We're comfortable in the way that we have always done business, and it takes a lot to move us out.

It comes down to people

When crisis hits, it's not exactly a great time to build those relationships. Speaker of the House Tip O'Neill here in the United States once said, "Make friends before you need them." That's good advice. We have great technology and we have great networks, but at the end of the day, it's people that make them work.

People rely on trust, and trust relies on relationships. Technology here is a great enabler, but it's no silver bullet. It takes leadership to get people together across these networks and then to be able to scale and take advantage of what all these networks have to offer.

Gardner: Tim, another big trend today of course, is the ability to use all of this data that Zach has been describing, and you are alluding to, about what’s going on within these networks. Now, of course, with this explosive scale, the amount of that data has likewise exploded.

Minahan: We've only begun to scratch the surface on this. When you look at the data that flows through a business commerce network, there are really three levels. One is the transactional data, the actual transactions that are going on, knowing what commodities are being purchased and so on. Then, there's relationship data, knowing the relationship between a given buyer and seller.

Finally, there's what I would call community data, or community generated data, and that can take the form of performance ratings, so buyers rating suppliers and suppliers rating buyers. Others in the community can use that to help determine who to do business with or to help to detect some risk in their supply chain.

There is also community-generated content, like request for proposal (RFP) templates. A lot of our community members use a "give a template, take a template" approach, in which they offer RFP templates that have worked well for them to other members of the community. These can be templates on how to source temp labor or how to source corrugated packaging.

We have dozens and dozens of those. When you aggregate all of this, the last part of the community data is the benchmarking data. It's understanding not just process benchmarking but also spend benchmarking.

One of the reasons we're so excited about getting access to SAP HANA is the ability to offer this information up in real time, at the point of either purchase or sale decision, so that folks can make more informed decisions about who to engage with or what terms to take or how to approach a particular category. That is particularly powerful and something you can’t get in a non-networked model.

Gardner: One of the things I sense, as people grapple with these issues, is a difficulty in deciding where to let creative chaos reign, where to exercise control, and where to lock down and exercise traditional IT imperatives around governance, command and control, and systems of record.

Zach, in your book with William Bratton, are there any examples that you can point to that show how some organizations have allowed that creativity of people to extend their habits and behaviors in new ways unfettered and then at the same time retain that all-important IT control?

Tumin: It's a critical question that you've raised. We have young people coming into the workforce who are newly empowered. They understand how to do all the things they need to do without waiting in line and without waiting for authority. Yet they're coming into organizations that have strong cultures and strong command-and-control hierarchies.

There's a clash happening here, and the strong companies are the ones that find a path to embracing the creativity of networked folks within the organization and across their boundaries, while maintaining focus on a set of core deliverables that everyone needs to meet.

Wells Fargo

There are plenty of terrific examples. I'll give you one. At Wells Fargo, Steve Ellis was the Executive Vice President responsible for developing the online capability for the wholesale shop. He had to take his group offline to develop the capability, but he had two responsibilities. One was to the bank, which had a history of security and trust. That was its brand. That was its reputation. But he was also looking to the online world, to variability, to choice, and to developing exactly the things that customers want.

Steve Ellis found a way of working with his core group of developers to engage customers in the co-design of Wells Fargo's online presence for the wholesale side. As a result, they were able to develop systems that were so integrated with their customers over time that they could move very, very quickly and adapt as new developments required, and yet they gave full rein to the creativity of the designers, as well as of the customers, in coming to these new ways of doing business.

So here's an example of a pretty staid organization, 150 years old with a reputation for trust and security, making its way into the roiling waters of the networked world and finding a path through engagement that helped it prevail in the marketplace over a decade.

Minahan: I'd also like to talk about the dynamics going on that are fueling more B2B collaboration. There is certainly the need for more productivity. So that's a constant in business, particularly as we're in tight environments. Many times companies are finding they are tapped out within the enterprise.

Becoming more dependent

Companies are becoming more and more dependent on getting insights and collaborating with folks outside their enterprise.

So policies do need to be put down. Just as many businesses put down policies on social media, there need to be policies on how we share information and with whom. But the great thing about technology is that it can enforce those controls. It can help to put in checks and balances and give you full transparency and an audit trail, so you know that these policies are being enforced. You know that there are certain parameters around the security of data.

You don't have those controls in the offline world. When paper leaves the building, you don't know. But when a transaction is shared or when information is shared over a network, you, as a company, have greater control. You have a greater insight, and the ability to track and trace.

So there is a balancing act going on between opening the kimono, as we talked about in the '80s, being able to share more information with your trading partners, and now being able to do it in a controlled environment that is digitized and process-oriented. You have the controls you need to ensure you're protecting your business, while also growing your business.
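As a minimal sketch of what "controls plus an audit trail" can look like in practice -- an illustration with made-up names, not Ariba's implementation -- a share function can consult a policy before releasing a document and append a tamper-evident entry to an append-only log:

```python
import hashlib
import json
import time

# Hypothetical policy: which partners may see which document types.
POLICY = {"acme-corp": {"invoice", "purchase_order"},
          "globex": {"purchase_order"}}

AUDIT_LOG = []  # append-only; each entry chains the previous hash

def _chain_hash(entry: dict, prev_hash: str) -> str:
    payload = json.dumps(entry, sort_keys=True) + prev_hash
    return hashlib.sha256(payload.encode()).hexdigest()

def share_document(partner: str, doc_type: str, doc_id: str) -> bool:
    """Release a document only if policy allows, and record the attempt."""
    allowed = doc_type in POLICY.get(partner, set())
    entry = {"ts": time.time(), "partner": partner,
             "doc_type": doc_type, "doc_id": doc_id,
             "decision": "allow" if allowed else "deny"}
    prev = AUDIT_LOG[-1]["hash"] if AUDIT_LOG else ""
    entry["hash"] = _chain_hash(entry, prev)
    AUDIT_LOG.append(entry)          # every attempt is traceable
    return allowed

if __name__ == "__main__":
    print(share_document("acme-corp", "invoice", "INV-1042"))  # True
    print(share_document("globex", "invoice", "INV-1042"))     # False
```

Because each log entry hashes the one before it, tampering with any past record breaks the chain, which is the kind of verifiable trail you cannot get when paper leaves the building.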


Gardner: Tim, for the benefit of our audience, help us better understand how Ariba is helping to safely allow creativity and new types of collaboration, while at the same time maintaining the important principles of good business.

Minahan: The problem we solve at Ariba is quite basic, yet it's one of the biggest impediments to business productivity and performance that still exists. That's inter-enterprise collaboration, or collaboration between businesses.

We talked about the deficits there earlier. Through our cloud-based applications and business network, we eliminate all of the hassles, the paper, the phone calls, and the other manual or disjointed activities that companies go through each day to do things like find new suppliers, find new business opportunities as a seller, place or manage orders, collaborate with customers, suppliers, and other partners, or just get paid.

Nearly a million businesses today are digitally connected through the Ariba Network. They're empowered to discover one another in new ways, getting qualifying information from the community, so that they know who the other party is even if they haven't met them before. It's similar to what you see on eBay. When you want to sell your golf clubs, you know that the buyer has a performance history of doing business with others.

They can connect with known trading partners much more efficiently and then automate the processes and the information flows between each other. Then, they can collaborate in new ways, not only to find one another, but also to get access to preferred financing or new insights into market trends that are going on around particular commodities.

That's the power of bringing a business network to bear in today's world. It's this convergence of cloud applications, the ability to access and automate a process, where those that share the process share the underlying infrastructure, and a digitally connected community of relevant parties, whether that's customers, suppliers, potential trading partners, banking partners, or other participants involved in the commerce process.

Sharing data

Gardner: When it comes to exposing the data from these processes, assuming we can do it safely, what can we do now that really wasn’t possible five years ago?

Tumin: One of the things that we're seeing around the world is that innovation is taking place at the level of individual apps and individual developers. There's a great example in London. London Transport had a data set and a website that people would use to find out where their trains were, what the schedule was, and what was happening on a day-to-day basis.

As we all know, passengers on mass transit like to know what's happening on a minute-to-minute basis. London Transport decided to open up their data, and the open data movement is very, very important in that respect. They opened the data and let developers build apps for folks. A number of app developers did, and put these apps out on the system. Initially, the demand was so high that it crashed London Transport's site.

London Transport then took their data and put it into the cloud, where they could handle the scale much more effectively. Within a few days, they had gone from a thousand hits per day on the website to 2.3 million in the cloud.
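A minimal sketch of the pattern that makes that scale affordable (the feed URL is hypothetical): cache the open-data feed for a short TTL, so millions of reads translate into a handful of origin fetches.

```python
import time
import urllib.request

# Hypothetical open-data endpoint; stands in for a transit status feed.
FEED_URL = "https://example.org/transit/line-status.json"

_cache = {"body": None, "fetched_at": 0.0}
TTL_SECONDS = 30  # one upstream fetch serves every request in this window

def get_line_status() -> bytes:
    """Serve cached data when fresh; hit the origin at most once per TTL."""
    now = time.time()
    if _cache["body"] is None or now - _cache["fetched_at"] > TTL_SECONDS:
        with urllib.request.urlopen(FEED_URL) as resp:
            _cache["body"] = resp.read()
        _cache["fetched_at"] = now
    return _cache["body"]
```

With a 30-second TTL, even 2.3 million daily requests cost the origin at most 2,880 fetches a day.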

The ability to scale is terribly important. The ability to innovate, to turn these open data sets over to communities of developers, and to make the data available to people the way they want to use it, is terribly important. And the kinds of industry-government relations that make this possible are critical as well.

So across all those dimensions -- technology, people, politics, and the platform -- the data has to line up. You need governance, and you need people to support it, make it work, trust each other, and share information. These are the keys to collaboration today.

Gardner: Zach, last word to you. What do we get? What's the payoff, if we can balance this correctly? If we can allow these new wheels of innovation to spin, to scale up, but also apply the right balance, as Tim was describing, for audit trails and access and privilege controls? If we do this right, what's in the offing?

Tumin: I think you can expect four things, Dana. First, you can expect innovation that's faster, with ideas that work right away for partners. The partners who collaborate deeply, right from the start, get their products right without too much error built in, and they can get them to market faster.

Second, you're going to rinse out the cost of rework, whether it's from carrying needless inventory or from handling paper that you don't have to touch. Wherever there is cost involved, you're going to be able to rinse that out.

Third is that you're going to be able to build revenues by dealing with risk. You're going to take advantage of customer insight. You're going to make life better and that's going to be good news for you and the marketplace.

Constant learning

The fourth is that you have an opportunity for constant learning, so that insight moves to practice faster. That's really important. The world is changing so fast, with such volatility, velocity, volume, and variability, that being able to learn and adapt is critical. That means embracing change, setting out the values that you want to lead by, and helping people understand them.

Great leaders are great teachers. The opportunity of the networked world is to share that insight and loop it across the network, so that people understand how to improve every day and every way the core business processes that they're responsible for.

Gardner: Allow me to extend a big thanks to our guests, Zach Tumin, Senior Researcher at the Science, Technology, and Public Policy Program at Harvard Kennedy School, and the co-author with William Bratton of Collaborate or Perish: Reaching Across Boundaries in a Networked World, and Tim Minahan, Senior Vice-President of Global Network Strategy and Chief Marketing Officer at Ariba.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba.


Tuesday, January 8, 2013

Learn how a telecoms provider takes strides to make applications security pervasive

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how a major telecommunications provider is tackling security, managing the details and the strategy simultaneously, and extending that value onto their many types of customers.

Here to explore these and other enterprise IT security issues, we're joined by our co-host for this sponsored podcast, Raf Los, who is the Chief Security Evangelist at HP Software.

And we also welcome our special guest, George Turrentine, Senior IT Manager at a large telecoms company, with a focus on IT security and compliance. George started out as a network architect, transitioned to security architect and, over the past 12 years, has focused on application security, studying vulnerabilities in web applications using dynamic analysis and, more recently, static analysis. George holds the CISSP, CISM, and CRISC certifications.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: George, many of the organizations that I'm familiar with are very focused on security, sometimes at a laser level. They're very focused on tactics, on individual technologies and products, and on looking at specific types of vulnerabilities. But I sense that, sometimes, they might be missing the strategy, the whole greater than the sum of the parts, and that there is a lack of integration in how they approach security.

I wonder if that's what you're seeing, and whether that's an important aspect of keeping a large telecommunications organization robust when it comes to its security posture.

Turrentine: We definitely are at the time and place where attacks against organizations have changed. It used to be that you would have a very focused attack against an organization by a single individual or a couple of individuals. It would be a brute-force type attack. In this case, we're seeing more and more that applications and infrastructure are being attacked, not brute force, but more subtly.

Somebody who is trying to effect an advanced persistent threat (APT) against a company is not looking to set off any alarms within the organization. They're trying to stay below the radar, doing a little bit at a time and breaking it up over a long period, so that people don't necessarily see what's going on.
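Defenders counter this "low and slow" pattern by aggregating weak signals over long windows instead of alerting on single noisy events. Here is a minimal sketch of the idea, with illustrative thresholds rather than anything from the guest's environment:

```python
from collections import defaultdict, deque

# Track failed-login events per source over a long window; a low-and-slow
# attacker stays under per-hour alarms but accumulates over weeks.
WINDOW_SECONDS = 14 * 24 * 3600   # two weeks
THRESHOLD = 50                    # illustrative; tune to your baseline

events = defaultdict(deque)       # source -> timestamps of failures

def record_failure(source: str, ts: float) -> bool:
    """Return True if this source crosses the long-window threshold."""
    q = events[source]
    q.append(ts)
    while q and ts - q[0] > WINDOW_SECONDS:   # evict stale events
        q.popleft()
    return len(q) >= THRESHOLD
```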

Gardner: Raf, how does that jibe with what you are seeing? Is there a new type of awareness that is, as George points out, subtle?

Los: Subtlety is the thing. Nobody wants to be a bull-in-a-china-shop hacker. The reward may be high, but the risk of getting caught and getting busted is also high. The notion that somebody is going to break in and deface your website is childish at best today. As somebody once put it to me, the good hackers are the ones you catch months later; the great ones, you'll never see.

That's what we're worried about, right? Whatever buzzwords we throw around and use, the reality is that attacks are evolving, attackers are evolving, and they're evolving faster than we are and faster than we have defenses for.

As I've said before, it's like being out in a dark field chasing fireflies. We tend to chase the shiny, blinky thing of the day, rather than doing pragmatic security that is relevant to the company or the organization that you're supporting.

Gardner: One of the things I've seen is that there is a different organization, even a different culture, in managing network security, as opposed to, say, application security, and that often they're not collaborating as closely as they might. And that opens some cracks between their different defenses.

George, it strikes me that in the telecommunications arena, the service providers are at an advantage, where they've got a strong network history and understanding and they're beginning to extend more applications and services onto that network. Is there something to be said that you're ahead of the curve on this bridging of the cultural divide between network and application?

Turrentine: It used to be that we focused a whole lot on the attack and the perimeter and trying to make sure that nobody got through the crunchy exterior. The problem is that, in the modern network scenario, when you're hosting applications, etc., you've already opened the door for the event to take place, because you've had to open up pathways for users to get into your network, to get to your servers, and to be able to do business with you. So you've opened up these holes.

Primary barrier

Unfortunately, a hole that's opened is an avenue of attack. So the application now has become the primary barrier for protecting data. A lot of folks haven't necessarily made the transition yet to understanding that application security is actually your front line of attack and defense within an organization.

It means that you have to now move into an area where applications not only can defend themselves, but are also free from vulnerabilities or coding flaws that can easily allow somebody to grab data that they shouldn't have access to.

Gardner: Raf, it sounds as if, for some period of time, the applications folks may have had a little bit of an easy go at it, because the applications were inside a firewall. The network was going to be protected, therefore I didn't have to think about it. Now, as George is pointing out, the applications are exposed. I guess we need to change the way we think about application development and lifecycle.

Los: Dana, having spent some time in an extremely large enterprise, starting in about 2001, for a number of years, I can't tell you the number of times application owners would come back and say, "I don't feel I need to fix this. This isn't really a big risk, because the application is inside the firewall."

Even going back that far, though, that was still a cop-out, because at that time the perimeter was continuing to erode. Today, it's just about gone. That's the reality.

So this erosion of perimeter, combined with the fact that nothing is really internal anymore, makes this all difficult. As George already said, applications need not just to be free of bugs, but actually be built to defend themselves in cases where we put them out into an uncertain environment. And we'll call the Internet uncertain on a good day and extremely hostile on every other day.

Turrentine: Not only that, but now developers are developing applications to make them feature rich, because consumers want feature-rich applications. The problem is that those same developers aren't educated and trained in how to produce secure code.

The other thing is that too many organizations have a tendency to look at the big event and the possibility of it taking place. Yet hackers aren't looking for the big event. They're actually looking for the small backdoor that they can quietly come in through and then leverage that access. They leverage the trust between applications and servers within the infrastructure to promote themselves to other boxes and other locations and get to the data.

Little applications

We used to take for granted that it was protected by the perimeter. But now it isn’t, because you have these little applications that most security departments ignore. They don’t test them. They don’t necessarily go through and make sure that they're secure or that they're even tested with either dynamic or static analysis, and you are putting them out there because they are "low risk."

Gardner: Let’s chunk this out a little bit. On one side, we have applications that have been written over any number of years, or even decades, and we need to consider the risks of exposing them, knowing that they're going to get exposed. So is that a developer’s job? How do we make those older apps either sunsetted or low risk in terms of being exposed?

And on the other side, we've got new applications that we need to develop in a different way, with security instantiated into the requirements right from the get-go. How do you guys parse either side of that equation? What should people be considering as they approach these issues?

Turrentine: I'm going to go back to the fact that even though you may put security requirements in at the beginning, in the requirements phase of the software development lifecycle (SDLC), many developers are going to take the easiest path to what is required and not necessarily understand how to make it more secure.

This is where the education system right now has let us down. I started off programming 30 years ago. Back then, there was a very finite area of memory that you could write an application into. You had to write overlays. You had to make sure that you moved data in and out of memory and took care of everything, so that the application could actually run in the space provided. Nowadays, we have bloat. We have RAM bloat. We have systems with 16 to 64 gigabytes of RAM.

Los: Just to run the operating system.

We've gotten careless

Turrentine: Just to run the operating system. And we've gotten careless. We've gotten to where we really don’t care. We don’t have to move things in and out of memory, so we leave it in memory. We do all these other different things, and we put all these features and functionality in there.

The schools, when they used to teach you how to write in very small areas, taught how to optimize the code, how to fix the code, and in many ways, efficiency and optimization gave you security.

Nowadays, we have bloatware. Our developers are going to college, they're being trained, and all they're learning is how to add features and functionality. The grand total of training they get in security is usually a one-hour lecture.

You've got people like Joe Jarzombek at the Department of Homeland Security (DHS), with a Software Assurance Forum that he has put together. They're trying to get security back into the colleges, so that we can teach developers that are coming up how to develop secure code. If we can actually train them properly and look at the mindset, methodologies, and the architecture to produce secure code, then we would get secure applications and we would have secure data.

Gardner: That’s certainly a good message for the education of newer developers. How about building more of the security architect role into the scrum, into the team that’s in development? Is that another cultural shift that seems to make sense?

Turrentine: Part of it also is the fact that application security architects, whom I view differently from a more global security architect, tend to have a myopic view. They're limited, in many cases, by their education and their knowledge, as we all are.

Face it, we all have those same limitations. Part of the training that needs to be provided to folks is to think outside the box. If all you're doing is defining the requirements for an application based upon the current security knowledge of the day, and not trying to think outside the box, then the application is already obsolescent by the time it's actually put into production.

Project into the future

You have to start thinking ahead of the evolution that's going on in the attacks, see where it's going, and then project two or three years into the future to be able to truly architect what needs to be there for today's application, before its release.

Gardner: What about legacy applications? We've seen a lot of modernization. We're able to move to newer platforms using virtualization, cutting the total cost when it comes to the support and the platform. Older applications, in many cases, are here to stay for quite a few years longer. What do we need to think about, when security is the issue, as these apps get more exposure?

Turrentine: One of the things is that if you have a legacy app, one of the areas they always try to update, if they're going to update it at all, is to write some sort of application programming interface (API) for it. Then you've just opened the door, because once you have an API, if the underlying legacy application hasn't been securely built, you've invited everybody to come steal your data.

So in many ways, legacy applications need to be evaluated and protected, either by a wrapper application or by something else that will protect the data and the application, letting it run and providing access to it without necessarily exposing it.
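As a minimal sketch of that wrapper idea -- all endpoint names, keys, and fields here are hypothetical -- the legacy system stays off the public network, and callers reach only a proxy that authenticates them, validates input, and returns a whitelisted slice of the data:

```python
# A minimal wrapper-application sketch: the legacy system is never
# exposed directly; callers hit this proxy, which authenticates them
# and returns only whitelisted fields.
import requests
from flask import Flask, jsonify, request, abort

app = Flask(__name__)

LEGACY_URL = "http://legacy.internal:8080/customer"  # not internet-facing
API_KEYS = {"partner-key-123"}                        # issued out of band
SAFE_FIELDS = {"customer_id", "status", "open_orders"}

@app.route("/api/customer/<cust_id>")
def get_customer(cust_id: str):
    if request.headers.get("X-API-Key") not in API_KEYS:
        abort(401)                 # authenticate before touching legacy
    if not cust_id.isdigit():
        abort(400)                 # validate input the legacy app won't
    resp = requests.get(f"{LEGACY_URL}/{cust_id}", timeout=5)
    if resp.status_code != 200:
        abort(502)
    record = resp.json()
    # Expose only the whitelisted fields; account numbers, SSNs, etc.
    # never leave the wrapper.
    return jsonify({k: v for k, v in record.items() if k in SAFE_FIELDS})

if __name__ == "__main__":
    app.run(port=8443)
```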

I know over the years everybody has said that we need to be putting out more and more web application firewalls (WAFs). I have always viewed a WAF as nothing more than a band-aid, and yet a lot of companies will put a WAF out there and think that after 30 days they've written the rules, they're done, and they're now secure.
A WAF, unless it is tested and updated on a daily basis, is worthless.

Los: That's the trick. You just hit a sore spot for me, because I ran into that in a previous life, and it stunk really badly. We had a mainframe app that had been ported along the way and that the enterprise could not live without. They put a web interface on it to make it remotely accessible. If that doesn't make you want to run your head through a wall, I don't know what will.

On top of that, I complained loudly enough and showed them that I could manipulate everything I wanted to. SQL injection was supposedly a brand-new thing in 2004 or so, and it wasn't. They said, "Fine, WAF, let's do WAF." I said, "Let me just make sure we're going to do this while we go fix the problem." No, no. We could either fix the problem or put the WAF in. Remember, that's what the payment card industry (PCI) said back then.
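For readers who haven't seen the underlying flaw, the fix Raf was arguing for is usually a small change: from string-built SQL to a parameterized query. A minimal illustration (sqlite3 is used purely for portability):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cr3t')")

user_input = "x' OR '1'='1"   # a classic injection payload

# Vulnerable: the payload rewrites the query and returns every row.
vulnerable = f"SELECT * FROM users WHERE name = '{user_input}'"
print(conn.execute(vulnerable).fetchall())   # leaks alice's row

# Fixed: a parameterized query treats the payload as a literal string.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```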

Tactics and strategy

Gardner: So let's get back to this issue of tactics and strategy. Should there be someone who is looking at both sides of the equation, the web apps and the legacy vulnerabilities that are increasingly coming to the fore, as well as that new development? How do we approach this problem?

Turrentine: One of the ways you approach it is that security should not be an organization unto itself. Security has to have some prophets and some evangelists -- we're getting into religion here -- who go out throughout the organization, train people, get them to think about how security should be, and then provide an interchange of information back and forth.

That’s one of the things that I've set up in a couple of different organizations, what I would call a security focal point. They weren’t people in my group. They were people within the organizations that I was to provide services to, or evaluations of.

They would be the ones that I would train and work with to make sure that they were the eyes and ears within the organizations, and I'd then provide them information on how to resolve issues and empower them to be the primary person that would interface with the development teams, application teams, whatever.

If they ran into a problem, they had the opportunity to come back, ask questions, and get educated in a different area. That sort of militia is what we need within organizations.

I've not seen a single security organization that could actually get the headcount it needs. Yet this way, you're not paying for headcount; you're getting people dotted-lined to you, working with you and relying on you. You end up having people who can take the message where you can't necessarily take it on your own.

Gardner: Raf, in other podcasts that we've done recently we talked about culture, and now we're talking organization. How do we adjust the organization inside companies so that security becomes a horizontal factor, rather than group oversight? I think that's what George was getting at: it becomes inculcated in the organization.

Los: Yeah. I worked under a brilliant CISO a number of years back, a gentleman by the name of Dan Conroy. Some of you guys know him. His strategy was to split the security organization, not even close to down the middle, but unevenly, into strategy, governance, and operations.

Strategy and governance became the team that decided what was right, and we were the architects. We were the folks who decided what was the right thing to do, roughly and conceptually how to do it, and who should do it. Then we made sure that we did regular audits and performed governance activities around how it was being done.

Then, the operational part of security was moved back into the technology unit. So the network team had a security component to it, the desktop team had a security component to it, and the server team had security components, but they were all dotted line employees back to the CISO.

Up to date

They didn’t have direct lines of reporting, but they came to our meetings and reported on things that were going on. They reported on issues that were haunting them. They asked for advice. And we made sure that we were up to date on what they were doing. They brought us information, it was bidirectional, and it worked great.

If you're going to try to build a security organization that scales to today's pace of business, that's the only way to do it, because otherwise you're going to have to ask for $10 million in budget and 2,000 new headcount, and neither of those is going to be possible.

Gardner: Moving to looking at the future, we talked about some of the chunks with legacy and with new applications. What about some of the requirements for mobile and cloud?

As organizations are being asked to go with hybrid services delivery, there is even more opportunity for exposure, both to the cloud and to a mobile edge. What can we advise people to consider, both organizationally and tactically, for these sorts of threats and challenges?

Turrentine: Any time you move data outside the organization that owns it, you're running into problems, whether it's bring your own device (BYOD) or a cloud that is a public offering. Private cloud is internal; it's just another way of munging virtualization and calling it something new.

But when you start handling data outside your organization, you need to be able to care for it in a proper way. With mobile, a lot of the current IDEs and SDKs try to handle everything as one size fits all. We need to send a message back to the owners of those SDKs that they need to provide secure and protected areas within the device for specific data, so that it can either be encrypted or processed in a different way, hashed, whatever it is.

Then, you also need to be able to properly and cleanly delete or remove that data, should something try to attack it or remove it without going through the normal channel, namely the application.
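A minimal sketch of the "protected area plus clean deletion" idea -- illustrative only; a production mobile app would keep the key in the platform keystore rather than in process memory -- encrypts records before they touch storage, so deletion reduces to destroying the key:

```python
# Encrypt-before-store sketch: sensitive fields never touch storage in
# the clear, and "deleting" the data reduces to destroying the key.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # in a real app: platform keystore
cipher = Fernet(key)

def store(record: bytes) -> bytes:
    """Return the ciphertext that would be written to device storage."""
    return cipher.encrypt(record)

def load(blob: bytes) -> bytes:
    return cipher.decrypt(blob)

blob = store(b"card=4111-1111-1111-1111")
print(load(blob))                  # readable only while the key exists

# Crypto-shredding: once the key is gone, every stored blob is
# unrecoverable, which is about the cleanest "wipe" a device can offer.
key = None
cipher = None
```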

Secure evolution

I don't think anybody has a handle on that one yet, but I think that, as we start working with the organizations and with the owners of the IDEs, we can get to the point where we have a more secure evolution of mobile OSes and are able to protect the data.

Gardner: I am afraid we will have to leave it there. With that, I would like to thank our co-host, Rafal Los, Chief Security Evangelist at HP Software. And I'd also like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Raf through his personal blog, Following the White Rabbit, as well as through the Discover Performance Group on LinkedIn.

I'd also like to extend a huge thank you to our special guest, George Turrentine, Senior IT Manager at a large telecoms company.

You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover Performance Podcast Series on iTunes under BriefingsDirect.

Wednesday, December 12, 2012

Global open-source vendors gain new leg up in selling to US agencies, thanks to favorable Talend ruling

Open-source provider Talend has received a favorable advisory ruling from the U.S. Customs and Border Protection (CBP) agency concerning the government's ability to purchase open-source software, opening the way for all software vendors to increase their share of business with US federal agencies.

The CBP has determined that software products comply with the Trade Agreement Act (TAA) when that software is manufactured in what is known as a "designated country," even if the majority of its source code was created in a non-designated country. [Disclosure: Talend is a sponsor of BriefingsDirect podcasts.]

The US TAA says that government agencies may acquire only products or services produced in certain countries -- known as designated countries. This has sometimes hampered the agencies from acquiring open-source software if some of the code was developed outside of those countries, even when the majority of production took place inside designated countries.

“Country of origin” issues have sometimes been used as a pretext to make a case against the procurement of open-source software. Talend conducts the vast majority of its software production in the U.S., France, or Germany but, like many manufacturers, it also seeks talent in countries that can fall outside those considered designated countries.

"With this finding, any other company that meets the same criteria can get the same approval," said Yves de Montcheuil, Vice President of Marketing at Talend. "And then government buying can meet the trade agreement status. The process can now be easily repeated."

While governments around the world have been moving to embrace open source for a long time, adoption has been slow and inconsistent in the U.S., though it is steadily growing as more federal agencies revise their guidelines and regulations, and some states pass laws requiring the consideration of open-source options.

Useful guidance

"The Talend Ruling is significant because government users now have useful guidance specifically addressing open source software that is developed and substantially transformed in a designated country, but also includes, or is based upon, source code from a non-designated country," said Fern Lavallee, DLA Piper LLP, counsel to Talend. "The timing of this ruling is right given the Department of Defense’s well publicized attention and commitment to Better Buying Power and DoD’s recent Open Systems Architecture initiative."
 
"This is great news for everyone in the software industry," said Bertrand Diard, co-founder and CEO of Talend. "While the news is significant for Talend and offers an opportunity for us to address needs in the federal space, our belief is that many software vendors -- whether they are open-source based or not -- will benefit from the ruling."

A copy of the advisory ruling can be obtained by emailing press@talend.com.

The U.S. Department of Defense (DoD) is currently and significantly revising the December 2011 draft of the “DoD Open Systems Architecture, Contract Guidebook for Program Managers.” The guidance document, expected by the end of 2012, helps DoD program managers use Open Systems Architecture principles for National Security Systems.


Tuesday, December 11, 2012

Insurance leader AIG drives business transformation and IT service performance through center of excellence model

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how global insurance leader American International Group (AIG) has leveraged a performance center of excellence (COE) to help drive business transformation.

We learn in our discussion how AIG's Global Performance Architecture Group improved performance of their services to deliver better experiences and payoffs for businesses and end-users alike.

Here to explore these and other enterprise IT issues, we're joined by our co-host for this sponsored podcast, Chief Software Evangelist at HP, Paul Muller.

And we also welcome our special guest, Abe Naguib, Senior Director of AIG’s Global Performance Architecture Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Many organizations are now focusing more on the user experience and the business benefits and less on pure technology -- and for many, it's a challenge. From a very high level, how do you perceive the best way to go about a cultural shift, or an organizational shift, from a technology focus more toward this end-user experience focus?

Naguib: There are several paradigms involved, from the COO's and CFO's push on innovation and efficiency. A lot of the tooling we use, a lot of the products we use, help us diversify and resolve some of the challenges we have. That keeps change running.

The CIO has to keep his eye forward to periodically change tracks, ensuring that the customers are getting the best value for their money. That's a tall order, and he has to predict benefit, gauge value, maintain integrity, and socialize and evolve the strategy of business ideas on how technology should run.

We have to manage quite a few challenges from the demands of operating a global franchise. Our COE looks at various levels of optimization, and one key target is customer service and the factors that drive the value chain.

That means aligning DevOps to the business, reducing data-center sprawl, validating and making sense of vendors, products, and services, increasing the return on investment (ROI) and improving the total cost of ownership (TCO) of emerging technologies, gaining economy of scale, and improving services and hybrid cloud systems, as we isolate and identify cascading impacts on systems. These efforts help to derive value across the chain and eventually help improve customer value.

Gardner: Paul Muller, does this jibe with what you're seeing in the field? Do you see an emphasis that's more on this sort of process level when it comes to IT, with, of course, more input from folks like the COO and the chief financial officer?

Level of initiatives

Muller: As I was listening to Abe's description, I was thinking that you really can tell the culture of an organization by the level of initiative and thinking that it has. In fact, you can't change one without changing the other. What Abe just described is a very high level of cultural maturity.

We do see it, but in maybe 10 to 15 percent of organizations: those that have gone through the early stages of understanding the performance and quality of applications, optimizing them for cost and performance, and then moving through to the next stage, reevaluating the entire chain and looking to take a broader perspective centered on user experience. So it's not unique, but it's certainly found among the more mature in terms of organizational thinking.

Gardner: Tell us about AIG, its breadth, and particularly the business requirements that your Global Performance Architecture Group is tasked with meeting.

Naguib: AIG is a leading international insurance organization, operating across 130 countries. AIG companies serve commercial, institutional, and individual customers through one of the world's most extensive property/casualty networks, and are leading providers of life insurance and retirement services in the US.

Among the brand pillars that we focused on are integrity, innovation, and market agility across the variety of products that we offer, as well as customer service.

With AIG's mantra of "better, faster, cheaper," my organization's people, strategy, and comprehensive tools help us to bridge the gaps that a global firm faces today. There are many technology objectives across different organizations that we align, and we utilize various HP solutions to drive our objectives, which means getting the various IT delivery pistons firing in the same direction and at the right time.
These include performance, application lifecycle management (ALM), and business service management (BSM), as well as project and portfolio management (PPM). Over time, our Global Performance organization has evolved, and our senior management realized our strategic benefit and our capability to reduce cost and risk and to mitigate production issues.

Our role eventually moved out of quality assurance's (QA) functional testing area to focus on application performance, architecture design patterns, emerging technologies, infrastructure and consolidation strategies, and risk mitigation, as well as increasing ROI and economy of scale. With the right people, process, and tools, our organization enabled IT transparency and application tuning, reduced infrastructure consumption, and accelerated resolution of system performance issues in development and production.

The key is that bringing together our business-critical and strategic drivers across IT's various segments fosters alignment, agility, and eventually unity. Now, our leaders seek our guidance to help tune IT, at some degree of financial performance, to unlock optimal business value.

Culture of IT

Gardner: Is that a pattern you're seeing, that the people in QA are, in a sense, breaking out of just the application performance level and moving more into what we could call the IT performance level?

Naguib: In the last six or seven years, there's been less focus on just basic performance optimization. The focus is now on business strategy's impact on infrastructure CAPEX and OPEX. Correlating business use cases to their impact on infrastructure is the holy grail.

Once you start communicating to CIOs the impact of a system and the cost of hosting, licensing, headcount, service sprawl, branding, and the services that depend on each other, you're aligning DevOps more closely with the business.

Muller: I just had a conversation not three weeks ago with a financial institution in another part of the world. I asked who was responsible for their end-to-end business process, in this case I think it was mortgage origination, and the entire room looked at each other, laughed, and said, "We don't know."

So you've really got this massive gap in terms of not just IT process maturity, but you also have business-process maturity, and it's very challenging, in my experience, to have one without having the other.

Gardner: I think we have to recognize too that most businesses now realize that software is an integral part of their business success. Being adept at software, whether that's writing it, customizing it, implementing and integrating it, or just managing its overall lifecycle, has become the lifeblood of business, not just an element of IT. Do you sense that, Abe? Is software given more clout in your organization?

Naguib: Absolutely, Dana. I truly believe that. I've been kind of an internal evangelist on this, and I always say that software drives the hardware. Whether I communicate with the enterprise architects, the dev teams, or the infrastructure teams, software frankly does drive the hardware.

That's really the key point here. If you start managing your root cost and performance from a software perspective and then work your way out, you’ve got the key to unlocking everything from efficiencies to optimizing your ROI and to addressing TCO over time. It's all business driven. Know your use cases. Know how it impacts your software, which impacts your infrastructure.

Converged infrastructure

Gardner: Just being productive for its own sake isn't good enough in this economy. We have to show real benefits, and you have to measure those benefits. Maybe you have some way to translate how this actually benefits your customers. Any metrics of success you can share with us, Abe?

Naguib: Yes. During our initial requirements-gathering phase with our business leaders, we define an appropriate test-modeling strategy, including volumetrics, and work to understand the deployment pattern, subscriber demographics, and user roles. We start aligning DevOps organizations with business targets, which improves delivery expectations, ROI, TCO, and capacity models.

Then, before production, our Application Performance Engineering (APE) team identifies weak spots and provides the production team with a reusable script that sets thresholds on the exact hotspots in a system, so that in production they can take appropriate proactive measures. Now, this is value added.
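To make that handoff concrete, here is a minimal sketch, in Python, of how a performance team might derive alert thresholds from load-test response times and publish them for production monitoring to consume. The transaction names and numbers are hypothetical, and the actual HP tooling manages this through its own interfaces; this only illustrates the pattern.

```python
import json
import statistics

# Hypothetical load-test response times (seconds) per business transaction.
load_test_samples = {
    "login": [0.21, 0.25, 0.22, 0.31, 0.28],
    "get_quote": [0.95, 1.10, 0.88, 1.40, 1.02],
}

def derive_thresholds(samples, headroom=1.5):
    """Derive a production alert threshold per transaction: the observed
    mean plus two standard deviations, padded with extra headroom."""
    thresholds = {}
    for txn, times in samples.items():
        baseline = statistics.mean(times) + 2 * statistics.stdev(times)
        thresholds[txn] = round(baseline * headroom, 3)
    return thresholds

# Publish the hotspot thresholds so operations can reuse them in
# post-production monitoring instead of re-deriving them by hand.
with open("prod_thresholds.json", "w") as f:
    json.dump(derive_thresholds(load_test_samples), f, indent=2)
```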

Muller: As we're seeing across the planet at the moment, there's a recognition that delivering great software and information is really a function of getting Layers 1 through 7 of the technology stack working, but it's also about getting Layer 8 working. Layer 8, in this case, is the people. Unfortunately, being technologists, we often forget about the people in this process.

What Abe just described is a great example of the importance of getting not just a functional part of IT -- in this case, quality and performance -- working well, but of recognizing that the software will one day be handed to operational staff to monitor and manage in a production setting.

The big transformation taking place right now is connecting the different silos of IT delivery -- in particular development, quality, and operations -- to help them accelerate the release of quality applications, automate things like threshold setting, and optimize monitoring metrics ahead of time. Rather than discovering that an application might fail to perform in a production setting, where you've got users screaming at you, you get all of that work done ahead of time.

Sharing and trust

You create a culture of sharing and trust among development, quality, and operations that frankly doesn't exist in a lot of organizations, where the relationship between development and operations is pretty strained.

Gardner: Abe, how do you measure this? We recognized the importance of the metrics, but is there a new coin of the realm in terms of measurement? How do you put this into a standardized format that you’re going to take to your CFO and your COO and say here’s what's really happening?

Naguib: That's a good question. Tying into what Paul was saying, nobody cares whether we improved performance by three seconds or two; you care at the front end, when you hear users grumbling. The bottom line is how the application behaves, and translating that into business impact as well as IT impact.

Business impact means the dollar value of key use cases and transactions that don't scale. Again, software drives the hardware. If an application consumes more hardware, remember that the hardware is cheap nowadays, but the licenses aren't: you have database and middleware products running in that environment, whether it's on-premises or in the cloud.
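To see why that license math matters, consider a hedged back-of-the-envelope sketch; all of the figures below are hypothetical placeholders, not AIG's numbers, but they show how software efficiency, not server price, dominates the bill:

```python
# Hypothetical figures, for illustration only.
peak_tps = 500                 # peak transactions per second
cpu_seconds_per_txn = 0.04     # CPU time burned per transaction
utilization_target = 0.6       # keep cores at most 60 percent busy

cores_needed = (peak_tps * cpu_seconds_per_txn) / utilization_target

hardware_per_core = 500        # commodity hardware, amortized per core
db_license_per_core = 15000    # database license, per core
mw_license_per_core = 8000     # middleware license, per core

hardware_cost = cores_needed * hardware_per_core
license_cost = cores_needed * (db_license_per_core + mw_license_per_core)

print(f"cores needed: {cores_needed:.0f}")
print(f"hardware: ${hardware_cost:,.0f}, licenses: ${license_cost:,.0f}")
# Halving CPU per transaction through tuning halves the license bill,
# which dwarfs any savings on the hardware itself.
```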

The point is that impact should be measured, and that's how we started communicating results through our organization. That's when we started seeing C-level officers tuning in and realizing the impact of performance on both the bottom line and even the top line.

Our role is to provide more insight, earlier and quicker, to the right people at the right time. Leveraging HP's partnership and solutions helped us address technologies across the board -- Web 2.0, client-server, legacy systems, web, cloud-based, and hybrid models. We were able to leverage consistent dashboards across different IT solutions internally, then target weak spots and help drive optimization, whether on-premises or in the cloud.

Muller: In the enterprise today, it's all about getting your ideas out of your head and making them a reality. As Abe just described, most of the best ideas making their way into business processes today ultimately turn into software. So success is really all about having the best applications and information possible.

Understand maturity

The challenge is understanding how the technology, the business process, and the benefits come together, and then orchestrating the delivery of that benefit to your organization. It's not something that can be done without a deliberate focus on process. Again, the challenge is always understanding your organization's maturity, not just from an IT standpoint but, importantly, from a broader standpoint.

Naguib: What's the common driver for all of this? Money talks. Translating things into a dollar value started to bring groups together to understand what we can do better to improve our process.

What we're seeing more is that it's not just internal dev and ops that we're aligning with, or even our business service-level expectations. It's also partnerships with key vendors, which have opened up the roadmap to fold our technologies, requirements, and challenges into those solutions.
The gains we make are simple. They boil down to three key benefits: savings, performance, and business agility. Leveraging HP's ALM solutions helps us drive IT and business transformation, unlock resources and efficiencies, streamline delivery, and increase the reliability of our mission-critical systems.

My favorite has always been HP's LoadRunner Performance Center. It's basically our Swiss Army knife for supporting diverse platform technologies and aligning business use cases to their impact on IT and infrastructure via HP SiteScope.

We're able to deep dive into the diagnostics, if needed. And the best part is, after we've dealt with tuning, we can help activate post-production monitoring using the same script, understanding where the weak spots are.

So the tools are there. The best part is that they're integrated and actually work together very well.

Gardner: It really sounds like you've grabbed onto this system-of-record concept for IT, almost enterprise resource planning (ERP) for IT. Is that fair?

Naguib: That's a good way to put it.

Muller: One of the questions I get a lot from organizations is how to measure and communicate the benefit. What hard data have you managed to get?

Three-month study

Naguib: IDC came in and did an extensive three-month study, and what they found was interesting. We've realized savings of more than $11 million annually for the past five years by increasing our economies of scale; tuning at the system level allows more applications to run on the same host.

That's an efficiency on both the hardware and the software side. They also found that using HP solutions increased staff productivity by over $300,000 a year -- instead of fighting fires, we're now focusing on innovation -- and improved business reliability by over $600,000 a year.

All that together shows a five-year ROI of about 577 percent. I was very excited about that study. It also showed that we've cut mean time to resolution by over 70 percent through production debugging, root-cause analysis, and resolution efforts.
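For readers who want to see how such figures roll up, here is a simple sketch. The annual benefits are the study figures cited above; the five-year program cost is a hypothetical placeholder, back-solved purely for illustration, since the study's cost basis isn't given here.

```python
# Annual benefits as cited from the IDC study (USD).
annual_benefits = {
    "infrastructure_savings": 11_000_000,
    "staff_productivity": 300_000,
    "business_reliability": 600_000,
}

years = 5
total_gains = sum(annual_benefits.values()) * years

# Hypothetical five-year program cost, back-solved for illustration;
# the study's actual cost basis isn't given here.
program_cost = 8_800_000

roi_pct = (total_gains - program_cost) / program_cost * 100
print(f"five-year gains: ${total_gains:,}; ROI: {roi_pct:.0f}%")
# Prints an ROI of roughly 576 percent, in line with the figure reported.
```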

What we found -- and technologists will agree with me -- is that today, with hardware being cheaper than software, there is a hidden cost associated with hosting an application. The bottom line is that if we don't test and tune our applications holistically -- architecture, code, infrastructure, and shared services -- performance issues can quickly degrade quality of service, uptime, and eventually IT value.
I have a saying, which is that quality costs money but bad quality costs more.

Gardner: Abe, do you have any recommendations for other organizations that are thinking of moving in this direction and want to get more mature, as Paul would say? What are some good things to keep in mind as you start down this path?

Naguib: Beyond "software drives the hardware" -- and I can't stress that enough -- the key is to understand business impact and translate whatever you're testing into the business model.

What happens in scenarios such as outages? What happens when things are delayed? What is the impact on business operability, productivity, liability, and customer branding? So many details stem from performance. We used to deal with the "Google factor" of two-second response times, but now we're targeting something more like millisecond responses, because there are so many interdependencies among our systems and services.
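One way to see why interdependencies shrink the targets: an end-to-end response-time budget has to be split across every service in the call chain. A quick sketch, with a hypothetical chain of services:

```python
# Hypothetical call chain behind one user-facing transaction (ms each).
hop_latencies_ms = {
    "web_tier": 20,
    "auth_service": 15,
    "policy_service": 60,
    "rating_engine": 90,
    "database": 40,
}

budget_ms = 2000  # the old two-second "Google factor" target
spent_ms = sum(hop_latencies_ms.values())
allowance_ms = budget_ms / len(hop_latencies_ms)

print(f"end-to-end: {spent_ms} ms of a {budget_ms} ms budget")
# Once five or ten services share one budget, each can claim only a
# sliver of it, which is why per-service targets fall to milliseconds.
print(f"average allowance per hop: {allowance_ms:.0f} ms")
```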

Another fact is that a lot of products come through our doors on a daily basis. Modern technologies arrive with a lot of promises and a lot of commitments.

Identify what works

So it's being able to separate the wheat from the chaff, identify what works and how the interdependencies play out, and then partner with the vendors of those solutions and services. Having tools that add transparency into their products and align with our environment helps bring things together. Treating IT like a business -- translating impact into dollar value -- helps everyone get aligned and responsive.

Muller: This might be a little controversial, but the first step is to look in the mirror and understand your organization and its level of maturity. You need to assess that very self-critically before you start; otherwise, you're going to burn a lot of capital, time, and credibility trying to move the organization from state A to state B without understanding your present state.

The second step is to make sure that, before you even begin that process, you create that alignment and define the desired state in the context of the business. Make sure that your maturity aligns with the business's maturity and its goals. Abe just described the ability to measure the business impact of IT services in terms of revenue; many companies can't do something even that fundamental. It can be really hard to drive change unless you've got business-IT alignment ahead of time.

I have said this many times: the technology is a manageable problem. Layers 1 through 7, including management software to a certain degree, are solved problems most of the time. Solving the problem of Layer 8 is tough. You can reboot the server, but you can't reboot a person.

I always recommend bringing along some sort of organizational-change management function. In our case, we actually have a number of trained organizational psychologists working for us who understand what it takes to get several hundred -- sometimes several thousand -- people to change the way they behave, and that's really important. You've got to bring the people along.

Gardner: I'd like to thank our supporter for this series, HP Software, and remind our audience to carry on the dialogue with Paul Muller through the Discover Performance Group on LinkedIn, and also to follow Raf on his popular blog, Following the White Rabbit.

You can also gain more insights and information on the best of IT performance management at http://www.hp.com/go/discoverperformance.

And you can always access this and other episodes in our HP Discover Performance Podcast Series at hp.com and on iTunes under BriefingsDirect.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


You may also be interested in:

Monday, December 10, 2012

Multi-device tool architecture from Embarcadero primes pump for accelerated enterprise mobile development for 2013

The modern class of C and C++ tools are workhorses of PC application development. And Objective-C tools have proven to be the rapid application development means of choice for native mobile development on iOS and Mac OS X.

So wouldn't it be nice to let developers with the skills and proficiency to build native applications for the prominent enterprise computing clients of yesteryear (like Windows) easily bring better apps to all the mobile and fat-client types demanded for the foreseeable future?

Embarcadero Technologies thought so, and long enough ago that they began re-architecting their compiler and C++ Builder development architecture in time to now provide write-once, run-natively-anywhere-that-counts benefits. [Disclosure: Embarcadero Technologies is a sponsor of BriefingsDirect podcasts.]
And now is when it really counts, with the advent of Windows 8, growing Mac OS X use, and exploding sales of iOS and Android clients.

Embarcadero on Monday made generally available C++Builder XE3, which allows a common development effort to natively target -- using a new 64-bit compiler -- Windows 8 and Mac OS X on Intel (not yet ARM) clients. And coming this summer, the same compiler will output those same apps to run natively on iOS and Android mobile clients. ARM support comes at the end of 2013.

What's more, more of the Embarcadero stable of tools and IDEs will leverage the architecture. So: more tools to build more apps once that run natively on more devices. The compiler architecture is extensible to make more tools that make more code more portable to more platforms. Almost rhymes.

Vision to close chasms

The vision to bridge the long-standing chasm between mobile and full-client environments -- never mind the Windows-Mac chasm -- dates to Embarcadero's acquisition of the CodeGear technology set from Borland back in 2008. Embarcadero said it set out immediately to build C++Builder XE3, to allow one code effort for many more targets.
"The old way of supporting multiple platforms was not practical," said Michael Swindell, Senior Vice President of Marketing and Product Management at Embarcadero in San Francisco. That old way included highly redundant and costly development to target different platforms. The old way forced ISVs and enterprises to make guesses about which clients to target, despite an extremely dynamic market and fast-changing users preferences.

"We needed to re-organize for a multi-client world," said Swindell. He said that ISVs and developers can hedge their bets by using C++Builder XE3 now, with the knowledge the same code will be able to quickly tuned and deployed in Q3 of this year on iOS and Android.

The common mantra behind Delphi and C++Builder -- as with any RAD IDE -- is to make less code do more work, fast. C++Builder XE3 takes that a big step further by applying Embarcadero's agile benefits to a common architecture supporting its major IDEs, delivering cross-client platform development on all the major targets. Full Delphi support on the new underlying architecture comes this spring, with all of Delphi's database connectivity and web services support built in.

And there are some additional synergies that should appeal to commercial ISVs. The C++Builder XE3 architecture is already "app store ready," easing the path for bringing apps to the Apple and Google app stores. For enterprises, Embarcadero is also developing synergies between its AppWave capabilities and C++Builder XE3, so that enterprises, too, can gain a streamlined means to deploy apps for PCs, Macs, and iOS and Android users from an AppWave app store. Expect that in the fall, said Swindell.

So the net-net, from my perspective, is that Embarcadero has primed the pump for accelerated enterprise mobile development in 2013. It has given developers with C and C++ skills the means to build mobile apps and deploy them on demand via app stores -- even via subscription models inside enterprises. It also means that apps can be designed once, with common logic and requirements, and then delivered on multiple devices, so workforces can use those apps anywhere, anytime. Very powerful.

Best of mobile to enterprise

In essence, this brings what we have come to like about consumer, entertainment, and web apps to the workplace -- on all relevant platforms, natively -- in a way that's not too complicated, costly, or time-consuming.

I'm not seeing that in any comprehensive way from Microsoft, Apple or Google, nor from any PaaS development offerings in the market.

And so I would expect that PaaS-hungry providers may look to OEM or otherwise license the C++Builder XE3 technology to bring it to a cloud deployment model, to better cross the PC-Mac divide, and to consolidate new apps development for all uses.

Incidentally, the C and C++ IDE tools and the C++Builder XE3 technology need not run only on-premises. Embarcadero is exploring ways to make it all cloud-based and to build tool clients using HTML5. A hybrid future for such multi-device development can't be too far off.

You may also be interested in:

GigaSpaces survey shows need for tools for fast big data, strong interest in big data in cloud

It's no surprise that most enterprises are now taking big data more seriously. But what might raise an eyebrow is how many organizations say they rely on real-time processing of big data to fuel their business, as well as the number of companies who say they're thinking about taking their big data to the cloud.

These findings come from a recent survey conducted by GigaSpaces, which asked 243 IT executives in various industries about their big data perceptions and plans. GigaSpaces, a provider of end-to-end scaling solutions for distributed application environments and an open platform-as-a-service (PaaS) stack for cloud deployment, conducted the survey online during the fall of 2012.

Among the survey findings:
  • Some 80 percent of respondents said that big-data processing is a mission-critical function
  • More than 70 percent said their business requires processing big data fast -- in real time -- whether in large volumes, at high velocity, or both
  • Only 20 percent of respondents said they have no plans to move their big data to the cloud, indicating a widespread readiness to consider the option

The first finding shows that enterprises are moving beyond collecting and storing big data and delving deeper. Their businesses require that they process this data in real time as events occur, be they trades on a stock exchange, alerts from security monitors, or location changes from GPS devices.

The second finding demonstrates the need for low latency and high performance in processing big data streams, as these functions are becoming mission critical and delays or dropped data can't be tolerated.

Real-time tools

GigaSpaces also asked respondents what tools they're using to process big data in real time, and here a gap is revealed: only 12 percent have adopted real-time event-processing tools. According to GigaSpaces, this suggests that most enterprises still haven't found a solution that can handle massive data volumes while also providing the required speed.

"Most enterprises haven’t yet adopted these real-time event processing tools, they're managing instead with a combination of a NoSQL data store with a Hadoop processing platform," says Tsipi Erann, marketing communications manager at GigaSpaces. "It's clear that enterprises haven’t yet found the right solution that’s dedicated to real-time processing and also fits into their architecture."

As for moving big data to the cloud, survey respondents seem eager to reap the cost-savings and improved agility offered by this model. Only 20 percent of them said they have no plans to move big data applications to the cloud, while 44 percent have concrete plans or have already started this migration.

Among the 34 percent who said they were unsure about cloud deployments, primary concerns cited were scalability and security.

GigaSpaces cross-referenced answers on big data's business importance with answers to the cloud question and came up with this finding: 80 percent of respondents who define their big-data applications as mission-critical to the business are planning or considering a move to the cloud. The company said it will use findings from this survey to help shape the direction of its offerings.

"We understand the importance of giving customers the right features and will use the input in the creation of such a solution, whether it’s integration with Hadoop or processing or transactional management," says Yaron Parasol, product manager at GigaSpaces.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)

You may also be interested in:

Wednesday, December 5, 2012

Message Bus bets its cloud-native messaging service will improve the art of email delivery


Message Bus has a pedigreed CEO, an impressive list of customers and partners, and technology that makes its cloud-based service highly scalable and resilient, yet the young company's goal is simple: help customers keep their legitimate email messages out of recipients' spam folders.

With Twitter co-founder Jeremy LaTrasse at the helm, Message Bus is navigating the often-dark waters of email delivery so that its customers don't have to. The company's Global Delivery Network, launched in mid-November, aims to be to email and mobile messaging what Amazon Web Services is to cloud computing and Dropbox is to cloud storage.

The service is a cloud-native application, meaning it isn't tied to the underlying infrastructure of a single cloud service provider. Message Bus can therefore scale and move its customers' workloads across different cloud infrastructures as needed (the company says it currently deploys on Joyent, Amazon Web Services, and Rackspace). This approach avoids the scale limitations of working with a single provider, as well as the possibility of service disruption if a provider suffers an outage.
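The gist of that portability can be sketched in a few lines; the provider list and the crude reachability probe below are hypothetical stand-ins, since Message Bus hasn't published its orchestration code:

```python
import socket

# Hypothetical endpoints for the same workload on different clouds.
PROVIDERS = [
    ("joyent", "workload.joyent.example.com"),
    ("aws", "workload.aws.example.com"),
    ("rackspace", "workload.rackspace.example.com"),
]

def healthy(host, port=443, timeout=2.0):
    """Crude reachability probe standing in for a real health check."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_provider():
    # Route work to the first provider that passes the probe, so an
    # outage at one cloud shifts traffic instead of stopping it.
    for name, host in PROVIDERS:
        if healthy(host):
            return name
    raise RuntimeError("no provider available")
```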

But it takes more than the right architecture to provide an effective message-delivery service. Message Bus has done extensive relationship-building with top ISPs -- including AOL, Microsoft, and Google -- to understand what they expect from a trusted sender, and it sticks to those guidelines, raising the likelihood that legitimate emails make it to the inbox.

"More than 90 percent of all mail worldwide ends up in one of those places; if there’s no trust with those ISPs then the message won’t make it into the box," says LaTrasse. "So we had the idea to build best practices into the network, so everyone who sends through our service follows them. We made the relationships happen, and all our customers benefit, as well as their recipients."

Out of control

Currently, one in five legitimate emails is either blocked or routed to the spam folder, says Message Bus, making it difficult for companies that rely on email as a primary driver of revenue and brand recognition to get their message across. What's more, the cost and complexity of launching messaging campaigns across multiple channels (email, mobile, social messaging, and so on) is spinning out of control.

Customers of the Global Delivery Network don't need dedicated messaging hardware or personnel; instead, they build a virtual SMTP bridge to send their messages across Message Bus's network. This significantly reduces upfront infrastructure costs as well as ongoing staffing, says LaTrasse, and lets customers focus on the content of their messages, knowing they'll be delivered in a manner that's effective, secure, and compliant.
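From the customer's side, such a bridge looks like ordinary SMTP. Here is a minimal sketch using Python's standard library; the relay host, port, and credentials are placeholders, not Message Bus's actual endpoints:

```python
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "customer@example.org"
msg["Subject"] = "Your receipt"
msg.set_content("Thanks for your order.")

# Placeholder relay details; a real integration would use the
# provider's documented SMTP endpoint and credentials.
with smtplib.SMTP("smtp.relay.example.net", 587) as smtp:
    smtp.starttls()                    # encrypt the session
    smtp.login("api-user", "api-key")  # authenticate to the relay
    smtp.send_message(msg)             # the relay handles final delivery
```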
At the same time it unveiled the Global Delivery Network, Message Bus launched Discover, a free reporting service that informs customers of email senders who may be abusing their domain names for illicit or unauthorized purposes. And in late November, the company announced an enhancement to its service: deploying Opscode's Hosted Chef to automate configuration, environment, and application management across the multiple cloud infrastructures powering its service.

Message Bus lists American Greetings, MyFitnessPal, and Telly among its early users.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn at http://linkd.in/T6trhH.)

You may also be interested in: