Tuesday, October 1, 2013

Enterprise architecture: The key to cybersecurity

This guest post comes courtesy of Jason Bloomberg of ZapThink, a Dovel Technologies company.

By Jason Bloomberg


When I first discuss security in our Licensed ZapThink Architect (LZA) SOA course, I ask the class the following question: if a building had 20 exterior doors, and you locked 19 of them, would you be 95 percent secure? The answer to this 20-doors problem, of course, is absolutely not – you’d be 0 percent secure, since the bad guys are generally smart enough to find the unlocked door.

While the 20-doors problem serves to illustrate how important it is to secure your services as part of a comprehensive enterprise IT strategy, the same lesson applies to enterprise cybersecurity in general: applying inconsistent security policies across an organization leads to weaknesses hackers are only too happy to exploit. However, when we’re talking about the entire enterprise, the cybersecurity challenge is vastly more complex than simply securing all your software interfaces. Adequate security involves people, process, and information, as well as technology. Getting cybersecurity right, therefore, depends upon enterprise architecture (EA).

Understanding the context for cybersecurity

A fundamental axiom of security is that we can never drive risk to zero. In other words, perfect security is infinitely expensive. We must therefore understand our tolerance for risk and our budget for addressing security, and ensure these two factors are in balance across the organization. Fundamentally, it is essential to build threats into your business model, and do so consistently.

Credit card companies, for example, realize that despite their best efforts, there will always be a certain amount of fraud. True, they spend money to actively combat such fraud, but not as much as they could. Instead, they balance the budget for fighting such crime with the money lost through fraud in order to determine the acceptable level of risk.

In many organizations, however, the tolerance for risk and the budget for security are not in balance – or to be more precise, the balance is different in different departments or contexts across the enterprise. Part of this problem is due to the lottery fallacy, which we recently discussed in the context of big data. People tend to place an inordinate emphasis on improbable events. This fallacy frequently occurs in the context of risk, which is why we’re more worried about airplane crashes than car accidents, even though car crashes are far, far more likely.

But the lottery fallacy isn’t the only problem. Politics is a much greater issue. Department heads have their own ideas about tolerable risk in their fiefdoms, and the risk tolerance for one division may be very different from another. Furthermore, in most organizations, certain departments are responsible for security while others are not. Now department heads have a much more difficult time evaluating their level of risk and calculating their budget for security, as it’s someone else’s budget and supposedly someone else’s problem.

The solution to these challenges is the effective use of EA. You must think like an insurance company: undertake an objective analysis of the known risks and calculate the average cost of threats over all the activities in your organization. Just as an insurance company must be able to set their premiums high enough to cover losses on average, you must set your security budget high enough to cover your threats. Of course, sometimes a particular threat costs more than you expect, just as a catastrophic loss may cost more than a lifetime of premiums for the affected insurance customer. But the average still generally works out to your advantage.
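To make this actuarial way of thinking concrete, here is a minimal sketch of comparing a security budget against the expected annual loss across a set of threats. The threat names, probabilities, and dollar figures below are hypothetical illustrations, not real estimates:

```python
# Insurance-style risk budgeting (hypothetical figures):
# expected annual loss = sum over threats of (annual probability x cost per incident).
threats = {
    # threat name: (annual probability of occurrence, estimated cost per incident in $)
    "phishing breach": (0.30, 250_000),
    "ransomware": (0.05, 1_200_000),
    "insider data leak": (0.10, 400_000),
}

expected_annual_loss = sum(p * cost for p, cost in threats.values())

security_budget = 150_000  # hypothetical annual security budget

print(f"Expected annual loss: ${expected_annual_loss:,.0f}")
if security_budget >= expected_annual_loss:
    print("Budget covers expected losses on average.")
else:
    print("Budget and risk tolerance are out of balance.")
```

As with insurance premiums, an individual year may cost far more or less than the expected value; the point is that the budget and the risk exposure are set against each other explicitly rather than department by department.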

With risk comes reward, but not all risks have the same promise of reward. In other words, some bets are better than others. Properly applied, EA can inform the organization about which bets have better expected returns than others, so that the organization can place its bets more rationally by distributing the risk across the organization in a fact-based manner.

Cybersecurity: dealing with change

Even organizations with robust EA efforts typically don’t leverage architecture to drive their cybersecurity strategies. The reasons for this gap are diverse, often including political and competence issues, but the most fundamental reason is that traditional EA doesn’t deal well with change. Cybersecurity is an inherently dynamic challenge: hackers keep inventing new attacks, new technologies continually introduce new vulnerabilities, and the interrelationships among the various trends in IT are increasingly convoluted, as we illustrate on our new ZapThink 2020 poster.

In contrast, the agile architecture approach I champion in my book, The Agile Architecture Revolution, calls for EA that focuses on change by explicitly working at the “meta” level: instead of simply architecting the things themselves, focus on architecting how those things change. For example, instead of focusing on the processes in the organization, architect the meta-processes: processes for how processes change. Similarly, the role of software development isn’t simply to build to requirements. Instead, the focus should be on building systems that respond to changing requirements, what my book calls the meta-requirement of business agility.

So too with architecting for security. The focus shouldn’t be on threats, but rather on how those threats might change. At the technology level, this focus on change shifts security from a static “locked door” approach to the immune system metaphor I discussed last year. But there’s more to architecting for security than the technology. At the organizational level, for example, effective EA will help resolve shadow IT issues, which can lead to unmanaged security threats. At the process level, EA will address social engineering challenges like phishing attacks. Securing your technology without applying a comprehensive, best-practice approach to organizational and process security is tantamount to leaving some of your doors unlocked.

The ZapThink take

Remember the scene from Apollo 13, where the Flight Director goes around the room, asking each division leader for a go/no-go decision? Essentially, every division leader was a stakeholder in all important decisions, and any one of them had the ability to nix any idea with a thumbs-down. The thinking behind this approach was one of risk mitigation: only with a unanimous thumbs-up could the organization make a critical decision to take action.

Just so in the enterprise. Your EA should require the security team to be part of the planning for all systems (both human and technology) across the organization. Without EA, security tends to be an afterthought. Instead, security must be a stakeholder in all critical decisions across the enterprise.

EA should also have a seat at the table, of course. By giving your enterprise architects the ability to offer thumbs-up or thumbs-down opinions on critical decisions, you are essentially saying that you mandate EA. And without such a mandate, architects find themselves in the proverbial ivory tower, creating artifacts and standards that the rank and file consider optional – which is a recipe for disaster. There’s no surer way to increase your cybersecurity risk than to treat EA as anything but absolutely necessary to the proper functioning of your organization.


You may also be interested in:


Thursday, September 26, 2013

Application development efficiencies drive Agile payoffs for healthcare tech provider TriZetto

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series highlights how healthcare technology provider TriZetto has been improving its development processes and modernizing its ability to speed the applications lifecycle process.

To learn more about how quality and Agile methods tools better support a lifecycle approach to software, we sat down with Rubina Ansari, Associate Vice President of Automation and Software Development Lifecycle Tools at TriZetto.

The discussion, which took place at the recent HP Discover 2013 Conference in Las Vegas, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Where are you in terms of moving to Agile processes?

Ansari: TriZetto is currently going through an evolution, from a structured waterfall methodology to scaled Agile. As you mentioned, that's one of the innovative ways we're looking at getting our releases out faster, with better quality, and being able to respond to our customers. We realize that Agile, as a methodology, is the way to go when it comes to all three of those things.

We're currently in the midst of evolving how we work. We’re going through a major transformation within our development centers throughout the country.

TriZetto is a healthcare software provider. We have the software for all areas of healthcare. Our mission is to integrate different healthcare systems to make sure our customers have seamless information. Over 50 percent of the American insured population goes through our software for their claims processing. So, we have a big market and we want to stay there.
Leaner and faster

Our software is very important to us, just as it is to our customers. We're always looking for ways of making sure we’re leaner, faster, and keeping up with our quality in order to keep up with all the healthcare changes that are happening.

Gardner: You've been working with HP Software and Application Lifecycle Management (ALM) products for some time. Tell us a little bit about what you have in place, and then let's learn a bit more about the Agile Manager capabilities that you're pioneering.

Ansari: We've been using HP tools in our testing area, such as QuickTest Professional (QTP), Performance Center, and Quality Center. We recently went ahead with ALM 11.5, which has a lot of cross-project abilities. As for agile, we're now using HP Agile Manager.

This has helped us move forward fairly quickly into scaled agile using HP Agile Manager, while integrating with our current HP tools. We wanted to make sure that our tools were integrated and that we didn’t lose that traceability and the effectiveness of having a single vendor to get all our data.

HP Agile Manager is very important to us. It's a software-as-a-service (SaaS) model, and it was very easy for us to implement within our company. There was no concept of installing, and the response that we get from HP has been very fast, as this is the first experience we’ve had with a SaaS deliverable from HP.

They're following agile, so we get releases every three months. Actually, every few weeks, we get enhancements for defects we may find within their product. It's worked out very well. It's very lightweight, it's web-based SaaS and it integrates with their current tool suite, which was vital to us.

We have between 500 and 1,000 individuals making up development teams throughout the United States. For Agile Manager, the last time we checked, it was approximately 400. We're hoping to get up to 1,000 by the end of this year, so that everyone is using Agile Manager for all their agile/scrum teams, their backlogs, and development.
Gardner: Do you have any sense of how much faster you're able to develop? What are the paybacks in terms of quality, traceability, and tracking defects? What's the payback from doing this in the way you have?

Working together

Ansari: We’ve seen some, but I think the most is yet to come as we roll this out further. One of the things that Agile Manager promotes is collaboration and working together in a scrum team. Agile Manager, with its software built around the agile processes, makes it very easy for us to roll out an agile methodology.

This has helped us collaborate better between testers and developers, and we're finding those defects earlier, before they even happen. We’ll have more hard metrics around this as we roll this out further. One of the major reasons we went with HP Agile Manager is that it has very good integration with the development tools we use.

They integrate with several development tools, allowing our testers to see what changes occurred and what piece of code has changed for each defect or enhancement the tester would be testing. That tight integration with other development tools was a pivotal factor in our decision to go forward with HP Agile Manager.

Gardner: So Rubina, not only are you progressing from waterfall to agile and adopting more up-to-date tools, but you’ve made this leap to a SaaS-based delivery for this. If that's working out well as you’ve said, do you think this is going to lead to doing more with other SaaS tools and tests and capabilities and maybe even look at cloud platform as a service opportunity?

Ansari: Absolutely. This was our first experience, and it is going very well. Of course, there were some learning curves and some growing pains. Being able to get these changes so quickly, without having to do it ourselves, was a mind-shift for us. We're obviously reaping the benefits from it, but we did have to have a few more scheduled conversations, release notes, and documentation about changes from HP.

We're not new to SaaS. We're also looking at offering some of our products in a SaaS model. So we realize what's involved in it. It was great to be on the receiving end of a SaaS product, knowing that TriZetto themselves are playing that space as well.

There's always so much more to improve. What we’re looking for is how to quickly respond to our customers. That means also integrating HP Service Manager and any other tools that may be part of this software testing lifecycle or part of our ability to release or offer something to our clients.
We'll continue doing this until there is no more space for efficiency. But, there are always places where we can be even more effective.

The technologies that we’re advancing toward will also allow us to move easily into the mobile space once we plan for it.


Monday, September 23, 2013

Navicure gains IT capacity optimization and performance monitoring using VMware vCenter Operations Manager

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: VMware.

The next VMworld innovator interview focuses on how a fast-growing healthcare claims company is gaining better control and optimization across its IT infrastructure. Learn how IT leaders at Navicure have been deploying a comprehensive monitoring and operational management approach.

To understand how they're taming IT complexity as they set the stage to adopt the latest in cloud-computing and virtualization infrastructure developments, join Donald Wilkins, Director of Information Technology at Navicure Inc. in Duluth, Georgia.

The discussion, which took place at the recent 2013 VMworld Conference in San Francisco, is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: VMware is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is your organization so focused on taming complexity?

Wilkins: At Navicure, we've been focused on scaling a fast-growing business. And if you incorporate very complex infrastructure, it becomes more difficult to scale it. So we're focused on technologies that are simple to implement, yet leave plenty of headroom for growth across the storage, the infrastructure, and the software we use. We do that in order to scale at the rate we need to satisfy our business objectives.

Gardner: Tell us a little bit about Navicure, what you do, how is that you're growing, and why that's putting a burden on your IT systems.

Wilkins: Navicure has been around for about 12 years. We started the company in about 2001 and delivered the product to our customers in the late 2001-2002 time-frame. We've been growing very fast. We're adding 20 to 30 employees every year, and we're up to about 230 employees today.

We have approximately 50,000 physicians on our system. We're growing at a rate of 8,000 to 10,000 physicians a year, and it’s healthy growth. We don't want to grow too fast, so as not to water down our products and services, but at the same time, we want to grow at a pace that enables us to deliver better products for our customers.

Revenue cycle management

Claim clearinghouses have been around for a couple of decades now. We've evolved from that claim-clearinghouse model to what we refer to as revenue cycle management. We pioneered that term early as we started the company.

We take the transactions from physicians and send them to the insurance companies; that’s the clearinghouse model. But on top of that product, we added a lot of value-added services and a lot of analytics around those transactions to help providers generate more revenue: they get paid faster, and they get paid the first time through the system.

It was very costly for transactions to be delayed weeks because of poorly submitted transactions to the insurance company or denials because they coded something wrong.

We try to catch all that, so that they get paid the first time through. That’s the return on investment (ROI) our customers are looking for when they look at our products: to lower their accounts receivable (AR) days and to increase their bottom-line revenue.

Customer service is one of the cornerstones of our business. We feel that our customers are number one, and retaining those customers is one of our primary goals.

Gardner: Tell us a little bit about your IT environment.

Wilkins: The first thing we did at Navicure, when we started the company, was to decide that we didn't want to be in the data-center business. We wanted to use a colocation provider (colo) that does that work at a much higher level than we ever could. We wanted to focus on our product and let the colo focus on what they do.

They serve us from an infrastructure standpoint, and we can focus on our products and build a good product. With that, we adopted the grid approach, or rack approach, very early on. This means we wanted to build a foundational structure that we could just build on as we grew the business and the transaction volume.

That terminology has changed over the years, and today it would be referred to as software-defined infrastructure, but back then we wanted to build infrastructure with a grid approach to it, so we could plug in more modules and components to scale out as we scaled up.

With that, we continued to evolve what we do, but that inherent structure is still there. We need to be able to scale our business as our transactional volume doubles approximately every two years.

Gardner: And how did you begin your path to virtualization, and how did that progress into this more of a software-defined environment?

Ramping up fast

Wilkins: In the first few years of the company's operation, we had enough headroom in our infrastructure that it wasn't a big issue, but about four years in, we started realizing that we were going to hit a point where we would have to start ramping up really fast.

Consolidation was not something that we had to worry about, because we didn’t have a lot to consolidate. It was a very early product, and we had to build the customer base. We had to build our reputation in the industry, and we did that. But then we started adding physicians by the thousands to our system every year.

With that, we had to start adding infrastructure. Virtualization came along at just such a time, letting us add capacity virtually, faster and more efficiently than we ever could have by adding physical infrastructure.

So it became a product that we put into test, dev, and production all at the same time, but it was something that allowed us to meet the demands of the business.

Gardner: Of course, as many organizations have used virtualization to their benefit, they've also recognized that there is some complexity involved. And getting better management means further optimization, which further reduces costs. That also, of course, maintains their performance requirements. How did you then focus in on managing and optimizing this over time?

Wilkins: Well, one of the things we tried to look at, when we look at products and services, was to keep it simple. I have a very limited staff, and the staff needs to be able to drive to the point of whatever issue they're researching and/or inspecting.

As we've added technologies and services, we've tried to add those that are very simple to scale and very simple to operate. We look at all these different tools to make that happen. This has led us to products from vendors like VMware, which have also been trying to simplify their product offerings.

For years, we've been doing monitoring with other tools that were network-based monitoring tools. Those drive only so much value. They give us things like up-time alerting and responsiveness that are just about when issues happen. We want to evolve that to be more proactive in our approach to monitoring.
It’s not so much about how we can fix a problem when there is one. It’s more of, let’s keep the problem from happening to start with. That's where we've looked at some products for that. Recently we've actually implemented vCenter Operations Manager.

That product gives us a different twist from other SNMP monitoring tools. It provides a history of what's going on, but also a forward-looking analysis of how things will change, based on our historical trends.

New line-up

Gardner: Of course, here at VMworld, we're hearing about vSphere improvements and upgrades, but also the arrival of VMware vCloud Suite 5.5 and VMware vSphere with Operations Management 5.5. Is there anything in the new line-up that is of particular interest to you, and have you had a chance to look it over?

Wilkins: I haven’t had a chance to look over the most recent offering, but we're running the current version. Again, for us, it's the efficiency mechanism inside the product that drives the most value, making sure that we can budget a year in advance for the expanding infrastructure we need to meet demand.

vCenter Operations Manager is key to understanding your infrastructure. If you don’t have it today, you're going to be very reactive to some of your pains and the troubles you're dealing with.

That product allows you to do a lot of research into various problems and services, drilling down from the cluster level to the virtual machine level to find out where your problems and pain points are, and it lets you isolate an issue more quickly. At the same time, it allows you to project where you're growing and where you need to put your money into resources, whether that's more storage, compute resources, or network resources.

That's where we're seeing value out of the product, because during budget cycles it allows me to say that, looking at our infrastructure and our current growth, we will be out of resources by a certain time and need to add this much, barring additional new products and services we may come up with. We're growing at this pace, and here are the numbers to prove it.
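As a rough illustration of that kind of budget-cycle projection, here is a minimal sketch assuming, as noted earlier, that transaction volume doubles approximately every two years. The volume and capacity figures are hypothetical:

```python
import math

# Hypothetical capacity projection: if transaction volume doubles every
# doubling_period_months, then volume(t) = current_volume * 2 ** (t / doubling_period_months).
current_volume = 1_000_000       # transactions per month today (hypothetical)
capacity = 3_000_000             # what the current infrastructure can sustain (hypothetical)
doubling_period_months = 24      # "volume doubles approximately every two years"

# Solve current_volume * 2 ** (t / doubling_period_months) = capacity for t.
months_until_exhausted = doubling_period_months * math.log2(capacity / current_volume)

print(f"At current growth, resources are exhausted in about "
      f"{months_until_exhausted:.0f} months")
```

Even a simple exponential projection like this turns "we're growing fast" into a concrete date that finance can budget against.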

When you have that information in front of you, you can build a business case around it that further educates the CFO and the finance people about what you have to deal with on a day-to-day basis to operate the business.

Gardner: What sort of paybacks are there when you do this right?

Wilkins: Just being able to drive more density in our colo by being virtualized is a big value for us. Our footprint is relatively small. As for an actual dollar amount, it’s hard to pin something on there. We're growing so fast, we're trying to keep up with the demand, and we've been meeting that and exceeding that.

Really, the ROI is that our customers aren’t experiencing major troubles from our infrastructure not expanding fast enough. That's our goal: to drive high availability for the infrastructure and low downtime, and we can do that with VMware and their products and services.

We're a current customer of Site Recovery Manager. That's a staple in our virtual infrastructure and has been since 2008. We've been using that product for many years. It drives all of the planning and the testing of our virtual disaster recovery (DR) plan. I've been a very big proponent of that product and services for years, and we couldn’t do without it.

There are other products we will be looking at. Desktop virtualization is something that will be incorporated into the infrastructure in the next year or two.

As a small business, the value of that becomes a little harder to prove from a dollar standpoint. Some of those features like remote working come into play as office space continues to be expensive. It's something we will be looking at to expand our operations, especially as we have more remote employees working. Desktop virtualization is going to be a critical component for that.

Gardner: How about some 20/20 hindsight. If there were other folks that were ramping up on virtualization, or getting to the point where complexity was becoming an issue for them, do you have any thoughts on getting started or lessons learned that you could share?

Trusted partner

Wilkins: The best thing with virtualization is to get a trusted partner to help you get over the hurdle of the technical issues that may come to light.

I had a very trusted partner when I started this in 2005-2006. They actually just sat with me and worked with me, with no compensation whatsoever, to help work through virtualization. They made it such an easy value that it just became, "I've got to do this, because there's no way I can sustain this level of operational expense and of monitoring and managing this infrastructure, if it's all physical."

So, seeing that value proposition from a partner is key, but it has to be a trusted partner. It has to be a partner that has your best interest in mind, and not so much a new product to sell. It’s going to be somebody that brings a lot to the table, but, at the same time, helps you help yourself and lets you learn these products, so that you can actually implement it and research it on your own to see what value you can bring into the company.

It’s easy for somebody to tell you how you can make your life better, but you have to actually see it. Then you become passionate about the technology, and you realize you have to do this and will do whatever it takes to get it in place, because it will make your life easier.


IT technology trends -- a risky business?


This guest post comes courtesy of Patty Donovan, Vice President, Membership & Events, at The Open Group and a member of its executive management team.

By Patty Donovan

On Wednesday, September 25, The Open Group will host a tweet jam looking at a multitude of emerging/converging technology trends and the risks they present to organizations that have already adopted them or are looking to adopt them. Most of the technology concepts we’re talking about – Cloud, Big Data, BYOD/BYOS, the Internet of Things, etc. – are not new, but organizations are at differing stages of implementation and do not yet fully understand the longer-term impact of adoption.

This tweet jam will allow us to explore some of these technologies in more detail and look at how organizations may better prepare against potential risks – whether this is in regards to security, access management, policies, privacy or ROI. As discussed in our previous Open Platform 3.0 tweet jam, new technology trends present many opportunities but can also present business challenges if not managed effectively. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

Please join us on Wednesday, September 25 at 9:00 a.m. PT/12:00 p.m. ET/5:00 p.m. BST for a tweet jam that will discuss and debate the issues around technology risks. A number of key areas will be addressed during the discussion including: Big Data, Cloud, Consumerization of IT, the Internet of Things and mobile and social computing with a focus on understanding the key risk priority areas organizations face and ways to mitigate them.

We welcome Open Group members and interested participants from all backgrounds to join the session and interact with our panel thought leaders led by David Lounsbury, CTO and Jim Hietala, VP of Security, from The Open Group. To access the discussion, please follow the #ogChat hashtag during the allotted discussion time.
The questions to be discussed include:
  • Do you feel prepared for the emergence/convergence of IT trends? – Cloud, Big Data, BYOD/BYOS, Internet of things
  • Where do you see risks in these technologies? – Cloud, Big Data, BYOD/BYOS, Internet of things
  • How does your organization monitor for, measure and manage risks from these technologies?
  • Which policies are best at dealing with security risks from technologies? Which are less effective?
  • Many new technologies move data out of the enterprise to user devices or cloud services. Can we manage these new risks? How?
  • What role do standards, best practices and regulations play in keeping up with risks from these & future technologies?
  • Aside from risks caused by individual trends, what is the impact of multiple technology trends converging (Platform 3.0)?
And for those of you who are unfamiliar with tweet jams, here is some background information:

What Is a Tweet Jam?

A tweet jam is a one hour “discussion” hosted on Twitter. The purpose of this tweet jam is to share knowledge and answer questions on emerging/converging technology trends and the risks they present. Each tweet jam is led by a moderator and a dedicated group of experts to keep the discussion flowing. The public (or anyone using Twitter interested in the topic) is encouraged to join the discussion.

Participation Guidance

Whether you’re a newbie or veteran Twitter user, here are a few tips to keep in mind:
  • Have your first #ogChat tweet be a self-introduction: name, affiliation, occupation.
  • Start all other tweets with the question number you’re responding to and the #ogChat hashtag.
    • Sample: “Big Data presents a large business opportunity, but it is not yet being managed effectively internally – who owns the big data function? #ogchat”
  • Please refrain from product or service promotions. The goal of a tweet jam is to encourage an exchange of knowledge and stimulate discussion.
  • While this is a professional get-together, we don’t have to be stiff! Informality will not be an issue!
  • A tweet jam is akin to a public forum, panel discussion or Town Hall meeting – let’s be focused and thoughtful.


If you have any questions prior to the event or would like to join as a participant, please direct them to Rob Checkal (rob.checkal at hotwirepr.com). We anticipate a lively chat and hope you will be able to join!

This guest post comes courtesy of Patty Donovan, Vice President, Membership & Events, at The Open Group and a member of its executive management team.


Friday, September 20, 2013

Are you ready for the convergence of new disruptive technologies?

The following guest post comes courtesy of Dr. Chris Harding, Director of Interoperability and SOA at The Open Group.

By Chris Harding

The convergence of technical phenomena such as cloud, mobile and social computing, big data analysis, and the Internet of things that is being addressed by The Open Group’s Open Platform 3.0 Forum will transform the way that you use information technology. Are you ready? Take our survey at https://www.surveymonkey.com/s/convergent_tech

What the technology can do

Mobile and social computing are leading the way. Recently, the launch of new iPhone models and the announcement of the Twitter stock flotation were headline news, reflecting the importance that these technologies now have for business. For example, banks use mobile text messaging to alert customers to security issues. Retailers use social media to understand their markets and communicate with potential customers.

Harding
Other technologies are close behind. In Formula One motor racing, sensors monitor vehicle operation and feed real-time information to the support teams, leading to improved design, greater safety, and lower costs. This approach could soon become routine for cars on the public roads too. (Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.)

Many exciting new applications are being discussed. Stores could use sensors to capture customer behavior while browsing the goods on display, and give them targeted information and advice via their mobile devices. Medical professionals could monitor hospital patients and receive alerts of significant changes. Researchers could use shared cloud services and big data analysis to detect patterns in this information, and develop treatments, including for complex or uncommon conditions that are hard to understand using traditional methods. The potential is massive, and we are only just beginning to see it.

What the analysts say

Market analysts agree on the importance of the new technologies.

Gartner uses the term “Nexus of Forces” to describe the convergence and mutual reinforcement of social, mobility, cloud and information patterns that drive new business scenarios, and says that, although these forces are innovative and disruptive on their own, together they are revolutionizing business and society, disrupting old business models and creating new leaders.

IDC predicts that a combination of social, cloud, mobile, and big data technologies will drive around 90% of all the growth in the IT market through 2020, and uses the term “third platform” to describe this combination.

The Open Group will identify the standards that will make Gartner’s Nexus of Forces and IDC’s Third Platform commercial realities. This will be the definition of Open Platform 3.0.

Disrupting enterprise use of IT

The new technologies are bringing new opportunities, but their use raises problems. In particular, end users find that working through IT departments in the traditional way is not satisfactory. The delays are too great for rapid, innovative development. They want to use the new technologies directly – “hands on.”

Increasingly, business departments are buying technology directly, by-passing their IT departments. Traditionally, the bulk of an enterprise’s IT budget was spent by the IT department and went on maintenance. A significant proportion is now spent by the business departments, on new technology.

Business and IT are not different worlds any more. Business analysts are increasingly using technical tools, and even doing application development, using exposed APIs. For example, marketing folk do search engine optimization, use business information tools, and analyze traffic on Twitter. Such operations require less IT skill than formerly because the new systems are easy to use. Also, users are becoming more IT-savvy. This is a revolution in business use of IT, comparable to the use of spreadsheets in the 1980s.

Also, business departments are hiring traditional application developers, who would once have only been found in IT departments.

Are you ready?

These disruptive new technologies are changing, not just the IT architecture, but also the business architecture of the enterprises that use them. This is a sea change that affects us all.

The introduction of the PC had a dramatic impact on the way enterprises used IT, taking much of the technology out of the computer room and into the office. The new revolution is taking it out of the office and into the pocket. Cell phones and tablets give you windows into the world, not just your personal collection of applications and information. Through those windows you can see your friends, your best route home, what your customers like, how well your production processes are working, or whatever else you need to conduct your life and business.

This will change the way you work. You must learn how to tailor and combine the information and services available to you, to meet your personal objectives. If your role is to provide or help to provide IT services, you must learn how to support users working in this new way.

To negotiate this change successfully, and take advantage of it, each of us must understand what is happening, and how ready we are to deal with it.

The Open Group is conducting a survey of people’s reactions to the convergence of Cloud and other new technologies. Take the survey to record your state of readiness, and get early sight of the results to see how you compare with everyone else.

To take the survey, visit https://www.surveymonkey.com/s/convergent_tech

This guest post comes courtesy of Dr. Chris Harding, Director of Interoperability and SOA at The Open Group.
