Friday, August 5, 2011

Architect certification increasingly impacts professionalization of IT in the cloud era

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

We've assembled a panel in conjunction with the recent Open Group Conference in Austin, Texas, to explore the impact and role of certifications for IT professionals. Examine here how certification for enterprise architects, business architects, and such industry initiatives as ArchiMate are proving instrumental as IT organizations seek to reinvent themselves.

There are now a lot of shifts in skills and a lot of movement about how organizations should properly staff themselves. There have been cost pressures and certification issues for regulation and the adoption of new technologies. We're going to look at how all these are impacting the role of certification out in the field.

Here to help better understand how an organization like The Open Group is alleviating the impact and supporting the importance of finding verified IT skills is Steve Philp, Marketing Director for Professional Certification at The Open Group; Andrew Josey, Director of Standards at The Open Group; and James de Raeve, Vice President of Certification at The Open Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a Sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
de Raeve: The primary driver here that we're hearing from members and customers is that they need to get more out of the investments that they're making -- their payroll for their IT staff. They need to get more productivity. And that has a number of consequences.

Realizing talent

They want to ensure that the people they are employing and that they're staffing their teams with are effective. They want to be sure that they're not bringing in external experts when they don’t need to. So there is a need to realize the talent that they've actually got in their internal IT community and to develop that talent, nurture it, and exploit it for the benefit of the organization.

And professionalism, professionalization, and profession frameworks are all tools that can be used in identifying, measuring, and developing the talents and capabilities of your people. That seems to be the major driver.

Philp: Something I have noticed since joining The Open Group is that we’ve got some skills and experience-based certifications. They seem to be the things that people are particularly interested in, because it’s not just a test of your knowledge about a particular vendor or product, but how you have applied your skills and experience out there in the marketplace. They have proven to be very successful in helping people assess where they are and in working towards developing a career path.

That's one of the areas where certification is heading: more skills- and experience-based certification programs in organizations.

Looking at certification in general, you still have areas like Microsoft MCSE, Microsoft technical specialist, application development, and project management that are in demand, and things like CCNA from Cisco. But I've also noticed a lot more in the security field. CISSP and CCSA seem to be the ones that always get a lot of attention. In terms of security, the trends in mobile and cloud computing mean that security certification is a big growth area.


We're just about to put a security track into our Certified IT Specialist Program at The Open Group, so there will be a skills and experience-based track for security practitioners soon.

de Raeve: There is a whole world out there of technology and product-related certifications that are fulfilling a very important function in helping people establish and demonstrate their knowledge of those particular products and technologies.

But in building teams and delivering results, there is also a need to nurture and grow people to be team players and team participants, and to function within the organization as, for want of a better term, "t-shaped people": IT specialists who combine a number of soft, people-related, and potentially architecture-related skills with the capabilities that make them rounded professionals within an organization.

T-shaped people

It’s that aspect that differentiates the professionalization and the profession-oriented certification programs that we're operating here at The Open Group -- The Open Certified Architect, The Open Certified IT Specialist. Those are t-shaped people and we think that makes a huge difference. It’s what’s going to enable organizations to be more effective by developing their people to have that more rounded t-shaped capability.

Josey: We see certification as the ultimate driver in the uptake of the standards; it takes us from having a standard sitting on the shelf to actually seeing it deployed and used in the field. We have several people-certification programs, such as TOGAF, with over 20,000 practitioners now.

People have gone through the certification program, have been using and evangelizing TOGAF as a standard in the field, and have then fed that experience back to our members, driving improvements to the standard. So it's very much part of an end-to-end ecosystem: developing a standard, deploying it, getting people certified on it, and then getting the feedback in the right way.

Philp: It’s very much an important part of the process now. TOGAF and IT Architect Certification (ITAC) have appeared in a number of RFPs for government and for major manufacturing organizations. So it’s important that the suppliers and the buyers recognize these programs.

Similarly with recruitment, you find that things like TOGAF will appear in most recruitment ads for architects. Certainly, people want knowledge of it, but more and more you’ll see TOGAF certification is required as well.

ITAC, which is now Open CA, has also appeared in a number of recruitment ads from members like Logica, Capgemini, and Shell. More recently, organizations like CBS, EADS, ADGA Group, and Direct Energy have requested it. And the list goes on. It's a measure of how important awareness of these certifications is, and that's something we will continue to drive at The Open Group.

In development

Josey: ArchiMate certification is something new that we’re developing right now. We haven’t deployed a certification program as yet. The previous certification program was under the ArchiMate Foundation, which was the body that developed ArchiMate, before it transferred into The Open Group.

We're currently working on the new program, which will be similar in some respects to our TOGAF program. It will be knowledge-based certification with assessment by exam, plus a practical assessment in which the candidate actually does modeling. So this will be people certification, and there will also be accredited training course certification.

We're also going to provide certification for tools.

That's pretty much what we're doing in ArchiMate. We don't have a firm timeline yet; towards the end of the year is probably the earliest it will be available, but possibly early next year.

ArchiMate is a modeling language for enterprise architecture (EA) in general and specifically it’s a good fit for TOGAF. It’s a way of communicating and developing models for TOGAF EA. Originally it was developed by the Telematica Instituut and funded, I think, by the EU and a number of commercial companies in the Netherlands. It was actually brought into The Open Group in 2008 by the ArchiMate Foundation and is now managed by the ArchiMate Forum within The Open Group.

The latest version of TOGAF for certification is TOGAF 9. As mentioned earlier, there are two types of certification programs, skills-based and knowledge-based; TOGAF falls into the knowledge-based camp. We have two levels. TOGAF 9 Foundation, our level one, is for individuals to demonstrate that they know the terminology and basic concepts of EA and TOGAF.

Level two, which is a superset of level one, additionally assesses analysis and comprehension. The idea is that people who just want to get familiar with TOGAF, and those who work around enterprise architects, can go for TOGAF 9 Foundation. Enterprise architects themselves should start with TOGAF 9 Certified, the level two, and then perhaps move on later to Open CA.

We introduced TOGAF 9 certification by midyear 2009. We launched TOGAF 9 in February, and it took a couple of months to roll the certifications out through all the exam channels.

Since then, we've issued 8,000 certifications. Two-thirds of those were at the higher level, level two, for EA practitioners, and one-third are at the foundation level.

A new area

Philp: Business architecture is a new area that we've been working on. Let me just go back to what we did on the branding, because it ties in with that. We launched The Open Group's new website recently and we used that as the opportunity to re-brand ITAC as The Open Group Certified Architect (Open CA) program. The IT Specialist Certification (ITSC) has now become The Open Group Certified IT Specialist, or Open CITS, program.

We did the rebranding at that time because we wanted it to be associated with the word "open." We wanted to give the skills and experience-based certifications a closer linkage to The Open Group. That's why we changed from ITAC to Open CA. But we've not changed the actual program itself. Candidates still have to create a certification package and be interviewed by three board members, and there are still three levels of certification: Certified, Master, and Distinguished.

However, what we’re intending to do is have some core requirements that architects need to meet, and then add some specific specializations for different types of architects. The one that we’ve been working on the most recently is the Business Architecture Certification. This came about from an initiative about 18 months ago.

We formed something called the Business Forum with a number of platinum members who got involved with it -- companies like IBM, HP, SAP, Oracle, and Capgemini. We've been defining the conformance requirements for the business architecture certification. It's going through the development process and hopefully will be launched sometime later this year or early next year.


de Raeve: There's a very good example [of the importance of staffing issues in IT] ..., and they’ve done a presentation about this in one of our conferences. It's Philips, and they used to have an IT workforce that was divided among the business units. The different businesses had their own IT function.

They changed that and went to a single IT function across the organization, providing services to the businesses. In doing so, they needed to rationalize things like grades, titles, job descriptions, and they were looking around for a framework within which they could do this and they evaluated a number of them.

They were working with a partner who was helping them do this. The partner was an Open Group member and suggested they look at The Open Group's IT Specialist Certification, the CITS Certification Program, as it provides a set of definitions for the capabilities and skills required of IT professionals. They picked it up and used it, because it covered the areas they were interested in.

It was sufficient and complete enough to be useful to them, it was vendor-neutral, and it was industry best practice, so they could pick it up and use it with confidence. And that has been very successful. They initially benchmarked their entire 900-strong IT workforce against The Open Group definitions, so they could calibrate where their people were on their journey through development as professionals.


They’ve started to embrace the certification programs as a method of not only measuring their people, but also rewarding them. It’s had a very significant impact in terms of not only enabling them to get a handle upon their people, but also in terms of their employee engagement. In the engagement surveys that they do with their staff, some of the comments they got back after they started doing this process were, “For the first time we feel like management is paying attention to us.”

It was very positive feedback, and the net result is that they're well on their way to meeting their goal of no longer automatically having to bring in an external service provider whenever they deal with a new project or a new topic. They know they've got people with sufficient expertise in-house, on their own payroll, and being able to recognize that capability has had a very positive effect. So it's a very strong story.

I think the slides will be available to our members in the proceedings from the London conference in April. They will be worth looking at.

Philp: If you go to The Open Group website, www.opengroup.org/certifications, all of the people-based certifications are there, along with the benefits for individuals, the benefits for organizations, and links to the appropriate literature. There are also a lot of other useful resources, like self-assessment tests, previous webinars, sample packages, etc. These will give you more of an idea of what's required for certification, along with the conformance requirements and other program documentation. There's a lot of useful information on the website.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Wednesday, August 3, 2011

Case study: MSP InTechnology improves network services via automation and consolidation of management systems

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

The latest BriefingsDirect podcast discussion focuses on a UK-based managed service provider’s journey to provide better information and services for its network, voice, VoIP, data, and storage customers. The network management and productivity benefits have come from an alignment of many service management products into an automated lifecycle approach to overall network operations.

We explore here how InTechnology has implemented a coordinated, end-to-end solution using HP solutions that actually determine the health of its networks by aligning their tools to ITIL methods. And, by using their system-of-record approach with a configuration management database, InTechnology is better serving its customers with lean resources by leveraging systems over manual processes.

Hear from an operations manager, Ed Jackson, Operational System Support Manager at InTechnology, to explore their choices and outcomes when it comes to better operations and better service for their hundreds of enterprise customers. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Jackson: We've basically been growing exponentially year over year. In the past four years, we've grown our network about 75 percent. In terms of our product set, we've basically tripled that in size, which obviously leads to major complexity on both our network and how we manage the product lifecycle.

Previously, we didn’t have anything that could scale as well as the systems that we have in place now. We couldn’t hope to manage 8,000 or 9,000 network devices, plus being able to deliver a product lifecycle, from provisioning to decommission, which is what we have now.

It's pretty massive in terms of the technologies involved. A lot of them are cutting-edge. We have many partners. Our suite of cloud services is very diverse and comprises what we believe is the UK's most complete and "joined-up" set of pay-monthly voice and data services.

Their own pace

In practice, what we aim to do is help our customers engage with the cloud at a pace that works for them. First, we provide connectivity to our nationwide network ring -- our cloud. Once their estate is connected, they can then cherry-pick services from our broad pay-as-you-go (PAYG) menu.

For example, they might be considering replacing their traditional "tin" PBXs with hosted IP telephony. We can do that and demonstrate massive savings. Next we might overlay our hosted unified communications (UC) suite providing benefits such as "screen sharing," "video calling," and "click-to-dial." Again, we can demonstrate huge savings on planes, trains and automobiles.

Next we might overlay our exciting new hosted call recording package -- Unity Call Recording (UC) -- which is perfect if they are in a regulated industry and have a legal requirement to record calls. It’s got some really neat features including the ability to tag and bookmark calls to help easy searching and playback.

While we're doing this, we might also explore the data path. For example our new FlexiStor service provides what we think is the UK’s most straightforward PAYG service designed to manage data by its business "value" and not just as one big homogenous lump of data. It treats data as critical, important or legacy and applies an appropriate storage process to each ... saving up to 40 percent against traditional data management methods.

Imagine trying to manage this disparate set of systems. It would be pretty impossible. But due to the HP product set that we have, we've been able to utilize all the integrations and have a fully managed, end-to-end lifecycle of the service, the devices, and the product sets that we have as a company.

[Our adoption of the HP suites] was spurred by really bad data that we had in the systems. We couldn't effectively go forward. We couldn't scale anymore. So, we got the guys at HP to come in and design us a solution based on products that we already had, but with full integration, and add in additional products such as HP Asset Manager and device Discovery and Dependency Mapping Inventory (DDMI).

With the systems that we already had in place, we utilized mainly HP Service Desk. So we decided to take the bold leap to go to Service Manager, which then gave us the ability to integrate it fully into the Operations Manager product and our Network Node Manager product.

Since we had the initial integrations, we've added extra integrations like Universal Configuration Management Database (UCMDB), which gives us a massive overview on how the network is progressing and how it's developing. Coupled with this, we've got Release Control, and we've just upgraded to the latest version of Service Manager 9.2.


... We recently upgraded Connect-It from 4.1 to 9.3, and with that, we upgraded Asset Manager System to 9.3. Connect-It is the glue that holds everything together. It's a fantastic application that you can throw pretty much any data at, from a CSV file, to another database, to web services, to emails, and it will formulate it for you. You can do some complex integrations in that. It will give you the data that you want on the other side and it cleanses and parses, so that you can pass it on to other systems.
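Conceptually, this kind of integration glue reads records from one source, cleanses and normalizes them, and hands them to the next system. As a hedged illustration only (Connect-It itself configures such mappings graphically; the field names here are hypothetical), the cleanse-and-parse step might look like this in Python:

```python
import csv
import io

def cleanse_records(raw_csv):
    """Parse CSV rows, trim whitespace, drop rows missing a device ID,
    and normalize field names so downstream systems see consistent data."""
    cleaned = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        device_id = (row.get("Device ID") or "").strip()
        if not device_id:  # discard rows no downstream system could use
            continue
        cleaned.append({
            "device_id": device_id,
            "site": (row.get("Site") or "").strip().upper(),
        })
    return cleaned

raw = "Device ID,Site\n rtr-01 , leeds \n,\nrtr-02,london\n"
print(cleanse_records(raw))
# [{'device_id': 'rtr-01', 'site': 'LEEDS'}, {'device_id': 'rtr-02', 'site': 'LONDON'}]
```

The value of centralizing this step, as described above, is that every consuming system receives the same cleansed view of the data rather than each doing its own parsing.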

From our DDMI system, right through to our Service Manager, then into our Network Node Manager, we now have a full set of solutions that are held together by Connect-It.

We can discover the device on the network. We can then propagate it into Service Manager. We can add lots of financial details to it from other financial systems outside of the HP product set, but which are easy to integrate. We can therefore provision the circuit and provision the device and add to monitoring automatically, without any human intervention, just by the fact that the device gets shipped to the site.

It gets loaded up with the configuration, and then it's good to go. It's automatically managed right through to the decommissioning stage, or the upgrade stage, where it's replaced by another device. HP systems give us that capability.

So this has all given us a huge benefit in terms of process control and how it relates to ITIL. More importantly, one of the main things we're going for at the moment is payment card industry (PCI) and ISO 27001 compliance.

For any auditor that comes in, we have a documented set of reports that we can give them. That will hopefully help us get this compliance and maintain it. One of the things as an MSP is that we can be compliant for the customer. The customer can have the infrastructure outsourced to us with the compliance policy in that. We can take the headache of compliance away from our customers.

More and more these days, we have a lot of solicitors and law firms on our books, and we're getting "are you compliant" as a request before they place business with us. We're finding all across the industry that compliance is a must before any contract is won. So to keep one step ahead of the game, this is something that we're going to have to achieve and maintain, and the HP product set that we have is key in that.


In terms of our service and support, we've basically grown the network massively, but we haven’t increased any headcount for managing the network. Our 24/7 guys are the same as they were four or five years ago in terms of headcount.

We get, on average, around 5,000 incidents a month automatically generated from our systems and network devices. Of these, only about 560 are linked to customer-facing interactions through the Service Desk module in the Service Manager application.

Approximately 80 percent of our total incidents are generated automatically. They're either proactively raised, based on things like CPU and memory use on network devices, virtual devices, or even physical servers in our data centers, or reactively raised based on, for example, device or interface outages.
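A proactively raised incident of that kind boils down to a threshold check against a metric sample. As a minimal sketch, assuming illustrative field names and limits (not InTechnology's actual rules or HP's API):

```python
def raise_incidents(samples, cpu_limit=90, mem_limit=85):
    """Return proactively raised incidents for metric samples that breach
    a limit. Each sample: {"device": str, "cpu": percent, "mem": percent}."""
    incidents = []
    for s in samples:
        if s["cpu"] > cpu_limit:
            incidents.append((s["device"], "cpu", s["cpu"]))
        if s["mem"] > mem_limit:
            incidents.append((s["device"], "mem", s["mem"]))
    return incidents

samples = [{"device": "sw-01", "cpu": 95, "mem": 40},
           {"device": "sw-02", "cpu": 10, "mem": 90}]
print(raise_incidents(samples))
# [('sw-01', 'cpu', 95), ('sw-02', 'mem', 90)]
```

In a real monitoring stack this check runs continuously against polled metrics, and each tuple would become a ticket in the service desk rather than a printed line.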

Massive burden

When 80 percent of all incidents are raised automatically, it takes a massive burden off the 24/7 teams and the customer support guys, who aren't spending the majority of their time creating incidents but actually working to resolve them.

When we originally decided to take the step to upgrade from Service Desk to Service Manager and to get the network discovery product set in, we used HP’s Professional Services to effectively design the solution and help us implement it.

Within six months, we had Service Desk upgraded to Service Manager. We had an asset manager system that was fully integrated with our financials, our stock control. And we also had a Network Discovery toolset that was inventorying our estate. So we had a fully end-to-end solution.

Automatic incidents

Into that, we've helped develop the Network Operations Management solution to generate automatic incidents. HP Professional Services played a pivotal role in getting us to the solutions we have now.

Since then, we've taken that a bit further, because we have very knowledgeable in-house guys who really understand the HP systems and services. Most of what we do now in terms of upgrades is done in-house.

One of the key benefits is it gives us a unique calling card for our potential customers. I don’t know of many other MSPs that have such an automated set of technology tools to help them manage the service that they provide to their customers.

Five years ago, this wasn't possible. We had disparate systems and duplicate data held in multiple areas. So it wasn't possible to have the integration and the level of support that we now give our customers for the systems and services we provide.

Mean time to restore has come down significantly, by well over 15 percent. As I said, there has been zero increase in headcount across our systems and services. We started off with a few thousand network devices and only three or four different products in data, storage, networks, and voice. Now we've got 16 different product sets, with about 8,000 to 9,000 network devices.

In terms of cost saving, and increased productivity, this has been huge. Our 24/7 teams and customer support teams are more proactive in using knowledge bases and Level 1 triage. Resolution of incidents has gone up by 25 percent by customer support teams and level 1 engineers; this enables the level 3 engineers to concentrate on more complex issues.


If you take a Priority 3, Priority 4 incident, 70 percent of those are now fixed by Level 1 engineers, which was unheard of five or six years ago. Also, we now have a very good knowledge base in the Service Manager tool that we can use for our Level 1 engineers.

In terms of SLAs, we manage the availability of network devices. It gives us a lot more flexibility in how we give these availability metrics to the customers. Because we're business driven by other third party suppliers, we can maintain and get service credits from them. We've also got a fully documented incident lifecycle. We can tell when the downtime has been on these services, and give our suppliers a bit of an ear bashing about it, because we have this information to hand them. We didn’t have that five or six years ago.

With event correlation, we've reduced our operations browsers down to just meaningful incidents. We've filtered our events from over 100,000 a month to fewer than 20,000; many of these are duplicates that get correlated together. Most events are associated with knowledge base articles in Service Manager and contain instructions on how to escalate or resolve the event, increasingly by a Level 1 engineer.
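The filtering described above amounts to deduplicating and grouping raw events before they reach the operations browser. A hedged sketch of that correlation idea (the grouping key and event structure are illustrative assumptions, not how HP's tooling is implemented):

```python
from collections import defaultdict

def correlate(events):
    """Group raw events by (device, type) so duplicates collapse into a
    single entry with a repeat count: one browser line per real issue."""
    groups = defaultdict(int)
    for e in events:
        groups[(e["device"], e["type"])] += 1
    return [{"device": d, "type": t, "count": n}
            for (d, t), n in groups.items()]

raw = [{"device": "rtr-01", "type": "link-down"},
       {"device": "rtr-01", "type": "link-down"},
       {"device": "rtr-02", "type": "high-cpu"}]
print(correlate(raw))
```

Three raw events collapse to two correlated entries here; at 100,000 events a month, the same idea is what keeps an operations browser readable.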

Contacting customers within agreed SLAs, and being able to drive our suppliers to provide better service, is fantastic because of the information that's now available in the systems. It gives us much more of a heads-up on what's happening around the network.

We're taking a lot of information from our financial systems and placing it into our UCMDB and CMDB databases to give us a breakdown of cost per device and cost per month, because that information is now available.

We have a couple of data centers, and one of our biggest costs is power usage. Now, by collecting power information using NNMi, we can break down how much our power is costing per rack, in terms of how many amps have been drawn over a set period of time, say a week or a month. Previously, we had no way of determining how our power was being used or how much it was actually costing us per rack or per unit.
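The per-rack figure falls out of simple arithmetic once the current draw is known: watts = amps x volts, kilowatt-hours = watts x hours / 1000, cost = kWh x tariff. A small worked example, where the voltage and electricity price are illustrative assumptions rather than InTechnology's actual rates:

```python
def rack_power_cost(amps, hours, volts=230, price_per_kwh=0.12):
    """Estimate the electricity cost of one rack from its measured current
    draw over a period (assumes UK single-phase 230 V and a flat tariff)."""
    kwh = amps * volts * hours / 1000.0
    return round(kwh * price_per_kwh, 2)

# A rack drawing 8 A continuously for a 30-day month (720 hours):
print(rack_power_cost(8, 30 * 24))  # roughly 159 at these assumed rates
```

The measured amps come from the monitoring system; the rest is just this formula applied per rack per billing period.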


It's given us a massive information boost, and we can really utilize the information, especially in UCMDB, and because it’s so flexible, we can tailor it to do pretty much whatever we want. From this performance information, we can also give our customers extra value reports and statistics that we can charge as a value added managed solution for them.

[In terms of getting started], one of the main things is to have a clear goal in mind before you start. Plan everything, get it all written down, and have the processes reviewed before you start implementing, because it's fairly hard to re-engineer if you decide that one of the solutions or processes you've implemented isn't going to work. Because of the integration of all the systems, you may find that reverse-engineering them is a difficult task.

As a company, we decided to go for a clean start and basically said we'd filter all the data, take only the data we actually required, and start from scratch. We found that by doing it that way, we didn't get any bad data in there. All the data we have now has pretty much been cleansed and enriched by the information we get from our automated systems, and also by the extra data that people have put in.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Monday, August 1, 2011

New HP Service Manager tackles time and cost associated with help desk productivity

HP today announced the latest version of its Service Manager software in an attempt to drive out a large portion of help desk cost, 85 percent of which is estimated to be spent on personnel. Version 9.30 introduces several innovations aimed at ease of use for the help desk staff, the end users, and the administrators who maintain the system.

According to Chuck Darst, HP's ITSM Product Manager, three key features underlie the updated version: a mobile client, enhanced service catalog, and enhanced knowledge management (KM). [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Aimed at service desk personnel involved in the incident management and change approvals processes, the new mobile client (included at no additional cost) has been built for a smart phone form-factor and is supported on a wide range of devices, including the Apple iPhone and iPad, Google Android devices, HP WebOS devices and RIM Blackberry. Prior to this, users who wanted smart phone functionality had to rely on third-party plug-ins.

The service catalog portal provides an interface to cloud environments, providing options, sources, and methods for provisioning requests. Also, with a customizable mySM dashboard, IT can directly access information when they need it. The new dashboard can tailor data from HP Service Manager or other external sources, without the need of an administrator.


The new KM offering provides searches using updated search engine technology and new search forms designed to increase the amount of first-call resolutions and to reduce the number of calls that need to be escalated.

Other features include:
  • Graphical “Process Designer,” which allows IT organizations to speed implementations with a new GUI-based workflow designer and rules editor that simplify the editing and configuring of workflows, conditions and rules.
  • New survey capability, so IT can tune services to better serve its customers with a new survey instrument from MarketTools that captures end-user feedback.
  • End-user self-service, which allows users to access self-support help or place a new support request.
  • A new migration tool and an assessment tool for migration planning.
You may also be interested in:

Friday, July 29, 2011

Discover Case Study: How IHG has deployed and benefited from increased apps testing

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

Welcome to a special BriefingsDirect podcast series coming to you from the HP Discover 2011 conference June 8 in Las Vegas. We explored some major enterprise IT solutions, trends, and innovations making news across HP’s ecosystem of customers, partners, and developers.

This enterprise case study discussion from the show floor focuses on InterContinental Hotels Group (IHG). We're going to be looking at what they are doing around automation and unification of applications, development and deployment, specifically looking at software as a service (SaaS) as a benefit, and how unification helps bring together performance, but also reduce complexity costs over time.

To help guide us through this use-case, we're here with Brooks Solomon, Manager of Test Automation at InterContinental. The interview was moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Solomon: InterContinental Hotels Group is the largest hotel company by number of rooms. We have 645,000 rooms, 4,400 hotels, with seven different brands, and the largest and first hotel loyalty program with 58 million members.

The majority of the hotels, 3,500 or so, are in the US and the others are distributed around the world. We're going to be expanding to China more and more over the next few years.

I couldn’t list the number of applications we have. The majority of the revenue comes from four major applications that are consumer-facing.

We use HP’s testing software all the way from Quality Center (QC), through Quick Test Professional (QTP), through LoadRunner, up into the Business Availability Center (BAC) tool. I've talked about how we get to the process of BAC and then how BAC benefits us from a global perspective.

The apps that we generate support the majority of IHG’s revenue and, if they're not customer-facing, they're a call-center application. If you call 1-800 Holiday Inn, that kind of thing, you'll get a reservation agent somewhere around the world wherever you are. Then, that agent will actually tap into another application that we developed to generate the reservation from there.

SaaS monitors

We use SaaS and we have a private use of SaaS. Going back to our call-center applications, there are local centers around the world, and we've installed SaaS monitors at those facilities. Not only do we get a sense of the agents’ response time and availability from their centers, we also get a full global view of customers and their experience, wherever they may be.

Right now the only SaaS-based tool we have is the BAC. The other HP tools that we use are in-house.

Without the automated suite of tools that we have, we couldn’t deliver our products in a timely fashion and with quality. We have an aggressive release schedule every two weeks, distributing new products, new applications, or bug fixes for things that have occurred. Without the automated regression suite of tools that we have, we couldn’t get those out in time. Having those tools in place allows us approximately a 75 percent reduction in cost.

My advice would be to define the core functionality of your applications and automate that first. Then, once new enhancements come along and there are business-critical transactions, I would include those in your automated suite of tools and tests.

We're coming off of a mainframe reservation system and we are converting that into a service-oriented architecture (SOA). So, we’ve recently purchased HP Service Test. We hope that acquisition will help us automate all of our services coming off the mainframe. We're going to do that on a gradual basis, automating those services as they come online.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Thursday, July 28, 2011

Standards effort points to automation via common markup language O-ACEML for improved IT compliance, security

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Join this podcast discussion in conjunction with the latest Open Group Conference in Austin, Texas, to examine the Open Automated Compliance Expert Markup Language (O-ACEML), a new standard that helps enterprises automate security compliance across their systems in a consistent and cost-saving manner.

O-ACEML not only helps to achieve compliance with applicable regulations, but also delivers major cost savings. From the compliance-audit viewpoint, auditors can carry out consistent and more capable audits in less time.

Here to help us understand O-ACEML and managing automated security compliance issues and how the standard is evolving are Jim Hietala, Vice President of Security at The Open Group, and Shawn Mullen, a Power Software Security Architect at IBM. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a Sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Hietala: One of the things you've seen in the last 10 or 12 years -- since the compliance regulations have really come to the fore -- is that the more regulation there is, the more specific requirements are put down, and the more challenging it is for organizations to manage. Their IT infrastructure needs to be in compliance with whatever regulations impact them, and the cost of doing so becomes significant.

So, anything that could be done to help automate, to drive out cost, and maybe make organizations more effective in complying with the regulations that affect them -- whether it’s PCI, HIPAA, or whatever -- brings a lot of benefit to large IT organizations. That’s really what drove us to look at adopting a standard in this area.

Manual process

[We're moving to] enable compliance of IT devices, specifically around security constraints and security configuration settings and, to some extent, the process. If you look at how people did compliance or managed compliance without a standard like this, without automation, it tended to be a manual process of setting configuration settings and auditors manually checking on settings. O-ACEML goes to the heart of trying to automate that process and drive some cost out of the equation.

Mullen: This has been going on a while, and we’re seeing it in both classes of customers. On the high end, we would go from customer to customer, and they would have their own hardening scripts, their own view of what should be hardened. It might conflict with what the compliance organization wanted as far as the settings. O-ACEML is a standard way of taking what the compliance organization wants, and it also offers an easy way to author it and to change it.

If your own corporate security requirements are more stringent, you can easily change the O-ACEML configuration so that it satisfies your more stringent corporate compliance or security policy, as well as satisfying the regulatory compliance organization, with an easy way to monitor it, report on it, and see it.

In addition, on the low end, small businesses don’t have the expertise to know how to configure their systems. Quite frankly, they don’t want to be security experts. Here is an easy way to take an XML file and harden their systems as they need to be hardened to meet compliance, or just regular good security practice.

One of the things that we're seeing in the industry is server consolidation. If you have these hundreds, or in large organizations thousands, of systems and you have to manually configure them, it becomes a very daunting task. Because of that, it's a one-time shot at doing this, and then the monitoring is even more difficult. With O-ACEML, it's a way of authoring your security policy as it meets compliance or for your own security policy in pushing that out.

This allows you to have a single XML and push it onto heterogeneous platforms. Everything is configured securely and consistently and it gives you a very easy way to get the tooling to monitor those systems, so they are configured correctly today. You're checking them weekly or daily to ensure that they remain in that desired state.

[As an example], let's take a single rule, and we'll use a simple case like minimum password length. In PCI, the minimum password length, for example, is seven. In Sarbanes-Oxley, which relies on COBIT, the password length would be eight.

But with O-ACEML XML, it's very easy to author a rule, and there are three segments to it. The first segment is very human-understandable: you would put something like "password length equals seven." You can add descriptive text with it, and that's all you have to author.
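As a sketch only -- the element names below are illustrative assumptions, not taken from the published O-ACEML specification -- authoring that first, human-readable segment of a rule might look like this:

```python
import xml.etree.ElementTree as ET

# Hypothetical O-ACEML-style rule skeleton; the element names here are
# assumptions for illustration, not drawn from the published standard.
rule = ET.Element("rule", id="min-password-length")
ET.SubElement(rule, "directive").text = "password length equals 7"
ET.SubElement(rule, "description").text = (
    "PCI DSS requires a minimum password length of seven characters."
)

# Serialize the authored rule, ready to push out to endpoints.
print(ET.tostring(rule, encoding="unicode"))
```

Note that only the human-readable directive and description are authored up front; the later segments are filled in by the endpoint.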

Actionable command

When that is pushed down onto a platform or system that's O-ACEML-aware, it's able to take that simple O-ACEML word or directive and map it into an actionable command relevant to that system. When it finds the mapping into the actionable command, it writes it back into the XML. That completes the second phase of the rule. It then executes that command, either to implement the setting or to check the setting.

The result of the command is then written back into the XML. So now the XML for a particular rule has the first part, the high-level directive authored by the compliance organization; how that particular system mapped it into a command; and the result of executing that command, in either a setting or a checking format.

Now we have all of the artifacts we need to ensure that the system is configured correctly, and to generate audit reports. So when the auditor comes in we can say, "This is exactly how any particular system is configured and we know it to be consistent, because we can point to any particular system, get the O-ACEML XML and see all the artifacts and generate reports from that."
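The three-phase lifecycle described above -- authored directive, mapped command, recorded result -- can be sketched in a few lines of Python. Everything here (the element names, the command template, the directive grammar, the use of an observed value in place of actually running the command) is a hypothetical illustration, not the actual standard:

```python
import xml.etree.ElementTree as ET

# Hypothetical template mapping the directive to this platform's command.
COMMAND_TEMPLATE = "chsec -f /etc/security/user -s default -a minlen={n}"

def process_rule(rule_xml: str, observed_minlen: int) -> str:
    """Run one rule through all three phases and return the enriched XML."""
    rule = ET.fromstring(rule_xml)
    # Phase 1: read the authored, human-readable directive.
    wanted = int(rule.findtext("directive").split()[-1])
    # Phase 2: map the directive to an actionable command, written back in.
    ET.SubElement(rule, "command").text = COMMAND_TEMPLATE.format(n=wanted)
    # Phase 3: record the checking result (the observed value stands in
    # for actually executing the command on the system).
    ET.SubElement(rule, "result").text = (
        "pass" if observed_minlen >= wanted else "fail"
    )
    return ET.tostring(rule, encoding="unicode")

audited = process_rule(
    "<rule><directive>password length equals 7</directive></rule>",
    observed_minlen=8,
)
print(audited)
```

The returned XML now carries all three artifacts for that rule, which is what makes the audit-report generation described above possible.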

What's interesting about O-ACEML -- and this is one of our differences from, for example, the security content automation protocol (SCAP) -- is that instead of the vendor saying, "This is how we do it. It has a repository of how the checking goes and everything like that," you let the end point make the determination. The end point is aware of what OS it is and it's aware of what version it is.

For example, with IBM UNIX, which is AIX, you would say "password check at this different level." We've increased our password strength, we've done a lot of security enhancements around that. If you push the ACEML to a newer level of AIX, it would do the checking slightly differently. So, it really relies on the platform, the device itself, to understand ACEML and understand how best to do its checking.

We see with small businesses and even some of the larger corporations that they're maintaining their own scripts. They're doing everything manually. They're logging on to a system and running some of those scripts. Or, they're not running scripts at all, but are manually making all of these settings.

It's an extremely long and burdensome process when you start considering that there are hundreds or thousands of these systems. There are different OSes. You have to find experts for your Linux systems or your HP-UX or AIX. You have to have all those different talents and skills in these different areas, and again, the process is quite lengthy.

Different classes

Hietala: The way to think about it is the universe of IT devices that are in scope for these various compliance regulations. If you think about PCI DSS, it defines pretty tightly what your cardholder data environment consists of. In terms of O-ACEML, it could be networking devices, servers, storage equipment, or any sort of IT device. Broadly speaking, it could apply to lots of different classes of computing devices.

O-ACEML is relatively new. It was just published 60 days ago by The Open Group. The actual specification is on The Open Group website. It's downloadable, and we would encourage both system vendors and platform vendors, as well as folks in the security management space or maybe the IT-GRC space, to check it out, take a look at it, and think about adopting it as a way to exchange compliance configuration information with platforms.

We want to encourage adoption by as broad a set of vendors as we can, and we think that having more adoption by the industry will help make this more available so that end-users can take advantage of it.

Mullen: We had a very interesting presentation here at The Open Group Conference in Austin. Customers are finding the best way they can lower their compliance or their cost of meeting compliance is through automation. If you can automate any part of that compliance process, that’s going to save you time and money. If you can get rid of the manual effort with automation, it greatly reduces your cost.

Cost of compliance

There was a very good study [we released and discussed this week]. It found that the average annual cost for an organization to be compliant is $3 million. What was also interesting was that the cost of being non-compliant, as they called it, was $9 million.

Hietala: The figures that Shawn is referencing come out of the study by the Ponemon Institute. Larry Ponemon does lots of studies around security risk compliance cost. He authors an annual data breach study that's pretty widely quoted in the security industry that gets to the cost of data breaches on average for companies.

In the numbers that were presented, he recently studied 46 very large companies, looking at their cost to be in compliance with the relevant regulations. It's about $3.5 million a year, and over $9 million for companies that weren't compliant, which suggests that companies that are actively managing toward compliance are probably a little more efficient than those that aren't.

What O-ACEML has the opportunity to do for those companies that are in compliance is help drive that $3.5 million down to something much less by automating and taking manual labor out of the process.

Mullen: One of the things that we're hoping vendors will gravitate toward is the ability to have a central console controlling their IT environment or configuring and monitoring their IT environment. It just has to push out a single XML file. It doesn’t have to push out a special XML for Linux versus AIX versus a network device. It can push out that O-ACEML file to all of the devices. It's a singular descriptive XML, and each device, in turn, knows how to map it to its own particular platform in security configuring.
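A minimal sketch of that idea, with invented element names and illustrative, approximate per-platform commands (not taken from any vendor's actual console):

```python
# Hypothetical per-platform mappings for a single directive ("minimum
# password length"). The commands are approximations for illustration only.
PLATFORM_COMMANDS = {
    "aix":   "chsec -f /etc/security/user -s default -a minlen={n}",
    "linux": "sed -i 's/^PASS_MIN_LEN.*/PASS_MIN_LEN {n}/' /etc/login.defs",
}

def map_directive(platform: str, min_len: int) -> str:
    """Each endpoint maps the same directive onto its own native command."""
    return PLATFORM_COMMANDS[platform].format(n=min_len)

# The console pushes one directive; each platform resolves it differently.
for platform in ("aix", "linux"):
    print(platform, "->", map_directive(platform, 7))
```

The point of the design, as Mullen describes it, is that this mapping table lives on the endpoint, not in the console, so the console only ever ships the single descriptive XML.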

Hietala: And O-ACEML goes beyond just the compliance regulations that are put on us by government organizations, to defining best-practice security policies in the organization, and then using this as a mechanism to push those out to your environment and to ensure that they're being followed and implemented on all the devices in your IT environment.

So, it definitely goes beyond just managing compliance to these external regulations, but to doing a better job of implementing the ideal security configuration settings across your environment.

Moving to the cloud

If you think about how this sort of a standard might apply toward services that are built in somebody’s cloud, you could see using this as a way to both set configuration settings and check on the status of configuration settings and instances of machines that are running in a cloud environment. Shawn, maybe you want to expand on that?

Mullen: It's interesting that you brought this up, because this is the exact conversation we had earlier today in one of the plenary sessions. They were talking about moving your IT out into the cloud. One of the issues, aside from just the security, was how do you prove that you are meeting these compliance requirements?

ACEML is a way to reach into the cloud to find your particular system and bring back a report that you can present to your auditor. Even though you don’t own the system --it's not in the data center here in the next office, it's off in the cloud somewhere -- you can bring back all the artifacts necessary to prove to the auditor that you are meeting the regulatory requirements.

Hietala: The standard specification is up on our website. You can go to the "Publications" tab on our website, and do a search for O-ACEML, and you should find the actual technical standard document. Then, you can get involved directly in the security forum by joining The Open Group. As the standard evolves, and as we do more with it, we certainly want more members involved in helping to guide its progress over time.

Mullen: That’s a perfect way to start. We do want to invite different compliance organizations, everybody from the electrical power grid -- they have their own view of security -- to ISO, to the payment card industry. For the electrical power grid standard, for example -- and ISO is the same way -- what O-ACEML helps them with is that they don’t need to understand how Linux does it or how AIX does it. They don’t need to have that deep understanding.

In fact, in the way ISO describes it in their PDF around password settings, it basically says, use good password settings, and it doesn’t go into any depth beyond that. The way we architected and designed O-ACEML is that you can just say, "I want good password settings," and it will default to what we decided. What we focused in on collectively, as an international standard in The Open Group, was that good password hygiene means you change your password every six months, it should be at least so many characters long, and it should include a non-alphanumeric character.

It removes the burden on these different compliance groups of being security experts and lets them just use O-ACEML and the default settings that The Open Group came up with.

We want to reach out to those groups and show them the benefits of publishing some of their security standards in O-ACEML. Beyond that, we'll work with them to have that standard up, and hopefully they can publish it on their website, or maybe we can publish it on The Open Group website.

... It’s an international standard; we want it to be used by multiple compliance organizations. And compliance is a good thing. It’s just good IT governance. It will save companies money in the long run, as we saw with these statistics. The goal is to lower the cost of being compliant, so you get good IT governance, just at a lower cost.

Hietala: You'll see more from us in terms of adoption of the standard. We’re looking already at case studies and so forth to describe, in terms that everyone can understand, the benefits organizations are seeing from using O-ACEML. Given the environment we’re in today, we’re reading about security breaches and hacktivism every day in the newspapers.

I think we can expect to see more regulation and more frequent revisions of regulations and standards affecting IT organizations and their security, which really makes it imperative to engineer your IT environment in such a way that you can accommodate those changes as they're brought to your organization, do so in an effective way, and at the least cost. Those are really the kinds of things that O-ACEML has targeted, and I think there's a lot of benefit to organizations in using it.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

You may also be interested in:

Wednesday, July 27, 2011

IT industry looks to Open Trusted Technology Forum to help secure supply chains that support technology products

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: The Open Group.

Join this podcast discussion in conjunction with the latest Open Group Conference in Austin, Texas, to examine The Open Group Trusted Technology Forum, also known as the OTTF, designed to help technology acquirers and buyers safely conduct global procurement and supply chain commerce.

We'll examine how the security risk for many companies and organizations has only grown, even as these companies form essential partnerships and integral supplier relationships. So how can all the players in a technology ecosystem gain assurances that the other participants are adhering to best practices and taking the proper precautions?

Here to help us better understand how established standard best practices and an associated accreditation approach can help make supply chains stronger and safer is Dave Lounsbury, the Chief Technical Officer at The Open Group; Steve Lipner, Senior Director of Security Engineering Strategy in the Trustworthy Computing Security at Microsoft; Joshua Brickman, Director of the Federal Certification Program Office at CA Technologies, and Andras Szakal, Vice President and CTO of IBM’s Federal Software Group. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: The Open Group is a Sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Lounsbury: A great quote coming out of the conference is that we have moved the entire world’s economy to being dependent on the Internet, without a backup plan. Anyone who looks at the world economy will see, not only are we dependent on it for exchange of value in many cases, but even for information about how our daily lives are run, traffic, health information, and things like that.

It's becoming increasingly vitally important that we understand all the aspects of what it means to have trust in the chain of components that deliver that connectivity to us, not just as technologists, but as people who live in the world.

Lipner: And the attackers are becoming more determined and more visible across the Internet ecosystem. Vendors have stepped up to improve the security of their product offerings, but customers are concerned. A lot of what we're doing in The Open Group and in the OTTF is about trying to give them additional confidence of what vendors are doing, as well as inform vendors what they should be doing.

Brickman: One of the things that I really like about this group is that you have all of the leaders, everybody who is important in this space, working together with one common goal.

One of the things we're thinking about is whether there's a 100 percent fail-safe solution to cyber attacks. And there really isn't. There is just a bar that you can set, and the question is how much do you want to make the attackers spend before they can get over that bar? What we're going to try to do is establish that level, and, working together, I feel very encouraged that we're getting there so far.

Szakal: We're going to develop a standard, or are in the process of developing a specification and ultimately an accreditation program, that will validate suppliers and providers against that standard.

It's focused on building trust into a technology provider organization through this accreditation program, facilitated through either one of several different delivery mechanisms that we are working on. We're looking for this to become a global program, with global partners, as we move forward.

Global effort

Lounsbury: Any electronic or information system now is really built on components and software that are delivered from all around the globe. We have software that’s developed in one continent, hardware that’s developed in another, integrated in a third, and used globally.

So, we really do need to have the kinds of global standards and engagement that Andras has referred to, so that there is that one bar for all to clear in order to be considered as a provider of trusted components.

[There has] been a change in these attacks, from just nuisance attacks, to ones that are focused on monetization of cyber crimes and exfiltration of data. So the spectrum of threats is increasing a lot. More sophisticated attackers are looking for narrower and narrower attack vectors each time. So we really do need to look across the spectrum of how this IT technology gets produced in order to address it.

Lipner: The tagline we have used for The Open Group TTF is "Build with Integrity, Buy with Confidence." We certainly understand that customers want to have confidence in the hardware and software of the IT products that they buy.

We believe that it’s up to the suppliers, working together with other members of the IT community, to identify best practices and then articulate them, so that organizations up and down the supply chain will know what they ought to be doing to ensure that customer confidence.

Szakal: [To that goal], we completed the white paper earlier this year, in the first quarter. The white paper was visionary in nature, and it was obviously designed to help our constituents understand the goals of the OTTF.

However, in order to actually make this a normative specification and design a program, around which you would have conformance and be able to measure suppliers’ conformity to that specification, we have to develop a specification with normative language.

First draft

We're finishing that up as we speak and we are going to have a first draft here within the next month. We're looking to have that entire specification go through company review in the fourth quarter of this year.

Simultaneously, we'll be working on the accreditation policy, conformance criteria, and evidence requirements necessary to actually have an accreditation program, while continuing to liaise with other evaluation schemes that are interested in partnering with us. In a global, international environment, that’s very important, because more than one of these regimes exists, and we will have to coexist and partner with them.

Over the next year, we'll have completed the accreditation program and have begun testing of the process, probably having to make some adjustments along the way. We're looking at sometime within the first half of 2012 for having a completed program to begin ramping up.

The forum itself continues to liaise with the government and all of our constituents. As you know, we have several government members that are part of the TTF, and they are just as important as any of the other members. We continue to provide updates to many of the governments that we're working with globally to ensure they understand the goals of the TTF and how they can provide value synergistically with what we're doing, as we would to them.

Brickman: We've made tremendous progress on wrapping up our framework and getting it ready for the first review.

We've also been meeting with several government officials. I can’t say who they are, but what’s been good about it is that they're very positive about the work we're doing. They support what we're doing and want to continue this discussion.

It’s very much a partnership, and we do feel like it’s not just an industry-led project; we have participation from folks who could very much be the consumers of this initiative.

Awareness of security

Lounsbury: A very clear possible outcome is that there will be a set of simple guidelines and ones that can be implemented by a broad spectrum of vendors, where a consumer can look and say, "These folks have followed good practices. They have baked secure engineering, secure design, and secure supply chain processes into their thing, and therefore I am more comfortable in dealing with them as a partner."

Of course, what that means is that not only do you end up with more confidence in your supply chain and the components you're getting through that supply chain, but it also takes a little bit of work off your plate. You don’t have to invest as much in evaluating your vendors, because you can use commonly available and widely understood best practices.

From the vendor perspective, it’s helpful because we're already seeing places where a company, like a financial services company, will go to a vendor and say, "We need to evaluate you. Here’s our checklist." Of course, the vendor would have to deal with many different checklists in order to close the business, and this will give them some common starting point.

Of course, everybody is going to customize and build on top of what that minimum bar is, depending on what kind of business they're in. But at least it gives everybody a common starting point, a common reference point, some common vocabulary for how they are going to talk about how they do those assessments and make those purchasing decisions.

This is a living type of an activity that you never really finish. There’s always something new to be done.



Lipner: If we achieve the sort of success that we are aiming for and anticipating, you'll see requirements for the TTF, not only in RFPs, but also potentially in government policy documents around the world, basically aiming to increase the trust of broad collections of products that countries and companies use.

Brickman: One of the things that will happen is that as companies start to go out and test this, as with any other standard, the 1.0 standard will evolve to something that will become more germane, and as Steve said, will hopefully be adopted worldwide.

Agile and useful

I don’t think anybody wants it to become a behemoth. We want it to be agile, useful, and certainly something readable and achievable for companies that are not multinational, billion-dollar companies, but also companies that are just out there trying to sell their piece of the pie into the space. That’s ultimately the goal for all of us: to make sure that this is a reasonable achievement.

Lounsbury: This is another thing that has come out of our meetings. We've heard a number of times that governments, of course, feel the need to protect their infrastructure and their economies, but they also realize that, because of the rapid evolution of technology and of security threats, it’s hard for them to keep up. Regulation isn't really the right vehicle.

There really is a strong preference. The U.S. strategy on this is to let industry take the lead. One of the reasons for that is the fact that industry can evolve, in fact must evolve, at the pace of the commercial marketplace. Otherwise, they wouldn’t be in business.

So, we really do want to get that first stake in the ground and get this working, as Joshua said. But there is some expectation that, over time, the industry will drive the evolution of security practices and security policies, like the ones OTTF is developing at the pace of commercial market, so that governments won’t have to do that kind of regulation which may not keep up.

Szakal: One of our goals is to ensure that the specification itself, the best practices, remains viable and is updated periodically. We're talking about potentially yearly, and about including new techniques and applying potentially new technologies to ensure that providers are implementing the best practices for development engineering, secure engineering, and supply chain integrity.

It's going to be very important for us to continue to evolve these best practices over a period of time and not allow them to fall into a state of static disrepair.

I'm very enthusiastic, because many of the members are very much in agreement that this needs to happen in order to raise the bar for the industry and help the entire industry adopt these practices as we move forward in our journey to secure our critical infrastructure.

Lounsbury: The reason we've been able to make the progress we have is that we've got the expertise in security from all of these major corporations and government agencies participating in the TTF. The best way to maintain that currency and maintain that drive is for people who have a problem on the buy side, or expertise on either side, to come in and participate.

Hands-on awareness

You have got the hands-on awareness of the market, and bringing that in and adding that knowledge of what is needed to the specification and helping move its evolution along is absolutely the best thing to do.

That’s our steady state, and of course the way to get started on that is to go and look at the materials. The white paper is out there. I expect we will be doing snapshots of early versions of this that would be available, so people can take a look at those. Or, come to an Open Group Conference and learn about what we are doing.

Szakal: As vendors, we'd like to see minimal regulation; that's simply the nature of the beast. In order for us to conduct our business and lower the cost of market entry, that's important.

I think it's important that we provide leadership within the industry to ensure that we're following the best practices and protecting the integrity of the products that we provide. It's through that industry leadership that we will avoid potentially damaging regulations across different regional environments.

We certainly wouldn't want to see different regulations pop up in different places globally. That makes for a very messy technology-insertion opportunity for us. We're hoping that by getting engaged and providing some self-regulation, we won't see additional government or international regulation.

Lipner: One of the things that my experience has taught me is that customers are very aware these days of security, product integrity, and the importance of suppliers paying attention to those issues. Having a robust program like the TTF and the certifications that it envisions will give customers confidence, and they will pay attention to that. That will change their behavior in the market even without formal regulations.

Brickman: Industry setting the standard is an idea that has been thrown around a while, and I think that it's great to see us finally doing it in this area, because we know our stuff the best.

We're going to try to set up a standard whereby we're providing public information about what our products do and what we do as far as best practices. At the end of the day, the acquiring agency, or whoever it may be, is going to have to make decisions, and they're going to make intelligent decisions based on looking at the folks that choose to go through this and the folks that choose not to.

It will continue

Bad news is going to continue to come out. The only thing they'll be able to do is look to the companies that are the experts in this to try to help them, and they're going to get some of that from the companies that go through these evaluations. There's no question about it.

At the end of the day, this accreditation program is going to shake out the products and companies that really do follow best practices for secure engineering and supply chain best practices.

Szakal: Around November, we're going to be going through company review of the specification and we'll be publishing that in the fourth quarter.

We'll also be liaising with our government and international partners during that time, and looking forward to several upcoming conferences within The Open Group where we conduct those activities. We're going to solicit some of our partners to speak at those events on our behalf.

As we move into 2012, we'll be working on the accreditation program, specifically the conformance criteria and the accreditation policy, and liaising again with some of our international partners on this particular issue. Hopefully we will, if all things go well and according to plan, come out of 2012 with a viable program.

Lounsbury: Andras has covered it well. Of course, you can always learn more by going to www.opengroup.org and looking on our website for information about the OTTF. You can find drafts of all the documents that have been made public so far, and there will be our white paper and, of course, more information about how to become involved.
