Friday, March 22, 2013

ownCloud debuts cloud tool to give organizations more control over file sync and share software

OwnCloud, Inc. recently released the latest version of the ownCloud Community Edition with a number of usability, performance, and integration enhancements.

Based on an open-source project of the same name, the ownCloud file sync and share software, deployed on-premise, not only offers users greater control, but allows organizations to integrate existing security, storage, monitoring and reporting tools, while still taking advantage of the software’s simplicity and flexibility.

File sync and share services like Dropbox, Google Docs, and Box Inc. have revolutionized the way users share information. These cloud-based services make it easy to share files with clean interfaces and seemingly endless amounts of storage. However, not everyone wants to turn over their information to a service provider – for those who prefer to control how and where their data is stored there’s ownCloud. 

OwnCloud comes in a free community edition, and the company will launch a commercially supported enterprise edition of the software in the second quarter. That version will target enterprise IT departments in need of on-premise file sync and share for sensitive corporate data. The company estimates it has more than 750,000 users worldwide today.

In the latest offering, the user interface has been streamlined, so that the main web navigation panel is now clearly differentiated from in-app navigation, says Markus Rex, CEO of ownCloud. And the way in which the software's settings are laid out has been revamped, making it easier to distinguish personal settings from app-specific settings, he says.

“We’ve completely revamped the design with a much simplified interface so you can differentiate the navigation elements and focus on what you want to work with, instead of distracting from that,” says Rex.

New features

This version of ownCloud also features a Deleted Files app that lets users restore accidentally deleted files and folders, and improved app management, so that third-party apps can be easily installed from the central apps repository and automatically removed from the server if disabled. Also included is a new search engine that lets users find stored files by both name and content, thanks to the Lucene-based full-text search app, and a new antivirus feature, courtesy of ClamAV, that scans uploaded files for malware. This release also includes improved contacts, calendar and bookmarks apps, says Rex.

Performance benefits in this release come from improved file cache and faster syncing of the desktop client, according to company officials. Externally mounted file systems such as Google Drive, Dropbox, FTP and others can be scanned on-demand and in the background to increase performance. And hybrid clouds can be created by mixing and matching storage, thanks to file system abstraction that offers more flexibility and greater performance.
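
To picture what that file system abstraction buys you, consider a minimal, purely illustrative sketch in Python (the class and method names below are invented for this example and are not ownCloud's actual PHP storage API): once every backend implements the same small contract, a sync engine can mix local disk, FTP, or cloud object storage without caring which is which.

    # Illustrative sketch only -- not ownCloud's real storage layer.
    # It shows how a shared interface lets one sync engine mix backends.
    import os
    from abc import ABC, abstractmethod

    class StorageBackend(ABC):
        """Minimal contract a backend must satisfy (hypothetical)."""

        @abstractmethod
        def list(self):
            ...

        @abstractmethod
        def read(self, name):
            ...

        @abstractmethod
        def write(self, name, data):
            ...

    class LocalStorage(StorageBackend):
        def __init__(self, root):
            self.root = root

        def list(self):
            return os.listdir(self.root)

        def read(self, name):
            with open(os.path.join(self.root, name), "rb") as f:
                return f.read()

        def write(self, name, data):
            with open(os.path.join(self.root, name), "wb") as f:
                f.write(data)

    def sync(src, dst):
        # The sync engine never needs to know which backend it is talking to:
        # an FTP, S3, or Google Drive backend would plug in the same way.
        for name in src.list():
            dst.write(name, src.read(name))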

“You can get to the data in all of your data silos from one spot on a mobile client or desktop client, so you can get to files you might not be able to access otherwise from those devices,” says Rex.

This release features improved integration with LDAP and Active Directory and an enhanced external storage app to boost performance of integrated secondary storage including Dropbox, Swift, FTP, Google Docs, Amazon S3, WebDAV and external ownCloud servers.
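
Because ownCloud exposes files over WebDAV, scripts and third-party clients can read and write them with any standard WebDAV library. The following is a minimal sketch using Python's requests library against the typical /remote.php/webdav/ endpoint; the server URL, folder names and credentials are placeholders, and your instance's path may differ.

    # Minimal sketch: talking to an ownCloud server over WebDAV with `requests`.
    # The base URL, credentials, and paths below are placeholders.
    import requests

    BASE = "https://cloud.example.com/remote.php/webdav"  # typical ownCloud WebDAV root
    AUTH = ("alice", "secret")

    # Upload (or overwrite) a file.
    with open("report.pdf", "rb") as f:
        resp = requests.put(BASE + "/Documents/report.pdf", data=f, auth=AUTH)
    resp.raise_for_status()

    # Download it again.
    resp = requests.get(BASE + "/Documents/report.pdf", auth=AUTH)
    resp.raise_for_status()
    with open("report_copy.pdf", "wb") as f:
        f.write(resp.content)

    # List a folder with a WebDAV PROPFIND request (Depth: 1 = direct children).
    resp = requests.request("PROPFIND", BASE + "/Documents/",
                            auth=AUTH, headers={"Depth": "1"})
    print(resp.status_code, "->", len(resp.text), "bytes of XML")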

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Wednesday, March 20, 2013

Gaining greater cohesion: Bringing business analysis and business architecture into focus

This guest post comes courtesy of Craig Martin, Chief Operating Officer and Chief Architect at Enterprise Architects, which is a specialist Enterprise Architecture firm operating in the U.S., UK, Asia and Australia.

By Craig Martin, Enterprise Architects

Having delivered many talks on business architecture over the years, I’m often struck by the common vision driving many members in the audience – a vision of building cohesion in a business, achieving the right balance between competing forces and bringing the business strategy and operations into harmony.  However, as with many ambitious visions, the challenge in this case is immense.  As I will explain, many of the people who envision this future state of nirvana are, in practice, inadvertently preventing it from happening.

Standards Silos

There are a host of standards and disciplines that are brought into play by enterprises to improve business performance and capabilities. For example, standards such as PRINCE2, BABOK, BIZBOK, TOGAF, COBIT, ITIL and PMBOK are designed to ensure reliability of team output and approach across various business activities. However, in many instances these standards, operating together, exhibit significant gaps and overlaps. One wonders whose job it is to integrate and unify these standards. Whose job is it to understand the business requirements, business processes, drivers, capabilities and so on?

Apples to Apples?

As these standards evolve they often introduce new jargon to support their view of the world. Have you ever had to ask your business to explain what they do on a single page? The diversity of the views and models you get back can be quite astonishing, and the list goes on and on…

Each has a purpose and brings value in isolation. However, in the common scenario where they are developed using differing tools, methods, frameworks and techniques, the result is usually greater fragmentation, not more cohesion – and consequently we can end up with some very confused and exacerbated business stakeholders who care less about what standard we use and more about finding clarity to just get the job done.

The Convergence of Business Architecture and Business Analysis

Ask a room filled with business analysts and business architects how their jobs differ and relate, and I guarantee that you would receive a multitude of alternative and sometimes conflicting perspectives.

Both of these disciplines try to develop standardized methods and frameworks for the description of the building blocks of an organization. They also seek to standardize the means by which to string them together to create better outcomes.

In other words, they are the disciplines that seek to create balance between two important business goals:
  • To produce consistent, predictable outcomes
  • To produce outcomes that meet desired objectives
In his book, “The Design of Business: Why Design Thinking is the Next Competitive Advantage,” Roger Martin describes the relationships and trade-offs between analytical thinking and intuitive thinking in business. He refers to the “knowledge funnel,” which charts the movement of business focus from solving business mysteries using heuristics to creating algorithms that increase reliability, reducing business complexity and costs and improving business performance.

The disciplines of business architecture and business analysis are both currently seeking to address this challenge. Martin refers to this as “design thinking.”


(Click here to see an illustration that further explains these concepts.)

Vision Vs. Reality For Business Analysts and Business Architects

When examining the competency models for business analysis and business architecture, the desire is to position these two disciplines right across the spectrum of reliability and validity.

The reality is that both the business architect and the business analyst spend a large portion of their time in the reliability space, and I believe I’ve found the reason why.

Both the BABOK and the BIZBOK provide a body of knowledge focused predominantly around the reliability space. In other words, they look at how we define the building blocks of an organization, and less so at how we invent better building blocks within the organization.

Integrating the Disciplines

While we still have some way to go to integrate, the business architecture and business analysis disciplines are currently bringing great value to business through greater reliability and repeatability.

However, there is a significant opportunity to enable the intuitive thinkers to look at the bigger picture and identify opportunities to innovate their business models, their go-to-market, their product and service offerings and their operations.
Perhaps we might consider introducing a new function to bridge and unify the disciplines? This newly created function might integrate a number of incumbent roles and functions and cover:
  • A holistic structural view covering the business model and the high-level relationships and interactions between all business systems
  • A market model view in which the focus is on understanding the market dynamics, segments and customer need
  • A products and services model view focusing on customer experience, value proposition, product and service mix and customer value
  • An operating model view – this is the current focus area of the business architect and business analyst. You need these building blocks defined in a reliable, repeatable and manageable structure. This enables agility within the organization and will support the assembly and mixing of building blocks to improve customer experience and value
At the end of the day, what matters most is not business analysis or business architecture themselves, but how the business will bridge the reliability and validity spectrum to reliably produce desired business outcomes.

I will discuss this topic in more detail at The Open Group Conference in Sydney, April 15-18, which will be the first Open Group event to be held in Australia.

This guest post comes courtesy of Craig Martin, Chief Operating Officer and Chief Architect at Enterprise Architects, which is a specialist Enterprise Architecture firm operating in the U.S., UK, Asia and Australia. He is presenting the Business Architecture plenary at the upcoming Open Group conference in Sydney. Copyright The Open Group and Interarbor Solutions, LLC, 2005-2013. All rights reserved.




Tuesday, March 19, 2013

Dutch insurance giant Achmea deploys 'ERP for IT' to reinvent IT processes and boost business performance

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how Achmea Holding, one of the largest providers of financial services and insurance in the Netherlands, has made large strides in running their IT operations like an efficient business itself.

We'll hear how Achmea rearchitected its IT operations to both be more responsive to users and more manageable by the business, all based on clear metrics.

Here to explore these and other enterprise IT performance issues, we're joined by our co-host for this sponsored podcast, Georg Bock, Director of the Customer Success Group at HP Software, and he's based in Germany.

And we also welcome our special guest, Richard Aarnink, leader in the IT Management Domain at Achmea in the Netherlands, to explain how they've succeeded in making IT better governed and agile -- even to attain "enterprise resource planning (ERP) for IT" benefits.

The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Why is running IT more like a business important? Why does this make sense now?

Aarnink: Over the last year, whenever a customer asked us questions, we delivered what he asked. We came to the conclusion that delivery of every request that we got was an intensive process for which we created projects.

It was very difficult to make sure that it was not a one-time hero effect, but that we could deliver to the customer what he asked every time, on scope, on specs, on budget, and on time. We looked at it and said, "Well, it is actually like running a normal business, and therefore why should we be different? We should be predictive as well."

Gardner: Georg Bock, is this something you are seeing more and more of in the field?

Trend in the market

Bock: Yes, we definitely see this as a trend in the market, specifically with the customers that are a little more mature in their top-down strategic thinking. Let’s face it, running IT like a business is an end-to-end process that requires quite a bit of change across the organization -- not only technology, but also process and organization. Everyone has to work hand in hand to be, at the end of the day, predictable and repeatable in what they're doing, as Richard just explained.

That’s a huge change for most organizations. However, when it’s being done and when it has lived in the organization, there's a huge payback. It is not an easy thing to undertake but it’s inevitable, specifically when we look at the new trends around cloud multi-sourcing, mobility, etc., which brings new complexity to IT.

You'd better have your bread and butter business under control before moving into those areas. That’s why also the timing right now is very important and top of people’s minds.

Gardner: Tell us a bit about Achmea, the size of your organization, and why IT is so fundamentally important to you.

Aarnink: Achmea is a large insurance provider in the Netherlands. We have around eight million customers in the Netherlands with 17,000 employees. We're a very old and cooperative organization, and we have had lots and lots of mergers and acquisitions in the last 20 years. So we had various sets of IT departments from all the other companies that we centralized over the past years.

If you look at insurance, it's actually about having the trust that whenever something happens to a customer, he can rely on the insurer to help him out, and usually this means providing money. IT is necessary to ensure that we can deliver on those promises that we made to our customers. So what we deliver is essentially money, and it's all about IT.

Of the 17,000 employees that we have in the Netherlands, about 1,800-2,000 employees work in the centralized IT department. Over the last year, we changed our target operating model to centralize the technologies in competence centers, as we call them, in the department that we call Solution Development.
We created a new department, IT Operations, and we created business-relationship departments that were merged with the business units that were asking or demanding functionality from our IT department. We changed our entire operating model to cope with that, but we still have a lot of homegrown applications that we have to deliver on a daily basis.

Changing the department and the organizational structure is one thing, and now we need to change the content and the applications we deliver.

Gardner: How has all this allowed you to better manage all the aspects of IT, and make it align with the business?

Strategy and governance

Aarnink: To answer that question I need to elaborate a little bit on the strategy and governance department, which is actually within the IT department. What we centralized there were project portfolio and project steering, and also the architectural capabilities.

We make sure that whatever solution we deliver is architected from a single model that we manage centrally. That's a real benefit that we gained in centralizing this and making sure that we can -- from both the architecture and project perspectives -- govern the projects that we're going to deliver to our business units.

Bock: Achmea is a leader in that, and the structure that Richard described is essential to being successful. ERP for IT, or running IT as a business on fundamental IT processes, is all about standardization, repeatability, and predictability, especially in situations where you have mergers and acquisitions. It's always a disruption if you have to bring different IT departments together. If you have a standard that's easy to replicate, that's a no-brainer and a winner from a business bottom-line perspective.

In order to achieve that, you have to have a team that works as a horizontal unit and can drive standardization across the company. Richard and Achmea are not alone in that. Richard and I have had quite a number of discussions with other companies from other industries, and we very much see that everyone has the same problem. Building those horizontal teams, primarily enterprise architecture, a chief technology officer (CTO) office, or whatever you like to call those departments, is definitely a trend in the industry, at least among those mature customers that want to take that perspective and drive it forward that way.

But as I said, it's all about standardization. It's not rocket science from an intellectual perspective, but we have to cut through the political difficulties of driving adoption across the different organizations in the company.

Gardner: What sort of problems or issues did you need to resolve as you worked to change things for the better?

Aarnink: We looked at the entire scope of implementing ERP for IT and first we looked at the IT projects and the portfolio. We looked at that and found out that we still had several departments running their own solutions in managing IT projects and also budgets. In the past, we had a mechanism of only controlling the budget for the different business units, but no centralized view on the IT portfolio, as a whole, for Achmea.

We started in that area, looking at one system of record for IT projects and portfolio management, so we could steer what we wanted to develop and what we wanted to sunset.

Next, we looked at application portfolio management and tried to identify the set of applications that we currently use and want to use in the future, the set of applications that we want to sunset in the next year, and how that related to the IT projects. So that was one big step that we made in the last two years. There's still a lot of work to be done in that area, but it was a big topic.

Service management

The second big topic was looking at service management. Due to all the mergers, we still had lots of variations in IT processes. Incident management was covered in a whole different way when you looked at the various departments from the past.

We adopted service desks to cater to all those kinds of deviations from the standard ITIL process. We looked at that and said that we had to centralize again and make sure that we become more prescriptive about how these processes will look and how we make sure that they are standardized.

That was the second area that we looked at. The third area was more on application quality. How could we make sure that we got a better first-time-right score in delivering IT projects? How could we make sure that there is one system of record for requirements and one system of record for test results and defects? Those are the three areas that we invested in in the first phase.

Lots of change going on

Gardner: What have you have seen in the market that leads you to believe that ERP for IT is not a vision, but is, in fact, happening, and that we're starting to see tangible benefits?

Bock: Richard very nicely described real, practical results, rather than coming up with a dogmatic, philosophical process in the first place. I think it's all about practical results, and practical results need to be predictable and repeatable; otherwise it's always the one-time hero effort that Richard brought up in the beginning, and that's not scalable at all.

At some point you need process, but you shouldn't apply it dogmatically. I also hear about Agile versus waterfall; whatever is applicable to the problem is the right thing to do. Does that rule out process? No, not at all. You just have to live the process in a slightly different way.

Everyone has to get away from their dogmatic position and look at it in a little more relaxed way. We shouldn't take our thoughts too seriously, but when we drive ERP for IT to apply some standard ways of doing things, we just make our life easier. It has nothing to do with esoteric vision, but it's something that is very achievable. It's about getting a couple of people to agree on practical ways of getting it done.

Then, we can draw the technological consequences from it, rather than the other way around. That's been the problem in IT from my perspective for years. Technology always came first and now we look for the nail that you can use that hammer for. That’s not the right thing to do.

From my perspective, standardization is simply a necessary conclusion from some of the trial-and-error mistakes that have been made over the last 10-15 years, where people tried to customize the hell out of everything just to be in line with the specificity of how things are being done in their particular company. But nobody asked why it was that way.

Aarnink: I completely agree. We had several discussions about how the incident process is being carried out, and it's the same in every other company as well. Of course there are slight differences, but the fact is that an incident needs to be resolved, and that's the same within every company.

Best practice

You can easily create a best practice for that, adopt it within your own company, and unburden yourself from having to think through this process, reinvent it, and create your own tool sets and interfaces with external companies. That can all be centralized; it can all be standardized.

It's not our business to create our own IT tools. Our business is delivering policy management systems for our core industry, which is insurance. We don't want to build all the IT we need just to keep IT running. We want that standardized, so we can concentrate on delivering business value.

Gardner: Now that we've been calling this ERP for IT, I think it’s important to look back on where ERP as a concept came from and the fact that getting more data, more insight, repeatability, analyzing processes, determining best processes and methods and then instantiating them, is at the core of ERP. But when we try to do that with IT, how do we measure, what is the data, and what do we analyze?

Richard, at Achmea, are you looking at key performance indicators (KPIs), and are you using project portfolio management maturity models? How is it that you're measuring this so that you can, in fact, do what ERP does best, make it repeatable, make it standardized?

Aarnink: If you look from the budget perspective, we look at the budgets, the timeframes, and the scope of what we need to deliver and whether we deliver on time, on budget, and on specs, as I already said. So those are basically the KPIs that we're looking for when we deliver projects.

But also, if you look at the processes involved when you deliver a project, then you talk about requirements management. How quickly can you create a set of requirements, and what is the reuse of requirements from the past? Those are the KPIs we're looking for in the specific processes when you deliver an IT project.

So the IT project is a vehicle helping you deliver the value that you need, and the processes underneath that actually do the work for you. At that level we try to standardize and we try to create KPIs in order to make sure that we reuse as much as possible, that we deliver quality, and that we have the resources in place that we actually need to deliver those functionalities.
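
To make the kind of delivery KPIs Aarnink describes concrete, here is a small, purely illustrative Python sketch, not Achmea's actual tooling, that scores a sample project portfolio on the on-time, on-budget and on-spec measures he mentions:

    # Illustrative only -- a toy calculation of the delivery KPIs discussed above.
    projects = [
        # (name,           on_time, on_budget, on_spec) -- sample data
        ("Claims portal",   True,    True,      True),
        ("Policy engine",   True,    False,     True),
        ("Mobile app",      False,   True,      True),
    ]

    def rate(flags):
        """Percentage of projects for which a flag is True."""
        return 100.0 * sum(flags) / len(flags)

    print("On time:   %.0f%%" % rate([p[1] for p in projects]))
    print("On budget: %.0f%%" % rate([p[2] for p in projects]))
    print("On spec:   %.0f%%" % rate([p[3] for p in projects]))
    print("All three: %.0f%%" % rate([all(p[1:]) for p in projects]))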

You need to look at small steps that can be taken in a couple of months' time. So draw up a roadmap and enable yourself to deliver value every, let's say, 100 days. Make sure that every time you deliver functionality it's actually used, and look at your roadmap and adjust it, so you enable yourself to be agile in that way as well.
The biggest thing that you need to do is take small steps. The other thing is to look at your maturity. We did a CMMi test review. We didn't do the entire CMMi accreditation, but only looked at the areas that we needed to invest in.

Getting advice

We looked at where we had standardized already and the areas that we needed to look at first. That can help you prioritize. Then, of course, look at companies in your network that actually did some steps in this and make sure that you get advice from them as well.

Bock: I absolutely agree with what Richard said. If we're looking for a recipe for success, you have to have a good balance of strategic goals and tactical steps toward those strategic goals. Those tactical steps need to have clear measures and clear success criteria associated with them. Then you're on a good track.

I just want to come back to the notion of ERP for IT that you alluded to earlier, because that term can actually hurt the discussion quite a bit. If you think about ERP 20 years ago, it was a big animal. And we shouldn’t look at IT nowadays in the same manner as ERP was looked at 20 years ago. We don’t want to reinvent a big animal right now, but we have to have a strategic goal where we look at IT from an end-to-end perspective, and that’s the analogy that we want to draw.

ERP is something that has always been looked at as an end-to-end process, with a clear, common context associated with it from an end-to-end perspective, which is not the case in IT today. We should learn from those analogies that we shouldn't try to implement ERP literally for IT, because that would take the whole thing in one step; as Richard just said very nicely, you have to take it in digestible pieces, because we have to deal with a lot of technology there. You can't take that in one shot.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Monday, March 18, 2013

Avaya announces flexible Collaborative Cloud UC offerings for cloud service providers, channel, enterprises

Avaya today announced a set of Collaborative Cloud offerings designed to make it easier for more types of organizations to deploy unified communications (UC), contact center (CC) and video conferencing -- all as on-demand services.

The adoption of UC and CC as a service (UCaaS and CCaaS) brings utility-based pricing to cloud-service providers (CSPs) so they can offer varied and flexible packages to many types of clients. This creates new revenue streams for CSPs by allowing them to deliver app integrations, mobile collaboration and multichannel customer service for their customers. And it allows buyers to only pay for the IP-based communications services they want and need.

This makes the burgeoning bring your own device (BYOD) trend easier for enterprises to manage because they can off-load more of the complexities of mobile and BYOD environments to their cloud and service providers, said Bruce MacVarish, Director of Cloud Solutions at Avaya. The offerings enable CSPs to evolve and augment enterprise communications with cloud-based solutions, as well as provide greater interoperability across vendors, domains and protocols, he said.

Santa Clara-based Avaya is carving out four delivery and distribution models for UCaaS and CCaaS: private cloud/on-premises stacks, managed services for service providers, hosted multi-tenancy services for channel players, and a full software-as-a-service (SaaS) cloud capability powered by Avaya focused on the mid-market and smaller organization users.

The video services are more geared toward synchronous video interactions, and not hosted, asynchronous video serving, although Avaya offers both. Think of it as video conferencing as a service on demand, integratable into more mobile devices and therefore business processes.

Avaya's move, like many evolving cloud models, marks a transition from CapEx to OpEx, with utility-based pricing and consumption. It also offers ease and speed of adoption, and a single point of integration for value-added SPs and developers.

I expect to see more SaaS business apps providers and cloud-savvy enterprises integrate Avaya's and other UC services into their web, mobile and cloud offerings. These would include such benefits as click-to-call, customer support interception points, and embedded video conferencing brought directly into more business apps, services and processes.

Hybrid deployments

It will be interesting to see how the hybrid deployments of UCaaS and CCaaS are assimilated into other business cloud services as the market matures. Will enterprises and SPs alike seek to embed more UC functions, while themselves controlling the UC stack? Or will communications, like many other business services, be something they expect in any cloud stack? And what combination of hosting will they prefer for which apps?

A lot of the noise around hybrid cloud fails to take communications features and their integration into account. The same goes for big data: Shouldn't all the unstructured data in communications be part of any analytics mix? And how do you manage that?

Avaya is now in a controlled release of the solutions, and expects general availability in three to six months, said MacVarish.

Earlier this month, Avaya announced new security enhancements for enterprise collaboration.

In more detail, the new and expanded Avaya offerings for CSPs are:
  • Avaya Cloud Enablement for Unified Communications and Customer Experience Management. Based on Avaya Aura, it allows flexible, utility-based, OpEx pricing for CSPs, so they pay based on actual customer usage. Avaya Control Manager enables centrally managed multi-tenancy.
  • Avaya Cloud Enablement for Video provides CSPs with a scalable platform and multi-tenancy that delivers interoperable, multi-vendor mobile video collaboration. Enhancements to the Elite Series MCUs, Scopia Mobile and Scopia Desktop extend BYOD videoconferencing across most endpoints.
  • Avaya Communications Outsourcing Solutions (COS) Express, a private cloud offering for up to 500-seat contact centers, can be hosted by Avaya, a CSP or channel partners -- either as Avaya or co-branded services.
Avaya Collaborative Cloud solutions also include Avaya Collaboration Pods, a portfolio of cloud-ready, turnkey solutions designed to simplify installation and operations of real-time applications; and the AvayaLive suite of public-cloud based communications and collaboration services.


Monday, March 11, 2013

Fighting in the cloud service orchestration wars

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.

By Jason Bloomberg

Combine the supercharged cloud computing marketplace with the ubergeek cred of the open source movement, and you’re bound to have some Mentos-in-Diet-Coke moments. Such is the case with today’s cloud service orchestration (CSO) platforms. At this moment in time, the leading CSO platform is OpenStack. Dozens of vendors and cloud service providers (CSPs) have piled on this effort, from Rackspace to HP to Dell, and most recently, IBM has announced that they’re going all in as well. Fizzy to be sure, but all Coke, no Mentos.

Then there are CloudStack, Eucalyptus, and a few other OpenStack competitors. With all the momentum of OpenStack, it might seem that these open source alternatives are little more than also-rans, doomed to drop further and further behind the burgeoning leader. But there’s more to this story. This is no techie my-open-source-is-better-than-your-open-source battle of principle, of interest only to the cognoscenti. On the contrary: big players are now involved, and they’re placing increasingly large bets. Add a good healthy dose of Mentos – only this time, the Mentos are money.

Understanding the CSO Marketplace

Look around the infrastructure-as-a-service (IaaS) market. Notice that elephant in the corner? That’s Amazon Web Services (AWS). The IaaS market simply doesn’t make sense unless you realize that AWS essentially invented IaaS. And by invented, we mean actually got it to work. Which if you think about it, is rather atypical for most technology vendors. Your average software vendor will identify a new market opportunity, take some old stuff they’ve been struggling to sell, give it a nice new coat of PowerPoint, and shoehorn it into the new market. If customers bite, then the vendor will devote resources into making the product actually do what it’s supposed to do. Eventually. We hope.

But AWS is different. Amazon.com is an online reseller, not a software vendor. They think more like Wal-Mart than IBM. They figured out elasticity at scale, added customer self-service, and christened it IaaS. Then they grew it exponentially, defining what cloud computing really means. Today, they leverage their market dominance and economies of scale to continually lower prices, squeezing their competitors’ margins to nothing. It worked for Rockefeller’s Standard Oil, and it works for Wal-Mart. Now it’s working for Amazon.

But as with any market, there are always competitors looking to carve off a bit of opportunity for themselves. Given AWS’s dominance, however, there are two basic approaches to competing with Amazon: do what AWS is doing but try to do it a bit better (say, with Rackspace’s promise of better customer service), or do something similar to AWS but different enough to interest some segment of the market (leading in particular to the enterprise public cloud space populated by the likes of Verizon Terremark and Savvis, to name a few).

And then there are the big vendors like HP and IBM, who not only offer a range of enterprise software products, but who also offer enterprise data center managed services and associated consulting. Such vendors want to play two sides of this market: they want to be public cloud providers in their own right, and also offer “turnkey” cloud gear to customers who want to build their own private clouds.

Enter OpenStack. Both of the aforementioned vendors as well as the smaller players realize that piecing together their own cloud offerings will never enable them to catch up to AWS. Instead, they’re joining forces to build out a common cloud infrastructure platform that supports the primary capabilities of IaaS (compute, storage, database, and network), as well as providing the infrastructure platform for platform-as-a-service (PaaS) and Software-as-a-Service (SaaS) capabilities down the road. The open source model is perfect for such collaboration, as the Apache license allows contributors to take the shared codebase and build out whatever proprietary add-ons they like.

Most challenging benefits

Perhaps the most touted, and yet most challenging, benefit of the promised all-OpenStack world is the holy grail of workload portability. In theory, if you're running your workloads on one OpenStack-based cloud, you should be able to move them lock, stock, and barrel to any other OpenStack-based cloud, even if it belongs to a different CSP. Workload portability is the key to cloud-based failover and disaster recovery, cloud bursting, and multi-cloud deployments. Today, workload portability requires a single proprietary platform, and only VMware offers such portability. AWS offers a measure of portability within its cloud, but will face challenges supporting portability between itself and other providers. As a result, if OpenStack can get portability to work properly, participating CSPs will have a competitive lever against Amazon.
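
At the API level, the portability idea is easy to illustrate: an abstraction library such as Apache Libcloud lets the same provisioning code target different providers. The sketch below is illustrative only; the credentials and endpoints are placeholders, and the exact driver arguments vary by provider and library version (especially for OpenStack authentication). True workload portability goes further, of course, since it also means moving running images and their data.

    # Illustrative sketch: provider-neutral provisioning with Apache Libcloud.
    # Credentials and endpoints are placeholders; OpenStack auth arguments vary
    # by deployment and libcloud version.
    from libcloud.compute.types import Provider
    from libcloud.compute.providers import get_driver

    def first_small_node(conn, name):
        image = conn.list_images()[0]                              # any available image
        size = sorted(conn.list_sizes(), key=lambda s: s.ram)[0]   # smallest flavor
        return conn.create_node(name=name, image=image, size=size)

    # The same code path can drive two very different clouds.
    ec2 = get_driver(Provider.EC2)("ACCESS_KEY", "SECRET_KEY")
    openstack = get_driver(Provider.OPENSTACK)(
        "demo", "password",
        ex_force_auth_url="https://keystone.example.com:5000/v2.0",
        ex_force_auth_version="2.0_password",
        ex_tenant_name="demo",
    )

    for conn in (ec2, openstack):
        node = first_small_node(conn, "portability-demo")
        print(conn.type, node.id, node.state)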

Achieving a strong competitive position against AWS with OpenStack is easier said than done, however. OpenStack is a work in progress, and many bits and pieces are still missing. Open source efforts take time to mature, and meanwhile, AWS keeps growing. In response, the players in this space are taking different tacks to build mature offerings that have a hope of carving off a viable chunk of the IaaS marketplace:
  • Rackspace is trying to capitalize on its OpenStack leadership position and the aforementioned customer service to provide a viable alternative to AWS. They are also touting the workload portability benefits of OpenStack. But downward pricing pressure combined with the holes in OpenStack capabilities are pounding on Rackspace’s stock price.
  • Faced with the demise of its traditional PC business, Dell is focusing on its Boomi B2B integration product, recently rechristened as cloud integration. Cloud integration is a critical enabler of hybrid clouds, but doesn’t address the workload portability challenge. As a result, Dell’s cloud marketing efforts are focused on the benefits of integration over portability. Dell’s recent acquisition of Quest Software also hints at a Microsoft application migration strategy for Dell Cloud.
  • HP wants to rush its enterprise public cloud offering to market, and it doesn’t want to wait for OpenStack to mature. Instead, it’s hammering out its own version of OpenStack, essentially forking the OpenStack codebase to its own ends, according to Nnamdi Orakwue, vice president for Dell Cloud. Such a move may pay off for HP, but increases the risk that the HP add-ons to OpenStack will have quality issues.
  • IBM recently announced that they are “all in” with OpenStack with the rollout of  IBM SmartCloud Orchestrator built on the platform.  But IBM has a problem: the rest of their SmartCloud suite isn’t built on OpenStack, leaving them to scramble to rewrite a number of existing products leveraging OpenStack’s incomplete codebase, while in the meantime, integrating the mishmash of SmartCloud components at the PowerPoint layer.
  • Red Hat is making good progress hammering out what they consider an “enterprise” deployment of OpenStack. As perhaps the leading enterprise open source vendor, they are well-positioned to lead this segment of the market, but it still remains to be seen whether enterprise customers will want to  build all open source private clouds in the near term, as the products gradually mature. On the other hand, IBM has a history of leveraging Red Hat’s open source products, so an IBM/Red Hat partnership may move SmartCloud forward more quickly than IBM might be able to accomplish on its own.
CSO Wild Card: CloudStack

There are several more players in this story, but one more warrants a discussion: Citrix. The desktop virtualization leader had been one face in the OpenStack crowd, but they suddenly decided to switch horses and take a contrarian strategy. They ditched OpenStack and doubled down on CloudStack, the code they picked up with their 2011 Cloud.com acquisition. Then they switched CloudStack's licensing model from the GPL (derivative products must also be licensed under the GPL) to Apache (OK to build proprietary offerings on top of the open source codebase), and subsequently passed the entire CloudStack effort along to the Apache Foundation, where it's now in incubation.

There are far fewer players on the CloudStack team than OpenStack's, and its core value proposition is quite similar to OpenStack's, so at first glance, Citrix's move raises eyebrows. After all, why bail on the market leader to join the underdog? But look more closely, and it seems that Citrix may be onto something.

First, Citrix’s open source cloud strategy is not all about CloudStack. They’re also heavily invested in Xen. Xen is one of the two leading open source virtualization platforms, and provides the underpinnings to many commercial virtualization products on the market today. Citrix’s 2007 acquisition of XenSource positioned them as a Xen leader, and they’ve been driving development of the Xen codebase ever since.

Citrix's heavy investment in Xen bucks the conventional virtualization wisdom: since Xen's primary competitor, KVM (Kernel-based Virtual Machine), is distributed as part of standard Linux distros, KVM is the no-brainer choice for the virtualization component of open source CSOs. After all, it's essentially part of Linux, so any CSP (save those focusing on Windows-centric IaaS) doesn't have to lift a finger to build its offerings on KVM. Citrix, however, picked up on a critical fact: KVM is simply not as good as Xen. And now that Citrix has been pushing Xen to mature for half a dozen years, Xen is a far better choice for building turnkey cloud solutions than KVM. So Citrix combined Xen and CloudStack into a single cloud architecture they dubbed Windsor, which forms the basis of their CloudPlatform offering.

And therein lies the key to Citrix’s contrarian strategy: CloudPlatform is a turnkey cloud solution for customers who want to deploy private clouds – or as close to turnkey as today’s still nascent cloud market can offer. Citrix is passing on the opportunity to be their own CSP (at least for now), instead focusing on driving CloudStack and Xen maturity to the point that they can put together a complete cloud infrastructure software offering. In other words, they are focusing on a niche and giving it all they got.

The ZapThink Take

If this ZapFlash makes comprehending the IaaS marketplace look like herding cats, you’re right. AWS has gotten so big, so fast, and their products are so good, that everyone else is scrambling to put something together that will carve off a piece of what promises to be an immense market. But customers are holding the cards, because everyone knows how AWS works, which means that everyone knows how IaaS is supposed to work. If a vendor or CSP brings an offering to market that doesn’t compare with AWS on quality, functionality, or cost, then customers will steer clear, no matter how good the contenders’ PowerPoints are.

But as with feline wrangling, it’s anybody’s guess where this tabby or that calico is heading next. If anyone truly challenges Amazon’s dominance, who will it be? Rackspace? IBM? Dell? Or any of the dozens of other four-legged critters just looking for a warm spot in the sun? And then there’s the turnkey cloud solution angle. Today, building out your own private cloud is difficult, expensive, and fraught with peril. But if tomorrow brings simple, low cost, low risk private clouds to the enterprise, how will that impact the public CSP marketplace? You pays your money, you takes your chances. But today, the safe IaaS choice is AWS, unless you have a really good reason for selecting an alternative.

This BriefingsDirect guest post comes courtesy of Jason Bloomberg, president of ZapThink, a Dovel Technologies company.


Thursday, March 7, 2013

Cloud, mobile bringing new value to Agile development methods, even in bite-sized chunks

As IT aligns itself with business goals, Agile software development is increasingly enabling developers to better create applications that meet user needs quickly. And, now, the advent of increased mobile apps development is further accelerating the power of Agile methods.

Though it's been around for decades, Agile's tenets of collaboration, incremental development, speed, and flexibility resonate with IT leaders who want developers to focus on working with users to develop applications. This method stands in contrast to the more rigid and traditional process of collecting user requirements, taking months to create a complete application, and delivering the application to users with the hopes that it fits the bill and that requirements haven't changed during the process.

In fact, in today’s world, where business leaders can shop for the technology they need with any cloud or software-as-a-service (SaaS) provider they choose, IT must ensure enterprise applications are built collaboratively to meet needs, or lose out to the competition.

“In many cases today, the business has alternatives, thanks to cloud -- all the services they could need are available with a credit card,” says Raziel Tabib, Senior Product Manager of Application Lifecycle Management with HP Software. “IT has to work to be the preferred solution. If the IT department wants to maintain its position, it has to make the best tools to meet business needs. Developers have to get engaged with end users to ensure they are meeting those needs.” [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

HP Software recently released HP Agile Manager, a SaaS-based solution for planning and executing Agile projects. And the division itself has embraced some of the principles of Agile, which have, for example, helped it move from an 18-month release cycle to product releases and refreshes every month, says Tabib.

Pick and choose

However, Agile is far from an all-or-nothing proposition, particularly for large organizations with developers distributed across the globe that may have a harder time adopting certain Agile work styles, he warns.

“We’re not saying any organization can just look at the agile manifesto and start tomorrow with scrum meetings and everything will work well,” Tabib says. “We have engineers in Israel, Prague, and Vietnam. While some agile practices are easy to pick up, others are really difficult to adopt, when you’re talking about organizations at that scale.”

That’s okay, he adds -- organizations should be encouraged to cherry pick the elements of agile that make sense to embrace, blend them with more traditional approaches to application development, and still reap benefits.

A report published in September of 2012 by Forrester Consulting on behalf of HP supports the idea that Agile is one of many disciplines that can be used to develop applications that meet users' needs.

The report, entitled Agile Software Development and the Factors that Drive Success, surveyed 112 professionals regarding application development habits and success. It found that companies already successful in application development used Agile techniques to make them even better.

For example, respondents said that the Agile practice of limiting the amount of work in progress to reduce the impact of sudden business change meant that requirements didn't grow stale while waiting for coding to begin -- but that their overall success was based on more than just implementing Agile.

And it found that respondents at companies that weren't as successful with application development also reported using aspects of Agile. The upshot of the survey was that simply adopting Agile did not ensure success. “Agile software development is one tool in a vast toolbox,” reads the report. “But a fool with a tool is still a fool.”

I think Agile will get even more of a boost in value as developers move toward a "mobile first" approach, which seems tightly coupled with fast, iterative apps improvement schedules.

One of the neat things about a mobile first orientation is that it forces long-overdue simplification and ease of use in apps. When new apps are designed for mobile device deployment first, the dictates of the mobile constraints prevail.

Combine that with Agile, and the guiding principles of speed and keeping user requirements dominant help keep projects from derailing. Revisions and updates remain properly constrained. Mobile First discourages snowballing of big applications, instead encouraging releases of smaller, more manageable apps.

Mobile First design benefits combined with Agile methods can be well extended across SaaS, cloud, VDI, web, and even client-server applications.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)


Friday, March 1, 2013

Complexity from big data and cloud trends makes architecture tools like ArchiMate and TOGAF more powerful, says expert panel

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.

We recently assembled a panel of enterprise architecture (EA) experts to explain how such simultaneous and complex trends as big data, cloud computing, security, and overall IT transformation can be helped by the combined strengths of The Open Group Architecture Framework (TOGAF®) and the ArchiMate® modeling language.

The panel consisted of Chris Forde, General Manager for Asia-Pacific and Vice President of Enterprise Architecture at The Open Group; Iver Band, Vice Chair of The Open Group ArchiMate Forum and Enterprise Architect at The Standard, a diversified financial services company; Mike Walker, Senior Enterprise Architecture Adviser and Strategist at HP and former Director of Enterprise Architecture at Dell; Henry Franken, the Chairman of The Open Group ArchiMate Forum and Managing Director at BIZZdesign, and Dave Hornford, Chairman of the Architecture Forum at The Open Group and Managing Partner at Conexiam. I served as the moderator.

This special BriefingsDirect thought leadership interview series comes to you in conjunction with The Open Group Conference recently held in Newport Beach, California. The conference focused on "big data -- the transformation we need to embrace today." [Disclosure: The Open Group and HP are sponsors of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Is there something about the role of the enterprise architect that is shifting?

Walker: There is less of a focus on the traditional things we have come to think of as EA, such as standards, governance and policies, and more on emerging areas such as soft skills, business architecture, and strategy.

To this end I see a lot in the realm of working directly with the executive chain to understand the key value drivers for the company and rationalize where they want to go with their business. So we're moving into a business-transformation role in this practice.

At the same time, we've got to be mindful of the disruptive external technology forces coming in as well. EA can't just divorce itself from the other aspects of architecture. So the role that enterprise architects play becomes more and more important and elevated in the organization.

Two examples of this disruptive technology being focused on at the conference are big data and cloud computing. Both are having an impact on our businesses, not because of some new business idea, but because technology is available to enhance or provide new capabilities to the business. EAs still have to understand these new technology innovations and determine how they will apply to the business.

We need really good enterprise architects, and it's difficult to find them. There is a shortage right now, especially given that a lot of focus is being put on the EA department to deliver sound architectures.

Not standalone

Gardner: We've been talking a lot here about big data, but usually that's not just a standalone topic. It's big data and cloud, mobile, and security.

So with these overlapping and complex relationships among multiple trends, why are EA and things like the TOGAF framework and the ArchiMate modeling language especially useful?

Band: One of the things that has been clear for a while now is that people outside of IT don't necessarily have to go through the technology function to avail themselves of these technologies any more. Whether they ever had to is really a question as well.

One of the things that EA is doing, especially in the practice that I work in, is using approaches like the ArchiMate modeling language to effect clear communication between the business, IT, partners, and other stakeholders. That's what I do in my daily work, overseeing our major systems modernization efforts. I work with major partners, some of which are offshore.

I'm increasingly called upon to make sure that we have clear processes for making decisions and clear ways of visualizing the different choices in front of us. We can't always unilaterally dictate the choice, but we can make the conversation clearer by using frameworks like the TOGAF standard and the ArchiMate modeling language, which I use virtually every day in my work.

Hornford: The fundamental benefit of these tools is the organization realizing its capability and strategy. I just came from a session where a fellow quoted a Harvard study, which said that around a third of executives thought their company was good at executing on its strategy. He highlighted that this means that two-thirds are not good at executing on their strategy.

If you're not good at executing on your strategy and you've got big data, mobile, consumerization of IT and cloud, where are you going? What's the correct approach? How does this fit into what you were trying to accomplish as an enterprise?

An enterprise architect who is doing their job brings together the strategy, goals, and objectives of the organization, along with its capabilities and the techniques that are available, whether it's offshoring, onshoring, cloud, or big data, so that the organization is able to move forward to where it needs to be, as opposed to where it's going to randomly walk to.

Forde: One of the things that has come out in several of the presentations is capability-based planning, a technique in EA for getting your arms around all of this from a business-driver perspective. Just to polish what Dave said a little bit, it's connecting all of those things. We see enterprises talking about a capability-based view of things on that basis.

Gardner: Let's get a quick update. The TOGAF framework, where are we and what have been the highlights from this particular event?

Minor upgrade

Hornford: In the last year, we published a minor upgrade, TOGAF version 9.1, which focused on cleaning up consistency in the language of the TOGAF documentation. What we're working on right now is a significant new release, the next release of the TOGAF standard, which divides the TOGAF documentation to make it more consumable, more consistent and more useful.

Today, the TOGAF standard has guidance on how to do something mixed into the framework of what you should be doing. We're peeling those apart. So with that peeled apart, we won't have guidance that is tied to classic application architecture in a world of cloud.

What we find when we have done work with the Banking Industry Architecture Network (BIAN) for banking architecture, Sherwood Applied Business Security Architecture (SABSA) for security architecture, and the TeleManagement Forum is that the concepts in the TOGAF framework work across industries and across trends. We need to move the guidance into a place where we can be far nimbler about how to tie cloud to my current strategy, or how to tie consumerization of IT to onshoring.

Franken: The ArchiMate modeling language turned 2.0 last year. The ArchiMate 1.0 standard is the language for modeling out the core of your EA, and the ArchiMate 2.0 standard added two extensions to make it better aligned with the process of EA.

In line with the TOGAF standard, the first is being able to model out the motivation: why you're doing EA, the stakeholders, and the goals that drive us. The second extension to the ArchiMate standard is being able to model out planning and migration.

So with the core EA language and these two extensions, together with the TOGAF standard's process, you have a good basis for getting EA to work in your organization.
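
For readers who want to see the shape of what Franken is describing, the following is a rough, hypothetical Python sketch of how core elements, the motivation extension, and the migration extension relate; it is not ArchiMate notation or any official tooling API, and all element names are invented.

    # Rough sketch of the structure described above: ArchiMate-style core
    # elements, plus motivation (why) and migration (how we get there).
    # Illustrative pseudostructure only; not ArchiMate notation or a tool API.
    from dataclasses import dataclass

    @dataclass
    class Element:
        name: str
        kind: str     # e.g. "BusinessProcess", "ApplicationComponent", "Goal"
        layer: str    # "core", "motivation", or "migration"

    @dataclass
    class Relationship:
        source: Element
        target: Element
        kind: str     # e.g. "realizes", "drives"

    # Core: what the enterprise is and does today (invented examples).
    claims = Element("Handle claims", "BusinessProcess", "core")
    portal = Element("Claims portal", "ApplicationComponent", "core")

    # Motivation extension: why we're doing EA, the stakeholders and goals.
    regulator = Element("Regulator", "Stakeholder", "motivation")
    goal = Element("Settle claims within 5 days", "Goal", "motivation")

    # Migration extension: the planning needed to get there over time.
    plateau = Element("Portal v2 rollout", "Plateau", "migration")

    model = [
        Relationship(portal, claims, "realizes"),
        Relationship(regulator, goal, "drives"),
        Relationship(plateau, goal, "realizes"),
    ]

    for r in model:
        print(f"{r.source.name} {r.kind} {r.target.name}")

The value of keeping the three layers in one model is that the "why" and the "when" stay connected to the "what", which is the alignment Franken is pointing at.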

Gardner: Mike, fill us in on some of your thoughts about the role of information architecture vis-à-vis the larger business architect and enterprise architect roles.

Walker: Information architecture is an interesting topic in that it hasn’t been getting a whole lot of attention until recently.

Information architecture is an aspect of enterprise architecture that enables an information strategy or business solution through the definition of the company's business information assets, their sources, structure, classification, and associations, which in turn prescribe the required application architecture and technical capabilities.

Information architecture is the bridge between the business architecture world and the application and technology architecture activities.

The reason I say that is that information architecture is a business-driven discipline that details the information strategy of the company. As we know, and as we've heard in the conference keynotes, such as the NASA, big data, and security presentations, the preservation and classification of that information is vital to understanding what your architecture should be.

Least matured

From an industry perspective, this is one of the least mature areas as far as being incorporated into a formal discipline. The TOGAF standard actually has a phase dedicated to it, data architecture. Still, there are lots of opportunities to grow and to incorporate additional methods, models, and tools from the enterprise information management discipline.

Enterprise information management not only captures traditional topic areas like master data management (MDM), metadata, and unstructured information architecture, but also focuses on information governance and on the architecture patterns and styles implemented in MDM, big data, and so on. There is a great deal of opportunity there.

As for the role of information architects, I'm seeing more and more traction in the industry as a whole. I've dealt with an entire group that's focused on information architecture and on building up an enterprise information management practice, so that we can take our top-line business strategies and understand what architectures we need to put in place.

This is a critical enabler for global companies, because oftentimes they're restricted by regulation, typically handled at a national or regional level. This means we have to understand that as we build our architecture. So it's not about the application, but rather the data that it processes, moves, or transforms.
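
To make Walker's definition a bit more tangible, here is a small, hypothetical Python sketch of the kind of information-asset catalog entry it implies, recording source, structure, classification, and associations; the asset, systems, and policy rule are invented examples, not anything discussed by the panel.

    # Hypothetical sketch of an information-asset catalog entry, capturing the
    # attributes listed above: source, structure, classification, associations.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class InformationAsset:
        name: str
        source: str                  # system of record (invented example)
        structure: str               # "structured", "semi-structured", "unstructured"
        classification: str          # e.g. "public", "internal", "restricted"
        associations: List[str] = field(default_factory=list)  # related assets

    customer = InformationAsset(
        name="Customer master record",
        source="CRM",
        structure="structured",
        classification="restricted",
        associations=["Policy record", "Claims history"],
    )

    # Downstream application and technology choices can then be checked against
    # the asset; e.g. restricted data may rule out certain hosting options.
    if customer.classification == "restricted":
        print(f"{customer.name}: requires in-region storage and encryption at rest")

Starting from the asset rather than the application is exactly the ordering Walker argues for: the classification and regulatory constraints come first, and the application and hosting choices follow.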

Gardner: Up until not too long ago, the conventional thinking was that applications generate data. Then you treat the data in some way so that it can be used, perhaps by other applications, but that the data was secondary to the application.

But there's been a shift in that thinking, more toward the idea that the data is the application, and that new applications are designed to expand on the data's value and deliver it out to mobile tiers. Does that follow your thinking, that data is now more prominent as a resource, perhaps on par with applications?

Walker: You're spot on, Dana. Before the commoditization of these technologies, which resided on premises, we could get away with starting at the application layer and working our way back, because we had access to the source code or hardware behind our firewalls. We could throw servers at it, and we used to put firewalls in front of the data, to solve the problem with infrastructure. So we didn’t have to treat information as a first-class citizen. Times have changed, though.

Information access and processing are now democratized, and information is being pushed to the first point of presentment. A lot of the time that's a mobile device, and even then it's not a corporate mobile device but your personal one. So how do you handle that data?

It's the same way with cloud, and I’ll give you a great example of this. I was working as an adviser for a company that was shaping its cloud strategy. They had made a big bet on one of the big infrastructure and cloud-service providers. They looked first at the features and functions that the cloud provider could offer, and not necessarily at the information requirements. They ran into two major issues, each essentially a showstopper, and they had to pull off that infrastructure.

The first was that specific cloud provider’s terms of service around intellectual property (IP) ownership. Essentially, the company was forced to give up its IP rights.

Big business

As you know, IP is big business these days, so that was a showstopper. The other issue was that it actually broke core regulatory laws around being able to discover information.

So focusing on the applications to make sure they meet your functional needs is important. However, we should take a step back and look at the information first, and make sure that, for the people in your organization who can’t say no, their requirements are satisfied.

Gardner: Is data architecture different from EA and business architecture, or is it a subset? What’s the relationship, Dave?

Hornford: Data architecture is part of an EA. I won’t use the word subset, because a subset starts to imply that it is a distinct thing that you can look at on its own. You cannot look at your business architecture without understanding your information architecture. When you think about big data, cool. We've got this pile of data in the corner. Where did it come from? Can we use it? Do we actually have legitimate rights, as Mike highlighted, to use this information? Are we allowed to mix it and who mixes it?

When we look at how our businesses are optimized, they normally optimize around work product, what the organization is delivering. That’s very easy; you can see who consumes your work product. With information, you often have no idea who consumes it. So now we have provenance, we have source, and, as we move to global companies, we have the trends around consumerization, cloud, and simply tightening cycle time. If we look at data in isolation, I have to understand how the system works and how the enterprise’s architecture fits together.
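
A toy illustration of Hornford's provenance point: unlike a physical work product, a data set's consumers are usually unknown unless you deliberately record them. The sketch below is hypothetical; the dataset, licence term, and consumer names are invented.

    # Toy sketch: recording provenance and consumers for a data set, so that
    # "who uses this, and are we allowed to mix it?" has an answer.
    provenance = {
        "customer_events": {
            "source": "web analytics feed",        # invented source system
            "licence": "internal use only",        # invented usage constraint
            "consumers": set(),                    # often empty in practice
        }
    }

    def register_consumer(dataset: str, consumer: str) -> None:
        """Record a downstream consumer of the dataset."""
        provenance[dataset]["consumers"].add(consumer)

    register_consumer("customer_events", "marketing dashboard")
    register_consumer("customer_events", "churn model")

    info = provenance["customer_events"]
    print(f"customer_events is consumed by: {sorted(info['consumers'])}")
    print(f"licence constraint before mixing with other data: {info['licence']}")

In a real organization this bookkeeping would live in an architecture repository or data catalog rather than a dictionary, but the questions it answers are the ones Hornford raises.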

Gardner: Of course, the end game for a lot of the practitioners here is to create that feedback loop of a lifecycle approach, rapid information injection and rapid analysis that could be applied. So what are some of the ways that these disciplines and tools can help foster that complete lifecycle?

Band: The disciplines and tools can facilitate the right conversations among different stakeholders. One of the things that we're doing at The Standard is building cadres equally balanced between people in business and IT.

We're training them in information management, going through a particular curriculum, and having them study for an information management certification that introduces a lot of these different frameworks and standard concepts.

Creating cadres

We want to create these cadres to be able to solve tough and persistent information management problems that affect all companies in financial services, because information is a shared asset. The purpose of the frameworks is to ensure proper stewardship of that asset across disciplines and across organizations within an enterprise.

Hornford: The core is the two standards that we have, the ArchiMate standard and the TOGAF standard. The TOGAF standard has, from its early roots, focused on the components of EA and how to build a consistent method for understanding what I'm trying to accomplish, where I am, and where I need to be to reach my goal.

When we bring in the ArchiMate standard, I have a language, a visual descriptor, that allows me to cross all of those domains in a consistent description, so that I can do that traceability. When I pull this lever, or I have this regulatory impact, or I have this constraint, what does it hit me with?

If I don’t do this, if I don’t use the framework of the TOGAF standard, or I don’t use the discipline of formal modeling in the ArchiMate standard, we're going to do it anecdotally. We're going to trip. We're going to fall. We're going to have a non-ending series of surprises, as Mike highlighted.

"Oh, terms of service. I am violating the regulations. Beautiful. Let’s take that to our executive and tell him right as we are about to go live that we have to stop, because we can't get where we want to go, because we didn't think about what it took to get there." And that’s the core of EA in the frameworks.

Walker: To build on what Dave just talked about, and going back to your first question, Dana, about the value statement on TOGAF from a business perspective: the business value of the TOGAF standard is a repeatable and predictable process for building out architectures that properly manages risk and reliably produces value.

The TOGAF framework provides a methodology for asking what problems you're trying to solve and where you are trying to go with your business opportunities or challenges. That leads to business architecture, which is really a distillation of the corporate strategy, rationalized in technical or architectural terms.

From there, what you want to understand is the information: how does that translate, and what information architecture do we need to put in place? You get into all sorts of things around risk management, and then it goes on from there, back to what we were talking about earlier with information architecture.

If the TOGAF standard is applied properly, you can achieve the same result every time. That is what interests business stakeholders, in my opinion. And the ArchiMate modeling language is great because, as we talked about, it provides very rich visualizations, so that people can not only show a picture but tie information together. Unlike other aspects of architecture, information architecture is less about the boxes and more about the lines.

Quality of the individuals

Forde: Building on what Dave was saying earlier, and also what Iver was saying: while the process, the methodology, and the tools are of interest, it's the discipline and the quality of the individuals doing the work that matter.

Iver talked about how the conversation is shifting and the practice is improving to build communications groups that have a discipline to operate around. What I'm hearing implied, and what I know specifically occurs, is that we end up with assets that are well described and reusable.

And there is a point at which you reach a critical mass where these assets become an accelerator for decision making. So the ability of the enterprise, and of the decision makers at the right level, to respond is improved, because they have a well-disciplined foundation beneath them.

They have a set of assets that are reasonably well-known, at the right level of granularity for them to absorb, and the conversation is structured so that the technical people and the business people are in the right room together to talk about the problems.

This is actually a fairly sophisticated set of operations I'm describing. It doesn't happen overnight, but it is definitely one of the things we see occurring with our members in certain cases.

Hornford: I want to build on what Chris said, on the word "asset." While he was talking, I was thinking about how people have talked about information as an asset. Most of us don't know what information we have, how it's collected, or where it is, but we know we have a valuable asset.

I'll use an analogy. I have a factory some place in the world that makes stuff. Is that an asset? If I know that my factory is able to produce a particular set of goods and it’s hooked into my supply chain here, I've got an asset. Before that, I just owned a thing.

I was very encouraged listening to what Iver talked about: we're building cadres, we're building out this approach, and I have seen this. I wasn't using that word, but now I'm stealing it. It's how people build effective teams: not taking a couple of specialists and putting them in an ivory tower, but providing the method and the discipline for how we converse, so that we can have a consistent conversation.

When I tie it with some of the tools from the Architecture Forum and the ArchiMate Forum, I'm able to consistently describe it, so that I now have an asset I can identify, consume and produce value from.

Business context

Forde: And this is very different from data modeling. We're not talking about entity-relationship junk at the technical-detail level, or third normal form and that kind of stuff. We're talking about a conversation around the business context of what needs to go on, supported by the right level of technical detail when you need to go there in order to clarify.

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: The Open Group.
