Friday, September 24, 2010

Demise of enterprise IT departments: A pending crisis point

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.

By Ronald Schmelzer

In ZapThink’s deep conversations with CIOs and other IT decision makers, we find that there’s broad agreement on the multitude of forces conspiring to change every aspect of the way the enterprise does IT.

Yet at the same time, everybody’s in denial that these changes will happen to them. For us as outsiders, it certainly looks like many enterprise IT decision-makers acknowledge that the world is changing -- but deny that they are part of that same world.

Of course, such executives simply have their heads in the sand. If change is to occur, it will happen to the vast majority of enterprises, not just a minority.

This realization drives the Crisis Points of the ZapThink 2020 vision. However, ZapThink is not advocating that organizations should adopt any of the crisis points. Rather we are observing that these crises are coming, whether or not companies are ready for them.

In particular, we believe that companies will reach a crisis point as they seek to outsource IT. However, we aren't advocating that companies outsource all their IT efforts. Rather, we are observing that the siren call of offloading IT assets in the form of cloud computing and outsourcing is a significant trend that is leading to a crisis point.

And without a strong rudder, many companies will indeed be dashed on the rocks. This ZapFlash blog post provides greater detail on this particular crisis point: The pending demise of the enterprise IT department, or what we’ve called in previous ZapFlashes the Collapse of Enterprise IT.

Outsourcing and cloud computing: Different parts of the same story


Part of the reason for the visceral response to our Crisis Points ZapFlash is that there’s inherent fear when talking about outsourcing IT functions. Part of the fear comes from the fact that many people confuse outsourcing with offshoring.

Outsourcing is the purchasing of a service from an outside vendor to replace the performance of the task within the organization’s internal operations. Offshoring, on the other hand, is the movement of labor from a region of high cost (such as the United States) to one of comparatively lower cost (such as India).

People fear the latter because it means subcontracting existing work to other people, thus displacing jobs at home. However, the former has been going on for hundreds of years. Indeed, many companies exist solely because they are completing tasks that their customers would rather not undertake themselves.

Almost six years ago, we talked about how service oriented architecture (SOA) and outsourcing go hand in hand, for the simple reason that SOA requires organizations to think about their resources, processes, and capabilities in ways that are loosely coupled from the specifics of their implementation, location, and consumption. Indeed, the more companies implement SOA, the more they can outsource processes that are not strategic or competitive for the organization.

Furthermore, the more companies outsource their functions, the more they are motivated to implement SOA to facilitate the consumption of the outsourced capabilities. It should be no surprise, then, that the combination of SOA and a challenging economic environment has motivated many companies to see outsourcing as a legitimate strategy for their IT organizations, regardless of whether they move to offshoring.

But it’s a mistake to assume the collapse of the enterprise IT department is due entirely to outsourcing the functions of IT to third parties. Outsourcing is a part of the story, but so is cloud computing. In much the same way that third-party firms can offload parts of IT in the outsourcing model, cloud computing offers the ability to offload other aspects of the IT department. Cloud computing provides both technological and economic benefits for distributing and offloading resources, functions, processes, and even data onto location-independent infrastructures.

While many enterprises are currently pursuing a private model for cloud computing, there are far too many economic benefits of the public model to ignore. Most likely, we will see hybrid cloud approaches, where organizations keep certain mission-critical features behind the firewall on the corporate premises while they shift the rest to lower-cost, more agile third-party locations. The net result of this shift is continued erosion of the scope of responsibility for internal IT organizations.

The holistic perspective of the five supertrends

The demise of enterprise IT crisis point emerges from the fact that companies will rush into this vision of outsourced IT without first thinking through the dramatic impact that this transition will have throughout their organizations.

For such organizations, the value of our ZapThink 2020 vision is that it pulls together multiple trends and delineates the interrelationships among them. One of the most closely related trends to the demise of the IT organization is the increased formality and dependence on governance, as organizations pull together the business side of governance (GRC, or governance, risk, and compliance), with the technology side of governance (IT governance, and to an increasing extent, SOA governance). Over time, CIOs become CGOs (Chief Governance Officers), as their focus shifts away from technology.

As the enterprise owns fewer and fewer of the organization’s IT assets, the role and responsibility of enterprise IT practitioners will be less about the mechanics of getting systems to work, integrating them with each other, and operating them, and more about the management of the one resource that remains constant: information. After all, IT is information technology, not computer or systems technology.

With this perspective, it’s essential to view the shift to outsourcing and cloud computing holistically with all the other changes happening in the enterprise IT environment.

For example, the move to democratization of technology means that non-IT practitioners will be utilizing and creating IT capabilities and assets without the control of the IT organization. How will IT practitioners manage the sole enterprise IT asset (information) given that they cannot manage the systems in which that asset flows? As organizations realize the global cubicle vision of IT, how will enterprise IT practitioners and architects enable distributed information without losing GRC visibility?

As systems become increasingly interconnected with deep interoperability despite their increasing distributed nature, how can enterprise IT practitioners make sure the systems as a whole continue to provide value and avoid chaotic disruptions despite the fact that the organization doesn’t own or operate them? As organizations move to more iterative, agile forms of complex systems engineering where new capabilities emerge from compositions of existing ones, how will movements to cloud computing and outsourcing help or hurt those efforts?

If you can successfully tackle these questions with a coherent, holistic strategy, then you have defused the risk inherent to movement to outsourcing and/or cloud computing. On the other hand, if you rush into cloud computing and outsourcing strategies without thinking through all the issues we’ve discussed in this ZapFlash, you’ll be sunk before you know it.

The ZapThink take

Just like the Sirens calling to Odysseus in Homer's Odyssey, the call of outsourcing and cloud computing will lead many enterprise IT ships to wreck on the rocks unless they can lash themselves to the mast of a holistic perspective on where the industry as a whole is heading. More importantly, the broad shifts in the industry that ZapThink's 2020 vision of enterprise IT illuminates compel companies to think more broadly about their constant enterprise IT asset: information.

If it no longer matters where your IT is physically located and whether or not you actually own or operate the IT systems you depend on, then what IT department do you really need and what are they really doing? The answer: less hands-on technology and more governance, a sea change that represents the demise of the enterprise IT organization. Whether or not this transition develops into a full-blown crisis is entirely up to you.

This guest post comes courtesy of Ronald Schmelzer, senior analyst at ZapThink.



Thursday, September 23, 2010

Sonoa becomes Apigee, offers new and rebranded API management and analysis product lines

Sonoa Systems, a provider of application programming interface (API) solutions, has changed its name this week to Apigee.

While Sonoa originally offered a free API tools and management platform, Apigee now offers three product lines for enterprises, developers, and API providers of all sizes. The company now serves more than 7,000 developers and some 140 enterprises with API management services. [Disclosure: Sonoa Systems is a past sponsor of BriefingsDirect podcasts.]

“By unifying the company under one brand and launching our premium line, we can better serve the full spectrum of companies and developers using APIs to power their apps, mobile and multichannel strategies and business partnerships,” said Chet Kapoor, CEO, Apigee.

Traffic on the platform has been brisk. Currently, 2,500 GB of data per month and 25,000 messages per second are processed on Apigee's technology, says the firm.

As I heard more about the role of APIs, and about managing and defining that traffic and its use patterns -- both incoming and outgoing -- I was reminded, too, of the Big Data analysis value so many companies are building out.

What if you were able to analyze real-time data alongside real-time API activities? This may not be for everyone, but many mobile, e-commerce and service providers -- and a boatload of web-focused start-ups -- could develop some super insights.

Joining the analysis from APIs, systems logs, and data could be a killer business intelligence benefit. It might also spur new revenue by selling that analysis if you happen to find yourself at the juncture of APIs and data and either business or consumer behavior. Viva la real time analytics at scale!
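To make that idea concrete, here is a minimal sketch of joining per-minute API traffic against business data. The record shapes and numbers are entirely made up for illustration; a real pipeline would pull these from an API gateway's analytics feed and from application logs or a database.

```python
# Hypothetical per-minute records: API call volumes and e-commerce orders.
api_calls = [
    ("2010-09-23T10:01", "checkout", 420),
    ("2010-09-23T10:02", "checkout", 960),
]
orders = [
    ("2010-09-23T10:01", 38),
    ("2010-09-23T10:02", 35),
]

# Join API traffic with business outcomes on the minute bucket,
# yielding a crude conversion rate per endpoint per minute.
orders_by_minute = dict(orders)
for minute, endpoint, calls in api_calls:
    conversion = orders_by_minute.get(minute, 0) / calls
    print(f"{minute} {endpoint}: {calls} calls, conversion {conversion:.1%}")
```

A spike in calls with a drop in conversion, visible minute by minute, is exactly the kind of joined insight a plain API log or a plain sales report would each miss on its own.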

Among the new and rebranded Apigee products:
  • Apigee Premium: Announced on Wednesday, Apigee Premium provides advanced features on top of the Apigee Free platform, including unlimited API traffic, advanced rate limiting and analytics, and developer key provisioning. Visit https://app.apigee.com/sign_up to sign up for the preview.

  • Apigee Free: A free tools platform launched last year for developers and providers to learn, test, and debug APIs, get analytics on API performance and usage, and apply basic rate-limits to protect their services.

  • Apigee Enterprise: An industrial-grade API platform for enterprises using APIs to fuel their mobile, multichannel, application and cloud strategies. Previously Sonoa Systems’ core product ServiceNet, Apigee Enterprise provides API visibility, control, management and security.

Wednesday, September 22, 2010

Data center transformation requires more than new systems, there's also secure data removal, recycling, server disposal

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Sponsor: HP.

An often-overlooked aspect of data center transformation (DCT) is what to do with the older assets as newer systems come online. Much of the retiring IT equipment can possess sensitive data, may be a source of significant economic return, or may at least need to be recycled according to various regulations.

Improperly disposing of data and other IT assets can cause embarrassing security breaches, increase costs, and pose the risk of regulatory penalties. Indeed, many IT organizations are largely unaware of the hazards and risks of selling older systems into auction sites, secondary markets or via untested suppliers.

Compliance and recycling issues, as well as data security concerns and proper software disposition, should therefore be top of mind early in the DCT process, not as an after-thought.

In a recent podcast discussion, I tapped two HP executives on how to best manage productive transitions of data center assets -- from security and environmental impact, to recycling and resale, and even to rental of transitional systems during a managed upgrade process. I spoke with Helen Tang, Worldwide Data Center Transformation Lead for HP Enterprise Business, and Jim O'Grady, Director of Global Life Cycle Asset Management Services with HP Financial Services.

Here are some excerpts:
Helen Tang: Today there are the new things coming about that everybody is really excited about, such as virtualization, and private cloud. ... This time around, enterprises don’t want to repeat past mistakes, in terms of buying just piles of stuff that are disconnected. Instead, they want a bigger strategy that is able to modernize their assets and tie into a strategic growth enablement asset for the entire business.

Yet throughout the entire DCT process, there's a lot to think about when you look at existing hardware and software assets that are probably aged, and won’t really meet today’s demands for supporting modern applications.

How to dispose of those assets? Most people don’t really think about it nor understand all of the risks involved. ... Even experienced IT professionals, who have been in the business for maybe 10, 20 years, don’t quite have the skills and understanding to grasp all of this.

We're starting to see sort of this IT hybrid role called the IT controller, that typically reports to the CIO, but also dot-lines into the CFO, so that the two organizations can work together from the very beginning of a data center project to understand how best to optimize both the technology, as well as the financial aspects.

Jim O'Grady: We see that a lot of companies try to manage this themselves, and they don’t have the internal expertise to do it. Often, it’s done in a very disconnected way in the company. Because it’s disconnected and done in many different ways, it leads to more risks than people think.

You are putting your company’s brand at stake, through improper environmental recycling compliance, or exposing your clients, customers, or patients’ data to a security breach. This is definitely one of those areas you don’t want to read about in a newspaper to figure out what went wrong.

One of the most common areas where our clients are caught unaware is the complexity of data security and of the e-waste legislation requirements out there, and especially the pace at which they change.

We suggest that they have a well thought-out plan for destroying or clearing data prior to the asset decommissioning and/or prior to the asset leaving the physical premise of the site. Use your outsource partner, if you have one, as a final validation for data security. So, do it on site, as well as do it off site.

Have a well-established plan and budget up-front, one that’s sponsored by a corporate officer, to handle all of the end-of-use assets well before the end-of-use period comes.
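As a concrete, if greatly simplified, illustration of "clearing data prior to decommissioning," the sketch below overwrites a single file in place before deleting it. This illustrates the idea only: wiping whole drives requires device-level tools, SSD wear-leveling can defeat file-level overwrites, and the function name and pass count here are my own assumptions, not HP's process.

```python
import os
import secrets

def wipe_file(path: str, passes: int = 3) -> None:
    """Overwrite a file in place with random data, then zeros, then delete it.
    Illustrative only: not sufficient for whole drives or SSDs."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(secrets.token_bytes(size))  # random overwrite pass
            f.flush()
            os.fsync(f.fileno())                # force the pass to disk
        f.seek(0)
        f.write(b"\x00" * size)                 # final zero pass
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)
```

The on-site-plus-off-site validation O'Grady describes is the operational analogue: run your own clearing pass, then have the disposal partner independently verify it.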

Reams of regulations

E-waste legislation resides at the state, local, national, and regional levels, and they all differ. There's some conflict, but some are in line with each other. So it's very difficult to understand what your legislative requirements are and how to comply. Your best bet is to deal with the highest standard and pick someone that knows and has experience in meeting these legislative requirements.

There are tremendous amounts of global complexities that customers are trying to overcome, especially when they try to do data center consolidation and transformation, throughout their enterprise across different geographies and country borders.

You're talking about a variety of regulatory practices and directives, especially in the EU, that are emerging and restrict how you move used and non-working product across borders. There are a variety of different data-security practices and environmental waste laws that you need to be aware of.

Partner beware

A lot of our clients choose to outsource this work to a partner. But they need to keep in mind that they are sharing risk with whomever they partner with. So they have to be very cautious and be extremely picky about who they select as a partner.

This may sound a bit self-serving, but I always suggest that enterprises resist smaller local vendors. ... If you don't kick the tires and find out that your partner consists of a man, a dog, and a pickup truck, you may have a hard time defending why you selected that partner.

Also, develop a very strong vendor audit qualification and ongoing inspection process. Visit that vendor prior to the selection and know where your waste stream is going to end up. Whatever they do with the waste stream, it’s your waste stream. You are a part of the chain of custody, so you are responsible for what happens to that waste stream, no matter what that vendor does with it.

You need to create rigorous documented end-to-end controls and audit processes to provide audit trails for any future legal issues. And finally, select a partner with a brand name and reputation for trust and integrity. Essentially, share the risk.

Total asset management

Enterprises should carefully consider how they retire and recover value from all of their end-of-use IT equipment, whether it's a PDA or a supercomputer, an HP or non-HP product. Most data center transformations and consolidations typically end with a lot of excess or end-of-use product.

We can help educate customers on the hidden risks of dispositioning that end-of-use equipment into the secondary market. This is a strength of HP Financial Services (HPFS).

Typically, what we find with companies trying to recover value for product is that they give it to their facilities guys or the local business units. These guys love to put it on eBay and try to advertise for the best price. But, that’s not always the best way to recover the best value for your data center equipment.

We're now seeing it migrate into the procurement arm. These guys typically put it out for bid and select the highest bid from a lot of the open market brokers. A better strategy to recover value, but not the best.

Your best bet is to work with a disposition provider that has a very strong re-marketing reach into the global markets, and especially a strong, demonstrable recovery process.

From a financial asset ownership model, HPFS has the ability to come in and work with a client, understand their asset management strategy, and help them to personalize the financial asset ownership model that makes sense for them.

For example, if you look at a leasing organization, when you lease a product, it's going to come back. A key strength in terms of managing your residual is to recover the value for the product as it comes back, and we do that on a worldwide basis.

We have the ability to reach emerging markets or find the market of highest recovery to be able to recover the value for that product. As we work with clients and they give us their equipment to remarket on their behalf, we bring it into the same process.

When you think about it, an asset recovery program is really the same thing as a lease return. It's really a lot of reverse logistics -- bring it into a technical center, where it's audited, the data is wiped, the product is tested, there’s some level of refurbishment done, especially if we can enhance the market value. Then, we bring it into our global markets to recover value for that product.

We have skilled product traders within our product families who know how to hold product, and wait for the right time to release it into the secondary market. If you take a lot of product and sell it in one day, you increase the supply, and all of the recovery rates for the brokers drop overnight. So, you have to be pretty smart. You have to know when to release product in small lot sizes to maximize that recovery value for the client.

Legacy support as core competency

We're seeing a big uptake in the need to support legacy product, especially in DCT. We're able to provide highly customized, pre-owned, authentic legacy HP product solutions, sometimes going back 20 years or more. The need for temporary equipment to scale out legacy-locked data center hardware capacity is one we increasingly see from our clients.

Clients also need to ensure their product is legally licensed and they do not encounter intellectual property right infringements. Lastly, they want to trust that the vendor has the right technical skills to deal with the legacy configuration and compatibility issues.

Our short-term rental program covers new or legacy products. Again, many customers need access to temporary product to prove out some concepts, or just to test some software application on compatibility issues. Or, if you're in the midst of a transformation, you may need access to temporary swing gear to enable the move.

We also help clients understand strategies to recover the best value for decommissioned assets, as well as how to evaluate and how to put in place a good data-security plan.

We help them understand whether data security should be done on-site versus off-site, or whether it is worth the cost to do it both on-site and off-site. We also help them understand the complexities of data wiping enterprise product versus just a plain PC.

Most of the local vendors and providers out there are skilled in wiping data for PCs, but when you get into enterprise products, it can get really complex. You need to make sure that you understand those complexities, so you can secure the data properly.

Lastly, the one thing we help customers understand, because it's the real hidden complexity, is how to set up an effective reverse-logistics strategy, especially on a global basis. How do you get the timing down for all the products coming back on a return basis?

Tang: We reach out to our customers in various interactions to talk them through the whole process from beginning to end.

One of the great starting points we recommend is something we call the Data Center Transformation Experience Workshop, where we bring together your financial side, your operations people, and your CIO -- all the key stakeholders in the same room -- and walk through these common issues that you may or may not have thought about to begin with. You can walk out of that room with consensus, a shared vision, and a roadmap that's customized for your success.

Tuesday, September 21, 2010

IBM acquires Netezza as big data market continues to consolidate around appliances, middle market, new architecture

IBM is snapping up yet another business analytics player. After purchasing OpenPages last week, Big Blue is now laying down $1.7 billion in an all-cash deal to acquire Netezza.

Netezza provides high-performance analytics in a data warehousing appliance that, the company claims, handles complex analytic queries 10 to 100 times faster than traditional systems. Netezza appliances put analytics into the hands of business users in sales, marketing, product development, human resources, and other departments that need actionable insights to drive decision-making.

With its latest business analytics acquisition, Steve Mills, senior vice president and group executive of IBM Software and Systems, says the company is bringing analytics to the masses.

“We continue to evolve our capabilities for systems integration, bringing together optimized hardware and software, in response to increasing demand for technology that delivers true business value,” Mills says. “Netezza is a perfect example of this approach.”

Big Blue’s long haul

Netezza fits in with IBM’s maturing business analytics strategy. Big Blue has long put an emphasis on data analysis and business intelligence (BI) as key drivers of IT infrastructure needs. The company has demonstrated a clear understanding that data analysis and BI can also be easily applied to business issues.

IBM’s relational database, DB2, also fits into the big picture. Over the years, IBM has built a strong family of database-driven products around DB2. Essentially, IBM has successfully worked to tie the data equation together with the needs of enterprises and the strength of their IT departments.

While DB2 reaches into the past and supports the data needs of legacy and distributed systems and applications, new architectures around in-memory and optimized platforms for persistence-driven tasks are in vogue. While Netezza's strengths are in analytics, this architecture has other uses, ones we'll be seeing more of.

Fast-forward to the Netezza acquisition. The $1.7 billion grab shows that IBM is well aware that big data sets don’t lend themselves to traditional architectures for crunching data. IBM, along with its competitors, has been developing or acquiring new architectures that focus more on in-memory solutions.

Rather than moving the entire database or large caches around on disk or tape, then, new architectures have emerged where the data and logic reside closer together -- and the data is accessed from high-performing persistence.

For example, with Netezza appliances, NYSE Euronext has slashed the time it takes to load and extract massive amounts of historical data so it can run analytic queries more securely and efficiently, reducing run times from hours to seconds. Virgin Media, a UK provider of TV, broadband, phone and mobile services with millions of subscribers, uses Netezza across its product marketing, revenue assurance and credit services departments to proactively plan for, forecast, and respond to the effects of pricing and tariff changes with competitive offerings.

Business analytics consolidation

With the Netezza acquisition, the business analytics market is seeing consolidation as major players begin preparing to tap into a growing big data opportunity. Much the same as the BI market saw consolidation a few years ago -- IBM acquired Cognos, Oracle bought Hyperion, and SAP snapped up Business Objects -- vendors are now seeing big data analytics as an area that should be embedded into the total infrastructure of solutions. That requires a different architecture.

The competition is heating up. EMC purchased Greenplum, an enabler of big data clouds and self-service analytics, in July. Both companies are planning to sell the hardware and software together in appliances. The vendors tune and optimize the hardware and software to offer the benefits of big data crunching, taking advantage of in-memory architecture and high-performance hardware.

Expect to see more consolidation, although there aren’t too many players left in the Netezza space. Acquisition candidates include data management and analysis software company Aster Data Systems and Teradata with its enterprise analytics technologies, among others. [Disclosure: Aster Data is a sponsor of BriefingsDirect podcasts.]

Meanwhile, Oracle this week at OpenWorld is pushing against the market with its new Exadata product. The battle is on. My take is that these purchases are for more than the engines that drive analytics -- they are for the engines that drive SaaS, cloud, mobile, web, and what we might call the more modern workloads: data-intensive, high-scaling, fast-changing, and services-oriented.
BriefingsDirect contributor Jennifer LeClaire provided editorial assistance and research on this post. She can be reached at http://www.linkedin.com/in/jleclaire and http://www.jenniferleclaire.com.

Monday, September 20, 2010

Morphlabs eases way for companies to build private cloud infrastructures, partners with Zend

Morphlabs, a provider of enterprise cloud architecture platforms, has simplified the process of building and managing an internal cloud for enterprise environments -- enabling companies to create their own private cloud infrastructure.

The Manhattan Beach, Calif. company today announced a significant upgrade to its flagship product, mCloud Controller. The enhanced version introduces Enterprise Cloud Architecture (ECA), a new approach that provides enterprises with immediate access to the building blocks and binding components of a fault tolerant, elastic, and highly automated platform.

Morphlabs also announced a partnership with Zend Technologies Ltd., whose Zend Server will be shipped as part of the mCloud Enterprise, said Winston Damarillo, CEO at Morphlabs.

mCloud Controller is a comprehensive cloud computing platform, delivered as an appliance or virtual appliance, and it provides open mCloud APIs (you can manage the ECA cloud from an iPad, for example). To support the leading platforms, mCloud Controller will have built-in ECA-compliant support for Java, Ruby on Rails, and PHP.

Fittingly for enterprise private clouds, the Morph offering also provides direct integration to mainstream middleware via standards-based connectors. It also supports a plethora of hypervisors, from KVM to Xen to VMware, and allows other cluster managers to be used as well.

Look for Morphlabs to sell to both service providers and enterprises for the compatible-hybrid benefits. Of course, we're hearing the same from Citrix, VMware, Novell, HP, and others. It's a horse race out there for a de facto hybrid cloud standard, all right.

Productivity gains

“PHP has been broadly adopted for the productivity gains it brings to Web application development, and because it can provide the massive scalability that e-commerce, social networking and media sites require,” said Matt Elson, vice president of business development at Zend. “Integrating Zend Server into Morphlabs’ mCloud Controller enables IT organizations to leverage the elasticity of cloud computing and automate the process of deploying highly reliable PHP applications in the cloud.”

Key features of the mCloud Controller with ECA include:
  • Uniform environments from development to production to help users simplify system configuration. Applications can grow as needed, while maintaining a standardized infrastructure for ease of growth and replacement.

  • Simplified system administration with automated monitoring and self-healing out of the box to avoid complicated system tuning. mCloud Controller also comes with graphical tools for viewing system-wide performance.

  • Self-service resource provisioning, which frees the IT department from numerous application provisioning requests. Without any system administration skills, authorized users can start and stop compute instances and provision applications as needed. Billing is also included within the system.

  • Streamlined application management automates the process of deploying, monitoring and backing-up applications. Users do not have to deal with configuration files and server settings.
The mCloud Controller v2.5 is available now in the United States, Japan and South East Asia. For more information contact Morphlabs at info@mor.ph.
