Friday, January 29, 2010

Apple and Oracle on way to do what IBM and Microsoft could not: Dominate entire markets

I was a bit distracted from the Apple iPad news due to the marathon Oracle conference Wednesday on its shiny new Sun Microsystems acquisition.

But the more I thought about it, the more these two companies are extremely well positioned to actually fulfill what other powerful companies tried to do and failed. Apple and Oracle may be unstoppable in their burgeoning power to dominate the collection of profits across vast and essential markets for decades.

Apple is well on its way to dominating how multimedia content is priced and distributed, perhaps unlike any company since Hearst in its 1920s heyday. Apple is not killing the old to usher in the new, as Google is. Apple is rescuing the old media models with a viable online direct payment model. Then it will take all the real dough.

The iPad is a red herring, almost certainly a loss leader, like Apple TV. The real business is brokering a critical mass of music, spoken word, movies, TV, books, magazines, and newspapers. All the digital content that's fit to access. The iPad simply helps convince the producers and consumers to take the iTunes and App Store model into the domain of the formerly printed word. It should work, too.

Oracle is well on its way to becoming the one-stop shop for mission-critical enterprise IT ... as a service. IT can come as an Oracle-provided service, from soup to nuts, applications to silicon. The "service" is that you need only go to Oracle, and that the stuff actually works well. Just leave the driving to Oracle. It should work, too.

This is a mighty attractive pitch right now to a lot of corporations. The in-house suppliers of raw compute infrastructure resources are caught in a huge, decades-in-the-making vise -- of needing to cut costs, manage energy, reduce risk and back off of complexity. Can't do that under the status quo.

In doing the complete IT package gig, Oracle has signaled the end of the best-of-breed, heterogeneous, and perhaps open source components era of IT. In the new IT era, services are king. The way you actually serve or acquire them is far less of a concern. Enterprises focus on the business and the IT comes, well, like electricity.

This is why "cloud" makes no sense to Oracle's CEO Larry Ellison. He'd rather we take out the word "cloud" from cloud computing and replace it with "Oracle." Now that makes sense!

All the necessary ingredients

Oracle has all the major parts and smarts it needs to do this, by the way. Oracle may need an acquisition or two more for better management and perhaps hosting. But that's about it.

Like Apple, Oracle is not killing the old IT era to usher in the new. Oracle is rescuing the old IT models with a viable complete IT acquisition model. Then it too will take all the real dough.

Incidentally, IBM tried to, and came quite close to, a similar variety of enterprise IT domination. That was more than 30 years ago. IBM was an era or two too early. Microsoft tried, and came moderately close -- at least in vision -- to the same thing, moving from the desktop backward into the data center. But, alas, Microsoft was also an era too early.

Both Sun and IBM were seduced over the past 15 years by the interchangeable-parts version of IT ... It's what Java is all about. Microsoft hated Java and never veered from its all-us-or-nothing mantle, which is now passing to Oracle. But Microsoft never had the heft in the core enterprise data center to pull it off. Oracle does.

Yes, Apple and Oracle have clearly learned well from their brethren. And the timing has never been better, the recession a godsend.

So now as consumers, we have some big choices ... er, actually maybe we have a big buy-in, yes, but maybe not too much in the way of choices. Like any mainstream consumer and producer of media, I will really need to do business with Apple. Not too much choice. Convenience across the content supply chain has become the killer app. And I love it all the way.

I want my MTV, my New York Times, my Mahler and my Mad Men. Apple gets it to me as I wish at an acceptable price. Case closed. The end device is not so important anymore, be it big, medium or small, be it Mac or PC. Because of my full-bore consumer seduction, the producers of the content need to follow the gold Apple ring. Same for consumer applications and games, though they are all fundamentally content.

To an IT services buyer, Oracle is making a similar offer. Convenience is killer for IT managers too. Oracle, through its appliances, integrated stack, data ecosystem, tuned high-end hardware, business applications, business intelligence, and sales account heft, leaves me breathless. And taking a next breath will probably have an Oracle SLA attached. Whew!

Critical mass in the accounts that matter

Oracle is already irreplaceable in all -- and I mean all -- the major enterprise accounts. Oracle can now substantially reduce complexity across the IT infrastructure front, while seemingly cutting costs and apparently reducing risk. But a huge portion of the total savings goes into Oracle's pockets, making it stronger in more ways in more accounts for 20 years. Now Oracle can take the lion's share of the profits in the IT-as-a-service era. I call that dominance.

So let's hear it for the balancing acts still standing. Go IBM! Go Microsoft! Go Google! Go HP! Go SAP! How about Cisco and EMC? You all go for as long as you can, please. Or at least as long as it takes for the next IT and media eras to arrive. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

This handful of companies is about the only insurance policy against Apple and Oracle being able to price with impunity across vast markets that deeply affect us all.

Wednesday, January 27, 2010

Oracle's Sun Java strategy: Business as usual

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

By Tony Baer

In an otherwise pretty packed news day, we’d like to echo @mdl4’s sentiments about the respective importance of Apple’s and Oracle’s announcements: “Oracle finalized its purchase of Sun. Best thing to happen to Sun since Java. Also: I don’t give a sh#t about the iPad. I said it.”

There’s little new in observing that, on the platform side, Oracle’s acquisition of Sun is a means of turning the clock back to the days of turnkey systems in a post-appliance era. History truly has come full circle, as Oracle in its original database incarnation was one of the prime forces that helped decouple software from hardware.

Fast forward to the present, and customers are tired of complexity and just want things that work. Actually, that idea was responsible for the emergence of specialized appliances over the past decade for performing tasks ranging from SSL encryption/decryption to XML processing, firewalls, email, or specialized web databases.

The implication here is that the concept is elevated to enterprise level; instead of a specialized appliance, it’s your core instance of Oracle databases, middleware, or applications. And even there, it’s but a logical step forward from Oracle’s past practice of certifying specific configurations of its database on Sun (Sun was, and now has become again, Oracle’s reference development platform).

That’s in essence the argument for Oracle to latch onto a processor architecture that is overmatched by Intel’s investment in the x86 line. The argument could be raised that, in an era of growing interest in cloud, Oracle is fighting the last war. That would be the case – except for the certainty that your data center has just as much chance of dying as your mainframe.

Question of second source

At the end of the day, it’s inevitably a question of second source. Dana Gardner opines that Oracle will replace Microsoft as the hedge to IBM. Gordon Haff contends that alternate platform sources are balkanizing, as Cisco/EMC/VMware butts its virtualized x86 head into the picture and customers look to private clouds the way they once idealized grids.

The highlight for us was what happens to Sun’s Java portfolio, and as it turns out, the results are not far from what we anticipated last spring: Oracle’s products remain the flagship offerings. From looking at respective market shares, it would be pretty crazy for Oracle to have done otherwise.

The general theme was that – yes – Sun’s portfolio will remain the “reference” technologies for the JCP standards, but that these are really only toys that developers should play with. When they get serious, they’re going to keep using WebLogic, not GlassFish. Ditto for:

• Java software development. You can play around with NetBeans, which Oracle’s middleware chief Thomas Kurian characterized as a “lightweight development environment,” but again, if you really want to develop enterprise-ready apps for the Oracle platform, you will still use JDeveloper, which of course is written for Oracle’s umbrella ADF framework that underlies its database, middleware, and applications offerings. That’s identical to Oracle’s existing posture with the old (mostly) BEA portfolio of Eclipse developer tools. Actually, the only thing that surprised us was that Oracle didn’t simply take NetBeans and set it free – as in donating it to Apache or some more obscure open source body.

• SOA, where Oracle’s SOA Suite remains front and center while Sun’s offerings go on maintenance.

We’re also not surprised at the prominent role of JavaFX in Oracle’s RIA plans; it fills a vacuum created when Oracle terminated BEA’s former arrangement to bundle Adobe Flash/Flex development tooling. In actuality, Oracle has become RIA agnostic, as ADF could support any of the frameworks for client display, but JavaFX provides a technology that Oracle can call its own.

There were some interesting distinctions with identity management and access, where Sun inherited some formidable technologies that, believe it or not, originated with Netscape. Oracle Identity Management will grab some provisioning technology from the Sun stack, but otherwise Oracle’s suite will remain the core attraction. But Sun’s identity and access management won’t be put out to pasture, as it will be promoted for midsized web installations.

There are much bigger pieces to Oracle’s announcements, but we’ll finish with what becomes of MySQL. In short, there’s nothing surprising in the announcement that MySQL will be maintained in a separate open source business unit – the EU would not have allowed otherwise. But we’ve never bought into the story that Oracle would kill MySQL. Both databases aim at different markets. Just about the only difference that Oracle’s ownership of MySQL makes – besides reuniting it under the same corporate umbrella as the InnoDB data store – is that, well, like, yeah, MySQL won’t morph into an enterprise database. Then again, even if MySQL had remained independent, it arguably was never going to evolve into the same class of database as Oracle, because the product would lose its prized simplicity.

The more relevant question for MySQL is whether Oracle will fork development to favor Solaris on SPARC. This being open source, there would be nothing stopping the community from taking the law into its own hands.

This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a senior analyst at Ovum.

Friday, January 22, 2010

The Christmas Day bomber, Moore’s Law, and enterprise IT's new challenges

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.

By Jason Bloomberg

Amid the posturing and recriminations following this past December’s ill-fated terrorist attack by the alleged Nigerian Christmas bomber, the underlying cause of the intelligence breach has gone all but unnoticed.

How is it the global post-9/11 anti-terrorist machine could miss a lone Nigerian with explosives in his underwear? After all, chatter included reference to “the Nigerian,” his own father gave warning, he was on a terrorist watch list, and he purchased a one-way ticket to Detroit, paid cash, and checked no luggage. You’d think any one of these bits of information would set off alarms, and the fact that the intelligence community missed the lot is a sign of sheer incompetence, right?

Not so fast. Such a conclusion is actually fallacious. The missing piece of the puzzle is the fact that there are hundreds of thousands of monthly air travelers, and millions of weekly messages that constitute the chatter the intelligence community routinely follows. And that watch list? Hundreds of thousands of names, to be sure.

Furthermore, the quantity of information that agents must follow is increasing at an exponential rate. So, while it seems in retrospect that agents missed a huge red flag, in actuality there is so much noise that even the combination of warnings taken together was lost in a sea of noise. A dozen red flags, yes, but could you discern a dozen red grains of sand on a beach?

The true reason behind the intelligence breach is far more subtle than simple incompetence, and furthermore, the solution is just as difficult to discern. The most interesting part of this discussion from ZapThink’s perspective, naturally, is the implication for enterprise IT.

The global intelligence community is but one enterprise among many dealing with exponentially increasing quantities and complexity of information. All other enterprises, in the private as well as public sector, face similar challenges: As Moore’s Law and its corollaries proceed on their inexorable path, what happens when the human ability to deal with the resulting information overload falls short? How can you help your organization keep from getting lost in the noise?

The governance crisis point

Strictly speaking, Moore’s Law states that the number of transistors that current technology can cram onto a chip of a given size will increase exponentially over time. But the transistors on a chip are really only the tip of the iceberg; along with processing power we have exponential growth in hard drive capacity, network speed, and other related measures – what we’re calling corollaries to Moore’s Law. And of course, there’s also the all-important corollary to Parkinson’s Law that states that the quantity of information available will naturally expand to fill all available space.

Anybody who remembers the wheat and chessboard problem knows that this explosion of information will lead to problems down the road. IT vendors, of course, have long seen this trend as a huge opportunity, and have risen to the occasion with tools to help organizations manage the burgeoning quantity of information. What vendors cannot do, however, is improve how people deal with this problem.
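
To see just how fast that explosion runs away from you, here is a minimal back-of-the-envelope sketch in Python of the chessboard arithmetic:

# Wheat and chessboard: one grain on the first square, doubling on each
# of the 64 squares. The total dwarfs any linear-growth intuition.
total = sum(2 ** square for square in range(64))
print(f"{total:,}")  # 18,446,744,073,709,551,615 grains, i.e. 2**64 - 1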

Fundamentally, human capabilities at best grow linearly. Our brains, after all, are not subject to Moore’s Law, and even so, enterprises depend far more on the interactions among people than on the contributions of individuals taken separately. While the number of transistors may double every 18 months, our management, analysis, and other communication skills will only see gradual improvements at best.

This disconnect leads to what ZapThink calls the governance crisis point, as illustrated in the figure below.

The governance crisis point

The diagram above illustrates the fact that while the quantity and complexity of information in any enterprise grows exponentially, the human ability to deal with that information at best grows linearly. No matter where you put the two curves, eventually one overtakes the other at the governance crisis point: the moment when human activities become unable to deal with the quantity and complexity of information.
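
A minimal sketch makes the crossover concrete; the starting values and growth rates below are made-up assumptions for illustration, not ZapThink figures:

# Illustrative curves: information volume doubles every 18 months, while
# the human capacity to absorb it grows only linearly.
def information(month, start=1.0, doubling=18):
    return start * 2 ** (month / doubling)

def capacity(month, start=100.0, gain_per_month=0.5):
    return start + gain_per_month * month

month = 0
while information(month) <= capacity(month):
    month += 1
print(f"Governance crisis point: month {month}")

Whatever constants you pick, the loop terminates: an exponential eventually overtakes any straight line, which is precisely the point.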

Unfortunately, no technology can solve this problem, because technology only affects the exponential curve. I’m sure today’s intelligence agents have state-of-the-art analysis tools, since after all, if they don’t have them, then who does? But the bomber was still able to get on the plane.

Furthermore, neither is the solution to this problem a purely human one. We’d clearly be fooling ourselves to think that if only we worked harder or smarter, we might be able to keep up. Equally foolish would be the assumption we might be able to slow down the exponential growth of information. Like it or not, this curve is an inexorable juggernaut.

SOA to the rescue?

Seeing as this article is from ZapThink, you might think that service-oriented architecture (SOA) is the answer to this problem. In fact, SOA plays a support role, but the core of the solution centers on governance, hence the name of the crisis point. Anyone who’s been through our Licensed ZapThink Architect course or our SOA & Cloud Governance course understands that the relationship between SOA and governance is a complex one, as SOA depends upon governance but also enables governance for the organization at large.

Just so with the governance crisis point problem: Neither technology nor human change will solve the problem, but a better approach to formalizing the interactions between people and technology gives us a path to the solution. The starting point is to understand that governance involves creating, communicating, and enforcing policies that are important to an organization, and that those policies may be anywhere on a spectrum from human-centric to technology-centric. In the context of SOA, then, the first step is to represent certain policies as metadata, and incorporate those metadata in the organization’s governance framework.
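
As a toy illustration of the idea, here is what a machine-readable policy and an automated compliance check might look like in Python; the schema and field names are our own illustrative assumptions, not any SOA standard:

# A governance policy expressed as metadata (hypothetical schema), plus
# the kind of automated check a governance infrastructure could run.
policy = {
    "id": "svc-msg-size",
    "description": "Service messages must not exceed 1 MB",
    "rule": {"attribute": "message_bytes", "max": 1_048_576},
    "enforcement": "automated",  # vs. "human" for judgment-call policies
}

def complies(policy, observed):
    """Return True if the observed attributes satisfy the policy rule."""
    rule = policy["rule"]
    return observed.get(rule["attribute"], 0) <= rule["max"]

print(complies(policy, {"message_bytes": 2_000_000}))  # False: a violation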

In practice, the governance team sorts the policies within scope of the current project into those policies that are best handled by human interactions and those policies that lend themselves to automation. Representing the latter set of policies as metadata enables the SOA governance infrastructure to automate policy enforcement as well as other policy-based processes. Such policy representations alone, however, cannot solve the governance crisis point problem.

The answer lies in how the governance team deals with policies; in other words, what are their policies regarding policies, or what ZapThink likes to call metapolicies. Working through the organization’s policies for dealing with governance, and automating those policies, gives the organization a “metapolicy feedback loop” approach to leveraging the power of technology to improve governance overall.

Catching terrorists and other IT management challenges

How this metapolicy feedback loop might help intelligence agents catch the next terrorist provides a simple illustration of how any enterprise might approach their own information explosion challenges. First, how do agents deal with information today? Basically, they have an information challenge, they implement tools to address that challenge, and they have policies for how to use those tools, as the expression below illustrates:

Information problem --> tools --> policies for using tools --> governance

Now, the challenge with the expression above is that it’s static; it doesn’t take into account the fact that the information problem explodes exponentially, while governance best practices grow linearly. As a result, eventually the quantity of information overwhelms the capabilities of the tools, leading to failures like the explosive in the underwear. Instead, here’s how the expression should work:

Information problem --> tools --> policies for using tools --> metapolicies for dealing with governance --> next-generation governance tools --> best practice approach for dealing with information problem over time

Essentially, the crisis point requires a new level of interaction between human activity and technology capability, a technology-enabled governance feedback loop that promises to enable any enterprise to deal with the information explosion, regardless of whether you’re catching terrorists or pleasing shareholders.

The ZapThink take

Okay, so just how does SOA fit into this story? Remember that as enterprise architecture, SOA consists of a set of best practices for organizing and leveraging IT resources to meet business needs, and the act of applying and enforcing such practices is what we mean by governance. Furthermore, SOA provides a best-practice approach for implementing governance, not just of the services that the SOA implementation supports, but for the organization as a whole.

In essence, SOA leads to a more formal approach to governance, where organizations are able to leverage technology to improve the creation, communication, and enforcement of policies across the board, including those policies that deal with how to automate such governance processes. In the intelligence example, SOA might help agents leverage technology to identify suspicious patterns more effectively by allowing them to craft increasingly sophisticated intelligence policies. In the general case, SOA can lead to more effective management decision making across large organizations.

There is, of course, more to this story. We’ve discussed the problem of too much information before, in our ZapFlash on Net-Centricity, for example. Technology progress leaving people behind is a common thread to all of ZapThink’s research.

If you’re struggling with your own information explosion issues, whether you’re in the intelligence community, the U.S. Department of Defense, or simply struggling with the day-to-day reality that is enterprise IT, drop us a line! Maybe we can help you prevent the next intelligence breach in your organization.

This guest BriefingsDirect post comes courtesy of Jason Bloomberg, managing partner at ZapThink.


SPECIAL PARTNER OFFER

SOA and EA Training, Certification,
and Networking Events

In need of vendor-neutral, architect-level SOA and EA training? ZapThink's Licensed ZapThink Architect (LZA) SOA Boot Camps provide four days of intense, hands-on architect-level SOA training and certification.

Advanced SOA architects might want to enroll in ZapThink's SOA Governance and Security training and certification courses. Or, are you just looking to network with your peers, interact with experts and pundits, and schmooze on SOA after hours? Join us at an upcoming ZapForum event. Find out more and register for these events at http://www.zapthink.com/eventreg.html.

Monday, January 18, 2010

Technical and economic incentives mount for seeking alternatives to costly mainframe applications

Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

Technical and economic incentives are mounting that make a strong case for modernizing and transforming enterprise mainframe applications -- and the aging infrastructure that supports them.

IT budget planners are using the harsh economic environment to force a harder look at alternatives to inflexible and hard-to-manage legacy systems, especially as enterprises seek to cut their total and long-term IT operations spending.

The rationale around reducing total costs is also forcing a recognition of the intrinsic difference between core applications and so-called context -- context being applications that are there for commodity productivity reasons, not for core innovation, customization or differentiation.

With a commodity productivity application, the most effective delivery is on the lowest-cost platform or from a provider. The problem is that 20 or 30 years ago, people put everything on mainframes. They wrote it all in code.

The challenge now is how to free up the applications that are not offering any differentiation -- and do not need to be on a mainframe -- and which could be running on much lower-cost infrastructure, or come from a completely different means of delivery, such as software as a service (SaaS).

There are demonstrably much less expensive ways of delivering such plain vanilla applications and services, and significant financial rewards for separating the core from the context in legacy enterprise implementations.

This discussion is the third and final in a series that examines "Application Transformation: Getting to the Bottom Line." The series coincides with a trio of Hewlett-Packard (HP) virtual conferences on the same subject.
Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.
Helping to examine how alternatives to mainframe computing can work, we're joined by John Pickett, worldwide mainframe modernization program manager at HP; Les Wilson, Americas mainframe modernization director at HP; and Paul Evans, worldwide marketing lead on applications transformation at HP. The discussion is moderated by BriefingsDirect's Dana Gardner, principal analyst at Interarbor Solutions.

Here are some excerpts:
Evans: We have seen organizations doing a lot with their infrastructure, consolidating it, virtualizing it, all the right things. At the same time, a lot of CIOs or IT directors know that the legacy applications environment has been somewhat ignored.

Now, with the pressure on cost, people are saying, "We've got to do something, but what can come out of that and what is coming out of that?" People are looking at this and saying, "We need to accomplish two things. We need a longer term strategy. We need an operational plan that fits into that, supported by our annual budget."

Foremost is this desire to get away from this ridiculous backlog of application changes, to get more agility into the system, and to get these core applications, which are the ones that provide the differentiation and the innovation for organizations, able to communicate with a far more mobile workforce.

What people have to look at is where we're going strategically with our technology and our business alignment. At the same time, how can we have a short-term plan that starts delivering on some of the real benefits that people can get out there?

... These things have got to pay for themselves. An analyst recently looked me in the face and said, "People want to get off the mainframe. They understand now that the costs associated with it are just not supportable and are not necessary."

One of the sessions from our virtual conference features Geoffrey Moore, where he talks about this whole difference between core applications and context -- context being applications that are there for productivity reasons, not for innovation or differentiation.

Pickett: It's not really just about the overall cost, but it's also about agility, and being able to leverage the existing skills as well.

One of the case studies that I like is from the National Agricultural Cooperative Federation (NACF). It's a mouthful, but take a look at the number of banks that the NACF has. It has 5,500 branches and regional offices, so essentially it's one of the largest banks in Korea.

One of the items that they were struggling with was how to overcome some of the technology and performance limitations of the platform that they had. Certainly, in the banking environment, high availability and making sure that the applications and the services are running were absolutely key.

At the same time, they also knew that the path to the future was going to be through the IT systems that they had and they were managing. What they ended up doing was modernizing their overall environment, essentially moving their core banking structure from their current mainframe environment to a system running HP-UX. It included the customer and account information. They were able to integrate that with the sales and support piece, so they had more of a 360 degree view of the customer.

We talk about reducing costs. In this particular example, they were able to save $40 million on an annual basis. That's nice, and certainly saving that much money is significant, but, at the same time, they were able to improve their system response time two- to three-fold. So, it was a better response for the users.

But, from a business perspective, they were able to reduce their time to market. For developing a new product or service, they were able to decrease that time from one month to five days.

Makes you more agile

If you are a bank and now you can produce a service much faster than your competition, that certainly makes it a lot easier and makes you a lot more agile. So, the agility is not just for the data center, it's for the business as well.

To take this story just a little bit further, they saw that in addition to the savings I just mentioned, they were able to triple the capacity of the systems in their environment. So, it's not only running faster and being able to have more capacity so you are set for the future, but you are also able to roll out business services a whole lot quicker than you were previously.

... Another example of what we were just talking about is that, if we shift to the Europe, Middle East, and Africa region, there is a very large insurance company in Spain. It ended up modernizing 14,000 MIPS. Even though the applications had been developed over years and decades, they were able to make the transition in a relatively short length of time. In a three- to six-month time frame they were able to move that forward.

With that, they saw a 2x increase in their batch performance. It's recognized as one of the largest batch re-hosts out there. And it's not just an HP thing. They worked with Oracle on that as well, to be able to drive Oracle 11g within the environment.

Wilson: ... In the virtual conferences, there are also two particular customer case studies worth mentioning.

In terms of customer situations, we've always had a very active business working with organizations in manufacturing, retail, and communications. One thing that I've perceived in the last year specifically -- it will come as no surprise to you -- is that financial institutions, and some of the largest ones in the world, are now approaching HP with questions about the commitment they have to their mainframe environments.

We're seeing a tremendous amount of interest from some of the largest banks in the United States, insurance companies, and benefits management organizations, in particular.

Second, maybe benefiting from some of the stimulus funds, a large number of government departments are approaching us as well. We've been very excited by customer interest in financial services and public sector.

The first case study is a project we recently completed at a wood and paper products company, a worldwide concern. In this particular instance we worked with their Americas division on a re-hosting project of applications that are written in the Software AG environment. I hope that many of the listeners will be familiar with the database ADABAS and the language, Natural. These applications were written some years ago, using those Software AG tools.

Demand was lowered

The user company had divested one of the major divisions within the company, and that meant that the demand for mainframe services was dramatically lowered. So, they chose to take the residual applications, the Software AG applications, representing about 300-350 MIPS, and migrate those in their current state, away from the mainframe, to an HP platform.

Many folks listening to this will understand that the Software AG environment can either be transformed and rewritten to run, say, in an Oracle or a Java environment, or we can maintain the customer's investment in the applications and simply migrate the ADABAS and Natural, almost as they are, from the mainframe to an alternative HP infrastructure. The latter is what we did.

By not needing to touch the mainframe code or the business rules, we were able to complete this project in a period of six months, from beginning to end. The user tells us that they are saving over $1 million today in avoiding the large costs associated with mainframe software, as well as maintenance and depreciation on the mainframe environment.

... The more monolithic approach to applications development and maintenance on the mainframe is a model that was probably appropriate in the days of the large conglomerates, where we saw a lot of companies trying to centralize all of that processing in large data centers. This consolidation made a lot of sense, when folks were looking for economies of scale in the mainframe world.

Today, we're seeing customers driving for a higher degree of agility. In fact, my second case study represents that concept in spades. This is a large multinational manufacturing concern. We will just refer to them as "a manufacturing company." They have a large number of businesses in their portfolio.

Our particular customer in this case study is the manufacturer of electronic appliances. One of the driving factors for their mainframe migration was ... to divest themselves from the large mainframe corporate environment, where most of the processing had been done for the last 20 years.

They wanted control of their own destiny to a certain extent, and they also wanted to prepare themselves for potential investment, divestment, and acquisition, just to make sure that they were masters of their own future.

Pickett: ... Just within the past few months, there was a survey by AFCOM, a group that represents data-center workers. It indicated that, over the next two years, 46 percent of the mainframe users said that they're considering replacing one or more of their mainframes.

Now, let that sink in -- 46 percent say they are going to be replacing high-end systems over the next two years. That's an astonishingly high number. So, it certainly points to a trend that we are seeing in that particular environment -- not a blip at all.
Listen to the podcast. Find it on iTunes/iPod and Podcast.com. Read a full transcript or download a copy. Learn more. Sponsor: Hewlett-Packard.


Gain more insights into "Application Transformation: Getting to the Bottom Line" via a series of regional HP virtual conferences:

Access the regional Asia Pacific conference, the EMEA conference, or the Americas event.

Tuesday, January 12, 2010

Is Google the best candidate to create a good, customer-focused cloud banking service portfolio?

Slowly -- and sometimes not so slowly -- the bricks have been giving way to the clicks for the past 15 years. Plenty of formerly unassailable business models have suffered as a result. The tears flowing for these companies, however, have been few outside their own high, stony walls.

Users, customers, innovators, seekers -- the majority bottom sections of the social and economic pyramids -- these are the big winners in the many wonderful effects of the Web and Internet. And I for one have the freedom, productivity, choice and empowerment to prove it.

Except in one glaring area: banking. We are by no means done on the disruption front.

I have had it with the old financial processes, lack of capability, murky institutions, rip-offs, peonage fees/rates -- and especially attitudes. As far as I'm concerned -- as a consumer, family, and business -- I'm ready to fire them all and move to the inevitable cloud- and open source-based alternatives.

I have had it with credit cards, banks, mutual fund companies, PayPal, debit cards, MasterCard and Visa. As far as I'm concerned they are all fired. They do a lousy job, have suspect security, charge too much, stiff you with hidden fees and raise their rates whenever they want. Why pay 15 percent interest on a credit card when money can be borrowed for less than 2 percent? For their service? For their security? Because they can do a basic two-phase commit?
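
For readers who haven't met the term, two-phase commit is the textbook protocol for making a debit and a credit succeed or fail together. Here is a minimal, in-memory sketch in Python; real payment systems add durability, timeouts, and recovery on top of this handshake:

# Two-phase commit: every participant must vote "yes" in the prepare
# phase before anyone commits; a single "no" rolls everybody back.
class Participant:
    def __init__(self, name):
        self.name, self.state = name, "init"

    def prepare(self):            # phase 1: can you commit?
        self.state = "prepared"
        return True               # a real resource might vote False here

    def commit(self):             # phase 2a: make the change final
        self.state = "committed"

    def rollback(self):           # phase 2b: undo on any "no" vote
        self.state = "rolled back"

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):
        for p in participants:
            p.commit()
        return "committed"
    for p in participants:
        p.rollback()
    return "rolled back"

print(two_phase_commit([Participant("payer-bank"), Participant("merchant-bank")]))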

Merchants hate it, users hate it. Why are we waiting on this? Let the banking disruption rumpus begin!

You want financial industry reform? Screw the Congress, SEC and Fed. Barney Frank and Chris Dodd don't seem to have the stomach and/or power to make much difference. Same with Obama. What we need is real competition -- Internet style. The financial industry needs to follow the mainstream media (and others like car makers and hopefully cell phone networks) on a strict diet of lower costs, less egregious profits, less pitiful service -- and to be swiftly outmatched on their piss-poor online capabilities.

Like a lot of big, old industries, banking is nowadays essentially a function of software, standard protocols, high-performance (yet standard) IT systems ... and soon impeccable cloud computing credentials. But the key is good software, of making things work for the users and community, not just the providers.

A few good transactions

If I can order movies, rent a car, and run a small business online, I should be able to do a few basic financial transactions online. I'd like to do more micro-payments and automated financial and business processes. Credit cards are not the best way to do this. Yet I seem to be stuck with a loan shark when I simply need to be able to order and fulfill a modest online transaction.

So let's have those that are good at what really counts -- software and cloud computing experts -- offering the banking services that we as consumers and businesses really want.

I'm tempted to write a similar screed about health care and mobile telephony, but that will have to wait. But we need to nail banking, finance and insurance first. It impacts all the rest.

The last two years should be the last straw. Wake up. In these failed finance industries -- the corporate leaders of which we as U.S. taxpayers apparently own in no small degree -- "too big to fail" needs to be replaced with too good to resist. The companies that should be subsidized are the ones that create productivity, lower costs, improve service and propel -- rather than hamstring -- the economy.

Why as part of the stimulus are the governments not creating the legislation to allow a new breed of bank to emerge? Why are the laws not being amended to allow for more -- not less! -- competition in the financial realm? What choice do we really have? MasterCard and Visa are not a choice.

In other words, we need a viable new cloud banking option era. Marc Andreessen told Charlie Rose when he set up his latest venture fund last year that new online banking was ripe for investment. He's right. Let's get on with it. I'll be your first customer.

Let the big guy do it


Meanwhile, how about Google? Like a dog on a meat truck, they have their teeth into everything else around them. Why not online banking too? You can't blame them for being too big to succeed, can you?

If any of us can explore, learn, compare, shop, order, track, and share our experiences via Google -- the actual monetary transactions scattered inside these processes should be a natural included component too. Right?

Is Google the best candidate to create a good, customer-focused cloud banking service portfolio? I think they would provide just the catalyst for change we so desperately need. We can then expect Microsoft to enter the field three years later, perhaps for an added element of choice and change.

MicroCard and Googlesta! Hey, it's a start, and almost certainly an improvement.