Thursday, January 17, 2008

IBM and Kapow on how enterprises exploit application mashups and lightweight data access

Listen to the podcast. Read a full transcript. Sponsor: Kapow Technologies.

The choices among enterprise application development and deployment technologies have never been greater. But what's truly different about today's applications is that line-of-business people can have a greater impact than ever on how technology supports their productive work.

By exploiting mashups, situational applications, Web 2.0 techniques and lightweight data access, new breeds of Web-based applications and services are being cobbled together fast and cheaply, without undue drain on IT staffs and developers. Tools and online services alike are being used to combine external web services like maps and weather with internal data feeds and services to add whole new dimensions of business intelligence and workflow automation, often in a few days, often without waiting in line to get IT's attention.

And while many of these mashups happen outside of IT's purview, more IT leaders see these innovative means as a productivity boon that can't be denied, and which may even save them time and resources while improving IT's image in the bargain. The trick is to manage the people and new processes without killing off the innovation.

To help weed through the agony and ecstasy of Enterprise 2.0 application development and deployment in the enterprise, I recently chatted with Rod Smith, Vice President of Internet Emerging Technologies at IBM, and Stefan Andreasen, the Founder and CTO of Kapow Technologies.

Here are some excerpts:
In times of innovation you get some definite chaos coming through, but IT and line of businesses see this as a big opportunity. ... The methodology here is very different from the development methodology we’ve been brought up to do. It’s much more collaborative, if you’re line of business, and it’s much more than a set of specifications.

This current wave is really driven by line of business getting IT in their own hands. They’ve started using it, and that’s created the chaos, but chaos is created because there is a need. The best thing that’s happening now is acknowledging that line-of-business people need to do their own thing. We need to give them the tools, environments and infrastructure so they can do it in a controlled way -- in an acceptable, secured way.

... As we opened up this content [we found] that this isn't just about IT managing or controlling it. It’s really a partnership now. ... The line of business wants to be involved when information is available and published. That’s a very different blending of responsibility than we've seen before on this.

There is a lot of information that's out there, both on the public Web and on the private Web, which is really meant to be human-readable information. You can just think about something as simple as going to the U.S. Geological Survey and looking at fault lines of earthquakes, and there isn't any programmatic API to access this data.

This kind of data might be very important. If I am building a factory in an earthquake area, I don't want to buy a lot that is right on top of a fault line. So I can turn this data into a standard API, and then use that as part of my intelligence to find the best property for my new factory.
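To make that concrete, here is a minimal Python sketch of the pattern Andreasen describes: scraping a human-readable page and republishing the extracted records in a machine-readable form. The URL, the page markup, and the field names are hypothetical stand-ins, and this is an illustration of the general idea, not Kapow's actual technology.

    # Minimal sketch: turn a human-readable web page into machine-readable data.
    # The URL and table structure below are hypothetical placeholders.
    import json
    import re
    import urllib.request

    FAULT_PAGE = "https://example.usgs.gov/faults/california"  # assumed URL

    def scrape_fault_lines(url=FAULT_PAGE):
        """Fetch the page and pull out (name, latitude, longitude) rows."""
        html = urllib.request.urlopen(url).read().decode("utf-8")
        # Assume each fault is rendered as: <tr><td>Name</td><td>lat</td><td>lon</td></tr>
        rows = re.findall(
            r"<tr><td>(.*?)</td><td>([-\d.]+)</td><td>([-\d.]+)</td></tr>", html)
        return [{"name": name, "lat": float(lat), "lon": float(lon)}
                for name, lat, lon in rows]

    if __name__ == "__main__":
        # Republish the scraped records as JSON, a simple payload a site-selection
        # mashup could consume alongside property listings.
        print(json.dumps(scrape_fault_lines(), indent=2))

The same extracted records could just as easily be re-served as an RSS or Atom feed, or put behind a small REST endpoint, for downstream mashups to consume.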

It's not just internal information they want; it's external information too, and we really are empowering these content developers now. The types of applications that people are putting together are much more like dashboards of information, both internal and external over the Internet, that businesses use to really drive their business. Before, the access costs were high.

Now the access costs are continuing to drop very low, and people do say, "Let’s go ahead and publish this information, so it can be consumed and remixed by business partners and others,” rather than thinking about just a set of APIs at a low level, like we did in the past with Java.

If you want to have automatic access to data or content, you need to be able to access it in a standard way. What is happening now with Web Oriented Architecture (WOA) is that we're focusing on a few standard formats like RESTful services and on feeds like RSS and Atom.

So first you need to be able to access your data that way. This is exactly what we do. Our customers turn data they work with in an application into these standard APIs and feeds, so they can work with them in an automated way. ... With the explosion of information out there, there's a realization that having the right data at the right time is getting more and more important. There is a huge need for getting access in an automated way.
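The consuming side of that pattern is just as lightweight. As a rough sketch, assuming a hypothetical Atom feed URL rather than any particular product, automated access to published data can look like this in Python:

    # Minimal sketch of automated access to data published as a standard feed.
    # The feed URL is a hypothetical placeholder; any Atom endpoint works the same way.
    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM_NS = "{http://www.w3.org/2005/Atom}"
    FEED_URL = "https://example.com/pricing/updates.atom"  # assumed endpoint

    def latest_entries(url=FEED_URL):
        """Return (updated, title) pairs for every entry in an Atom feed."""
        with urllib.request.urlopen(url) as resp:
            root = ET.parse(resp).getroot()
        return [(e.findtext(ATOM_NS + "updated"), e.findtext(ATOM_NS + "title"))
                for e in root.findall(ATOM_NS + "entry")]

    if __name__ == "__main__":
        for updated, title in latest_entries():
            print(updated, title)

Because the format is standard, the same few lines work against any feed a business partner publishes, which is exactly what keeps the access costs low.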

The more forward-thinking people in IT departments realize that the faster they can put together publishable data content, the faster they can get a deeper understanding of what their customers want. They can then go back and decide the best way to open up that data. Is it through syndication feeds, XML, or a programmatic API?

Before, IT had to guess usage and how many folks might be touching it, and then build it once and make it scalable. ... We've seen a huge flip now. Work is commensurate with some results that come quickly. Now we will see more collaboration coming from IT on information and partnerships.

What is interesting about it is, if you think about what I just described -- where we mashed in some data with AccuWeather -- if that had been an old SOA project of nine or 18 months, that would have been a significant investment for us, and would have been hard to justify. Now, if that takes a couple of weeks, or even just hours, to do -- even if it fails or doesn't hit the right spot -- it was a great tool for learning what the other requirements were, and other things to try as a business.

That’s what a lot of this Web 2.0 and mashups are about -- new avenues for communication, where you can be engaged and you can look at information and how you can put things together. And it has the right costs associated with it -- inexpensive. If I were going to sum up a lot of Web 2.0 and mashups, the magnitude of drop in “customization cost” is phenomenal.

What's fun about this, and I think Stefan will agree, is that when I go to a customer, I don't take PowerPoint charts anymore. I look on their website and I see if they have some syndication feeds or some REST interfaces or something. Then I look around and I see if I can create a mashup of their material with other material it hadn't been combined with before. That's compelling.

People look and they start to get excited because, as you just said, they see business patterns in that. "If you could do that, could you grab this other information from so-and-so?" It’s almost like a jam session at that point, where people come up with ideas.
Listen to the podcast. Read a full transcript. Sponsor: Kapow Technologies.

Wednesday, January 16, 2008

Sun refuses to give up on software acquisitions, buys MySQL for $1 billion

We knew that Sun has been lusting after a real software business in addition to Solaris. We knew that Sun "shares" -- that it digs open source, including Solaris and Java. And we knew that Sun had a love-hate relationship with Oracle and a hate-hate relationship with IBM and Microsoft.

So toss this all in a big pot, put on simmer and you get a logical -- if not three years too late -- stew: Sun Microsystems intends to buy MySQL AB and its very popular open source database. The announcement comes today with a hefty price tag of $1 billion.

The MySQL purchase by Sun makes more sense than any other acquisition they have done since they botched NetDynamics 10 years ago. This could be what saves Sun.

Sun can make a lot of mischief with this one, by taking some significant oxygen out of its competitors' core database revenues. Sun can package MySQL with its other software (and sell some hardware and storage, to boot), with the effect that the database can drive the sales of operating systems, middleware and perhaps even tools. Used to be the other way around, eh? Fellow blogger Larry Dignan sees synergies, too. And Tony Baer has some good points.

Who could this hurt if Sun executes well? IBM, Oracle, Microsoft, Sybase, Red Hat, Ingres. It could hurt Microsoft and SQL Server the most. Sun could hasten the tipping point for the commercial relational database to go commodity, like Linux did to operating systems like Unix/Solaris. Sun could far better attract developers to a data services fabric efficiency than with its tools-middleware-Solaris stack alone. As we recently saw, with Microsoft buying Fast Search & Transfer, the lifecycle of data and content is where software productivity begins and ends.

Sun will need to do this right, which has its risks given Sun's record with large software acquisitions. And Sun won't get a lot of help ecology-wise, from any large vendors. This puts Sun on a solo track, which it seems to prefer anyway. I wonder if the global SIs other than IBM will grok this?

Yes, it makes a lot of sense, which makes the timing so frustrating. I for one -- and I was surely not alone -- told very high-up folks at Sun to buy and seduce MySQL three years ago. (I also told them to merge with SAP, but that's another blog.) When Sun went and renamed its SunONE stack to the Java what's-it-all, I warned them it would piss off the community. It did. I also told them Oracle was kicking their shins in. It was. I said: "Oracle has Linux, and you have MySQL." Oh, well.

[Now, Oracle has BEA, which pretty much dissolves any common market goals that Oracle and Sun once had as leaders of the anti-Microsoft coalition. The BEA acquisition by Oracle was a given, hastened no doubt to the close by the gathering gloom of a U.S. economic recession.]

I'm glad the Sun-MySQL logic still holds, but Oracle has already done the damage with Linux. We saw how that Unix-to-Linux transition brought Sun to its knees, and put it on the defensive. And we know that Sun has only been able to get one leg up since then, albeit without falling over completely. Now, with BEA, Oracle -- with its Linux and other open source strengths, not to mention those business apps -- will seek to choke out the last light from Sun, and focus on IBM on the top end and Microsoft on the lower end. As Larry Ellison said, there will be room for only a handful of mega-vendors -- and we cannot be assured yet that Sun will meaningfully be one of them (or will perhaps instead be the next Unisys).

Indeed, the timing may still have some gold lining .... err, silver lining. Sun has had to pay big-time for MySQL (a lot more than if they had taken a large position in MySQL AB two years ago). And what do they get for the cool $1 billion? Installed base, really. Sun says MySQL has millions of global deployments, including Facebook, Google, Nokia, Baidu and China Mobile.

There's more, though. The next vendor turf battles are moving up yet another abstraction. Remember the cloud thing? Sun in a sense pioneered the commercialization of utility computing, only to have Amazon come out strong (and add a database service in the cloud late last year). IBM has cloud lust. Google and Microsoft, too. Sun's acquisition of MySQL could also help it become a larger vendor to the other cloud builders, i.e., the telcos, while seeding the Sun cloud to better rain down data services for its own users and developers.

And that raises the question of an Oracle-BEA cloud. Perhaps a partnership with Google on that one, eh? Then we have the ultimate mega-vendor/provider triumvirate: Apple-Google-Oracle. It's what Microsoft would be if it broke itself up properly and got the anti-trust folks off its back (not to mention reduced the internal dysfunction). And that leaves loose change in the form of Sun, IBM, Amazon, eBay, and the dark horses of the telcos. Sun ought to seduce the telcos, sure, and they know it. Problem is, the telcos don't know it yet.

Google may end up being the cloud king-maker here, playing Oracle and Sun off of one another. Playing coy with IBM, too. Who will partner with Amazon? Fun times.

Surely if Sun can produce a full-service cloud built on Solaris-Intel-SPARC that includes low-energy-use virtualized runtimes, complementary tools, and an integrated database -- and price it to win -- well, the cloud wars are on. Sun might hang on for yet another day or two.

Tuesday, January 15, 2008

MuleSource takes aim at SOA governance, launches subscription-based ESB

MuleSource, a provider of open-source service-oriented architecture (SOA) infrastructure software, has jumped into the SOA governance pool with the community release today of Mule Galaxy 1.0.

Galaxy, an open-source platform with integrated registry and repository, allows users to store and manage an increasing number of SOA artifacts and can be used in conjunction with the Mule enterprise service bus (ESB) or as a standalone product. It was also designed with federation in mind, being pluggable to other registries.

In other news today, Mule also announced a subscription-only version of its ESB, as well as a beta version of Mule Saturn, an activity monitoring tool for business processes and workflow.

The subscription ESB smacks of "Mule on-demand.com." It will be interesting to see how well this does in terms of uptake. Integration as a service seems to be gaining traction. We're also told this "ESB in the cloud" supports IBM CICS, which is interesting ... are we approaching transactional mashups en masse?

As enterprises use SOA to expand their consumption of services from both inside and outside the business, governance becomes an all-important issue for control. Galaxy provides such registry and repository features as lifecycle, dependency, and artifact management -- along with querying and indexing.

A RESTful HTTP AtomPub interface facilitates integration with such frameworks as Mule, Apache CXF, and WCF. Galaxy also provides out-of-the-box support for various artifact types, including Mule, WSDL, and custom artifacts.
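Galaxy's specific endpoints and payloads aren't spelled out here, so treat the following Python snippet as a generic, hypothetical illustration of publishing an SOA artifact over a RESTful AtomPub-style interface, not as Galaxy's documented API.

    # Generic illustration of registering an artifact via an AtomPub-style REST call.
    # The registry URL and entry structure are assumptions for this sketch.
    import urllib.request

    REGISTRY_URL = "http://localhost:8080/registry/workspaces/services"  # assumed

    ENTRY = """<?xml version="1.0" encoding="UTF-8"?>
    <entry xmlns="http://www.w3.org/2005/Atom">
      <title>order-service.wsdl</title>
      <summary>WSDL for the order service, v1.2</summary>
      <content type="application/xml" src="http://example.com/order-service.wsdl"/>
    </entry>
    """

    def publish_artifact(url=REGISTRY_URL, entry=ENTRY):
        """POST an Atom entry describing a new SOA artifact to the registry."""
        req = urllib.request.Request(
            url,
            data=entry.encode("utf-8"),
            headers={"Content-Type": "application/atom+xml;type=entry"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            # An AtomPub server answers 201 Created with the new entry's location.
            return resp.status, resp.headers.get("Location")

    if __name__ == "__main__":
        print(publish_artifact())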

Galaxy can be downloaded now, and a fully tested enterprise edition will be available in Q2 for Mule Enterprise subscribers.

On the ESB front, Mule has taken aim at the Fortune 2000 customer base with the introduction of Mule 1.5 Enterprise Edition, a subscription-only commercial enterprise packaging of the Mule ESB integration platform. Prior to this announcement, the ESB had been available only in the community edition.

It's sort of funny: as commercial providers offer open source versions of their products, we also see open source providers serving up commercial versions. I guess that means everyone needs one of each? Perhaps the versions (a la Fedora to RHEL) are becoming alike, in that it takes a subscription of some sort to get the real goods and use them.

Take the traffic when you can, I've always said. Mule's popularity was in evidence in November, when the company announced that community downloads had surpassed one million.

The new enterprise offering is available for a single annual fee and encompasses new features, including:

  • Support for Apache CXF Web Services Framework
  • Patch management and provisioning via MuleHQ
  • Streaming of large data objects through Mule without being read into memory
  • Nested routers to decouple service implementations from service interfaces
  • Support for multiple models
  • Diagnostic feedback for customer support

More information is available from the MuleSource site.

For users looking for a business-activity monitoring tool, MuleSource has released a beta version of Mule Saturn 1.0, which is designed to complement an SOA infrastructure by providing detailed logging and reporting on every transaction that flows through the Mule ESB.

Saturn allows staff to drill down on transaction details and set message-level breakpoints for deep log analytics, allowing for continuous improvement. Key features include:

  • Business user view into workflow and state
  • Process visualization
  • Search by transaction, date, and various IDs
  • Reporting on service-level agreements

Saturn is available immediately to MuleSource subscribers.

Monday, January 14, 2008

WSO2 Web services framework builds bridge between Ruby and enterprise apps

WSO2 has built a bridge between Ruby-based applications and enterprise-class Web services with the introduction of its Web Services Framework for Ruby (WSF/Ruby) 1.0.

WSF/Ruby, an open-source framework for providing and consuming Web services in the Ruby object-oriented programming language, offers support for the WS-* stack, allowing developers to combine Ruby with the security and messaging capabilities required for enterprise SOAP-based Web services. Disclosure: WSO2 is a sponsor of BriefingsDirect podcasts.

WSO2 Chairman/CEO Sanjiva Weerawarana explained the bridging capabilities in a pre-release interview with Infoworld:

While Ruby has been popular in the Web 2.0 realm, sometimes it needs to talk to legacy architectures, he said. With the new framework, developers could build a Web application using Ruby and then hook into enterprise infrastructures, such as JMS (Java Message Service) queues. For example, a Web site might be built with Ruby that then needs to link to an order fulfillment system based on an IBM mainframe or minicomputer, Weerawarana said.

With WSF/Ruby, developers can also consume Web services with Representational State Transfer (REST). WSF/Ruby also provides a fully open-source Ruby extension based on Apache Axis2/C, Apache Sandesha2/C, and Apache Rampart/C.

WSF/Ruby features both client and service APIs. The client API uses the WSClient class for one-way and two-way service invocation support. The service API for providing Web services uses the WSService class, with support for one-way and two-way operations. Both APIs incorporate the WSMessage class to handle message-level options.
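The WSClient and WSService APIs themselves aren't reproduced here. As a rough, wire-level illustration of the kind of two-way exchange a SOAP client performs -- sketched in Python against a hypothetical endpoint rather than in WSF/Ruby itself -- a SOAP 1.1 invocation boils down to an HTTP POST of an envelope:

    # Wire-level illustration of a two-way SOAP 1.1 request/response exchange.
    # This is NOT the WSF/Ruby API; the endpoint, namespace, and operation are hypothetical.
    import urllib.request

    ENDPOINT = "http://example.com/services/echo"  # assumed service endpoint

    SOAP_REQUEST = """<?xml version="1.0" encoding="UTF-8"?>
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <echo xmlns="http://example.com/echo">
          <text>Hello from a SOAP 1.1 client</text>
        </echo>
      </soapenv:Body>
    </soapenv:Envelope>
    """

    def call_soap(endpoint=ENDPOINT, payload=SOAP_REQUEST):
        """POST a SOAP envelope and return the raw XML response (the two-way case)."""
        req = urllib.request.Request(
            endpoint,
            data=payload.encode("utf-8"),
            headers={"Content-Type": "text/xml; charset=utf-8",
                     "SOAPAction": '"urn:echo"'},
        )
        with urllib.request.urlopen(req) as resp:
            return resp.read().decode("utf-8")

    if __name__ == "__main__":
        print(call_soap())

The REST-style alternative the framework also supports would be a plain HTTP GET or POST against the same service's resource URL, with no envelope at all.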

WSF/Ruby 1.0 supports basic Web services standards, including SOAP 1.1 and SOAP 1.2. It also provides interoperability with Microsoft .NET, the Apache Axis2/Java-based WSO2 Web Services Application Server (WSAS), and other J2EE implementations. Key features of WSF/Ruby 1.0 are:

  • Comprehensive support for the WS-* stack, including the SOAP Message Transmission Optimization Mechanism (MTOM), WS-Addressing, WS-Security, WS-SecurityPolicy, and WS-ReliableMessaging.
  • Secure Web services with advanced WS-Security features, such as encryption and signing of SOAP messages. Users also can send messages with UsernameToken and Timestamp support.
  • Reliable messaging for Web services and clients.
  • REST support, so a single service can be exposed both as a SOAP-style and as a REST-style service. The client API also supports invoking REST services using HTTP GET and POST methods.
  • Class mapping for services, enabling a user to provide a class and expose the class operations as service operations.
  • Attachments with Web services and clients that allow users to send and receive attachments with SOAP messages in optimized formats and non-optimized formats with MTOM support.
According to WSO2, WSF/Ruby has been tested on Windows XP with Microsoft Visual C++ version 8.0, as well as with Linux GCC 4.1.1.

LogMeIn files for IPO, sets up the market for cloud-as-PC-support continuum

I see that remote PC services start-up LogMeIn is going to conduct an IPO on Nasdaq in the not too distant future, pointing up the vibrancy of the intersection of cloud computing and the personal computer.

And the encouraging growth that LogMeIn has enjoyed shows that the cloud, remote maintenance, and the long-term health of the PC are all quite mutually compatible, thank you. Microsoft has it right when it chimes about "software and services," just as there will long be a need for PCs and the cloud services they will increasingly rely on.

So congrats to LogMeIn, they are a great bunch of folks. Disclosure: LogMeIn has been a sponsor of BriefingsDirect podcasts. I am sure glad I had that chat about the Web as operating system way back when with Mike and Joe.

This intent to file seems only the beginning of LogMeIn's next phase. According to the filing, LogMeIn plans to raise up to $86 million from the IPO, but this could change. It may not be that large a sum, but it shows how Internet firms don't require the capital they once did to grow substantially. And there's always the possibility of LogMeIn making acquisitions to fill out its services and support portfolio.

Nice thing about the LogMeIn services is that they straddle the consumer, SOHO, SMB and enterprise markets. The services can cut across them all -- adding value while cutting costs on the old way of doing things. Nice recipe these days. More telcos and service providers will need such abilities too.

As I've said, I expect to see more telcos buying software and services vendors in 2008 to expand their offerings beyond the bit-pipe and entertainment content stuff. If you can serve it up on subscription, well then do it broadly and monetize the infrastructure as many ways as possible.

Tuesday, January 8, 2008

IBM remains way out in front on information access despite Microsoft's Fast bid

Ever notice that Microsoft -- with cash to burn apparently -- waits for the obvious to become inevitable and then ends up paying huge premiums for companies in order to catch up to reality? We saw it with aQuantive, Softricity and Groove Networks.

It's happened again with today's $1.2 billion bid by Microsoft for Norway's Fast Search and Transfer. Hasn't it been obvious for more than three years (at the least) that enterprise information management is an essential task for just about any large company?

That's why IBM has been buying up companies left and right, from Ascential to FileNet to Watchfire to DataMirror to Cognos. Oracle has been on a similar acquisitions track. Google has, exceptionally, produced search appliances (hardware!) to get a toe-hold in the on-premises search market, and Google and Yahoo! have also both been known to make acquisitions related to search. EMC even got it with Documentum.

Ya, that's what I'd call obvious. What's more, data warehousing, SAN, data marts and business intelligence (BI) have emerged as among the few consistent double-digit growth areas for IT spending the last few years.

So now some committee inside of Microsoft took a few months to stop fighting about whether SQL Server, SharePoint and Office 200X were enough to get the job done for the Fortune 500's information needs. I guess all that Microsoft R&D wasn't enough to apply to such an inevitable market need either. What do those world-class scientists do at Microsoft? Make Bill Gates videos?

And so now Microsoft smartens up to internal content chaos (partly the result of all those MS Office files scattered hither and yon), sees the market for what it is rather than what it would like it to be, and pays a double-digit multiple on revenues for Fast. Whoops, should have seen that coming. Oh, well, here's a billion.

It's almost as if Microsoft thinks its competitors and customers are stupid for not just using the Windows Everywhere approach when needs arise in the modern distributed enterprise. It's almost as if Microsoft waits for the market to spoil their all-inclusive fun (again), and then concedes late that Windows Everywhere alone probably won't get the job done (again). So the MBAs reach into the Redmond deep pockets and face reality, reluctantly and expensively.

Don't get me wrong, I think highly of Fast, know a few people there (congrats, folks), and was a blogger for Fast last year. I even did a sponsored podcast with Fast's CEO and CTO. That's a disclosure, FYI.

And I'm a big fan of data, content, information, digital assets, fortune cookies -- all of it being accessible, tagged, indexed and made useful in context to business processes. Metadata management gives me goosebumps. The more content that gets cleaned, categorized and easily found, the better. I lean toward the schema. I'm also quite sure that this information management task is a prerequisite for general and successful implementations of service-oriented architectures and search-oriented architectures.

And I'm not alone. IBM has been building a formidable information management arsenal, applying it widely within its global accounts and as a new value-add to its many other software and infrastructure offerings. The metadata approach also requires hardware and storage, not to mention professional services. IBM knows getting your information act together leads to SOA (both kinds) efficiencies and advantages. And -- looking outward -- as Big Blue ramps up its Blue Cloud initiatives, content access and management across domains and organizational boundaries take on a whole new depth and imperative.

And now we can be sure that Microsoft thinks so too. Finally. My question is with all that money, and no qualms about spending lavishly for companies, why doesn't Microsoft do more acquisitions proactively instead of reactively?

Both Microsoft's investors and customers might appreciate it. The reason probably has to do with how Microsoft manages itself. Perhaps it ought to do more internal searches for the obvious.

Thursday, January 3, 2008

Genuitec's Pulse service provides automated updates across Eclipse, Android, ColdFusion

MyEclipse IDE vendor Genuitec is stepping up to the plate on general developer downloads to take a swing at the task of automated and managed updates, plug-ins, and patches for such widespread tools as Eclipse, Android, and ColdFusion.

The free Pulse service helps bring a "single throat to choke" benefit to downloads but without the need to remain dependent on a single commercial vendor (or track all the bits yourself) amid diverse open source or ecology offerings. Fellow independent IT analyst Tony Baer has a piece on Pulse. The service is in beta, with version 1.0 due in early 2008.

Google's Android SDK -- a software stack focused on mobile devices -- and Android Development Tools (ADT) will come preconfigured to run with one click in Pulse's “Popular” profile area, Genuitec announced in December. That shows how quickly new offerings can be added to a Pulse software catalog service. Pulse refresh includes support for developers using Mac, Linux and Windows.

Pulse requires an agent be downloaded to an Eclipse Rich Client Platform (RCP).

The services put Genuitec squarely in the "value as a service" provider role to many types of developers. As we know, developers rely on communities as focal points for knowledge, news, updates, shared experience, code, and other online services.

As we've seen in many cases, a strong community following and sense of shared value among developers often bodes well for related commercial and FOSS products alike. Genuitec is obviously interested in wider use of MyEclipse, and is therefore providing community innovation as a channel.

I also expect that Genuitec will move aggressively into "development and deployment as a service" offerings in 2008. There's no reason why a Pulse set of services could not evolve into a general platform for myriad developer resources and increasingly tools/IDEs as a service. Indeed, Genuitec is finding wider acceptance by developers of developing and deploying in the cloud concepts and benefits. Disclosure: Genuitec is a sponsor of BriefingsDirect podcasts.

So while Amazon offers developers runtime, storage, and databases as services -- paid for as use and demand increase -- the whole question of tools is very interesting. The whole notion of free or very inexpensive means of development and deployment will prove a major trend in 2008, I predict.

Now there are virtually no barriers to developer innovation and entrepreneurial zeal moving from the whiteboard to global exposure and potential use. And that can only be good for users, enterprises, ISVs, and the creativity that unfettered competition often unleashes.

Wednesday, December 19, 2007

A logistics and shipping carol: How online retailers Alibris and QVC ramp up for holiday peak delivery

Listen to the podcast. Or read a full transcript. Sponsor: UPS.

Santa used to get months to check his list and prepare for peak season, but online and television retailers such as Alibris and QVC need to take orders and make deliveries in a matter of days. The volume and complexity of online shipping and logistics continue to skyrocket, even as customer expectations grow more exacting. Shoppers routinely place gift orders on Dec. 21 and expect the goods on the stoop two days later.

For global shopping network QVC, the task amounts to a record peak of 870,000 orders taken in a single day -- more than three times typical volume. For rare book and media seller Alibris, the task requires working precisely across ecologies of sellers and distributors. For partners like UPS, the logistics feat is a huge undertaking that spans the globe and demands technology integration with little room for error at record-breaking paces.

Listen as innovative retailers Alibris and QVC explain how they deal with huge demands on their systems and processes to meet the holiday peak season fulfillment. One wonders how they do it without elves, reindeer, or magic.

Join Mark Nason, vice president of operations at Alibris, and Andy Quay, vice president of outbound transportation at QVC, as we hear how the online peak season comes together in this sponsored podcast moderated by Dana Gardner, president and principal analyst at Interarbor Solutions.

Here are some excerpts:
What we strive for [at Alibris] is a consistent customer experience. Through the online order process, shoppers have come to expect a routine that is reliable, accurate, timely, and customer-centric. For us to do that internally it means that we prepare for this season throughout the year. The same challenges that we have are just intensified during this holiday time-period.

Alibris has books you thought you would never find. These are books, music, movies, things in the secondary market with much more variety, and that aren’t necessarily found in your local new bookseller or local media store.

We aggregate -- through the use of technology -- the selection of thousands of sellers worldwide. That allows sellers to list things and standardize what they have in their store through the use of a central catalogue, and allows customers to find what they're looking for when it comes to a book or title on some subject that isn't readily available through their local new bookstore or media seller.

You hit on the term we use a lot -- and that is "managing" the complexity of the arrangement. We have to be sure there is bandwidth available. It’s not just staffing and workstations per se. The technology behind it has to handle the workload on the website, and through to our service partners, which we call our B2B partners. Their volume increases as well.

So all the file sizes, if you will, during the transfer processes are larger, and there is just more for everybody to do. That bandwidth has to be available, and it has to be fully functional at the smaller size, in order for it to function in its larger form. ... These are all issues we are sensitive to, when it comes to informing our carriers and other suppliers that we rely on, by giving them estimates of what we expect our volume to be. It gives them the lead-time they need to have capacity there for us.

Integration is the key, and by that I mean the features of service that they provide. It's not simply transportation; it's the trackability, and it's the scaling, both on the volume side and in allowing us to give the customer information about the order, when it will be there, or any exceptions. They're an extension of Alibris in terms of what the customer sees for the end-to-end transaction.

[For QVC] peak season 20 some years ago was nothing compared to what we are dealing with now. This has been an evolutionary process as our business has grown and become accepted by consumers across the country. More recently we’ve been able to develop with our website as well, which really augments our live television shows.

... In our first year in business, in December, 1986 -- and I still have the actual report, believe it or not -- we shipped 14,600 some-odd packages. We are currently shipping probably 350,000 to 450,000 packages a day at this point. We've come a long way. We actually set a record this year by taking more than 870,000 orders in a 24-hour period on Nov. 11. This led to our typical busy season through the Thanksgiving holiday to the December Christmas season. We'll be shipping right up to Friday, Dec. 21 for delivery on Christmas.

We’ve been seeing customer expectations get higher every year. More people are becoming familiar with this form of ordering, whether through the web or over the telephone. It's as close to a [just-in-time supply chain for retail] as you can get it. As I sometimes say, it's "just-out-of-time"! We do certainly try for a quick turnaround.

The planning for this allows the supply chain to be very quick. We are like television broadcasts. We literally are scripting the show 24-hours in advance. So we can be very opportunistic. If we have a hot product, we can get it on the air very quickly and not have to worry about necessarily supplying 300 brick-and-mortar stores. Our turnaround time can be blindingly quick, depending upon how fast we can get the inventory into one of our distribution centers.

We carefully plan leading up to the peak season we're in now. We literally begin planning this in June for what takes place during the holidays -- right up to Christmas Day. We work very closely with UPS and their network planners, both ground and air, to ensure cost-efficient delivery to the customer. We actually sort packages for air shipments, during critical business periods, to optimize the UPS network.
Listen to the podcast. Or read a full transcript. Sponsor: UPS.