Tuesday, September 3, 2013

TOGAF 9 certification reaches 25,000 milestone

This guest post comes courtesy of Andrew Josey, Director of Standards within The Open Group.

By Andrew Josey

Last Wednesday represented a significant milestone for The Open Group's TOGAF 9 certification program. In case you hadn't already seen it on our homepage, Twitter, or LinkedIn, the number of TOGAF 9 certified individuals has now surpassed the 25,000 mark, an increase of nearly 8,500 new certifications over the preceding 12 months!

For those of you who might be unfamiliar with the name, TOGAF, an Open Group Standard, is a proven enterprise architecture methodology and framework used by the world’s leading organizations to improve business efficiency.

Certification is available to individuals who wish to demonstrate they have attained the required knowledge and understanding of the current standard, and reaching the 25,000 mark is of course an incredible milestone for TOGAF. [Disclosure: The Open Group is a sponsor of BriefingsDirect podcasts.]

However, Wednesday's milestone isn't the only positive reflection of TOGAF adoption in recent times. Just weeks ago, the latest Foote report ranked TOGAF skills top of the 340 highest-paying non-certified IT skills and Open CA (an Open Group certification) top of the 289 highest-paying certified IT skills.

Superb certifications

The report, based on US and Canadian data, stated that: “vendor independent organizations such as The Open Group have far fewer resources for promoting their programs but what they do have are superb architecture certifications that employers need and highly value and we see their certifications holding their value if not gaining ground.”

There is no doubt that the success of both can be partially attributed to a huge surge in the popularity of open standards over the last few years -- including TOGAF and Open CA.

The economic downturn has its role to play here, of course. Since the financial crisis began, open standards have helped by providing a framework that allows Enterprise Architects to save their companies money, maintain and increase profitability and drive business efficiencies. And, on a professional level, certification has helped Enterprise Architects to differentiate themselves, delivering better job security and employment prospects through testing times.

However, with the worst of the financial crisis hopefully behind us, the rate of certifications shows little sign of slowing. The graph below outlines the rise in the number of TOGAF 9 certifications since March 2009:

[Graph: growth in the number of TOGAF 9 certifications since March 2009]

As you can see from the graph, there are two levels defined for TOGAF 9 “people certification,” and these are known as TOGAF 9 Foundation and TOGAF 9 Certified, respectively.

To provide you with a brief background on these, certification to TOGAF 9 Foundation demonstrates that the candidate has gained knowledge of the terminology, structure, and basic concepts of TOGAF 9, and also understands the core principles of enterprise architecture and the TOGAF standard.
Certification to TOGAF 9 Certified provides validation that in addition to the knowledge and comprehension of TOGAF 9 Foundation, the candidate is able to analyze and apply this knowledge.

Self study

However, while there are now 50 TOGAF 9 training partners across the globe and 58 accredited TOGAF 9 courses to choose from, more and more of these certifications are self-taught. At the last count we had sold more than 7,700 electronic self-study packs for TOGAF 9 certification, making it the number one best-seller in our electronic commerce store. These have proved particularly popular in smaller global markets where face-to-face training courses may be less accessible or more costly.

Of course, as we celebrate a great milestone in its evolution, credit must go to the many people who have helped develop, and continue to help develop, the TOGAF standard, in particular the members of The Open Group Architecture Forum. Today's milestone is a testament not only to the value placed in trusted, globally accepted standards supported through certification, but also to their endeavors.

It was not so long ago we announced on this very blog that TOGAF had become a globally recognized, registered trademark. Now, just a few months later, we celebrate another significant milestone in the evolution of TOGAF. Long may this evolution (and the milestones) continue!

More information on TOGAF 9 Certification, including the directory of Certified professionals and the official accredited training course calendar, can be obtained from The Open Group website here: http://www.opengroup.org/togaf9/cert/

This guest post comes courtesy of Andrew Josey, Director of Standards within The Open Group.


Thursday, August 29, 2013

Spot buying automated by Ariba gives start-up Koozoo a means to shop more efficiently

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

This latest BriefingsDirect podcast, from the recent 2013 Ariba LIVE Conference in Washington, D.C., explores how the spot-buying process is benefiting buyers and sellers, and how they're using the Ariba Network and Discovery to conduct tactical buying.

Here to explain the latest and greatest around agile procurement for businesses of any size, we're joined by Ian Thomson, Koozoo's Head of Business Development, based in San Francisco. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba, an SAP company, is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: Tell us about Koozoo.

Thomson: Koozoo is a technology company based in San Francisco, and we are the easiest way to share a live view with someone in particular, or a broader community. We have built a very simple web application that converts your old smartphone into a geolocating webcam.

For example, you would pull the old smartphone out of your sock drawer, or whatever drawer that you have it in, and you dust it off. You go over to your WiFi network and download the Koozoo application, and that now makes your phone a webcam that you can stream and point to a website, if you had a website that you wanted to display it on, or on to the Koozoo site, where it could be private for your consumption.

On a personal level, it could be a baby monitor or something like that; or the view could be shared with a limited community, like a neighborhood watch group or merchant association that wanted to share their views, or more broadly with the Koozoo community.

Happening around town

If you look at Koozoo, it looks like we're making the equivalent of Google Street View, but live. Right now, we're limited to San Francisco, but we're able to see what's happening around town.

Gardner: That's really interesting. There is a plethora of end devices that have cameras and WiFi, and you're able to take advantage of that and give people the opportunity to innovate around the notion of either public or private streams. Is it just a live stream? Or can people take a photo view every minute or six minutes, or anything like that?

Thomson: Not just yet, but that is certainly the direction we are moving in terms of building an ecosystem that allows you to make alerts for a certain movement or alerts when there are certain sounds that are anomalous, and are ones that you would want to record or see. But now, it's just an ambient streaming.

Right now, we support the iOS suite, an iPad or an old iPhone later than the 3GS generation. Before that, the devices didn't have the necessary hardware components, the chipset, to support live streaming with encoding the way we do it.

We are on some Android platforms, but Android is a very fragmented market, and as you develop for Android, you have to keep that in mind. So we focus on certain platforms within the Android market first. As we define the product-market fit, we will develop the app and then propagate it across the Android market.

Gardner: So you're a startup -- resource constrained is, I believe, the way people would refer to that -- and you still need to buy and sell goods and services. When you were tasked with something that wasn’t a strategic, organized, or recurring type of purchase, how did you find what you needed, and how did Ariba factor into that?

Thomson: We're not really a buying organization per se. We're an engineering-based company. We have very few people and, when we need to buy something, it's usually something like walking over to Office Depot and picking up a couple of pencils, because we need pencils.

Primarily, we had been looking at Amazon for that sort of purchase. As soon as we needed to purchase a few more things, we moved from pure retail to more of a wholesale sort of buyer. We needed to buy more phones to do testing on. We looked at Alibaba and we looked at Amazon, primarily because I had only heard of those two, to buy a larger quantity of used mobile phones in this case, because I wanted to take advantage of refurbished phones being more cost-effective for us for testing.

Initially I found Ariba through an introduction. Somebody said, "If you're looking at Alibaba, why don’t you look at Ariba? It's more suitable for what you're looking for." And it certainly turned out to be the case.
Initially, we made that one purchase and subsequent purchases of mobile devices. It turned out to be a really good mechanism for doing some market analysis. I didn't know what I was buying. I'm not a mobile device expert, and I don’t come from consumer electronics.

I was able to learn about what it was I didn’t know and be able to iterate on my request for a purchase. I was able to iterate on that publicly, so that everybody got to see. As I got smarter about what I needed, I was able to communicate that with the potential vendors.

Sense of confidence

Gardner: Because we're here at Ariba LIVE, the conference, we've been hearing news from Ariba about spot buying as a capability they're investing in and delivering through their network and Discovery process. Is there something about having a pre-qualified list of suppliers that in some way benefited you or gave you a sense of confidence versus going just on the open World Wide Web?

Thomson: I don't mean to be disparaging, but that was a little bit of the experience on Alibaba. I did get a ton of responses that weren't qualified, because the senders hadn't read my request very carefully. They were offering me something that clearly wasn't supporting what I needed, but rather was supporting what they were trying to sell.

I had done some Google searches and tried to find vendors, whether it was for these used devices or for a specialized widget that I needed to have made and sourced. So doing that on Google was pretty tough, time consuming, and something that I wasn’t expert at.

I didn’t have the capability to ask the right questions. I didn’t even know whether I was in the ballpark of what expectations should be in terms of time for delivery, cost, or what an acceptable small batch really was, because initially I needed a small batch to test the product that I was developing.

I don't really see us ever becoming a very large buying organization. If we continue to develop this product, we already have a group of two or three suppliers that we've developed our relationship with over Ariba Discovery, which is the spot-buying platform that they have.
I don’t know if I necessarily would need more. I don’t know what our procurement needs are going to be moving forward. We're a little bit more interested in looking at it as a way to respond to potential leads. I'm on the sales side, not on the procurement side of our company.

As I look at moving forward, we do need to capture new leads. I see how we could position our profile on Ariba Discovery to respond to inbound interest on that platform for something that we could solve, whether that’s a surveillance system, a public safety system, or something like that. We would certainly be a very cost effective solution for that. So to be able to respond to inbound interest could be a very good place for us to go.

Gardner: So, it's a two-way channel. You were able to use Ariba Discovery and spot buying to find goods and services quickly and easily without a lot of preparation and organization, and conversely, there might be a lot of buyers out there using Ariba Discovery looking for a streaming capability and you would be popping up on their lists. Have you done that yet? What's the plan?

Product-development cycle

Thomson: That would be the hope, that there are a lot of people that want to buy it and use it. I haven't focused on that just yet. As a company, we're in a product-development cycle, not really in the business development sales cycle just yet. Ariba could be a very good way of figuring out what the market wants.

Gardner: So, it's not just a sales execution channel, but also a market research and business development channel, finding out what's available. You don’t know what people want, until you get it out in front of them, and of course, the spend for doing that through advertising or direct marketing is pretty daunting. Something like Ariba Discovery gives you that opportunity to do sales and research at no cost.

Thomson: It certainly does.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Wednesday, August 28, 2013

Advanced IT monitoring delivers predictive diagnostics focus to United Airlines

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

The next edition of the HP Discover Performance Podcast Series explores how United Airlines demanded better performance and monitoring from IT -- and got it.

We'll see how United not only had to better track thousands of systems and applications from its newly merged company, but it also had to dig deeper and orchestrate an assortment of management elements to produce the right diagnostic focus, and thereby reduce outages from hours to mere minutes.

Learn more about how United has gained predictive monitoring and more effective and efficient IT performance problem solving from our guest, Kevin Tucker, Managing Director of Platform Engineering at United Airlines. The interview is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: You had major companies coming together with Continental and United. Help us understand the challenge that you faced as you tried to make things work together and your performance improve.

Tucker: The airline industry is one of the most complex IT environments in existence. I think it's really difficult for the average flyer to understand all of the logistics that have to happen to get a flight off the ground.

There are millions of messages moving through the system, from weight and balance, to reservation changes. There's the Network Operations Center (NOC) that has to make sure that we're on time with slots. There are fuel concerns. We have to ensure, with all of the connections that are happening out there, that the flights that feed into our hubs carrying our passengers get in on time, so that folks can make their connecting flights.

Moving people around is a very serious business. I have had people say, "Why do you guys take it so seriously? You're not launching nukes, or you are not curing cancer." But at the end of the day, people are counting on us to get them from point A to B.

That might be the CEO that’s trying to go out and close a big business deal. It might be someone trying to get to see an ailing family member, or someone who's lined up for what could be a life-changing interview. It's our job to get them there on time, in a stress-free manner, and reliably.

Complex environments

Tucker: We've had a very challenging last couple of years. We recently took two large, complex IT environments and merged them. We picked some applications from Continental, some applications from United, and we had to make these applications interface with each other when they were originally never designed to do so. In the process, we had to scale many of these systems up, and we did that at an incredible pace.

Over and above that, with the complex challenges of merging the two IT systems, we had this phenomenon that's building in the environment that can't be denied, and that's the explosion of mobile. So it was really a perfect storm for us.

We were trying to integrate the systems, as well as stay out in front of our customer demands with respect to mobile and self-service. It became a daunting challenge, and it became very apparent to us going in, that we needed good vital signs for us to be able to survive, for us to be able to deliver that quality of service our customers come to expect from us.

From my perspective, I have several customer sets. I have the executives. We don't really know how we're doing if we can't measure something. So we need to be able to provide them metrics, so that they understand how we are running IT.

I have the United employees, and that could be the line mechanic, to the gate agent, to the lobby agent. And then we have our flyers. And all of those people deserve reliable data and systems that are available at all times. So when you factor all of that in, we knew we needed good vital signs, so that we could ensure these applications were functioning as designed.

We didn't get there as fast as we would like. It was quite a feat to integrate these systems, and we landed on a collapsed Passenger Service System (PSS) back in March of 2012. Unfortunately, given that we were a little late to the game, we had some tough days, but we rallied. We brought HP to the table and told them that we don't want to be average. We want to be world-class.

We created a battle plan and got the troops energized. We deployed the power that's available to us within the HP Management Suite, and we executed that plan.

It wasn't without challenge, but we are very proud of the work that we've done in a very short period of time. Within an eight-month journey, we have gone from being average at best, to I think one of the best around, with the stuff we have gotten after.

So it can be done. It just takes discipline, commitment, and a will to be the best. I'm very proud of the team and what they've accomplished.

Gardner: Kevin, I like the way you refer to this as "vital signs." When you put in place the tools, the ability to get diagnostics, when you had that information at your fingertips, what did you see? Was it a fire hose or a balanced scorecard? What did you get and what did you need to do in order to make it more actionable?

Using all the tools

Tucker: We own quite a bit of the HP product set. We decided that in order to be great, we need to use all of the tools on our tool belt. So we had a methodical approach. We started with getting the infrastructure covered. We did that through making sure SiteScope was watching servers for health. We made sure the storage was monitored, the databases are monitored, the middleware components, the messaging queues, etc., as well as all of the network infrastructure.

What really started to shine the light on how we were performing out there, as we started rolling all of those events up and correlating them into BSM, was that we were able to understand what impact we were having throughout the environment, because we understood the topology-based event correlation. That was sort of the first model we went at.

You mentioned diagnostics. We started deploying that very aggressively. We have diagnostics deployed on every one of our Java application servers. We also have deployed diagnostics on our .NET applications.

What that has done for us is that we were able to proactively get in front of some of these issues. When we first started dabbling in diagnostics, it was more of a forensics type activity. We would use that after we were in an incident. Now we use diagnostics to actually proactively prevent incidents.

We're watching for memory utilization, database connection counts, and time spent in garbage collection, etc. Those actually fire alerts that weave their way through BSM. They cut a Service Manager ticket, and we have automation that picks that Service Manager ticket up, assumes ownership, goes out and does remediation, and refreshes the monitor. When that’s successful, we close the ticket out, all the while updating the Service Manager ticket to ensure we're ITIL compliant.

In many cases we have gotten many of those restores down in under five minutes, where before it was way north of an hour.
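To make that alert-to-ticket-to-remediation loop concrete, here is a minimal, hypothetical sketch of the flow Tucker describes. The class and function names are stand-ins invented for illustration; they are not HP BSM or Service Manager APIs, and the thresholds and remediation step are made up.

```python
# Hypothetical sketch of an alert-to-ticket-to-remediation loop.
# Stand-ins for BSM alert routing, Service Manager ticketing, and remediation
# automation; none of these names are real HP product APIs.

import time
from dataclasses import dataclass


@dataclass
class Alert:
    host: str          # server that fired the alert
    metric: str        # e.g. "db_connection_count", "gc_time_pct"
    value: float
    threshold: float


def open_ticket(alert: Alert) -> str:
    """Stand-in for cutting a ticket; returns a ticket ID."""
    ticket_id = f"INC-{int(time.time())}"
    print(f"[{ticket_id}] opened: {alert.metric}={alert.value} on {alert.host} "
          f"(threshold {alert.threshold})")
    return ticket_id


def remediate(alert: Alert) -> bool:
    """Stand-in for the automated fix, e.g. recycling a connection pool."""
    print(f"[remediation] recycling pool on {alert.host} for {alert.metric}")
    return True  # pretend the fix worked


def refresh_monitor(alert: Alert) -> bool:
    """Stand-in for re-running the monitor to confirm the metric is healthy."""
    return True


def handle(alert: Alert) -> None:
    ticket = open_ticket(alert)                       # cut the ticket
    print(f"[{ticket}] ownership assumed by automation")
    if remediate(alert) and refresh_monitor(alert):   # fix, then re-check
        print(f"[{ticket}] closed; audit trail kept for ITIL compliance")
    else:
        print(f"[{ticket}] escalated to the operations team")


handle(Alert(host="app-web-017", metric="db_connection_count",
             value=950, threshold=800))
```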

Through the use of these tools, we have certainly gained better insight into how our applications are using database connections and how much time we're spending in garbage collection. It really helps us tune, tweak, and size the environments in a much more predictive fashion versus more of a guess. So that's been invaluable to us.

You're probably picking up on a theme that's largely operationally based. We've begun making pretty good inroads into DevOps, and that's very important for us. We're deploying these agents and these monitors all the way back in the development lifecycle. They follow applications from dev to stage, so that when we get to prod, we know the monitors are solid. Application teams are able to address performance issues in development.

These tools have really aided the development teams that are participating in the DevOps space with us.

Gardner: Is there something about the ability for HP to span across these activities that led you to choose them and how did you decide on them versus some of the other alternatives?

Clear winner

Tucker: When we merged and got through the big integration I spoke of last year, clearly, we were two companies. We had two products. It became very clear to us, without a doubt, that because of the depth and breadth HP could provide us across stacks, and the ability to go up and down within those stacks, they were the clear winner.

Then you start asking why we are reinventing the wheel once something gets to production. The LoadRunner and VuGen scripts that are created back in the development and quality assurance (QA) cycle become your production monitors, and that prevents us from having to perform double work, if you will.

That's a huge benefit that we see in the suite. When you couple that with the diagnostic type information I referred to, that's giving our development teams great insight way back in the development cycle. As you look at the full lifecycle, the HP toolset allows you to span development stage into production and provide a set of dashboards that allow for the developers to understand how their sets of service are running.

We were very quickly able to bring them on board, because at the end of the day, the human factor sets in: what's in it for me? I hear the ops and engineering guys telling me we need to monitor your application, but when you peel it back, I'm harking back to my days when I used to run software.

Developers are busy, and when you show them the value -- that the director of middleware services or business services has a dashboard where he can go look at how his services are performing -- they very quickly identify that value, and they're very keen on not getting those calls at 3 o'clock in the morning.

It's a slam-dunk for us, and as I say, there was no doubt in our mind as we started down our journey that the HP toolset just couldn't be rivaled in that space.

We're very proud of our accomplishment. We're living proof. We're in a complex, fast-moving industry. We were starting from much further behind than we would have liked, and we bought in and believed in the tools. We used them, partnering with HP, and we were able to come a long way.
What really started moving the dial for us with respect to remediation time, lowering mean time to restore (MTTR), and drastically improving our availability is the use of diagnostics. It's automated restores for the things that we can automate. We can't restore everything automatically, but if we can take the noise away so our operations teams can focus on the tough stuff, that's what it's all about with the BSM TBEC (Topology Based Event Correlation) views, the event-based correlation.

Before, as we were making our journey, we started very quickly getting good at identifying an issue before the customer called in. That was not always the case. And that's step one. You never want a customer calling in and saying, "I want to let you know your application is down," and you say, "Thank you very much. We'll take a look at that."

Very difficult

That shaves a few minutes, but honestly then the Easter egg hunt starts. Is it a server, a network, a switch, the SAN, a database, or the application? So you start getting all of these people on the phone, and they start sifting through logs and trying to understand what this alert means with respect to the problem at hand. It's very difficult when you have thousands of servers and north of a thousand applications spread across five data centers. It's just very difficult.

Using correlated views, understanding the dependencies, and seeing the item within the infrastructure that's causing the problem turn red and bubble up to the other applications that are impacted allows us to zero in and fix that issue right off the bat, versus losing an hour getting people on the phone checking things to figure out whether it's them or not.

So through automating what can be automated for restores, and through that event-based correlation, our operational performance went from what I would call maybe a D- to an A+.
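The correlation idea is easy to picture in code. The sketch below is a toy illustration, with a made-up topology and no connection to HP's actual TBEC implementation: a single failing infrastructure item is mapped to every application that depends on it, so the team starts at the shared root cause instead of chasing symptoms across red dashboards.

```python
# Toy illustration of topology-based correlation: given which applications
# depend on which infrastructure items, one failing component "bubbles up"
# to every impacted application. The topology is invented for the example.

from collections import defaultdict

# application -> infrastructure items it depends on (hypothetical)
topology = {
    "reservations": ["web-cluster-1", "db-res-01", "san-array-2"],
    "check-in":     ["web-cluster-1", "db-res-01"],
    "bag-tracking": ["web-cluster-3", "mq-broker-2", "san-array-2"],
}


def impacted_apps(failed_item):
    """Return every application whose dependency chain includes the failed item."""
    reverse = defaultdict(list)
    for app, deps in topology.items():
        for item in deps:
            reverse[item].append(app)
    return sorted(reverse.get(failed_item, []))


# A raw event stream might show dozens of red applications; the correlated
# view points at the one shared component instead.
print(impacted_apps("db-res-01"))    # ['check-in', 'reservations']
print(impacted_apps("san-array-2"))  # ['bag-tracking', 'reservations']
```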

As we've matured and gained insight into our environment with metrics, we're able to stop the firefighting mode. It's allowed us to step back and start working on engineering with respect to how we utilize our assets. With all of this data, we now understand how the servers are running. Through getting engaged early in DevOps, with some of the rich information we get through load testing and whatnot, we're able to size our environments better.
As part of that, it gives us flexibility with respect to where we place some of these applications, because now we're working with scientific data, versus gut feel and emotion. That's enabling us to build a new data center and, as part of that, we're definitely looking at increasing our geographic dispersion.

The biggest benefit we're seeing, now that we have gotten to more or less a stable operation, is that we're able to focus in on the engineering and strategically look at what our data center of the future looks like. As part of that, we're making a heavy investment in cloud, private right now.

We may look at bursting some stuff to the public side, but right now, we're focused on an internal cloud. For us, cloud means automated server build, self-service, a robot that's building the environment so that the human error is taken out. When that server comes online, it's in asset management, it's got monitors in place, and it was built the same way every time.

Now that we are moving out of the firefighting mode and more into the strategic and engineering mode, that's definitely paying big dividends for us.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.


Thursday, August 22, 2013

Combining big data and cloud capabilities for ecommerce matches buyers and sellers like never before

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.

Social tools, big data and business networks are reshaping e-commerce in such a way that buyers and sellers are linking up in ways not possible even a few years ago.

In one example, Ohio-based LLT Barcode & Label has found powerful new ways to develop sales and leads inexpensively using Ariba Discovery to better connect with qualified new customers.

To learn how, BriefingsDirect recently sat down with Rachel Spasser, Senior Vice President of Marketing at Ariba, an SAP Company, and Kris Hart, the Account Manager at LLT Barcode & Label in Stow, Ohio. The moderator is Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Ariba is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: This whole concept of finding customers and developing leads using cloud, using the networked economy and how people are linked up now, has more and more become a function of the data that people are able to access.

Tell me a little bit about how things have changed, say now, versus five years ago in terms of finding sellers for buyers, and buyers for sellers?

Spasser: We're talking about major changes that have gone on in the past few years. If you think about the mantra around marketing, it's always been to make yourself, your product, and your company relevant to potential buyers.

Ten or 15 years ago, as the Internet was really coming into its own, it provided significantly more data that helped us understand demographically, and a little bit behaviorally, who our customers were and what they were doing.

When we talk about the networked economy, business networks, and connected commerce, we're really taking that to a whole other level, where as marketers, we're able to hyper-target our potential customers, not just on their demographics, but also be able to predict when they're going to be ready to buy.

They overtly raise their hand and say, "Hey, I'm interested in buying by a connected RFI or RFQ process." But we can also analyze the myriad data that's available today and put both unstructured data and structured data together in a way that helps us predict when they are going to be ready to buy. That's when we can actually present them with compelling offers. So it really does change the whole nature of the game.

What's appealing?

Gardner: Kris Hart, do you really care whether it's the cloud or the networked economy, or whether it's structured or unstructured data? What appeals to you in all this in terms of the change that's going on?

Hart: In general, you don't really care one way or another how you get it. The cloud and e-commerce make everything a lot easier, not just for the seller, but also the buyer. It really helps with the relationship, and being on the cloud is much simpler, faster, and easier.

Gardner: Tell us a little bit about LLT Barcode & Label, so that we have a better understanding of your business, the vertical, the challenges that you face in terms of growing your business, and finding new customers.

Hart: We started about 16 years ago. It was a small company, and we're still a small company, where it's hard to get out in front of buyers. When we started, we would send out about a half-million catalogs a year. Those catalogs contain labels, thermal transfer ribbons, and laser labels, as well as items like mobile computers and printers to print those items.

We started out as a small company, basically a distributor for any company that needs to ship via FedEx, UPS, or however. They need a label on that box to be able to ship their product. So as far as vertical markets go, if you name a company, they have to ship something, and they'd be able to use that.

So we play in every single market out there. We gross roughly about $12 million a year and we have about 15 people who work for us.

We would buy lists from Hoover’s, or somebody like that, and we would send out half a million catalogs a year.
We were hoping that everyone who got one of those catalogs would call us and want to buy something. But that's not the case, and over the past 15 years, that return on investment (ROI) has gone down significantly. So we had to find new ways to grow our business.

Internet has helped

The Internet has really helped. A lot of people at home are using sites like Amazon.com and eBay and they're getting used to just looking on the Internet when they need to buy something.

When they go into work, because it's so easy at home, they want to be able to do the same thing. It's a win-win for both sides. We don't have to print lots of catalogs every year, and they can just find us on our website or search engine via Google, Bing, whatever.

Gardner: Why wasn't it just creating a website and being part of the massive index that Google and other search engines use? Why was it better moving to an e-commerce cloud, a networked economy approach like Ariba Discovery?

Hart: The biggest thing with Ariba Discovery is that it's very much like match.com. We wanted to be in front of buyers when they were ready to place an order. We wanted to be that company that that person on the other end is going to come search for, see us, and give us an opportunity for their business.

If I walk in in the morning, I already have leads there. I already have an opportunity with a buyer who knows what they're looking for. They're out there on Ariba Discovery, asking me, "Can you help me?" That's a lot easier than like the catalog theory, hoping that someone runs across your website.

You know automatically what that buyer is looking for and where that buyer is, and hopefully within a couple of sentences, a couple of conversations with them, you can figure out when they need to buy.

Spasser: If you listen to what Kris said, the Internet has enabled business to be done faster and for him to find new customers faster and more efficiently.

But where the business networks really differentiate themselves from the Internet as a whole is that there is a common platform, where both buyers and sellers are going to connect and find very specific commodities or services that meet their specifications. Creating a platform that enables them to get to that level of granularity enables them not just to conduct commerce faster, but conduct commerce smarter.

More and more data

As we look at the value of the business networks, over the course of time, these networks are, in essence, aggregating more and more data and insights that ultimately help companies like LLT Barcode & Label in that mid-market that want to do business globally. At the same time, it helps global buyers that want to buy, for various reasons, from companies that are local suppliers, or from companies that might have green initiatives, or are women-owned or diverse-owned businesses.

So there are a lot of different criteria that go into determining who you want to buy from or sell to. The beauty of the business network is that it gives you the ability to both get insight and get a lot of information on the buyer or the seller, and to connect based on very, very specific criteria.

Gardner: Kris, one of the things that’s interesting in comparing a web-based open-Internet approach to something more standardized is speaking the same language, data, metadata, taxonomy. It might be one thing for you to put up information on your website, but somebody who's searching, or even the search engine algorithms, might not understand the importance of certain terms and lingo.

How has a commonly understood business-oriented environment helped with your results?

Hart: Well, with Ariba Discovery it’s great. They have postings for all sorts of commodities and items that a buyer may be looking for. A buyer can check them off on his screen and then send out an RFQ.

The nice thing about having that similar platform is that I set up a profile. I select specific commodities of the types of business and the items that we operate in. With that, a lead is automatically given to me. The buyers are looking for specific things, and it may be in his own terms or maybe a little bit different than how he may search it on the Internet and possibly miss us.

I can take my list of commodities, and then the lead will be automatically generated. It's going to come pretty close, and it's going to be right there for me to respond as to whether or not we can help them, yes or no.

We started looking at Ariba Discovery in late 2011. We were trying to find some other way to gain new customers and to grow the business. We decided on Ariba Discovery in January of 2012. There were a couple of RFPs out there, and I thought, "Why not give it a shot?"
So we responded to a couple, and the first RFQ that I responded to was for some blank labels and a couple of colored labels. A couple of weeks later I got an email from the client that had posted the RFQ, and they wanted us to bid on their entire distribution: labels, ribbons, and shipping items.

Too good to be true

And after going through their process of RFQs, add-in items, pricing, and everything else, we won about a $400,000 deal via Ariba Discovery. That was for a two-year contract, and that was the first one. It was almost too good to be true.

The great thing about it was that we started in the middle of January, when I responded to the RFQ and we had the contract and everything signed and they were buying from us in April. Very quick process.

Gardner: What was your cost in acquiring this customer?

Hart: With Ariba Discovery, you can use Ariba Discovery for free or you can use the Mid-Level or the Advantage. And we decided to go with the Advantage.

That had some other benefits with the profile and things of that nature, and it was only $3,000. We decided to take some of our marketing budget and put it into that and at least give it a shot. We were more than happy. We got our ROI right away, and it's a much better and simpler process than sending out a half a million catalogs a year.

Spasser: We love to hear stories like Kris' about immediate ROI. And we hear them over and over again from suppliers on the network.

One benefit, as we talked about, is the ability to define very specific criteria as a seller as to what you sell, so that when you do get posts that are directed to you, you know that they're relevant to you. You’re not wasting your time. You’re not sifting through a lot of unqualified opportunities as the seller. And the benefit to the buyer is the same.

As the buyer posts, their posts are being sent to people who are all qualified, who all have the ability to meet the specifications that they have created. Therefore, it's saving a lot of time on their part as well. When you look at the two-sided model, it really is a win-win both for the buyer and the seller within the network.

One of the hidden pieces of gold within the network is the data that's created over time from all of this transactional information, as well as data that is created individually by buyers and sellers. They're providing information on ratings and reviews: how quickly they got paid by a particular buyer, if they're a seller, or the reverse, how accurate and on-time deliveries are if you're a buyer looking at a seller.

So, the unstructured information, combined with a lot of structured data, is creating an opportunity to make much better matches and have buyers and sellers that are much more informed about one another as they enter into a business relationship.

We all know the more that we understand and have well-defined expectations of our business partners, the more effective those relationships are over the course of time.

Some structure

Gardner: Have you been able to find that the services around process for procurement, automating the invoicing and PO process, the data, even the transactions make that a more efficient customer relationship?

Hart: Yeah, it's funny that you bring that up. Right now, that particular customer has gone to Ariba for some other items that they are going to start using for procurement and different items like that. What we're hoping is that once that process takes over, then we can be in the supplier network and possibly even set up e-catalogs for them and make their process a whole lot more efficient and easier.

That way, if they have any transition in buyers, someone is on vacation, or anything like that, there are no questions about the items that they purchased from us. There are no questions on the pricing, on who to send the purchase orders to, or anything like that. It's all handled by Ariba.

Hopefully, we'll be able to set up a catalog, and they'll be able to smoothly purchase from us. Being on the supplier network already helps us, and as they get ramped up, it should make everything a whole lot smoother.

Spasser: It's great to hear about the benefits as it relates to efficiency from a process perspective on the network. One of the other benefits of setting up a catalog to do business with that one buyer on the network is that now you have the catalog set up on the network.

As you get additional buyers, you're using the same catalog and the same infrastructure. You can now start connecting with multiple buyers via the network, which enables you to create even greater efficiencies as a mid-market seller who may have limited resources within the company to issue and reconcile invoices.

The efficiency of setting up one relationship is the starting point. There is much more efficiency as you start to establish more-and-more relationships as a seller over the network. But there are other services that we offer over the network that also create value, and one example is our Dynamic Discounting Program.

For example, as a seller on the network if you would like to get paid faster from one of your buyers, you can offer different payment terms. Let's say your contract is a net-60 payment term, and you’d like to get paid faster because you’d like to make some capital investments in your company. You could offer to provide a discount for a net-10 payment.

Discount terms

We have many mid-market suppliers who have offered those discount terms to their buyers, so that they can self-fund growth for their own companies. That's yet another benefit of doing business on the network.
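The arithmetic behind such an offer is simple. The sketch below assumes a prorated sliding scale and a 2 percent maximum discount purely for illustration; the actual rates and terms are whatever buyer and seller negotiate, and nothing here reflects Ariba's specific Dynamic Discounting implementation.

```python
# Illustrative arithmetic for an early-payment (dynamic) discount.
# The 2% maximum rate and the prorated sliding scale are assumptions
# made for this example only.

def early_payment_amount(invoice, standard_days, pay_on_day, max_discount=0.02):
    """Prorate the discount by how many days earlier than standard the buyer pays."""
    days_early = max(standard_days - pay_on_day, 0)
    discount_rate = max_discount * days_early / standard_days
    return round(invoice * (1 - discount_rate), 2)


# A $100,000 net-60 invoice paid on day 10 instead of day 60:
print(early_payment_amount(100_000, standard_days=60, pay_on_day=10))  # 98333.33
```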

The third major benefit is the visibility and predictability that you have in relationship with the customer. At any point in time, as a seller on the network, you can go in and see the status of where your order stands, where your payment stands, or whether your invoice has been approved for payment.

It gives you a lot more visibility, and therefore predictability, as a mid-size business, into things like cash flow. You can anticipate when you're going to get paid, instead of having to pick up the phone 16 times and say, "When is the check going to come? Have you approved the invoice?"

So there's efficiency, visibility, and predictability, and then there is the opportunity to impact cash flow by using some of the more sophisticated tools like Dynamic Discounting to speed up payment.

Gardner: Tell me a little bit about the future roadmap that Ariba is embarking on around a wider, deeper, richer basket of goods and services.

Spasser: One of the things that we are doing, really focused on the lead generation side, is expanding the services that we offer around Ariba Discovery that can help companies like LLT Barcode & Label find new customers worldwide.

Related to that is the ability to create multiple profiles that are very highly targeted. So for the various products that LLT Barcode & Label offers, they could create a profile unique to different industries. Or if there are different industry requirements around labels and barcodes, they can create very specific, targeted profiles that really make them extremely relevant to buyers within various industries.

Another service that we are offering is what we are calling "Sellers You Might Like," and that's the ability, based on what someone is searching on, to proactively suggest sellers that meet their criteria.

So even if a buyer does a search and doesn't necessarily click on a particular supplier, if that supplier meets those criteria from an algorithm perspective, that supplier will be proactively suggested, especially if that supplier has received high ratings and reviews from other buyers on the network.
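One plausible way to picture that kind of suggestion is a simple score that blends how well a seller's declared commodities overlap the buyer's search with the seller's average rating. The sketch below is a guess for illustration only, with made-up data and weights; it is not Ariba's actual matching algorithm.

```python
# Hypothetical scoring for a "Sellers You Might Like" style suggestion:
# blend commodity overlap (0-1) with the seller's average rating (0-5).
# Weights and data are invented for the example.

def suggestion_score(buyer_commodities, seller_commodities, avg_rating,
                     rating_weight=0.3):
    """Higher scores mean a closer commodity match and better reviews."""
    if not buyer_commodities:
        return 0.0
    overlap = len(buyer_commodities & seller_commodities) / len(buyer_commodities)
    return (1 - rating_weight) * overlap + rating_weight * (avg_rating / 5)


wanted = {"thermal labels", "barcode ribbons"}
print(suggestion_score(wanted, {"thermal labels", "barcode ribbons", "printers"}, 4.8))
print(suggestion_score(wanted, {"office supplies"}, 5.0))
```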

So there are a couple of new services being offered from the Discovery side that will really accelerate the ability of mid-market companies like LLT Barcode & Label to grow their business and expand the value that they get from using the Ariba network.

How to get started

Gardner: Any 20-20 hindsight, based on your experiences so far, that you could offer to other people who are looking to get started on this cloud-based, commerce and lead generation activity?

Hart: Definitely give it a shot. Set up a free profile and start getting some leads generated, something that comes your way. Definitely make sure you're in the right commodities. Don't start clicking boxes for commodities that you can't offer. That way, your leads are more precise and more ready to go. Then see what happens for free.

If you want extra help and a little bit better outreach with the marketing profile, go up to the next step, or even all the way to the top like we did. We saw the benefits of doing that, with the marketing help we got from Ariba on how to use the Discovery tool. As I said, that paid off for us, but definitely give it a shot. If nothing else, just set up a free profile and see where that takes you.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Ariba, an SAP company.


Wednesday, August 21, 2013

Knowing what it takes to generate a minimum lovable product

(This guest post comes courtesy of Brian de Haaff, founder of Aha! Labs, Inc. Follow him at @bdehaaff.)

By Brian de Haaff

A can of cat food is a minimum viable product (MVP) when you are starving, but it’s highly unsatisfying and unlikely to generate a loyal following (of humans).

And there you have one of the problems of the MVP approach. It strives for "barely enough" and never for "good." And heaven forbid, the goal is never being great. It results in products that mostly work but never delight. No matter your source, the very definition of an MVP is generally similar to the following: The MVP is a new product with just the necessary features to be deployed, but no more.

The MVP is a curse for ambitious technology companies that want to grow. In an increasingly transactional world, growth comes from long-term customer happiness. And long-term customer happiness comes when customers adore your product or service and want you to succeed. You should be thinking about what it will take for customers to love you, not tolerate you. Really think about the type of mindset change it would take. What would it take to create a minimum lovable product (MLP)?

While the true intent of the MVP is a strategic approach to getting product out the door, when applied it yields unsatisfactory products. You might argue that it is best for prototyping and feedback gathering. Yet, my experience is that when it is the dominant product-development mindset in an organization, it becomes the overarching goal of every release and dictates the outcome. Even the product managers who are responsible for shepherding the product become intoxicated with mediocrity.

I have been in multiple larger organizations where the concept dominated executive, product, and engineering mind share. Rather than asking what customers really want, or what would delight them, the conversation always returned to what the minimum viable product is and when we can get it to market.

The problem is that the two major principles driving the MVP are flawed.

The MVP reduces waste

The MVP never reduces waste because it never delivers what the customer really wants. It presupposes that there will be iteration after iteration before the product truly meets customer requirements. Couple this with the fact that agile engineering environments prioritize "rapid output," and it's even more likely that what's delivered will not be tied to the organizational strategies and product vision.

The MVP accelerates time to market

The MVP may very well get you something to market first, but even in an emerging market you will not be a serious contender. Loyal customers who depend on your product are what matter. There were helpdesks before Zendesk, tablets before the iPad, electric cars before Tesla Motors, and CRM tools before Salesforce.com. The MVP is even less useful in established markets, where major disruption is what's required. Customers already have tons of viable products, and some are probably even pretty good. It's your insight that matters, and only a terrific product can win.

Ultimately, chasing the MVP forces you to sprint faster and faster chasing fool’s gold. And the more desperate you become to lead, the more you are likely to die from incrementalism. It’s a vicious loop that will gently guide you from market innovator to hopeful fast-follower.

Now, even if you are convinced that striving for mediocrity is an atrocity, you likely need to convince others. There is no easy way. One approach is to just yell like a crazy guy the next time you are in a strategy or product meeting and someone starts talking about the MVP. You might just be able to get the group to focus on what’s necessary to create a minimum lovable product.

Assuming you start thinking about creating love and others are willing to give you a chance, here are a few ways to determine if you have succeeded in identifying a minimum lovable product before spending one minute developing it. Remember that the goal is to find the big idea first. The more of these characteristics you can check off for your idea, the more lovable your product will be.
  • At least one person tells you it’s never been done
  • Customers visibly smile when you describe it to them
  • Someone swears when he hears the idea (in delight or disgust)
  • You dream of using it and all of the features you could add
  • Only your CTO or top architects think it’s possible
  • People start contacting you to learn about what you are building (old school word-of-mouth)
  • The top industry analysts are not writing about it
I hope that this inspires and excites you. If you are interested in learning more about building great products, you may want to use our interactive tool to discover how lovable your product is.

We all have the opportunity to do something fantastic and be happy doing it. And I personally guarantee that changing your focus and setting your sights on creating a MLP will bring you great joy and make the world a better place.

Sign up for the free 30 day trial of Aha! Follow the company @aha_io, and the author @bdehaaff. (Comments on Hacker News)

(This guest post comes courtesy of Brian de Haaff, founder of Aha! Labs, Inc.)


Tuesday, August 20, 2013

Just in time for VMworld, Dell virtualization suite update aims to further enable data-center efficiency

Dell Software today announced the next generation of its virtualization operations management suite, just in time for next week's VMworld conference. The suite comprises a trio of solutions that help businesses of all sizes streamline virtualization, storage, and cloud-computing initiatives -- with broad support for VMware products.

The new releases include Foglight for Virtualization, Enterprise and Standard Editions 7.0 and Foglight for Storage Management 3.0. These releases address a number of data-center transformation challenges -- from optimizing performance and efficiency in a virtual environment to solving end-to-end storage and virtualization challenges in complex and cloud-based systems.
Professionals are faced with increased expectations to deliver business value.

Dell Software, recognizing VMware's market-leading penetration in virtualization, is carving out a heterogeneity support play for server, storage, and VDI -- but is making VMware support a priority for now. And Dell is supporting VMware VDI even as its own thin-client portfolio, née Wyse, competes with it.

For Dell Software, the bigger long-term management and efficiency opportunity lies in taming a mixture of virtualization technologies. They, like most, expect Microsoft Hyper-V to gain share and therefore make for mixed data-center sprawl. And the same is expected for VDI, especially as BYOD, mobile-first, and tablets make the endpoints even more heterogeneous.

You might say the messier it gets (and the lack of cross-technology support from the competing market leaders themselves), the better it is for a third party like Dell to waltz in and fix the mess, and scoop up a nice piece of margin too. Dell plans to target Hyper-V eventually, but is focused on VMware for now.

"Software-defined is great, but it isn't a black box. You still need to connect the physical to the virtual," said John Maxwell, executive director, product management, Dell Software. [Disclosure: Dell Software and VMware are sponsors of BriefingsDirect podcasts.]

Infrastructure and operations (I&O) professionals are faced not only with reducing the total cost of the data center, but also with driving agility, quickly deploying new applications, and delivering the highest level of performance, said Maxwell.
Foglight for Virtualization

Foglight for Virtualization, Enterprise Edition 7.0 provides end-to-end performance monitoring and operations management for heterogeneous virtual environments to help reduce operational costs, speed deployments, and simplify the complexity of the data center virtual and physical infrastructure. This also includes deep integration with storage, Microsoft Active Directory, and Microsoft Exchange modules to provide performance analysis and advice across data-center infrastructure.

New optimization functionality allows I&O administrators to significantly increase consolidation ratios with improved visibility and analytics geared toward right-sizing CPU, memory, and storage, and to effectively manage virtual-machine (VM) sprawl by reclaiming resources through new optimization insight into such waste as powered-off VMs, zombie VMs, abandoned images, and unused templates and snapshots.

Now that most enterprises are majority workload virtualized, the next wave of cost savings comes from finding and removing those zombies, managing all the various flavors of virtualization in common fashion, further optimizing the CPU/memory/disk performance, and freeing up data storage with more insight and intelligence. You could think of this all as repaving the path to software-defined data centers, recognizing that there's still a lot of waste to clear along that road well before the end destination.
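As a rough illustration of what "finding the zombies" means in practice, the sketch below filters a hypothetical inventory export for powered-off VMs and for running VMs with no recent I/O and negligible CPU. The data, field names, and thresholds are made up; this is not the Foglight API, just the kind of filter such a report implies.

```python
# Sketch of zombie-VM and reclaim-candidate detection over a hypothetical
# inventory export. Field names, dates, and thresholds are invented.

from datetime import datetime, timedelta

inventory = [  # made-up inventory records
    {"name": "vm-erp-01",  "power": "on",  "last_io": "2013-08-20", "cpu_pct_30d": 22.0},
    {"name": "vm-test-44", "power": "off", "last_io": "2013-02-11", "cpu_pct_30d": 0.0},
    {"name": "vm-old-web", "power": "on",  "last_io": "2013-03-02", "cpu_pct_30d": 0.3},
]


def reclaim_candidates(vms, as_of="2013-08-31", idle_days=90, idle_cpu_pct=1.0):
    """Yield powered-off VMs, plus 'zombies' that are on but show no recent activity."""
    cutoff = datetime.strptime(as_of, "%Y-%m-%d") - timedelta(days=idle_days)
    for vm in vms:
        last_io = datetime.strptime(vm["last_io"], "%Y-%m-%d")
        if vm["power"] == "off":
            yield vm["name"], "powered off"
        elif last_io < cutoff and vm["cpu_pct_30d"] < idle_cpu_pct:
            yield vm["name"], "zombie (no recent I/O, negligible CPU)"


print(list(reclaim_candidates(inventory)))
# [('vm-test-44', 'powered off'), ('vm-old-web', 'zombie (no recent I/O, negligible CPU)')]
```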

Additional enhancements due this month include:
Foglight for Storage Management 3.0 helps to ensure the right storage configuration to optimize virtual infrastructure performance and availability. It provides virtualization managers with visibility into the underlying physical storage infrastructure that lies beneath the virtual datastores, enabling them to immediately identify hosts, VMs, and datastores experiencing performance problems caused by underlying physical host, fabric, and storage components, along with tips to fix the issues.

Performance metrics

Foglight for Storage Management gathers extensive performance metrics and presents this data within a rich graphical interface incorporating architectural diagrams, graphs, alerts, and drill-down screens for fast and easy identification of virtual and physical storage problems.  New feature functionality provides:
  • Pool-level analysis to track remaining capacity and over-commitment for thin provisioning
  • Performance analyzer to provide one-click performance troubleshooting visibility from VM to array to sub-array level in a single dashboard
  • Additional device support for Dell Compellent, Dell EqualLogic, and EMC VMAX
Foglight for Virtualization, Standard Edition 7.0 is specifically designed for the small-to-medium-size environment (SME), with a focus on ease of use and quick time to value. It provides administrators insight into virtual environment performance and automated resource optimization, capacity planning, chargeback and showback, and change analysis and compliance.
Dell Software’s virtualization operations management suite helps simplify the complexity of today’s modern data centers to help I&O professionals deliver on these top priorities

New capacity management and planning functionality helps to drive data-center sustainability and reduce power consumption to decrease operating costs and capital expenditures, while improving workload performance. It does this by analyzing existing VM workload requirements against the entire environment and identifying the minimum number of host servers required over time to safely support workloads. Additional new enhancements include:
  • Enhanced user interface for a more intuitive approach to capacity management and planning
  • New capacity planning features to prepare for host server refresh, upgrade, or expansion projects by finding the optimum configurations and minimum number of new or existing host servers to maximize VM performance while minimizing server cost, space, and power needs
  • New power minimization feature to reduce costs by determining the minimum number of host servers needed over time to safely run workloads, and estimating potential cost savings by powering down unneeded servers
  • New optimization to automatically adjust virtual machine disk sizes up or down, based on actual use
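The "minimum number of host servers" idea behind these features can be approximated with a simple bin-packing pass. The sketch below uses first-fit decreasing on a single dimension (memory) with a fixed headroom; the numbers are made up, and it illustrates the concept only rather than Dell's actual placement algorithm.

```python
# First-fit-decreasing bin packing as a rough estimate of the minimum number
# of hosts needed for a set of VM workloads. Single-dimension (memory-only)
# model and all numbers are simplifications for illustration.

def min_hosts(vm_mem_gb, host_capacity_gb, headroom=0.2):
    """Pack VMs onto as few hosts as possible, keeping a safety headroom free."""
    usable = host_capacity_gb * (1 - headroom)
    hosts = []  # remaining usable memory per host
    for need in sorted(vm_mem_gb, reverse=True):
        for i, free in enumerate(hosts):
            if need <= free:
                hosts[i] = free - need
                break
        else:
            hosts.append(usable - need)  # open a new host
    return len(hosts)


vms = [8, 8, 4, 4, 4, 2, 2, 2, 16, 12]      # GB of memory per VM (made up)
print(min_hosts(vms, host_capacity_gb=64))  # 2 hosts at 20% headroom
```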
Foglight for Virtualization, Enterprise Edition 7.0 remains priced at $799, and Foglight for Storage Management 3.0 remains priced at $499 per socket. Foglight for Virtualization, Standard Edition 7.0 operates as a standalone tool, or can easily integrate with Dell Software's Foglight for Virtualization, Enterprise Edition software; it's priced at $399 per physical socket in managed servers.

All can be downloaded from the Dell Software website and are expected to be generally available on August 31.
