Thursday, May 2, 2013

Ariba, Dell Boomi to unveil collaboration enhancements for networked economy at Ariba LIVE conference

Collaboration will take center stage next week when Ariba, an SAP company, holds its Ariba LIVE conference in Washington, DC. In an effort to fuel greater collaboration between companies through new capabilities and network-derived intelligence, Ariba will announce an enhanced set of tools, as well as a joint offering with Dell Boomi.

Leading the list of enhanced Ariba tools are:
  • Ariba Spot Buy. With the integration of Ariba Procure-to-Pay and Ariba Discovery, buyers can quickly discover and qualify new sources of supply for one-off, time-sensitive, or hard-to-find purchases.
  • Ariba Recommendations. Through new services that push network-derived intelligence and community-generated content directly into the context of specific business processes and use cases, companies can make more informed decisions at the point of transaction or activity. “Suppliers You May Like,” for example, helps guide buyers to qualified suppliers based on a host of inputs, including buyer requirements, supplier capabilities and performance ratings, and how often other buyers on the network have awarded business to them.
“Just as consumers tap into personal networks like Facebook, Twitter and Amazon.com to connect with friends and family, share and shop, companies are leveraging digital networks to more efficiently engage with their trading partners and collaborate across the entire commerce process,” said Sanish Mondkar, Ariba Chief Product Officer. “This new, more social and connected way of operating is redefining the way business is done. But it demands a new set of tools and processes that are only possible at scale in a truly networked environment. Ariba is delivering these tools today.”

Spot buys -- or unplanned purchases of unique items -- account for more than 40 percent of a company’s total spend. Spot buys are challenging because they require quick turnaround, and buyers generally lack efficient or effective methods to source them. [Disclosure: Ariba and Dell are sponsors of BriefingsDirect podcasts.]

Selective leveraging

According to independent research firm The Hackett Group, “by selectively leveraging software tools in areas like supplier discovery and online bidding, organizations can reduce the time it takes to find the right suppliers from weeks to days or even hours and drive cost reductions of between two percent and five percent on average.”

Nearly one million selling organizations across more than 20,000 product categories are connected to the Ariba Network. And they have access to the more than 13 million leads worth over $5 billion that are posted each year by more than half of the Global 2000 who are connected to the network as well.
Organizations can reduce the time it takes to find the right suppliers from weeks to days or even hours.

New features added to Ariba Discovery allow selling organizations to get the right messages to the right audience and convert these leads into sales.
  • Profile Pitch. Sellers can create highly targeted profiles and messaging based on industry, commodity, territory and other factors to promote themselves to active buyers. 
  • Badges and Social Sharing. Selling organizations can further raise their visibility by adding Ariba badges to their company websites and/or email signatures, defining vanity URLs for their company profiles and sharing their public URLs and postings on social sites such as Facebook, Twitter, and LinkedIn.

Pre-packaged integration

Ariba and Dell Boomi will announce that they are teaming to deliver pre-packaged integration-as-a-service offerings to help selling organizations drive new levels of efficiency and effectiveness across their operations.

Designed to simplify and speed integration to the Ariba Network, the Ariba Integration Connector, powered by Dell Boomi Integration Packs, enables companies to collaborate more efficiently and drive game-changing improvements in productivity and performance. The first connector integrates with Intuit QuickBooks. Additional connectors, planned for release later this year, will enable sellers who own Microsoft Dynamics AX, NetSuite, and Sage Peachtree solutions to quickly and easily integrate with the Ariba Network.
From the beginning, the Ariba Network has been built to be an open platform to connect all companies.

“From the beginning, the Ariba Network has been built to be an open platform to connect all companies using any system to foster more efficient business-to-business collaboration,” said Tim Minahan, senior vice president, network strategy, Ariba. “With these new connectors, we are making it even easier for sales organizations of all sizes to fully automate their customer transactions and collaborations over the Ariba Network -- directly from their preferred CRM, ERP and accounting systems.”

The Ariba Integration Connector removes the barriers to system-to-network integration by eliminating complexity. An out-of-the-box solution delivered as a service, the connector provides a fast, easy and affordable way for companies to connect to the Ariba Network -- regardless of the back-end systems they use. The connector currently supports integration with Intuit QuickBooks Desktop 2009-2013, Premier and Enterprise for US, UK, and CA Enterprise and Enterprise Plus.
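For readers curious about what such an integration actually exchanges: the Ariba Network communicates through cXML, Ariba's XML-based document protocol, and a connector's job is largely to map records from the back-end system into those documents. The Python sketch below is purely illustrative -- a much-simplified, hypothetical mapping of a local invoice record into a cXML-style payload posted to a placeholder URL -- and is not the Ariba Integration Connector's actual interface or code.

# Hypothetical sketch only: map a local invoice record to a simplified
# cXML-style payload and post it over HTTPS. The endpoint URL, field names,
# and element subset are illustrative placeholders, not the real connector.
import requests
from xml.sax.saxutils import escape

def build_cxml_invoice(invoice):
    """Render a very simplified cXML-like invoice document."""
    return f"""<?xml version="1.0" encoding="UTF-8"?>
<cXML payloadID="{escape(invoice['id'])}" timestamp="{escape(invoice['date'])}">
  <Request>
    <InvoiceDetailRequest>
      <InvoiceDetailRequestHeader invoiceID="{escape(invoice['id'])}"/>
      <InvoiceDetailSummary>
        <GrossAmount><Money currency="USD">{invoice['total']:.2f}</Money></GrossAmount>
      </InvoiceDetailSummary>
    </InvoiceDetailRequest>
  </Request>
</cXML>"""

def send_invoice(invoice, endpoint="https://example.invalid/ariba-network"):
    # Placeholder endpoint; a real integration would use credentials and the
    # network's actual document submission address.
    payload = build_cxml_invoice(invoice)
    resp = requests.post(endpoint, data=payload,
                         headers={"Content-Type": "text/xml"}, timeout=30)
    resp.raise_for_status()
    return resp.status_code

# Example call, commented out because the endpoint above is a placeholder:
# send_invoice({"id": "INV-1001", "date": "2013-05-02T12:00:00Z", "total": 2500.0})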

The connector is available and in use today. To learn more about Ariba’s Connection solutions and the benefits they can deliver to your organization, visit http://www.ariba.com/services/connection-solutions.

You may also be interested in:


Dell's Foglight for Virtualization update extends visibility and management control across more infrastructure

Dell Software this week delivered Foglight for Virtualization, Enterprise Edition to extend the depth and breadth of managing and optimizing server virtualization, as well as virtual desktop infrastructure (VDI), and their joint impact on IT resources such as storage.

Building on the formerly named Quest vFoglight Pro virtualization management solution, Dell re-branded vFoglight as Foglight for Virtualization to make it the core platform of the Foglight family. Foglight is not sitting still either. Improvements this year move beyond monitoring support for VMware View VDI to later support for VMware vCloud Director, OpenStack, and Citrix Xen VDI. [Disclosure: Dell Software and VMware are sponsors of BriefingsDirect podcasts.]

The higher value from such ecosystem and heterogeneous management support is that virtualization server and system administrators can comprehensively optimize various flavors of data-center server virtualization, as well as the major VDI types, with added capabilities to track and analyze performance from the application level all the way down to the server and storage hardware level. This week's announcements have also shone a spotlight on the recently updated Foglight for Storage Management 2.5.
Dell is showing its commitment to offering a solution that encompasses all aspects of virtual infrastructure performance monitoring and management.

“With Foglight for Virtualization, Enterprise Edition, Dell is showing its commitment to offering a solution that encompasses all aspects of virtual infrastructure performance monitoring and management, built on a platform that can scale as the infrastructure grows,” said Steve Rosenberg, general manager for Performance Monitoring, Dell. “This new release expands Foglight’s ability not only to monitor the additional infrastructure area of VDI, but also to correlate metrics from VDI with performance for applications, the virtual layer, the network, and underlying servers and storage.”

Dell Software also last week released a series of BYOD-targeted products and services, which relate to these improved VDI management capabilities. That's because many enterprises and mid-market firms that are tasked with moving quickly to BYOD are using VDI to do it.

With the increasing adoption of VMware View in virtualized data centers (including for MSPs), VDI support is fast becoming a mainstay for today’s IT departments and managed service providers. VDI and server virtual machines (VMs) often utilize the same hardware components. Yet both of these virtualized infrastructures serve different users and have separate requirements and resource demands, explained John Maxwell, vice president of product management for performance monitoring for virtualization, networking, storage and hardware at Dell Software.

Single-source solution 

As a result, VDI and server VMs require dedicated performance monitoring systems. However, these systems must also be connected, because so many underlying resources are often shared. Agent-based Foglight for Virtualization, Enterprise Edition offers virtualization administrators a single-source solution that not only identifies and fixes performance issues within VMware View, but continues to run all features available in vOPS Server Enterprise with no effect on overall vCenter performance.

Foglight for Storage Management 2.5 has been released as an optional "cartridge" to Foglight for Virtualization. Foglight for Storage Management now offers physical storage performance reporting in addition to virtual reporting, providing customers with complete "VM to physical LUN" visibility. 

Additional enhancements in this release include LUN latency reporting, NPIV support, and the ability for customers to purchase the product either as a stand-alone cartridge, or as an optional cartridge to Foglight for Virtualization.
This new release expands Foglight’s ability to monitor the additional infrastructure area of VDI.

Additionally, Foglight is a unified performance monitoring platform that allows individual product solutions, delivered as sets of pluggable “cartridges,” to run stand-alone or to interoperate. Each individual product delivers best-of-breed functionality to the admin for that area, while simultaneously integrating with other cartridges to deliver true end-to-end monitoring from end-user experience to the underlying storage and server hardware layers, and everything in between, said Maxwell.
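To make the cartridge idea concrete, here is a minimal, hypothetical sketch of how a pluggable monitoring model can work: each cartridge collects metrics for its own domain, and a shared platform correlates them into one end-to-end view. The class and metric names are invented for illustration and do not represent Foglight's actual APIs.

# Conceptual sketch of a pluggable "cartridge" model -- not Foglight's API.
# Each cartridge monitors one domain but publishes metrics through a shared
# platform so other cartridges (and end-to-end views) can correlate them.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class Cartridge:
    name: str
    metrics: Dict[str, float] = field(default_factory=dict)

    def collect(self) -> Dict[str, float]:
        """Stand-in for domain-specific collection (VDI, storage, network...)."""
        return self.metrics

class MonitoringPlatform:
    def __init__(self):
        self.cartridges = {}

    def register(self, cartridge: Cartridge):
        self.cartridges[cartridge.name] = cartridge

    def end_to_end_view(self) -> Dict[str, float]:
        """Correlate metrics across all registered cartridges into one view."""
        combined = {}
        for cart in self.cartridges.values():
            for key, value in cart.collect().items():
                combined[f"{cart.name}.{key}"] = value
        return combined

platform = MonitoringPlatform()
platform.register(Cartridge("vdi", {"view_session_latency_ms": 42.0}))
platform.register(Cartridge("storage", {"lun_latency_ms": 7.5}))
print(platform.end_to_end_view())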

Foglight for Virtualization Enterprise Edition 6.8 is available now for a 45-day trial from www.quest.com. Pricing starts at $799 per socket. Foglight for Storage Management 2.5 is also available now for a 45-day trial from www.quest.com.  Pricing starts at $499 per socket.

Because Foglight is built on a common architecture to support the cartridges, it seems likely that it will move from an on-premises-only offering to a SaaS-based version too, especially to support cloud- and MSP-based VDI offerings, and also to manage hybrid VDI implementations.

You may also be interested in:

Monday, April 22, 2013

Service Virtualization brings speed benefit and lower costs to TTNET applications testing

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

Welcome to the latest edition of the HP Discover Performance Podcast Series. Our next discussion examines how TTNET, the largest internet service provider in Turkey, with six million subscribers, significantly improved applications deployment while cutting costs and time to delivery.

We'll hear how TTNET deployed advanced Service Virtualization (SV) solutions to automate end-to-end test cases, gaining a path to integrated Unified Functional Testing (UFT).

To learn how, we're joined by Hasan Yükselten, Test and Release Manager at TTNET, which is a subsidiary of Türk Telekom, based in Istanbul. The interview is conducted by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: HP is a sponsor of this and other BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What was the situation there before you became more automated, before you started to use more software tools?

Yükselten: We're the leading ISP in Turkey. We deploy more than 200 applications per year, and we have to provide better and faster services to our customers every week, every month. Before HP SV, we had to use the other test infrastructures in our test cases.

We mostly had problems with issues such as accessibility, authorization, downtime, and private data when reaching the other third parties' infrastructures. So, we needed virtualization on our test systems, and we needed automation to get faster deployment and make the release time shorter. And of course, we needed to reduce our cost. So, we decided to solve the problems by implementing SV.

Gardner: How did you move from where you were to where you wanted to be?

Yükselten: Before SV, we couldn’t do automation, since the other parties are in discrete locations and it was difficult to reach the other systems. We could automate functional test cases, but for end-to-end test cases, it was impossible to do automation.

First, we implemented SV for virtualizing the other systems, and we put SV between our infrastructure and the third-party infrastructure. We learned the requests and responses and then could use SV instead of the other party infrastructure.
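Conceptually, that is a record-and-replay pattern: while the real third-party system is reachable, the virtualization layer learns request/response pairs; afterward, it answers from what it has learned, so tests keep running when the real system is unavailable. The short Python sketch below illustrates only that idea -- the endpoint and message formats are placeholders, and this is not how HP Service Virtualization is actually implemented.

# Conceptual record-and-replay sketch of service virtualization.
# In learning mode the proxy forwards calls to the real third-party service
# and stores the pairs; in simulation mode it answers from what it learned.
import requests

class VirtualService:
    def __init__(self, real_endpoint):
        self.real_endpoint = real_endpoint
        self.learned = {}          # request body -> recorded response body
        self.simulating = False

    def call(self, request_body: str) -> str:
        if self.simulating:
            return self.learned.get(request_body, "<no learned response>")
        resp = requests.post(self.real_endpoint, data=request_body, timeout=30)
        self.learned[request_body] = resp.text   # learn while the real system is up
        return resp.text

svc = VirtualService("https://example.invalid/national-identity-check")  # placeholder
svc.learned["<checkIdentity id='123'/>"] = "<identity ok='true'/>"  # as if recorded earlier
svc.simulating = True
print(svc.call("<checkIdentity id='123'/>"))  # answered without touching the real system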

Automation tools

After this, we could also use automation tools. We managed to use automation tools via integrating Unified Functional Testing (UFT) and SV tools, and now we can run automation test cases and end-to-end test cases on SV.

We started to use SV in our test systems first. When we saw the success, we decided to implement SV for the development systems also.
Gardner: Give me a sense of the type of applications we’re talking about.

Yükselten: We are mostly working on customer relationship management (CRM) applications. We deploy more than 200 applications per year and we have more than six million customers. We have to offer new campaigns and make some transformations for new customers, etc.

We have to save all the information, and while saving the information, we also interact with the other systems, for example the National Identity System, telecom systems, and public switched telephone network (PSTN) systems.

We have to ask for information and we need to make some requests to the other systems. So, we need to use all the other systems in our CRM systems. And we also have internet protocol television (IPTV) products, value-added services products, and the company products. But basically, we’re using CRM systems for our development and for our systems.

Gardner: So clearly, these are mission-critical applications essential to your business, your growth, and your ability to compete in your market.

Yükselten: If there is a mistake, a big error in our system, the next day, we cannot sell anything. We cannot do anything all over Turkey.

Gardner: Let's talk a bit about the adoption of SV. What do you actually have in place so far?

Yükselten: Actually, it was very easy to adopt these products into our system because, including the proof of concept (PoC), we could use this tool in six weeks. We spent the first two weeks on the PoC, and after four more weeks, we managed to use the tool.

Easy to implement

For the first six weeks, we could use SV for 45 percent of end-to-end test cases. In 10 weeks, 95 percent of our test cases could be run on SV. It was very easy to implement. After that, we also implemented two other SVs in our other systems. So, we're now using three SV systems. One is for development, one is just for the campaigns, and one is for the E2E tests.

HP Software helped us so much, especially R&D. HP Turkey helped us, because we were also using application lifecycle management (ALM) tools before SV. We were using QTP, LoadRunner, Quality Center, etc., so we had a good relationship with HP Software.
Since SV is a new tool, we needed a lot of customization for our needs, and HP Software was always with us. They were very quick to answer our questions and to return for our development needs. We managed to use the tool in six weeks, because of HP’s Rapid Solutions.

Gardner: My understanding is that you have something on the order of 150 services. You use 50 regularly, but you're able to then spin up and use others on a more ad-hoc basis. Why is it important for you to have that kind of flexibility and agility?
We virtualized all the web services, but we use just what we need in our test cases. 

Yükselten: We virtualized more than 150 services, but we use 48 of them actively. We use these portions of the service because we virtualized our third-party infrastructures for our needs. For example, we virtualized all the other CRM systems, but we don’t need all of them. In gateway mode, you can simulate all the other web services completely. So, we virtualized all the web services, but we use just what we need in our test cases.

In three months we got the investment back actually, maybe in less than three months. It could have been two and a half months. For example, for the campaign test cases, we gained 100 percent efficiency. Before HP, we could run just seven campaigns in a month, but after HP, we managed to run 14 campaigns in a month.
We gained 100 percent efficiency and three man-months in this way, because three test engineers were working on campaigns like this. For another example, last month we got the metrics and we saw that we had a total blockage for seven days out of the 21 working days in March. We saved 33 percent of our manpower with SV, and there are 20 test engineers working on it. We gained 140 man-days last month.

For our basic test scenarios, we could run all test cases in 112 hours. After SV, we managed to run them in 54 hours. So we gained 100 percent efficiency in that area and also managed to do automation for the campaign test cases. We managed to automate 52 percent of our campaign test cases, and this meant a very big efficiency gain for us. In total, we saved more than $50,000 per month.

Broader applications

Gardner: Do you expect now to be able to take this to a larger set of applications across Türk Telekom?

Yükselten: Yes. Türk Telekom licenses these tools and started to use them in their test service to get this efficiency for those systems. We have a sister company called AVEA, and they also want to use this tool. After we got this efficiency, many companies wanted to use this virtualization. Eight companies visited us in Turkey to learn from our experiences with this tool. Many companies want this and want to use this tool in their test systems.

Gardner: Do you have any advice for other organizations like those you've been describing, now that you have done this? Any recommendations on what you would advise others that might help them improve on how they do it?

Yükselten: Companies must know their needs first. For example, in our company, we have three blockage systems for third parties and the other systems don't change every day. So it was easy to implement SV in our systems and virtualize the other systems. We don’t need to do virtualization day by day, because the other systems don't change every day.

Once a month, we consult and change our systems, update our web services on SV, and this is enough for us. But if the other party's systems changes day by day or frequently, it may be difficult to do virtualization every day.
Companies should think about automation besides virtualization. This is also a very efficient aspect, so it must also be considered while doing virtualization.

This is an important point. Companies should think about automation besides virtualization. This is also a very efficient aspect, so it must also be considered while doing virtualization.

We started to use UFT by integrating it with SV. As I told you, we managed to automate 52 percent of our campaign test cases so far. So we would like to go on and try to automate more test cases: our end-to-end test cases, the basic scenarios, and other systems.
Our first goal is doing more automation with SV and UFT, and the other is using SV in development sites. We plan to find defects earlier in development and to get higher-quality products into test.

Rapid deployment

Of course, in this way, we get rapid deployment and shorter release times, because the product will have higher quality. Using SV in performance testing also helps us on performance. We use HP LoadRunner for our performance test cases. We have three goals now, and the last one is using SV by integrating it with LoadRunner.

Gardner: Well, it's really impressive. It sounds as if you put in place the technologies that will allow you to move very rapidly, to even a larger payback. So congratulations on that. Gain more insights and information on the best of IT Performance Management at www.hp.com/go/discoverperformance. And you can always access this and other episodes in our HP Discover performance podcast series on iTunes under BriefingsDirect.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: HP.

You may also be interested in:

Rush to enable enterprise mobile development pits native against container approaches

Both enterprises and independent software vendors (ISVs) know the software-development game's changed. Not only do they need to rapidly develop and deploy more mobile apps across multiple interfaces and device platforms, but they need to really rethink all of their client development -- and even try and come up with a singular approach to most of them.

Fast to their rescue, the suppliers of development tools and testing systems are tripping over each other to appeal to them in this new game. And as in the past with other deployment advances, we're seeing a major philosophical split between the "nativists" (running directly on the device hardware) and the "virtualizers" (with their scripting and interpretive layers and containers).

First, the nativists. Embarcadero Technologies, with its RAD Studio and former Borland CodeGear assets, is not surprisingly catering to its skills base -- the hard core developers at home in Delphi and C++Builder, as well as C and Objective-C. Embarcadero therefore today delivered RAD Studio XE4, with an attractive offer to those seeking native -- what Embarcadero calls "multi-device, true native" -- apps development, but across most mobile devices from a singular code base and a single core skills set. RAD Studio XE4 has a single application framework for iOS, Windows, and Mac OSX, with support for Android coming soon.
But native development for mobile (nee PCs) isn't the only game in town, nor the only way to seek the "run anywhere" nirvana.

RAD Studio XE4 allows developers to gain more control over the development lifecycle and deliver apps with tighter security, a better user experience, lightning-quick performance, and a small footprint. Those that want to target iOS devices, as well as OSX and Windows PCs, can write once and run anywhere, so to speak, says Embarcadero. The key is FireMonkey, a cross-platform GUI framework developed by Embarcadero to provide Delphi and C++Builder developers with a single framework. This is the same lineage of graphical language tools that sprang from native (fat) PC development.

But native development for mobile (nee PCs) isn't the only game in town, nor the only way to seek the "run anywhere" nirvana. The other approaches to the mobile and cross-platform development complexity problem are more aligned with open source, HTML5, and scripting, all with roots in the web.

And so HP last month threw its weight, from the IT management perspective, behind "a hybrid approach" for mobile. HP Anywhere, as HP calls it, aids in distributing and consuming IT management information on mobile devices. But this may well be a model for far broader enterprise-to-mobile process alignment.

Especially where BYOD is the goal, the hybrid approach works best, says Genefa Murphy, Director of Mobile Product Management and User Experience at HP Software. [Disclosure: Both Embarcadero and HP are sponsors of BriefingsDirect podcasts.]

Under this "virtualizers" vision, the HP Anywhere server connects IT management systems to the HP Anywhere Client on Android or iOS devices, forming the basic client app or container on the end-point devices. Then so-called Mini-Apps are downloadable to that container to provide the access and interface to specific IT management tasks or modules.

Two best ends

These two examples of mobile enablement to me represent the two best ends of the enterprise mobile needs spectrum. And chances are, enterprises are going to need both, especially for existing applications and processes. For example, the Embarcadero approach can swiftly take existing full-client applications and deliver them to the needed mobile tier devices with strong performance and security, and no need to rewrite for each client and OS, said John Thomas (JT), Director of Product Management at Embarcadero.

For more on my views of how cloud, mobile and enterprise IT intersect, see my two-part interview on the Gathering Clouds blog.

The question yet to be answered is what combination of native, scripting, or hybrid container-type models will fit best for entirely new "mobile first" applications. This is a work in progress, and will also vary greatly from company to company, based on a maze of variables for each. Look for a lot more blogs on that greenfield apps trend in the future.

For now, however, a lot of the pain for IT in going mobile is in getting existing PC applications via code reuse -- as well as business processes on back-end systems -- out to where they can be used . . . on the modern mobile landscape and in the hands of newly empowered mobile users. Incidentally, the new Embarcadero tools and framework allow .NET apps to be driven out to iOS devices in a pretty snappy fashion. That's assuming, of course, Windows CE won't be your preferred client environment after all. You know who you are.

Currently, RAD Studio XE4 delivers multi-device development for ARM and Intel devices, including Apple iPhone, iPod Touch, iPad, Mac OSX, Windows PCs, Slates, and Surface Pro tablets, said JT. And RAD Studio XE4 allows developers to take advantage of the full range of capabilities available on each of those devices to deliver the best user experience, he added. The full Android support should come mid-year.
The Embarcadero tools allow developers or designers to also quickly create no-code, visual mockups with live or simulated data and deploy to actual target devices.

The Embarcadero tools allow developers or designers to also quickly create no-code, visual mockups with live or simulated data and deploy to actual target devices (like PCs, phones, or tablets), or simulate on Windows or Mac, so that the requirements and app role can be best defined and tuned.

RAD Studio XE4 is available immediately. To download a free trial, visit http://www.embarcadero.com/products/rad-studio/downloads. Pricing starts at $1,799. Delphi and C++Builder pricing starts at $149 for Starter edition and $999 and up for full commercial development licenses. Upgrade discounts are available for users of recent earlier versions. An introductory 10 percent discount is available on most RAD Studio XE4 family products through May 22.

As for HP Anywhere, it manages the cross-platform device client issue using HTML5 and JavaScript, and we'll be seeing a lot of that too from many "virtualizers." HP also boasts RAD via an emulator that allows quick switching between device views. HP is taking its HP Anywhere story to both the test and QA people as well as developers as they seek ways to bring more business functions to the mobile enterprise worker corps.

You may also be interested in:

Wednesday, April 10, 2013

Data complexity forces need for agnostic tool chain approach for information management, says Dell Software executive

Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

A data dichotomy has changed the face of information management, bringing with it huge new data challenges for businesses to solve.

The dichotomy means that organizations, both large and small, not only need to manage all of their internal data to provide intelligence about their businesses, they need to manage the growing reams of increasingly external big data that enables them to discover new customers and drive new revenue.

The latest BriefingsDirect software how-to discussion then focuses on bringing far higher levels of automation and precision to the task of solving such varied data complexity. By embracing an agnostic, end-to-end tool chain approach to overall data and information management, businesses are both solving complexity and managing data better as a lifecycle.

To gain more insights on where the information management market has been and where it's going, we are joined by Matt Wolken, Executive Director and General Manager for Information Management at Dell Software. The discussion is moderated by Dana Gardner, Principal Analyst at Interarbor Solutions. [Disclosure: Dell Software is a sponsor of BriefingsDirect podcasts.]

Here are some excerpts:
Gardner: What are the biggest challenges that businesses need to solve now when it comes to data and information management?

Wolken: About 10 or 15 years ago, the problem was that data was sitting in individual databases around the company, either in a database on the backside of an application, the customer relationship management (CRM) application, the enterprise resource planning (ERP) application, or in data marts around the company. The challenge was how to bring all this together to create a single cohesive view of the company?

That was yesterday's problem, and the answer was technology. The technology was a single, large data warehouse. All of the data was moved to it, and you then queried that larger data warehouse where all of the data was for a complete answer about your company.

What we're seeing now is that there are many complexities that have been added to that situation over time. We have different vendor silos with different technologies in them. We have different data types, as the technology industry overall has learned to capture new and different types of data -- textual data, semi-structured data, and unstructured data -- all in addition to the already existing relational data. Now, you have this proliferation of other data types and therefore other databases.

The other thing that we notice is that a lot of data isn't on premise any more. It's not even owned by the company. It's at your software-as-a-service (SaaS) provider for CRM, your SaaS provider for ERP, or your travel or human resources (HR) provider. So data again becomes siloed, not only by vendor and data type, but also by location. This is the complexity of today, as we notice it.

Cohesive view

All of this data is spread about, and the challenge becomes how do you understand and otherwise consume that data or create a cohesive view of your company? Then there is still the additional social data in the form of Twitter or Facebook information that you wouldn't have had in prior years. And it's that environment, and the complexity that comes with it, that we really would like to help customers solve.

Gardner: When it comes to this so-called data dichotomy, is it oversimplified to say it's internal and external, or is there perhaps a better way to categorize these larger sets that organizations need to deal with?

Wolken: There's been a critical change in the way companies go about using data. There are some people who want to use data for an outcome-based result. This is generally what I would call the line-of-business concern, where the challenge with data is how do I derive more revenue out of the data source that I am looking at.

What's the business benefit for me examining this data? Is there a new segment I can codify and therefore market to? Is there a campaign that's currently running that is not getting a good response rate, and if so, do I want to switch to another campaign or otherwise improve it midstream to drive more real value in terms of revenue to the company?

That’s the more modern aspect of it. All of the prior activities inside business intelligence (BI) -- let’s flip those words around and say intelligence about the business -- was really internally focused. How do I get sanctioned data off of approved systems to understand the official company point of view in terms of operations?
How do I go out and use data to derive a better outcome for my business?

That second goal is not a bad goal. That's still a goal that's needed, and IT is still required to create that sanctioned data, that master data, and the approved, official sources of data. But there is this other piece of data, this other outcome that's being warranted by the line of business, which is, how do I go out and use data to derive a better outcome for my business? That's more operationally revenue-oriented, whereas the internal operations are around cost orientation and operations.

So where you get executive dashboards for internal consumption off of BI or intelligence for the business, the business units themselves are about visualization, exploration, and understanding and driving new insights.

It's a change in both focus and direction. It sometimes ends up in a conflict between the groups, but it doesn't really have to be that way. At least, we don't think it does. That's something that we try to help people through: How do you get the sanctioned data you need, but also bring in this third-party data and unstructured data and add nuance to what you are seeing about your company.

Gardner: Do traditional technology offerings allow this dichotomy to be joined, or do we need a different way to create these insights across both internal and external information?

Wolken: There are certainly ways to get to anything. But if you're still amending program after program or technology after technology, you end up with something less than the best path, and there might be new and better ways of doing things.

Agnostic tool chain

There are lots of ways to take a data warehouse forward in today's environment, manipulate other forms of data so it can enter a data warehouse or relational data warehouse, and/or go the other way and put everything into an unstructured environment, but there's also another way to approach things, and that’s with an agnostic tool chain.

Tools have existed in the traditional sense for a long time. Generally, a tool is utilized to hide complexity and all of the issues underneath the tool itself. The tool has intelligence to comprehend all of the challenges below it, but it really abstracts that from the user.

We think that instead of buying three or four database types, a structured database, something that can handle text, a solution that handles semi-structured or structured, or even a high performance analytical engine for that matter, what if the tool chain abstracts much of that complexity? This means the tools that you use every day can comprehend any database type, data structure type, or any vendor changes or nuances between platforms.

That's the strategy we’re pursuing at Dell. We’re defining a set of tools -- not the underlying technologies or proliferation of technologies -- but the tools themselves, so that the day-to-day operations are hidden from the complexity of those underlying sources of vendor, data type, and location.
We’re looking to enable customers to leverage those technologies for a smoother, more efficient, and more effective operation.

That's how we really came at it -- from a tool-chain perspective, as opposed to deploying additional technologies. We’re looking to enable customers to leverage those technologies for a smoother, more efficient, and more effective operation.

Let's just take data integration as a point. I can sometimes go after certain siloed data integration products. I can go after a data product that goes after cloud resources. I can get a data product that only goes after relational. I can get another data product to extract or load into Hive or Hadoop. But what if I had one that could do all of that? Rather than buying separate ones for the separate use cases, what if you just had one?
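To illustrate what a single, source-agnostic tool can look like in practice, here is a hypothetical Python sketch: one extract interface with interchangeable adapters for relational, SaaS, and Hadoop sources, so the same job definition runs against any of them. The class names and stubbed calls are illustrative assumptions, not Dell or Boomi product APIs.

# Hedged sketch of the "one agnostic tool" idea: a single extract interface
# with adapters per source type, so day-to-day jobs don't care whether the
# data lives in a relational database, a SaaS API, or Hadoop.
from abc import ABC, abstractmethod
from typing import Iterable, Dict

class Source(ABC):
    @abstractmethod
    def extract(self, query: str) -> Iterable[Dict]:
        ...

class RelationalSource(Source):
    def __init__(self, dsn): self.dsn = dsn
    def extract(self, query):
        # e.g. run SQL through a DB-API driver; stubbed here
        return [{"source": "relational", "query": query}]

class SaasSource(Source):
    def __init__(self, base_url): self.base_url = base_url
    def extract(self, query):
        # e.g. page through a REST API; stubbed here
        return [{"source": "saas", "query": query}]

class HadoopSource(Source):
    def __init__(self, hive_table): self.hive_table = hive_table
    def extract(self, query):
        # e.g. issue a HiveQL query; stubbed here
        return [{"source": "hive", "query": query}]

def integrate(sources: Iterable[Source], query: str) -> list:
    """One job definition that runs unchanged across every source type."""
    rows = []
    for src in sources:
        rows.extend(src.extract(query))
    return rows

print(integrate([RelationalSource("erp"), SaasSource("https://example.invalid/crm"),
                 HadoopSource("web_logs")], "select customer, revenue"))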
Gardner: What are the stakes here? What do you get if you do this right?

Institutional knowledge

Wolken: There are a couple of ways we think about it, one of which is institutional knowledge. Previously, if you brought in a new tool into your environment to examine a new database type, you would probably hire a person from the outside, because you needed to find that skill set already in the market in order to make you productive on day one.

Instead of applying somebody who knows the organization, the data, the functions of the business, you would probably hire the new person from the outside. That's generally retooling your organization.

Or, if you switch vendors, that causes a shift as well. One primary vendor stack is probably a knowledge and domain of one of your employees, and if you switch to another vendor stack or require another vendor stack in your environment, you're probably going to have to retool yet again and find new resources. So that's one aspect of human knowledge and intelligence about the business.

There is a value to sharing. It's a lot harder to share across vendor environments and data environments if the tools can't bridge them. In that case, you have to have third-party ways to bridge those gaps between the tools. If you have sharing that occurs natively in the tool, then you don't have to cross that bridge, you don't have the delay, and you don't have the complexity to get there.

So there is a methodology within the way you run the environment and the way employees collaborate that is also accelerated. We also think that training is something that can benefit from this agnostic approach.
You're reaching across domains and you're not as effective as you would be if you could do that all with one tool chain.

But also, generically, if you're using the same tools, then things like master data management (MDM) challenges become more comprehensive, if the tool chain understands where that MDM is coming from, and so on.

You also codify how and where resources are shared. So if you have a person who has to provision data for an analyst, and they are using one tool to reach to relational data, another to reach into another type of data, or a third-party tool to reach into properties and SaaS environments, then you have an ineffective process.

You're reaching across domains and you're not as effective as you would be if you could do that all with one tool chain.

So those are some of the high-level ideas. That's why we think there's value there. If you go back to what would have existed maybe 10 or 15 years ago, you had one set of staff who used one set of tools to go back against all relational data. It was a construct that worked well then. We just think it needs to be updated to account for the variance within the nuances that have come to the fore as the technology has progressed and brought about new types of technology and databases.

Gardner: What are typically some of the business paybacks, and do they outweigh the cost?

Investment cycles

Wolken: It all depends on how you go about it. There are lots of stories about people who go on these long investment cycles into some massive information management strategy change without feeling like they got anything out of it, or at least were productive or paid back the fee.

There's a different strategy that we think can be more effective for organizations, which is to pursue smaller, bite-size chunks of objective action that you know will deliver some concrete benefit to the company. So rather than doing large schemes, start with smaller projects and pursue them one at a time incrementally -- projects that last a week and then you have 52 projects that you know derive a certain value in a given time period.

Other things we encourage organizations to do deal directly with how they can use data to increase competitiveness. For starters, can you see nuances in the data? Is there a tool that gives you the capability to see something you couldn't see before? So that's more of an analytical or discovery capability.

There's also a capability to just manage a given data type. If I can see the data, I can take advantage of it. If I can operate that way, I can take advantage of it.

Another thing to think about is what I would call a feedback mechanism, or the time or duration of observation to action. In this case, I'll talk about social sentiment for a moment. If you can create systems that can listen to how your brand is being talked about, how your product is being talked about in the environment of social commentary, then the feedback that you're getting can occur in real time, as the comments are being posted.
There's a feedback mechanism increase that also can then benefit from handling data in a modern way or using more modern resources to get that feedback.

Now, you might think you'll get that anyway. I would have gotten a letter from a customer two weeks from now in the postal system that provided me that same feedback. That’s true, but sometimes that two weeks can be a real benefit.

Imagine a marketing campaign that's currently running in the East, with a companion program in the West that's slightly different. Let's say it's a two-week program. It would be nice if, during the first week, you could be listening to social media and find out that the campaign in the West is not performing as well as the one in the East, and then change your investment thesis around the program -- cancel the one that's not performing well and double down on the one that's performing well.

There's a feedback mechanism increase that also can then benefit from handling data in a modern way or using more modern resources to get that feedback. When I say modern resources, generally that's pointing towards unstructured data types or textual data types. Again, if you can comprehend and understand those within your overall information management status, you now also have a feedback mechanism that should increase your responsiveness and therefore make your business more competitive as well.
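As a rough illustration of that East/West feedback loop, the hypothetical sketch below scores incoming social comments per campaign and flags the laggard mid-flight. The keyword-count scoring is deliberately a toy stand-in for a real sentiment model, and nothing here refers to an actual Dell product feature.

# Toy sketch of the mid-flight campaign feedback loop: score social comments
# per campaign tag and flag the underperformer before the campaign ends.
from collections import defaultdict

POSITIVE = {"love", "great", "works", "recommend"}
NEGATIVE = {"hate", "broken", "slow", "refund"}

def score(comment: str) -> int:
    words = set(comment.lower().split())
    return len(words & POSITIVE) - len(words & NEGATIVE)

def campaign_sentiment(stream):
    totals = defaultdict(int)
    for campaign, comment in stream:       # (campaign_tag, comment_text) pairs
        totals[campaign] += score(comment)
    return dict(totals)

stream = [("east", "love the new offer, works great"),
          ("west", "site is slow and the offer feels broken"),
          ("east", "would recommend this")]
totals = campaign_sentiment(stream)
laggard = min(totals, key=totals.get)
print(totals, "-> consider shifting spend away from", laggard)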

Gardner: Given that these payoffs could be so substantial, what's between companies and the feedback benefits?

It's the complexity

Wolken: I think it's complexity of the environment. If you only had relational systems inside your company previously, now you have to go out and understand all of the various systems you can buy, qualify those systems, get pure feedback, have some proofs of concept (POCs) in development, come in and set all these systems up, and that just takes a little bit of time. So the more complexity you invite into your environment, the more challenges you have to deal with.

After that, you have to operate and run it every day. That's the part where we think the tool chain can help. But as far as understanding the environment, having someone who can help you walk through the choices and solutions and come up with one that is best suited to your needs, that’s where we think we can come in as a vendor and add lots of value.

When we go in as a vendor, we look at the customer environment as it was, compare that to what it is today, and work to figure out where the best areas of collaboration can be, where tools can add the most value, and then figure out how and where can we add the most benefit to the user.

What systems are effective? What systems collaborate well? That's something that we have tried to emulate, at least in the tool space. How do you get to an answer? How do you drive there? Those are the questions we’re focused on helping customers answer.

For example, if you've never had a data warehouse before, and you are in that stage, then creating your first one is kind of daunting, both from a price perspective, as well as complexity perspective or know-how. The same thing can occur on really any aspect -- textual data, unstructured data, or social sentiment.
Those are some of the major challenges -- complexity, cost, knowledge, and know-how.

Each one of those can appear daunting if you don't have a skill set, or don't have somebody walking you through that process who has done it before. Otherwise, it's trying to put your hands on every bit of data and consume what you can and learning through that process.

Those are some of the things that are really challenging, especially if you're a smaller firm that has a limited number of staff and there's this new demand from the line of business, because they want to go off in a different direction and have more understanding that they couldn't get out of existing systems.

How do you go out and attain that knowledge without duplicating the team, finding new vendor tools, adding complexity to your environment, and maybe even adding additional data sources, and therefore more data-storage requirements? Those are some of the major challenges -- complexity, cost, knowledge, and know-how.

Gardner: Why are mid-market organizations now more able to avail themselves of some of these values and benefits than in the past?

Mid-market skills

Wolken: As the products are well-known, there is more trained staff that understands the more common technologies. There are more codified ways of doing things that a business can take advantage of, because there's a large skill set, and most of the employees may already have that skill set as you bring them into the company.

There are also some advantages just in the way technologies have advanced over the years. Storage used to be very expensive, and then it got a little cheaper. Then solid-state drives (SSD) came along and then that got cheaper as well. There are some price point advantages in the coming years, as well.

Dell overall has maintained the approach that we started with when Michael Dell began building PCs in his dorm room from standard product components to bring the price down. That model of making technology attainable to larger numbers of people has continued throughout Dell’s history, and we’re continuing it now with our information management software business.

We’re constantly thinking about how we can reduce cost and complexity for our customers. One example would be what we call Quickstart Data Warehouse. It was designed to democratize data warehousing by bringing the price and complexity down to a much lower point, so that more people can afford and have their first data warehouse.

We worked with our partner Microsoft, as well as Dell’s own engineering team, and then we qualified the box, the hardware, and the systems to work at peak performance. Then, we scripted an upfront install mechanism that allows the product to be up and running in 45 minutes with little more than entering a couple of IP addresses. You plug the box in, and it comes up in 45 minutes, without you having to have knowledge about how to stand up, integrate, and qualify hardware and software together for an outcome we call a data warehouse.
We're trying to hit all of the steps, and the associated costs -- time and/or personnel costs -- and remove them as much as we can.

Another thing we did was include Boomi, which is a connector to automatically go out and connect to the data sources that you have. It's the mechanism by which you bring data into it. And lastly, we included services, in case there were any other questions or problems you had to set it up.

If you have a limited staff, and if you have to go out and qualify new resources and things you don't understand, and then set them up and then actually run them, that’s a major challenge. We're trying to hit all of the steps, and the associated costs -- time and/or personnel costs -- and remove them as much as we can.

It's one way vendors like Dell are moving to democratize business intelligence a little further, bringing it to a lower price point than customers are accustomed to and making it more available to firms that either didn’t have the luxury of that expertise sitting around the office, or who found that the price point was a little too high.

Gardner: You mentioned this concept of the tool chain several times -- being agnostic to the data type, holistic management, complete view, and then of course integrate it. What is it about the tool chain that accomplishes both a comprehensive value, but also allows it to be adopted on a fairly manageable path, rather than all at once?

Wolken: One of the things we find advantageous about entering the market at this point in time is that we're able to look at history, observe how other people have done things over time, and then invest in the market with the realization that maybe something has changed here and maybe a new approach is needed.

Different point of view

Whereas the industry has typically gone down the path of each new technology or advancement of technology requires a new tool, a new product, or a new technology solution, we’ve been able to stand back and see the need for a different approach. We just have a different point of view, which is that an agnostic tool chain can enable organizations to do more.

So when we look at database tools, as an example, we would want a tool that works against all database types, as opposed to one that works against only a single vendor or type of data.

The other thing that we look at is if you walk into an average company today, there are already a lot of things lying around the business. A lot of investment has already been made.

We wanted to be able to snap in and work with all of the existing tools. So, each of the tools that we’ve acquired, or have created inside the company, were made to step into an existing environment, recognize that there were other products already in the environment, and recognize that they probably came from a different vendor or work on a different data type.

That’s core to our strategy. We recognize that people were already facing complexity before we even came into the picture, so we’re focused on figuring out how we snap into what they already have in place, as opposed to a rip-and-replace strategy or a platform strategy that requires all of the components to be replaced or removed in order for the new platform to take its place.
We’ve also assembled a tool chain in which the entirety of the chain delivers value as a whole.

What that means is tools should be agnostic, and they should be able to snap into an environment and work with other tools. Each one of the products in the tool chain we’ve assembled was designed from that point of view.

But beyond that, we’ve also assembled a tool chain in which the entirety of the chain delivers value as a whole. We think that every point where you have agnosticism or every point where you have a tool that can abstract that lower amount of complexity, you have savings.

You have a benefit, whether it’s cost savings, employee productivity, or efficiency, or the ability to keep sanctioned data and a set of tools and systems that comprehend it. The idea being that the entirety of the tool chain provides you with advantages above and beyond what the individual components bring.

Now, we're perfectly happy to help a customer at any point where they have difficulty and any point where our tools can help them, whether it's at the hardware layer, from the traditional Dell way, at the application layer, considering a data warehouse or otherwise, or at the tool layer. But we feel that as more and more of the portfolio – the tool chain – is consumed, more and more efficiency is enabled.

Gardner: It also sounds as if this sets you up for a data and information lifecycle benefits, not just on the business and BI benefits, but also on the IT benefits.

Wolken: One of the problems that you uncover is that there's a lot of data being replicated in a lot of places. One of the advantages that we've put together in the tool chain was to use virtualization as a capability, because you know where data came from and you know that it was sanctioned data. There's no reason to replicate that to disk in another location in the company, if you can just reach into that data source and pull that forward for a data analyst to utilize.

You can virtually represent that data to the user, without creating a new repository for that person. So you're saving on storage and replication costs. So if you’re looking for where there is efficiency in the lifecycle of data and how you can cut some of those costs, that’s something that jumps right out.

Doing that, you also solve the problem of how to make sure that the data that was provisioned was sanctioned. By doing all of these things, by creating a virtual view, then providing that view back to the analyst, you're really solving multiple pieces of the puzzle at the same time. It really enables you to look at it from an information-management point of view.
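A minimal, hypothetical sketch of that virtual-view idea: instead of replicating sanctioned data into a new repository, a lightweight view delegates every read back to the system of record and tags each row with its lineage. The names below are illustrative only, not Dell product APIs.

# Hedged sketch of a virtual view: no copy is persisted; every read goes back
# to the sanctioned source, and each row carries its lineage.
from typing import Callable, Iterable, Dict

class VirtualView:
    def __init__(self, fetch: Callable[[str], Iterable[Dict]], lineage: str):
        self.fetch = fetch          # how to reach the sanctioned source
        self.lineage = lineage      # where the data actually lives

    def rows(self, query: str) -> Iterable[Dict]:
        for row in self.fetch(query):
            yield {**row, "_source": self.lineage}

def crm_source(query: str):
    # Stand-in for a query against the sanctioned CRM warehouse.
    return [{"customer": "Acme", "revenue": 120000}]

view = VirtualView(crm_source, lineage="warehouse.crm.sales")
print(list(view.rows("select customer, revenue")))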


Gardner: How should enterprises and mid-market firms get started?

Wolken: Most companies aren’t just out there asking how they can get a new tool chain. That's not really the strategy most people are thinking about. What they are asking is how do I get to the next stage of being an intelligent company? How do I improve my maturity in business intelligence? How would I get from Excel spreadsheets without a data warehouse to a data warehouse and centralized intelligence or sanctioned data?

Each one of these challenges comes from a point of view of, how do I improve my environment based upon the goals and needs that I am facing? How do I grow up as a company and get to be more of a data-based company?

Somebody else might be faced with more specific challenges, such as a line of business now asking for Twitter data, when we have no systems or comprehension to understand that. That's really the point where you ask, what's going to be my strategy as I grow and otherwise improve my business intelligence environment, which is morphing every year for most customers.
It's about incremental improvement as well as tangible improvement for each and every step of the information management process.

That's the way that most people would start, with an existing problem and an objective or a goal inside the company. Generically, over time, the approach to answering it has been you buy a new technology from a new vendor who has a new silo, and you create a new data mart or data warehouse. But this is perpetuating the idea that technology will solve the problem. You end up with more technologies, more vendor tools, more staff, and more replicated data. We think this approach has become dated and inefficient.

But if, as an organization, you can comprehend that maybe there is some complexity that can be removed, while you're making an investment, then you free yourself to start thinking about how you can build a new architecture along the way. It's about incremental improvement as well as tangible improvement for each and every step of the information management process.

So rather than asking somebody to re-architect and rip and replace their tool chain or the way they manage the information lifecycle, I would say you sort of lean into it in a way.

If you're really after a performance metric and you feel like there is a performance issue in an environment, at Dell we have a number of resources that actually benchmark and understand the performance and where bottlenecks are in systems.

Sometimes there’s an issue occurring inside the database environment. Sometimes it's at the integration layer, because integration isn’t happening as well as you think. Sometimes it's at the data warehouse layer, because of the way the data model was set up. Whatever the case, we think there is value in understanding the earlier parts of the chain, because if they’re not performing well, the latter parts of the chain can’t perform either.

And so at each step, we've looked at how you ensure the performance of the data. How do you ensure the performance of the integration environment? How do you ensure the performance of the data warehouse as well? We think that if each component of the tool chain is working as well as it should be, then you enable the entirety of your solution implementation to truly deliver value.
Listen to the podcast. Find it on iTunes. Read a full transcript or download a copy. Sponsor: Dell Software.

You may also be interested in:

Monday, April 8, 2013

nCrypted Cloud adds security and privacy to cloud-based storage services for consumers and enterprises

Boston-based startup nCrypted Cloud recently launched software of the same name designed to address the security and privacy concerns that have emerged with the use of popular cloud-based storage services.

Available in consumer basic, consumer pro, and enterprise editions, nCrypted Cloud encrypts information stored on popular cloud services such as Dropbox, Google Drive and Microsoft’s SkyDrive. The software is as simple to use as the services it works with, says Nick Stamos, the CEO and Co-Founder of nCrypted Cloud, while offering the robustness and controls that enterprise IT departments need.

Stamos says nCrypted Cloud’s security and privacy protections fill a glaring gap in cloud storage services today.
The promise of the cloud is 'put everything in the cloud and it will be available' – but that’s the problem as well as the promise.

“The promise of the cloud is 'put everything in the cloud and it will be available' – but that’s the problem as well as the promise,” says Stamos, who is also principal and founder of The Stamos Group.

Popular cloud-based storage services lack the security and privacy that enterprises require, yet employees are using them anyway -- with the rise of BYOD and mobility, users want access to files anytime, from anywhere. This leaves enterprise IT departments searching for a way to protect corporate information stored in the cloud.

In a related development, last month we reported on ownCloud, Inc. and its release of the latest version of the ownCloud Community Edition with a number of usability, performance, and integration enhancements. The ownCloud file sync and share software, deployed on-premise, not only offers users greater control, but allows organizations to integrate existing security, storage, monitoring and reporting tools.

Mobile data management solutions have proven too restrictive and inflexible, said Stamos, while trying to implement corporate policies that prohibit employees from storing and accessing personal and corporate data from a mobile device is unreasonable. Enterprise IT needs a solution that users won’t attempt to work around, but will embrace, he says.

“We allow users to apply privacy controls to personal data, as well as corporate data, so that if an employee parts ways with a company he can revoke access to that personal data from a corporate device, and vice versa,” explains Stamos. “That makes it a value proposition that users feel comfortable with.” Meanwhile, enterprise security policies can be used to govern work files and allow for revocation of access if needed.

Enhance, not replace

One key distinction about nCrypted Cloud is that it works with existing cloud-storage services, instead of replacing them.

“We provide the same sort of native user experience … so it’s not disruptive to end users. The last thing the world needed was a new storage provider,” says Stamos. “What people need is to be able to use the Dropbox they love…in the context of it being more secure by just being able to make folders private or share them securely. They can continue to have their data where it is and how it’s organized without being disruptive in any way, shape or form.”

nCrypted Cloud’s persistent client-side encryption ensures that data isn’t exposed, and the software offers comprehensive key-management features to facilitate administration. When a user accesses corporate files from any device, her predefined access policies and sharing status are verified, and the keys for her user ID are sent to the device.
Users can easily access and share files in different cloud-based storage services and have a single-pane view of cloud and corporate file repositories.

The client caches keys for offline access to files, and keys can be removed if the access policies change. Users can easily access and share files in different cloud-based storage services and have a single-pane view of cloud and corporate file repositories.
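For readers who want to picture the underlying pattern, here is a minimal, hypothetical Python sketch of client-side encryption in front of a sync folder: files are encrypted locally before the cloud service ever sees them, folder keys are fetched and cached for offline use, and revocation amounts to removing the cached key. It uses the open-source cryptography library's Fernet primitive and is in no way nCrypted Cloud's actual implementation.

# Hedged sketch of the client-side encryption pattern described above.
# Only ciphertext ever lands in the cloud-synced folder; keys live in a
# separate (here, toy) cache that a policy service could revoke.
from pathlib import Path
from cryptography.fernet import Fernet   # pip install cryptography

KEY_CACHE = {}   # (user_id, folder) -> key; a real client would protect this store

def fetch_key(user_id: str, folder: str) -> bytes:
    """Stand-in for asking a policy service for this user's folder key."""
    return KEY_CACHE.setdefault((user_id, folder), Fernet.generate_key())

def encrypt_to_sync_folder(src: Path, sync_dir: Path, user_id: str) -> Path:
    key = fetch_key(user_id, sync_dir.name)
    token = Fernet(key).encrypt(src.read_bytes())
    out = sync_dir / (src.name + ".enc")
    out.write_bytes(token)           # only ciphertext reaches the cloud folder
    return out

def revoke(user_id: str, folder: str):
    KEY_CACHE.pop((user_id, folder), None)   # cached key gone -> files unreadable offline

# Example (commented out; paths are illustrative):
# encrypt_to_sync_folder(Path("report.xlsx"), Path.home() / "Dropbox", "alice")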

The consumer basic version of nCrypted Cloud is available for free. The consumer pro version costs $5 per month and includes managed secure sharing, some file auditing, and the capability to manage files stored in different cloud services. The enterprise edition – which enters beta testing next week – will be priced at $10 per month. It includes all of the capabilities of the consumer pro version as well as enhancements such as multiple identities, centralized provisioning and policy control, and a full audit trail of 30-day archives.

Downloads and more information are available at www.ncryptedcloud.com.

(BriefingsDirect contributor Cara Garretson provided editorial assistance and research on this post. She can be reached on LinkedIn.)

You may also be interested in: