I was with a vendor the other day who said he needed a “demonstration of intent” to take to his management before he was able to engage with us.
The “demonstration of intent” is that in-advance commitment vendors like to have before they commit resources to any prospective project at a customer. It comprises, usually, a small amount of money up front (“to meet costs”), or a letter indicating that, upon successful conclusion of whatever-it-is, there will be a deal on the table.
The thing about such requests is that they often come in advance of any proof of tangible value. A presentation or case study is not such a demonstration.
My answer to these requests, usually, is “we don't pay for pre-sales”. And why should we? The burden of demonstrating value – in our specific context – must necessarily lie elsewhere. We don't have the bandwidth, anyway, for speculative investigations of things that might be useful.
I think it important, at this point, to set out a definition of pre-sales. It's any situation in which the primary objective is to demonstrate to us that there is value that ought to be pursued. When someone proposes, for example, a new way of using data that will enhance revenue or cut costs, proving those claims is just a cost of doing business. Of course, any kind of proof that relates specifically to us is going to cost money. It'll cost us too: we have to invest time and other resources to help.
So don't be asking for money to “meet costs”, because it is likely that we'll ask for money to meet ours.
Some people, of course, will argue that this kind of thinking means that smaller companies (who simply don't have the money to invest without commitment) are locked out of deals with a larger organisation. That's probably true, and it's too bad. It's why smaller organisations tend to have smaller clients to start with: the costs of getting in the door rapidly accelerate the bigger the potential customer.
By the way, I also raise an eyebrow when the term “proof of concept” comes up, which is another way of saying pre-sales.
A pilot, on the other hand, is something quite different. The purpose of a pilot is to let us see how a particular new thing will work in our specific context. It teaches us the lessons we need to operate it successfully if it goes ahead. And it lets us uncover any technical or procedural bugs we'd need to address during a ramp-up.
A pilot is not about proving value. It is about demonstrating that operationally we can do whatever-it-is.
Functionally speaking, a pilot is pretty much a full implementation of the new thing. We like to make sure that our partners (which is what a vendor would be, if we are actually going to do a pilot) are in a situation where they, too, are getting something from the arrangement at this point. Though it might not always be money.
I have one further point to make on this subject. And that's about the amount of investment vendors have to stump up to introduce us to something new.
We, like every other business, have budgets we must adhere to. Most of the time, they are decided far in advance. And usually, there is no fat in them at all. We are asked, year on year, to find cost savings, in fact. We make our investment decisions based on the value and capabilities we have now, or know about at budget time.
So when someone shows up with something unique, there is often a scramble to make anything happen. Sometimes we actually can't make anything happen. I often wonder, when I sit across the table from a potential partner, whether they realise this, or think we're playing a negotiation game.
I can assure you we're not.
It's necessary to be candid about these things. And for vendors to recognise that the price of getting something into a large customer is directly proportional to how quickly they want it to happen. A sales cycle of a year or more (which fits in nicely with our budgetary cycle) can be circumvented, but only where practically everything gets paid for up front.
Here is the key thing from our point of view: if we try to slot something into a programme out-of-cycle, it means we'll have to cut something else. Presumably, since it got into the budget in the first place, it was quite important.
It is a brave person who deprioritises something everyone has agreed is essential in favour of something practically no-one knows about, however new or especially novel it may be.
Here is another term I sometimes hear from vendors: “we will do something but you need to have skin in the game”. Since when did investing our time and resources in working with you stop counting as skin in the game? We have skin in the game from the first meeting.
So let me close with a summary that might be helpful. We'll talk to anyone that can help us build our business. But be ready to prove you can do so. And recognise that any new thing we do will likely require us to negotiate a compromise somewhere else.
Hi James -
Long-time reader here, managing IT Innovation for another large company. I thought I would expand a bit on your excellent post, specifically about pilots.
Congratulations on your high-quality blog and your future book!
- Julien
Posted by: Julien Le Nestour | November 14, 2008 at 12:20 PM
The concepts of "proof of concept" and "pilot" are given different meanings by different organizations. In our bank a "proof of concept" is intended to answer the question: "does it work in our context?" A pilot is designed to answer the question: "Is it ready for production rollout?"
Very different questions.
As to who pays for a "proof of concept" - I have no issue in covering __some__ of a (small) vendor's costs (the big guys can take care of themselves). The test is whether I think the vendor's offering is sufficiently unique/compelling. If my refusing to contribute something (other than internal resources) means I'm not going to be able to access a capability that's potentially interesting then that's not really in my best interest.
But part of the negotiation around the POC is often an option for license acquisition - locking in an option price before it's proved that the product is the greatest thing since sliced bread, and for that the professional negotiators enter the room with the starting position that the vendor should pay us, since our name as a reference client has value... :)
Posted by: Tim Gray | November 21, 2008 at 08:42 PM
James,
You make a number of good points here but I wanted to share some perspective informed by being on the other side pitching to many organizations.
a) Organizations vary widely in terms of how "lean" they are. Specifically, if an organization is not very efficient, the skin in the game in terms of time invested is not all that valuable. There are very bloated organizations with people looking for things to do - try selling to a telecom monopoly or maybe a state-owned bank :)
b) Skin in the game forces some alignment, confirming that there is real interest, versus time being invested only to find later that IT has a standard that precludes the organizations working together. Or even worse, skin in the game forces questions to be asked internally that might yield "oh, we already have a project to do this running in Brazil"
c) Or the business is happy to have time invested to help them with internal political battles with IT - that is the source of their initial return from their investment of time. These are real things that happen.
d) I have seen some prospects invest time simply to try and gather information for our competitors - they are already users of a competing product and enjoy getting flown to that competitor's user conferences to speak, etc. It does happen.
e) Lastly, there is a continuum between a case study and a pilot. A vendor should be willing to invest to educate, but if very specific, idiosyncratic requests are made, it is not unreasonable to ask that they be paid for. I get it all the time: "my data is different than the twenty live production examples of nearly identical data you can bring up on my computer"
Posted by: Bob Smith | November 23, 2008 at 11:41 PM
This is a good article which I have been meaning to comment on for some time. While it may be a matter of semantics for some, your distinction between "pre-sales", "proof of concept" and "pilot" is extremely useful. I agree that the cost of "pre-sales" and "proving a concept" should generally fall on the vendor - that's just the cost of doing business, and why there is a sales and marketing expense line in the P&L. I think problems arise when the lines between proof of concept and pilot are blurry, which I think could easily happen.
I have also seen the converse happen a lot: the (big) client wants the (small) vendor to take a loss because by doing such and such a project or implementation they will generate a significant real life case study that will supposedly be worth far more than the actual project cost in terms of marketing and sales opportunities. How is that really any different than what you are complaining about?
Like most things in life, it goes both ways.
Posted by: John Januszczak | December 19, 2008 at 06:43 PM
In some cases, buyers want a proof of concept to prove only that the technology does what the vendor claims, and does it within the buyer's IT/data/network/etc. structure. There's no intent to prove value here, and no intent to prove that the buyer can actually realize the theoretical benefits. This is in many cases an installation in an IT lab with little configuration and customization and minimal data plumbing and user training. Vendor personnel may be on hand to assist IT or the business.
The pilot is the proof of value within a particular organization's human and process context. As you say, a pilot is pretty much a full implementation of the new thing, and so it requires the same project planning, installation, data plumbing connections, change management, benefits measurement planning, customizations, training materials development (and so on) as a full-scale implementation; the vendor's service days are almost as high as for a full implementation. Some of the bank's costs are the same, although the training/change management/support for, say, 20 branches is significantly smaller than for 2,000. (Because of this, neither the vendor nor the buyer is going to begin a pilot without the expectation of success.)
This raises a question: is there a way to prove that benefits will be realized (within the buyer's operational environment) other than by running a pilot program of the extent described above?
I wonder if you or your readers have any thoughts about this.
Posted by: Dave Marcus | December 23, 2008 at 05:23 PM
Hi James
What an interesting post and what an interesting discussion.
I have long started up new capability development in the way you describe, both as Head of CRM for a German automotive bank and as a consultant in banks, telcos, airlines, and even HM Govt.
I think there are four factors that drive success in the piloting of new capabilities:
1. Viewing the Pilots as Capability Buildout NOT as Technology Implementation - As you rightly point out, implementing new kit is never enough (sorry vendors!). What also has to be developed are all the complementary things, such as processes, data flows, work routines, performance measures, etc, that together, and only together, deliver value for the business. There is both extensive research in economics and extensive case-study evidence in CRM, BPR, ERP and TQM demonstrating the validity of this approach.
2. Running Pilots as Internal Corporate Ventures - Too many pilots are just run as light versions of full kit implementation. I have learnt to run them as though they were internally-funded venture projects. That means a capability audit first, to understand what capabilities the organisation has, what will need to be built and who is going to have to be involved in the pilot. Then a detailed paper pilot that looks at how the whole thing will work, step-by-step, and what will stop it being successful. Then an alpha test of the working kit and complementary stuff, with most of the interfaces run manually. This allows you to learn how to make the kit and complementary stuff work before you automate it. There are always many teething problems, irrespective of how much you plan. Finally, running a fully automated beta test with the kit, the complementary stuff and key operational staff all involved. All this typically takes 90-100 days. Then you are ready to expand to a full implementation.
3. Balancing Pilot Implementation with Learning and Early Value Delivery - It is all too easy to see pilots as just a step before full implementation. This is short-sighted. A pilot is much more than that. In particular, if they are designed properly, pilots are great opportunities to learn things that you don't yet know and which are critical before embarking on a full implementation. Designing pilots explicitly to learn unknowns both reduces future implementation failure and provides information to help you nail down how the full implementation will drive economic value creation. And let's not forget that we should be using the pilot to actually create some of that early economic value too. Why pilot, e.g., a campaign management system if you can't reduce operational costs, increase revenue generation and reduce customer value at risk in the process?
4. Viewing Pilots as Advanced Vendor Qualification - Vendors are very good at selling kit. They have to be. But they are often conspicuous by their reluctance to provide support once you have signed on the dotted line. Yet 99% of the value of implemented kit is delivered in the months and years after you have signed. The pilot is a great time to stress-test the relational qualities of the vendor before you commit to them over the longer term. You are committing to the vendor as much as, if not more than, you are to their kit. This is a lesson I have learnt the hard way.
It is amazing what you can do in a 90-day pilot if you plan explicitly to do so. Some vendors won't like it, but they are probably not the vendors you should be working with over the long term anyway!
Graham Hill
Posted by: Graham Hill | January 01, 2009 at 09:23 PM