Proposal management

The Seventh Wave

“Life can only be understood backwards, but it must be lived forwards.” Paul Rogers looks back at the waves of disruption affecting procurement, waxes his surfboard and prepares to ride the seventh wave.

Death of a salesman 1950 - 1988
We can characterise the business-to-business sales process during this period in terms of a proactive salesperson targeting the budget holder or Very Important Top Officer (VITO) and pitching their value proposition directly to the decision maker. If there was a purchasing person involved, they would raise the Purchase Order long after the deal had been done. Of course, there were exceptions to this generalisation, but I will describe this period as a time of “selling to”.

The First Wave 1988 - 2015
The post-war dominance of relationship-based business-to-business sales processes was disrupted in the 1980s with the emergence of professional sales strategies. Two methodologies warrant attention: SPIN Selling and The Challenger Sale. SPIN Selling involves a proactive salesperson and a reactive prospect. Of course! But at least it recognised that the prospect has needs, even if the salesperson is needed to ‘help’ the prospect define those needs. What does that say about the role of procurement people?
The Challenger model is best understood by reference to the subtitle of the 2011 book that launched the approach. It reads “Taking control of the customer conversation”. Not much room for an empowered procurement team there! The Challenger model also assumes a proactive salesperson interacts with a largely passive prospect. A key difference between SPIN Selling and the Challenger model is that in the Challenger model the salesperson acknowledges that the prospect may already have developed an understanding of their needs and of the market. Remember, the matrix-based approaches so beloved by procurement folk were first published in the 1980s, and this is just 30 short years later. We can characterise the First Wave in terms of the increasing professionalisation of sales practitioners beyond the ‘relationship sell’, but it is still a drama with no role for procurement practitioners.

The Second Wave 2000 - 2015
The second wave of disruption affecting business-to-business sales processes didn't happen in the sales function at all. It happened in procurement departments in larger corporate and public sector organisations. Investments in procurement capacity and capability, and the implementation of procure-to-pay systems that hardwired procurement governance, created the preconditions for ‘buying from’ instead of ‘selling to’. Sales strategies focused on VITOs and budget holders were labelled ‘maverick behaviour’, and procurement teams increasingly took control of their organisation’s procurement process.
Of course, some VITOs still did their own thing, but the timing of procurement decision-making, the selection of which stakeholders were involved, the definition of needs, the specification and the selection of bidders all became decisions for the client, not the sales organisation.
The emergence of professional procurement transformed the business-to-business sales process into the business-to-business procurement process.

The Third Wave 2015 - 2022
The Third Wave had its origins around the turn of the millennium, when the originator of SPIN Selling, Neil Rackham, co-authored Rethinking the Sales Force with John DeVincentis. While there are still some sales teams who deny that their roles are changing, the emergence of ‘buying from’ instead of ‘selling to’ meant that progressive sales teams had to refocus their effort using the segmentation methods so beloved of procurement practitioners. While it would be a stretch to ascribe the First Wave to the emergence of professional procurement, the Third Wave is 100% the consequence of the professionalisation of procurement.
Proposal management has been around for a long time, but as RFPs became increasingly important as a way of winning work, more and more sales teams recognised the need to develop capacity and capability in proposal management. The point is that the business-to-business procurement process has changed how organisations structure and resource their sales processes.

The Fourth Wave 2022 - date
The Fourth Wave is happening right now. It involves the adoption of Artificial Intelligence (AI) by proposal teams and it is changing both sales and procurement processes. To understand why, let's classify the content in an RFP response into three types. Type one content is content that does not change from bid to bid. This might include quality frameworks, risk management frameworks and other governance that is not updated for each proposal. Type two content is content that may be configured for different proposals but is substantially the same from bid to bid. Case studies would be the most obvious example. Finally, type three content is content that is originated specifically in response to an RFP question. This might involve designing a bespoke solution for a particular client's problem.
This is both a challenge and an opportunity for proposal teams. The opportunity is to use AI to accelerate the creation of draft proposals using a library of draft content. Type one and type two content can be configured quickly, focusing the proposal team's effort on originating type three content. The challenge is that machine learning models need training data. Lots and lots of sample proposal responses. Where does the training data come from? There are two potential sources: previous proposals developed by the organisation, and public domain information that can be scavenged from competitors’ websites.
This might include quality policies, risk management frameworks, etc. Critically, what it won't include is the type two and type three content from competitors. But it will be relatively easy to train on type one content. The way the training will work is that the model will be taught that content A is poor and will score 3/10 or 4/10 in evaluation, content B is average and will score 5/10 or 6/10, and so on. AI solutions will produce content designed to score 10/10 and will incrementally improve the content to approach that goal.
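As a thought experiment, here is a minimal sketch (in Python) of the “generate, score, revise” loop described above. Both functions are stand-ins invented for illustration: a real system would use a scoring model trained on previously scored proposal content, and a large language model to produce each revision.

```python
# A minimal, illustrative sketch of the "generate, score, revise" loop.
# Both functions below are hypothetical stand-ins: a real system would use
# a scoring model trained on past proposals and their evaluation scores,
# and a large language model to produce each revision.

from dataclasses import dataclass


@dataclass
class Draft:
    text: str
    score: float  # predicted evaluation score out of 10


CRITERIA = ["risk", "quality", "sustainability", "governance"]


def predict_score(text: str) -> float:
    """Stand-in for a trained scoring model: reward coverage of the criteria."""
    hits = sum(1 for c in CRITERIA if c in text.lower())
    return min(10.0, 4.0 + 1.5 * hits)


def revise(text: str) -> str:
    """Stand-in for an LLM revision step: address the next missing criterion."""
    for c in CRITERIA:
        if c not in text.lower():
            return text + f" Our {c} framework is documented, certified and audited annually."
    return text


def improve(initial: str, target: float = 10.0, max_rounds: int = 10) -> Draft:
    """Incrementally revise a draft until the predicted score stops improving."""
    draft = Draft(initial, predict_score(initial))
    for _ in range(max_rounds):
        if draft.score >= target:
            break
        candidate = revise(draft.text)
        candidate_score = predict_score(candidate)
        if candidate_score <= draft.score:  # no further gain available
            break
        draft = Draft(candidate, candidate_score)
    return draft


if __name__ == "__main__":
    result = improve("We manage risk through a documented framework.")
    print(f"Predicted score: {result.score}/10")
    print(result.text)
```

The toy scoring function is beside the point; the loop is the point. Any content that can be scored can, in principle, be optimised towards the top of the scoring range.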

The Fifth Wave 2024 onwards
Imagine that multiple bidders all train their large language models using similar content. What do you think will happen next?
The AI-generated content will progressively converge until submissions from different bidders are virtually indistinguishable.
In the past, type one content might have been scored in a range from 5/10 to 8/10 during RFP evaluation. It is likely that this range will compress to something like 8/10 to 9/10. For type two content there may also be some convergence, as the AI algorithms progressively align content to the evaluation criteria of the prospect. This means that clients who share their evaluation criteria will get back responses with exactly what they are looking for. Result!
The consequence will be that some of the content in written RFP responses will no longer be a discriminator between respondents.

There may be a gap between respondents who use AI as part of their proposal generation and respondents who do not. For those respondents who do use AI as part of their proposal management solution, content generated by large language models will increasingly converge, and the difference in quality between responses will be within the margin of error for the scoring and evaluation process.

The Sixth Wave 2026 onwards
My contention is that AI-created proposals will reduce the effectiveness of scoring written submissions, because of two key processes. The first is the ability of large language models to generate content which gives the client exactly what they are looking for: well-written, easy-to-navigate proposals which ‘tick all the boxes’.
What the evaluation team is scoring is the quality of proposal writing, not necessarily the underpinning capability of the organisation submitting the bid.
The second process is the progressive convergence of AI-generated content, undermining the ability of documentary evaluation to distinguish between respondents. What is the point of spending hours, days or weeks evaluating written submissions if the respondents all score between 75% and 80%? Is that “clear blue water”? Is it unrealistic to anticipate that other forms of evaluation will become disproportionately important? Interviews, presentations, reviews of past performance, site visits and maybe even client testimonials. The RFP evaluation process will be disrupted just as surely as professional procurement disrupted the sales process in the Second Wave.

The Seventh Wave 2028 onwards
I suspect that AI-driven RFP evaluation solutions will emerge, if they haven’t already. There may be little point in scoring type one content, as the differences between competing submissions may be minimal. Such content may be evaluated on a risk-rated basis, or simply pass/fail. Many procurement teams already rely on third-party compliance providers such as Avetta. An ‘adjacency’ for compliance providers would be not only to host documents, but also to evaluate submissions against a common framework using a panel of subject matter experts. Imagine the savings if every client inviting proposals from (say) facilities management providers agreed to accept an expert panel’s scoring of each company’s governance library, instead of reading and scoring multiple sustainability frameworks, environmental frameworks, gender equality frameworks, quality systems, risk management frameworks and so on.
The time saved could be better deployed in interviewing the proposed delivery team, exploring the feasibility of the proposed solution, validating that the claims made by the proposal writers are supported by evidence in the field and, of course, negotiating mutually acceptable terms.

No hoverboards here!
“Prediction is very difficult, especially if it's about the future!” I tried to envisage the near future for business-to-business procurement processes without resorting to stale old cliches such as showing procurement practitioners on hoverboards. Instead, I went for procurement practitioners on surfboards, riding interacting waves of disruption. But what do you think? Let me know in the comments!

How many RFPs are issued annually?

How many RFPs are issued every year in Australia and New Zealand?
I know, I know. It sounds like one of those kooky interview questions that tech giants ask and then rationalise by saying “we’re just testing your ability to think rationally!”.

I researched how many public sector organisations there are in Australia (more than 1,300) and how many in New Zealand (39 government departments and 200 other agencies) to get a public sector figure of more than 1,500.

I then profiled that number as 20 percent large organisations, 50 percent medium and 30 percent small. From there, I estimated the number of formal RFPs issued per annum for a large, medium and small organisation.

Out popped a number of about 82,000 RFPs a year for the public sector alone.

I then triangulated my back-of-an-envelope guess against the volume of contracts on AusTender – an Australian Government procurement information site – and guess what? The reported figure is 83,000 tenders in 2022-23. We can probably add 50% to that number for private sector RFPs.
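For readers who want to see the shape of the calculation, here is a minimal reconstruction in Python. The 1,500 organisations and the 20/50/30 split come from the article; the RFPs-per-organisation figures in each band are illustrative assumptions (the article does not state them), chosen only to show how a total of roughly 82,000 emerges.

```python
# Back-of-the-envelope reconstruction of the public sector RFP estimate.
# The organisation count and size split come from the article; the
# RFPs-per-organisation figures are illustrative assumptions only.

public_sector_orgs = 1_500

# size band: (share of organisations, assumed formal RFPs per organisation per year)
profile = {
    "large":  (0.20, 150),
    "medium": (0.50, 40),
    "small":  (0.30, 15),
}

public_rfps = sum(public_sector_orgs * share * rfps
                  for share, rfps in profile.values())
print(f"Estimated public sector RFPs per year: {public_rfps:,.0f}")  # ~81,750

# Triangulation point from the article: AusTender reported ~83,000 tenders in 2022-23.
# Adding roughly 50% for private sector RFPs:
print(f"Estimated total including private sector: {public_rfps * 1.5:,.0f}")  # ~122,600
```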

Some proposal content does not change
One thing I learnt from writing proposals in response to RFPs is that there are three types of content in a proposal response:

• Type one: content that doesn’t change from proposal to proposal
• Type two: content that is configured for each proposal
• Type three: content that is customised for each proposal

An example of type one content would be the various policy and governance frameworks that are required as part of compliance obligations, such as risk management, sustainability, social procurement, environmental and quality management.

Type two content could be case studies that are broadly consistent from proposal to proposal but are configured around the specific context of the prospect.

Meanwhile, type three content might be solution design in response to the unique problem or opportunity addressed in the client’s RFP document.
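If it helps to make the taxonomy concrete, here is a minimal sketch of how a bid library might tag content by type. The class and field names are purely illustrative, not a description of any particular tool.

```python
# Illustrative only: one way a bid library might record the three content types.

from dataclasses import dataclass
from enum import Enum


class ContentType(Enum):
    TYPE_ONE = "reused unchanged"      # e.g. policy and governance frameworks
    TYPE_TWO = "configured per bid"    # e.g. case studies tailored to the prospect
    TYPE_THREE = "originated per bid"  # e.g. bespoke solution design


@dataclass
class LibraryItem:
    title: str
    content_type: ContentType
    last_reviewed: str  # date of the last internal review


library = [
    LibraryItem("Risk management framework (ISO 31000)", ContentType.TYPE_ONE, "2024-03-01"),
    LibraryItem("Case study: hospital facilities contract", ContentType.TYPE_TWO, "2024-05-20"),
]

# Type one and type two items can be drawn from the library; only type three
# content has to be written from scratch for each RFP response.
reusable = [item.title for item in library if item.content_type is not ContentType.TYPE_THREE]
print(reusable)
```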

The point about type one content is that it is written once and used many times. But it is also evaluated many times, isn’t it?
A few more formulas on my spreadsheet and I got to $0.3 billion a year in staff time to read, evaluate and score the documents listed above in the public sector alone.
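The article doesn’t spell out the spreadsheet, so the figures below are hypothetical inputs chosen to show one plausible route to an order of magnitude of $0.3 billion.

```python
# Hypothetical inputs: none of these appear in the article. They simply show
# one plausible route to an annual cost of roughly $0.3 billion.

public_rfps_per_year = 82_000
respondents_per_rfp = 5          # assumed average number of bidders
evaluators_per_bid = 4           # assumed evaluation panel size
hours_per_evaluator_per_bid = 2  # assumed time reading type one content per bid
loaded_hourly_rate = 90          # assumed fully loaded staff cost per hour (AUD)

annual_cost = (public_rfps_per_year
               * respondents_per_rfp
               * evaluators_per_bid
               * hours_per_evaluator_per_bid
               * loaded_hourly_rate)

print(f"Estimated annual evaluation cost: ${annual_cost / 1e9:.2f} billion")  # ~$0.30 billion
```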

“Cool maths, Bro. So what?”
Imagine that your driving licence was only valid in the postcode in which you live. Every time you crossed into a new postcode, a member of the local neighbourhood watch team would give you a quick driving test to validate that you were accredited to drive in that postcode.

It would be crazy, wouldn’t it? But isn’t that what we do with the compliance content that I have listed above?

The tenderer has key documents in a library that they submit unchanged for every bid. And each member of the evaluation team in every client reads the framework from every respondent.

Depending on the nature of the evaluation plan, they may score the frameworks on a risk-related basis, score them out of 10, or simply score them on a pass/fail basis.

But they still have to read them!

A modest proposal
It leads to a massive duplication of effort, doesn’t it?

We don’t have to prove our competence to drive every time we cross into a new postcode, so why don’t we have a standardised process to evaluate tenderers’ type one policy frameworks?

Imagine that the procurement community across Australia and New Zealand collaborated to create a ‘compliance agency’. The agency could curate a library of policy frameworks which could be accessed by bidders and clients alike, though for different purposes.

Bidders would be responsible for developing and uploading the content and ensuring that it was up to date. As soon as they uploaded a document, it would then be read, evaluated and scored by an expert panel on behalf of all clients.

The panel’s score would be instantly shared with the bidder and be available online to all clients.

So, instead of evaluation team members needing caffeinated energy drinks to stay awake while reading yet another risk management framework, they could simply note that actual experts had already rated the document an 8 out of 10 or a 9 out of 10.

The evaluation teams’ focus could shift from the documentary evidence of the policy framework to evaluating actual results in practice.

What happens next?
If a bidder received feedback that their social procurement policy was rated 7 out of 10, what would they do?

The answer is, they would seek feedback about how they could improve their score. Acting on that feedback would improve their policy and management frameworks, meaning that over time the bidder might lift their score to an 8 out of 10.

This means a second benefit of a single point of evaluation would be incremental improvement of policy frameworks.

I don’t know if you have ever found yourself at a loose end on a rainy afternoon and decided to read risk management frameworks from tenderers to pass the time.

My advice is DON’T. They are a vanilla gloop of vowels and consonants, only broken up by graphics involving clouds and arrows. The frameworks all reference ISO 31000 and it’s hard to discern differences between tenderers.

But doesn’t that raise the question: “if all of the respondents are scoring 8 out of 10 or 9 out of 10, the frameworks are no longer a discriminator between respondents, so why are we scoring them?”

Two key trends
There are two related trends that are impacting proposals.

The first is the use of proposal writers by tenderers. I don’t want to make a value judgement about proposal writers on the not unreasonable grounds that I have dabbled in proposal writing myself.

Proposal writers bring professionalism to proposals and help tenderers ‘put their best foot forward’. In plain terms, professional proposal writers can make even the most turgid policy framework easy to read, well signposted and jargon-free.

The only distinction an evaluator may be able to draw between frameworks from different respondents is whether one respondent used the services of a professional proposal writer and another did not.

The difference will be obvious. One will be well presented, easy to read and will demonstrate professional communication skills. The other will have to be read two or three times to be understood.

Validity
The problem this presents is the risk that we score the quality of writing rather than the actual underpinning policy and its application.

This is called ‘validity’. Are we actually measuring what we think we are measuring?

This is not the fault of the proposal writer. However, it does undermine the use of documentary evaluation as an indicator of capability.

Pass or f-AI-l?
The second trend is the increasing use of artificial intelligence to support proposal writing.

What I think will happen is that management frameworks and other policy documents will experience an accelerating convergence of content as well as style.

Policy frameworks will become increasingly indistinguishable from each other.

If they’re not a discriminator and they’re increasingly homogenous, why do we read them? Why do we score them?

The answer is, they are one part of the evaluation process and provide a standard against which actual performance may be compared.

So, perhaps that is the point. The focus of evaluation can migrate from reviewing policy documents to focusing on in-the-field performance, results, outputs and outcomes.

So, we can save $0.3 billion?
Possibly. Having learned about machine learning and large language models, I think AI solutions could be trained to read and score these frameworks.

Rather than replacing staff, I think the opportunity is for AI-assisted evaluation to relieve evaluators of the cruel and unusual torment of reading interminably boring tracts of text from multiple bidders and instead focus on evaluating actual results.

This may expose a gap between what the proposal claims and actual results.

I guess Caveat Emptor still applies, doesn’t it?


About the author:

Based in Melbourne, Paul brings more than 40 years of experience to the table, having worked in more than 20 countries as a procurement professional, consultant and trainer.

An internationally acknowledged expert on procurement and negotiation, Paul has worked on a diverse range of projects from buying a billion dollars of gold in an off-market transaction to negotiating with an airline on behalf of a government that owned them.

He’s also the chair of the judges for PASA’s coveted NIGELS Awards and is a Fellow of CIPS (FCIPS).