How many RFPs are issued annually?

How many RFPs are issued every year in Australia and New Zealand?
I know, I know. It sounds like one of those kooky interview questions that tech giants ask and then rationalise by saying “we’re just testing your ability to think rationally!”.

I researched how many public sector organisations there are in Australia (more than 1,300) and how many in New Zealand (39 government departments and 200 other agencies) to get a public sector figure of more than 1,500.

I then profiled that number as 20 percent large organisations, 50 percent medium and 30 percent small. From there, I estimated the number of formal RFPs issued per annum by large, medium and small organisations.

Out popped a number of about 82,000 RFPs a year for the public sector alone.
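If you want to check the arithmetic, here is a minimal sketch in Python. The organisation count and the size split are the ones above; the RFPs-per-organisation figures are illustrative assumptions standing in for my spreadsheet inputs, chosen only to show how a number of this order falls out.

```python
# Back-of-an-envelope estimate of public sector RFPs in Australia and New Zealand.
# The organisation count and size split come from the article; the RFPs-per-year
# figures for each organisation size are illustrative assumptions, not actual data.

ORGANISATIONS = 1_500  # public sector organisations across Australia and New Zealand

# (share of organisations, assumed formal RFPs issued per organisation per year)
profile = {
    "large":  (0.20, 150),  # assumption
    "medium": (0.50, 40),   # assumption
    "small":  (0.30, 15),   # assumption
}

total_rfps = sum(ORGANISATIONS * share * rfps_per_year
                 for share, rfps_per_year in profile.values())

print(f"Estimated public sector RFPs per year: {total_rfps:,.0f}")
# Estimated public sector RFPs per year: 81,750 (roughly the 82,000 above)
```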

I then triangulated my back-of-an-envelope guess against the volume of contracts on AusTender – an Australian Government procurement information site – and guess what? The reported figure is 83,000 tenders in 2022-23. We can probably add 50% to that number for private sector RFPs.

Some proposal content does not change
One thing I learnt from writing proposals in response to RFPs is that there are three types of content in a proposal response:

• Type one: content that doesn’t change from proposal to proposal
• Type two: content that is configured for each proposal
• Type three: content that is customised for each proposal

An example of type one content would be the various policy and governance frameworks that are required as part of compliance obligations, such as risk management, sustainability, social procurement, environmental and quality management.

Type two content could be case studies that are broadly consistent from proposal to proposal but are configured around the specific context of the prospect.

Meanwhile, type three content might be solution design in response to the unique problem or opportunity addressed in the client’s RFP document.

The point about type one content is that it is written once and used many times. But it is also evaluated many times, isn’t it?

A few more formulas on my spreadsheet and I got to $0.3 billion a year in staff time to read, evaluate and score the documents listed above in the public sector alone.
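The formulas can be sketched in a few lines too. Everything below other than the RFP count is an illustrative assumption standing in for my spreadsheet inputs: bidders per RFP, evaluators per panel, hours spent per submission on type one content, and an hourly staff cost.

```python
# Rough annual cost of reading, evaluating and scoring 'type one' compliance content.
# Every input below except the RFP count is an illustrative assumption.

RFPS_PER_YEAR = 82_000        # from the estimate above
BIDDERS_PER_RFP = 5           # assumption
EVALUATORS_PER_PANEL = 4      # assumption
HOURS_PER_SUBMISSION = 2.0    # assumption: hours per evaluator on type one content
HOURLY_COST = 90              # assumption: fully loaded staff cost, AUD per hour

evaluation_hours = (RFPS_PER_YEAR * BIDDERS_PER_RFP
                    * EVALUATORS_PER_PANEL * HOURS_PER_SUBMISSION)
annual_cost = evaluation_hours * HOURLY_COST

print(f"Evaluation hours per year: {evaluation_hours:,.0f}")
print(f"Annual staff cost: ${annual_cost / 1e9:.2f} billion")
# Annual staff cost: $0.30 billion
```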

“Cool maths, Bro. So what?”
Imagine that your driving licence was only valid in the postcode in which you live. Every time you crossed into a new postcode, a member of the local neighbourhood watch team would give you a quick driving test to validate that you were accredited to drive in that postcode.

It would be crazy, wouldn’t it? But isn’t that what we do with the compliance content that I have listed above?

The tenderer has key documents in a library that they submit unchanged for every bid. And each member of the evaluation team at every client organisation reads those frameworks from every respondent.

Depending on the nature of the evaluation plan, they may score the frameworks on a risk-related basis, score them out of 10, or simply mark them on a pass/fail basis.

But they still have to read them!

A modest proposal
All of this leads to a massive duplication of effort, doesn’t it?

We don’t have to prove our competence to drive every time we cross into a new postcode, so why don’t we have a standardised process to evaluate tenderers’ type one policy frameworks?

Imagine that the procurement community across Australia and New Zealand collaborated to create a ‘compliance agency’. The agency could curate a library of policy frameworks which could be accessed by bidders and clients alike, though for different purposes.

Bidders would be responsible for developing and uploading the content and ensuring that it was up to date. As soon as they uploaded a document, it would then be read, evaluated and scored by an expert panel on behalf of all clients.

The panel’s score would be instantly shared with the bidder and be available online to all clients.

So, instead of evaluation team members needing caffeinated energy drinks to stay awake while reading yet another risk management framework, they could simply note that actual experts had already rated the document an 8 out of 10 or a 9 out of 10.

The evaluation teams’ focus could shift from the documentary evidence of the policy framework to evaluating actual results in practice.

What happens next?
If a bidder received feedback that their social procurement policy was rated 7 out of 10, what would they do?

The answer is, they would seek feedback about how they could improve their score. Receiving and applying that feedback would improve their policy and management frameworks, meaning that over time the bidder might lift their score to an 8 out of 10.

This means a second benefit of a single point of evaluation would be incremental improvement of policy frameworks.

I don’t know if you have found yourself at a loose end on a rainy afternoon and decided to read risk management frameworks from tenderers to pass the time.

My advice is DON’T. They are a vanilla gloop of vowels and consonants, only broken up by graphics involving clouds and arrows. The frameworks all reference ISO 31000 and it’s hard to discern differences between tenderers.

But doesn’t that raise the question, “if all of the respondents are scoring 8 out of 10 or 9 out of 10, the frameworks are no longer a discriminator between respondents, so why are we scoring them?”

Two key trends
There are two related trends that are impacting proposals.

The first is the use of proposal writers by tenderers. I don’t want to make a value judgement about proposal writers on the not unreasonable grounds that I have dabbled in proposal writing myself.

Proposal writers bring professionalism to proposals and help tenderers ‘put their best foot forward’. In plain terms, professional proposal writers may make even the most turgid policy framework easy to read, well signposted and jargon free.

The only distinction that may emerge when reading frameworks from different respondents is where one respondent uses the services of a professional proposal writer and another does not.

The difference will be obvious. One will be well presented, easy to read and demonstrate professional communication skills. The other will have to be read two or three times to understand it.

Validity
The problem this presents is the risk that we score the quality of writing rather than the actual underpinning policy and its application.

This is called ‘validity’. Are we actually measuring what we think we are measuring?

This is not the fault of the proposal writer. However, it does undermine the use of documentary evaluation as an indicator of capability.

Pass or f-AI-l?
The second trend is the increasing use of artificial intelligence to support proposal writing.

What I think will happen is that management frameworks and other policy documents will experience an accelerating convergence of content as well as style.

Policy frameworks will become increasingly indistinguishable from each other.

If they’re not a discriminator and they’re increasingly homogenous, why do we read them? Why do we score them?

The answer is, they are one part of the evaluation process and provide a standard against which actual performance may be compared.

So, perhaps that is the point. The focus of evaluation can migrate from reviewing policy documents to focusing on in-the-field performance, results, outputs and outcomes.

So, we can save $0.3 billion?
Possibly. Having learned about machine learning and large language models, I think AI solutions could be trained to read and score these frameworks.
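As a toy illustration only (not a recommendation of any particular tool), here is a minimal sketch that scores submitted frameworks by their textual similarity to a reference rubric using TF-IDF rather than a trained language model. The rubric and the bidder extracts are made up; a real solution would use far more capable models and expert-built rubrics, but it shows the shape of the idea: consistent, automated scoring of type one content.

```python
# Toy sketch: score submitted risk management frameworks by their similarity to a
# reference rubric. A real system would use a far more capable language model;
# this only illustrates automated, consistent scoring of 'type one' content.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Reference rubric: topics an expert panel might expect a framework to cover (illustrative).
rubric = (
    "risk identification assessment treatment monitoring review "
    "ISO 31000 risk appetite governance roles responsibilities reporting"
)

# Hypothetical framework extracts from two bidders.
submissions = {
    "Bidder A": "Our framework aligns to ISO 31000, covering risk identification, "
                "assessment, treatment and ongoing monitoring with clear governance roles.",
    "Bidder B": "We value safety and quality and always act with integrity.",
}

vectoriser = TfidfVectorizer(stop_words="english")
matrix = vectoriser.fit_transform([rubric, *submissions.values()])

reference = matrix[0:1]
for i, name in enumerate(submissions, start=1):
    score = cosine_similarity(reference, matrix[i:i + 1])[0, 0]
    print(f"{name}: {score * 10:.1f} / 10")
```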

Rather than replacing staff, I think the opportunity is for AI-assisted evaluation to relieve evaluators of the cruel and unusual torment of reading interminably boring tracts of text from multiple bidders and instead focus on evaluating actual results.

This may expose a gap between what the proposal claims and actual results.

I guess Caveat Emptor still applies, doesn’t it?


About the author:

Based in Melbourne, Paul brings more than 40 years of experience to the table, having worked in more than 20 countries as a procurement professional, consultant and trainer.

An internationally acknowledged expert on procurement and negotiation, Paul has worked on a diverse range of projects from buying a billion dollars of gold in an off-market transaction to negotiating with an airline on behalf of a government that owned them.

He’s also the chair of the judges for PASA’s coveted NIGELS Awards and is a Fellow of CIPS (FCIPS).