Time saved: 1 ½ days of additional writing and reviewing – without the tool, the bid responses would have started from a blank piece of paper.
Cost saved: The customer avoided paying for additional days, reducing the fees by 1 ½ days.
Quality gained: Using the scoring and suggestions moved the user from hand-writing and reviewing the responses to automating the content generation and review process.
Bid details:
1. Bid Mobilisation and Content Foundation with Bid Bot
Bid Bot Content Automation: After receiving the bid documents, Bid Bot’s content automation tool was used to build foundational content by inputting the bid questions and customer requirements. This enabled quick generation of core content components addressing team roles, standards, delivery approaches, product and service methodology, and accreditations.
Structured Methodology with Generative AI: Once the foundational responses were in place, an additional external generative AI tool was used to expand on Bid Bot’s initial output, organising the content into a structured methodology.
2. Quality Scoring and Suggestions – Bid Bot Quality Scoring and Suggestion Tool
Each draft response was scored for quality using Bid Bot’s scoring tool. Below is a breakdown of how we did this:
Methodology: We focussed initially on the core ingredients – delivery approach, product and service details, people, and tools. This provided a solid base for the response and gave the tool enough context to suggest scorable content, enhance the existing text, and add more specifics around people, standards, and accreditations.
Value adds: This section has multiple sub-category indicators for value-add components, which helped us analyse the key win-theme messages and work them into the response, such as numerical cost and time savings percentages and resource continuity.
Evidence: This section has multiple sub-categories for helping with evidence. The team had delivered similar work for the same client before, so this was referenced in the responses. The tool’s suggestions allowed us to uplift the evidence score.
Question requirements: Breaking out the question requirements allowed us to analyse the individual elements of the question and align the response more closely to them.
Customer requirements: Breaking out the customer requirements allowed us to analyse the individual elements and add anything extra we believed the customer wanted called out, based on insights from a previous project. The response benefited from this, as it meant we could weave content about the restricted work environments throughout the answers.
Role personas: We used the Project Manager role persona to identify anything missing and any enhancements that could be made to the responses.
3. Refinement and Feedback Integration
Continuous Improvement via Bid Bot Suggestions: On Friday morning, final refinements were guided by Bid Bot’s scoring insights, ensuring every section scored as highly as possible against the platform’s metrics. This added layer of platform-driven analysis helped ensure the bid met high standards before final submission.
4. Outcome
The Bid Bot platform’s automated content and scoring tools facilitated high-quality, complex response generation within a highly compressed timeline. This platform-specific approach allowed for the integration of project methodology insights, continuity of experienced personnel, and comprehensive project management content.
Client testimonial:
“Thanks again for dropping onto the bid responses, it’s 100000x better than it would have been without your input! Our project manager with 30+ years’ experience had no comments to amend anything, great job!”
– Sales Director