AI Agent for Proposal Generation
With open models such as Meta’s Llama 3 and DeepSeek, it has become possible to run LLMs (Large Language Models) off the cloud. This improves both cost effectiveness and data security, because the LLM can run within the security walls of the enterprise, on private and confidential data.
Here is our proof of concept on using AI Agents to generate proposals, improving productivity for Sales Managers and easing approval by the CEO.
As the understanding of new solutions and terminologies like AI Agents, Agentic AI, and Workflow Automation is still evolving, it will take some time for the industry to settle on a common, concrete understanding. For our purpose, we define an AI Agent as something that performs a specific task (like the one taken here: proposal generation). To accomplish the task, the AI Agent we have developed combines prompt engineering with RAG (Retrieval-Augmented Generation) and uses two design patterns, namely the LLM Router and the LLM Evaluator.
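To make the LLM Router pattern concrete, here is a minimal sketch: a lightweight classifier decides which specialised handler should serve each request. In production the classification step would itself be an LLM prompt; here it is a keyword stub so the sketch stays self-contained. All names are illustrative assumptions, not taken from the actual system.

```python
# Sketch of the LLM Router pattern: a classifier routes each request to a
# specialised handler (in practice, a dedicated prompt or model). The
# classify() stub stands in for an LLM call so the example runs offline.

def classify(request: str) -> str:
    """Route a user request to a task label (stub for an LLM classifier)."""
    text = request.lower()
    if "proposal" in text:
        return "generate_proposal"
    if "terms" in text or "payment" in text:
        return "lookup_terms"
    return "general_qa"

HANDLERS = {
    "generate_proposal": lambda req: f"[proposal pipeline] {req}",
    "lookup_terms": lambda req: f"[terms lookup] {req}",
    "general_qa": lambda req: f"[general Q&A] {req}",
}

def route(request: str) -> str:
    """Dispatch the request to the handler chosen by the router."""
    return HANDLERS[classify(request)](request)

print(route("Draft a proposal for a 6-month analytics engagement"))
```

The same dispatch structure extends naturally: adding a new task type means registering one more handler, without touching the others.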
Solution Overview
The data basis for the solution is the set of pre-existing proposals. The core of the solution is simple RAG-based LLM prompting. The processed, cleaned, and meaningfully chunked data is stored in a vector database. When the user asks for a proposal to be generated, similar sources are extracted from the vector database and used to generate the proposal. The prototype is built with Ollama, using the Mistral model as the LLM and mxbai-embed-large for embeddings.
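The retrieval step can be sketched as follows: embed the query, rank stored chunks by cosine similarity, and feed the top matches to the LLM as context. In the prototype the embeddings come from mxbai-embed-large via Ollama and generation uses Mistral; here fixed toy vectors stand in so the sketch runs without a server, and the store contents are invented for illustration.

```python
# Minimal sketch of RAG retrieval over a toy in-memory vector store.
# In production the vectors would come from mxbai-embed-large via Ollama
# and the final prompt would go to Mistral; those calls are shown as comments.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# In production: vec = ollama.embeddings(model="mxbai-embed-large", prompt=text)["embedding"]
STORE = [
    ("Pricing section of an earlier ERP proposal", [0.9, 0.1, 0.0]),
    ("Delivery schedule from a CRM rollout",       [0.1, 0.9, 0.1]),
    ("Terms & Conditions boilerplate",             [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(STORE, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

context = retrieve([0.8, 0.2, 0.1])
prompt = "Using only this context, draft the pricing section:\n" + "\n".join(context)
print(prompt)
# In production: ollama.chat(model="mistral", messages=[{"role": "user", "content": prompt}])
```

Because the retrieved chunks travel with the prompt, the same list can later be shown to the user as the references behind the generated text.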
Development process
It is very common to see cool demos using LLMs, but building a useful business solution involves mundane data cleaning and pre-processing work. The solution involved the following development steps.
- Some of the existing proposals referred to older versions of the solution components, and it would not be correct to create a new proposal with an old solution component. New content covering the current solution components was created for the LLM Evaluator to check against. Likewise, to ensure all stale information is replaced with current information, new content was created for Terms & Conditions, payment and delivery schedules, etc.
- The proposals were written in different formats. LLM-based tools were created to bring the different formats into a uniform structure.
- To chunk reasonably, the content must be semantically understood. Utilities were created to extract tables, images, and text separately. Line-wise, section-wise, and paragraph-wise chunking was done to preserve the essence of the data.
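Paragraph-wise chunking, one of the strategies listed above, can be sketched as splitting on blank lines and then merging consecutive paragraphs up to a size budget, so each chunk stays meaningful but bounded. The budget and the splitting rule here are illustrative assumptions, not the actual utilities.

```python
# Sketch of paragraph-wise chunking: split on blank lines, then greedily
# merge adjacent paragraphs until adding the next one would exceed the
# character budget. The 200-character default is an arbitrary example value.

def chunk_paragraphs(text: str, max_chars: int = 200) -> list[str]:
    """Split text into paragraph-based chunks of at most max_chars characters."""
    paragraphs = [p.strip() for p in text.split("\n\n") if p.strip()]
    chunks, current = [], ""
    for para in paragraphs:
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "Scope of work.\n\nPayment terms: net 30.\n\nDelivery in two phases."
print(chunk_paragraphs(doc, max_chars=40))
```

Section-wise and line-wise chunking follow the same shape, differing only in the split boundary (headings or newlines instead of blank lines).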
Realized Business Value
Before this solution, the Sales Manager had to search through many proposals manually and spend long hours copying, pasting, and editing to meaningfully reuse existing proposal content. With this solution, the user can get those things done with a simple prompt to the LLM, and only needs to proofread and make amendments. RAG also surfaces the actual references used in generating the proposal, which empowers the user to look at the base materials and effectively validate the generated proposal.
Next steps
While the proposal generation is effective, further development is needed to keep this solution current and make its operation much more effective. That is, when a new solution component supersedes an old one, it should be automatically detected and added to the system. The effectiveness of fine-tuning also remains to be evaluated: fine-tuning the base LLM with human- and agent-generated questions and answers is hypothesized to make the model perform more smoothly. Stay tuned!