Well, a detailed experiment with Procurement (Agents)! Here are my learnings
- Gaurav Sharma
- Oct 22
- 2 min read
I just got done with my most extensive procurement stack build to date! It is an ensemble of 30+ agents (if I use the fancy word) or 30+ specialized Python scripts/algorithms (pick whichever buzzword you like). These 30+ algorithms are use-case specific. Here are my learnings, spanning 5 months of dedicated effort.
1.) Use cases I targeted: finding cost efficiencies in ongoing sourcing events, spotting demand bundling with future requirement intakes, and finding early contract renewal opportunities. Agents created: 3 (initially, as I had only 3 data sources). But then I started adding more data sources, each with its own data structure.
2.) Then I created 1 agent for data cleansing and normalizing, and ended up creating one big meta master database (I only realized this approach after developing the data cleansing agent). Learning: when you are dealing with a universe of scattered data sources, it is best to establish a central master data management file and use it to normalize every data source (you will thank me later!). A simplified sketch of what I mean is below.
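Here is a minimal sketch of that normalization idea. The column names, aliases, and mapping are purely illustrative, not my actual master file, but the shape is the same: one canonical record per entity, and every new source gets mapped onto it.

```python
import pandas as pd

# Hypothetical central master data file: one canonical record per supplier,
# with the known aliases seen across the scattered data sources.
master = pd.DataFrame({
    "supplier_id": ["S001", "S002"],
    "canonical_name": ["Acme Industries", "Globex Corp"],
    "aliases": ["ACME IND.|Acme Inc|acme industries", "Globex|GLOBEX CORPORATION"],
})

# Build an alias -> canonical ID lookup from the master file
alias_to_id = {
    alias.strip().lower(): row.supplier_id
    for row in master.itertuples()
    for alias in row.aliases.split("|")
}

def normalize_source(df: pd.DataFrame, name_col: str) -> pd.DataFrame:
    """Map a raw data source's supplier names onto the central master IDs."""
    df = df.copy()
    df["supplier_id"] = df[name_col].str.strip().str.lower().map(alias_to_id)
    return df

# Every new data source goes through the same normalization step
raw_spend = pd.DataFrame({"vendor": ["ACME IND.", "Globex"], "amount": [1200.0, 800.0]})
print(normalize_source(raw_spend, "vendor"))
```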
3.) Here is one interesting learning. I wanted to handle missing data, but not by filling it with some random or average value (imputation, as data scientists put it). You see, procurement is a linear process, and you have to understand the sourcing event stage (are you awaiting proposals, negotiating technically, etc.?). So I started putting these "information sequences" in order per data source. Now I have a meta engine that establishes context for each data point: an RFP launch is placed earlier in the chain than a commercial evaluation, all done automatically. This has become my secret sauce for making the data sources "intelligent". Agents created: 1.
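A rough illustration of the "information sequence" idea. The stage names and ordering here are just an example, not my full engine, but they show how a missing value at an early stage is simply expected rather than something to impute.

```python
# Hypothetical ordering of sourcing-event stages: earlier stages get lower ranks,
# so an RFP launch is always placed before a commercial evaluation in the chain.
STAGE_ORDER = {
    "rfp_launch": 1,
    "proposals_received": 2,
    "technical_evaluation": 3,
    "commercial_evaluation": 4,
    "negotiation": 5,
    "award": 6,
}

def sequence_events(events: list[dict]) -> list[dict]:
    """Sort raw data points by their position in the procurement lifecycle,
    giving each record context instead of treating it as an isolated row."""
    return sorted(events, key=lambda e: STAGE_ORDER.get(e["stage"], 0))

events = [
    {"event": "EV-42", "stage": "commercial_evaluation", "value": 98000},
    {"event": "EV-42", "stage": "rfp_launch", "value": None},  # no price yet is expected here
]
for e in sequence_events(events):
    print(e["stage"], e["value"])
```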
4.) Then I wanted to bring "insights" and not just plain data. So I wrote 1 agent to read commercial outcomes (ranks, prices, etc.). This agent extracted the "outcomes" of a sourcing event, not just its data points.
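Roughly speaking, that agent condenses raw bid rows into a single result per event. A toy version (the field names are illustrative, not my actual schema):

```python
def extract_outcome(bids: list[dict]) -> dict:
    """Condense raw bid data points into the outcome of the sourcing event:
    who ranked where, the winning price, and the spread between bids."""
    ranked = sorted(bids, key=lambda b: b["price"])
    best, worst = ranked[0], ranked[-1]
    return {
        "winner": best["supplier"],
        "winning_price": best["price"],
        "price_spread_pct": round((worst["price"] - best["price"]) / best["price"] * 100, 1),
        "ranking": [b["supplier"] for b in ranked],
    }

bids = [
    {"supplier": "Acme Industries", "price": 101_500},
    {"supplier": "Globex Corp", "price": 97_200},
]
print(extract_outcome(bids))
```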
5.) We are already at 5+ agents (use-case specific ones). The next set of agents was related to capturing external news (I used the Google API).
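I won't share the exact setup for the news agents, but the shape is roughly this. I'm assuming the Google Custom Search JSON API here since the post doesn't say which Google API was used; the key, engine ID, and query string are placeholders.

```python
import requests

def fetch_supplier_news(supplier: str, api_key: str, search_engine_id: str) -> list[dict]:
    """Pull recent external mentions of a supplier via the Google Custom Search
    JSON API (the API key and search engine ID are placeholders)."""
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": api_key, "cx": search_engine_id,
                "q": f'"{supplier}" supply OR pricing news'},
        timeout=10,
    )
    resp.raise_for_status()
    return [
        {"title": item["title"], "link": item["link"], "snippet": item.get("snippet", "")}
        for item in resp.json().get("items", [])
    ]
```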
6.) Then came some more agents for fetching supplier spend patterns (not just raw spend data) and historical performance issues. And then I added even more data sources. This is where my number of agents/algorithms exploded. Thankfully, I had 1 mega data cleaning and master data agent to keep every data source honest! I think I added 12+ data sources.
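By "spend patterns (not just data alone)" I mean looking for trends, not totals. A simplified pandas sketch of that idea, with hypothetical column names and an arbitrary 25% spike threshold:

```python
import pandas as pd

def spend_pattern(spend: pd.DataFrame) -> pd.DataFrame:
    """Aggregate raw spend rows into per-supplier patterns: monthly totals
    plus a simple trend flag, rather than one lifetime number."""
    spend = spend.copy()
    spend["month"] = pd.to_datetime(spend["invoice_date"]).dt.to_period("M")
    monthly = spend.groupby(["supplier_id", "month"])["amount"].sum().reset_index()
    # Flag suppliers whose latest month sits well above their historical average
    summary = monthly.groupby("supplier_id")["amount"].agg(["mean", "last"]).reset_index()
    summary["spiking"] = summary["last"] > 1.25 * summary["mean"]
    return summary
```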
7.) I also added an agent for fishing out negotiation insights by reading all the data points and identifying pockets of opportunity. This proved to be the most complex one.
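At its core, this agent is a set of rules running over the enriched data points. A heavily simplified version of two such rules (the thresholds and field names are illustrative, not the real ones):

```python
def negotiation_opportunities(events: list[dict]) -> list[str]:
    """Scan enriched sourcing events and flag pockets of opportunity,
    e.g. a wide bid spread or spend drifting above the contracted rate."""
    insights = []
    for e in events:
        if e.get("price_spread_pct", 0) > 15:
            insights.append(f"{e['event']}: wide bid spread, push the runner-up for a revised offer")
        if e.get("actual_rate", 0) > 1.1 * e.get("contract_rate", float("inf")):
            insights.append(f"{e['event']}: paying above contract rate, raise it in the next review")
    return insights

print(negotiation_opportunities([
    {"event": "EV-42", "price_spread_pct": 18.3, "actual_rate": 105, "contract_rate": 100},
]))
```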
8.) The second-to-last agent stitches everything together in a sequence using fuzzy matching, because I can't maintain the mega master data forever for every value.
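For the stitching step, even standard-library fuzzy matching goes a long way. A minimal sketch using difflib; the 0.85 threshold is something you would tune per field, and the names are made up:

```python
from difflib import SequenceMatcher

def best_fuzzy_match(value: str, candidates: list[str], threshold: float = 0.85) -> str | None:
    """Match a raw value (e.g. a supplier or event name) against known
    candidates without maintaining an exhaustive master mapping forever."""
    def score(candidate: str) -> float:
        return SequenceMatcher(None, value.lower(), candidate.lower()).ratio()
    best = max(candidates, key=score)
    return best if score(best) >= threshold else None

print(best_fuzzy_match("ACME Industries Inc.", ["Acme Industries", "Globex Corp"]))
```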
9.) The last agent refreshes the backend data sources. Not everything is required in real time; in procurement, real-time visibility is not needed.
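The refresh agent simply runs each source on its own cadence. A bare-bones version of the idea, with made-up source names and intervals:

```python
from datetime import datetime, timedelta

# Hypothetical refresh cadence per backend source: procurement data rarely
# needs real-time updates, so most sources refresh daily or weekly.
REFRESH_EVERY = {
    "spend_cube": timedelta(days=1),
    "contract_register": timedelta(days=7),
    "external_news": timedelta(hours=6),
}
last_refreshed: dict[str, datetime] = {}

def refresh_due(source: str, now: datetime) -> bool:
    """Return True when a backend source is stale enough to re-pull."""
    last = last_refreshed.get(source)
    return last is None or now - last >= REFRESH_EVERY[source]

now = datetime.now()
for source in REFRESH_EVERY:
    if refresh_due(source, now):
        # the actual fetch for that source would go here
        last_refreshed[source] = now
        print(f"refreshed {source} at {now:%Y-%m-%d %H:%M}")
```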
Now, I will build cool stuff at Supernegotiate Labs!