AI Platform · Enterprise

GenAI Agent Workflows for Deal Intelligence & Research Automation

Designed and deployed GenAI-driven agent workflows within AvaSense to automate market news classification, structured data extraction from contracts, and dashboard reporting — cutting research and reporting time by 35%.

Role

Senior Product Manager · Avasant

Year

2023–2024

The Problem

Procurement analysts and strategy teams were spending significant time manually classifying market intelligence, extracting structured data from unstructured contracts, and compiling reports for leadership. This was error-prone, slow, and kept high-value team members locked in low-value work.

My Role

I owned the product strategy, use-case definition, and interaction design for the AI agent workflows. I worked directly with the AI engineering team on prompting strategy, evaluation criteria, and the UX model for surfacing AI outputs. The AI team owned model selection and infrastructure; I owned the 'what' and the 'how it feels to use it'.

Discovery & Research

Interviewed 8 procurement analysts across 3 enterprise clients to map how they spent their research and reporting time. Found that 60%+ of analyst hours were consumed by manual classification and data lookup — not analysis. Ran workflow shadowing sessions to understand what precision vs. recall trade-offs were acceptable for automated outputs.

The Approach

Led the integration of GenAI agents into two critical workflows: (1) real-time deal intelligence — surfacing relevant market signals to support supplier negotiations, and (2) contract data interpretation — extracting key obligations, dates, and risk clauses from unstructured contract documents. Worked with the AI engineering team to define prompting strategies, evaluation criteria, and fallback mechanisms for low-confidence extractions. Designed the UX around a 'trust but verify' model — surfacing AI outputs with confidence indicators and easy human override.

Key Decisions

01

Adopted a 'trust but verify' output model rather than full automation — surfacing AI results with confidence indicators and easy human override. This was critical for enterprise users who needed to be accountable for the outputs they acted on.
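
To make the model concrete, the sketch below shows roughly what 'trust but verify' implies in code; the field names and the 0.85 threshold are illustrative stand-ins, not the production schema or values.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative threshold and field names; the production values and schema differed.
REVIEW_THRESHOLD = 0.85

@dataclass
class ExtractionResult:
    field_name: str            # e.g. "termination_date"
    value: str                 # what the model extracted
    confidence: float          # model-reported confidence, 0.0 to 1.0
    needs_review: bool = False
    analyst_override: Optional[str] = None

def route_for_review(result: ExtractionResult) -> ExtractionResult:
    """Low-confidence extractions are flagged for an analyst instead of auto-accepted."""
    result.needs_review = result.confidence < REVIEW_THRESHOLD
    return result

def apply_override(result: ExtractionResult, corrected_value: str) -> ExtractionResult:
    """The analyst's value wins; the original model value is kept for the feedback loop."""
    result.analyst_override = corrected_value
    return result
```

The point of the threshold is accountability: an analyst only signs off on what they have actually seen, and everything below the bar is guaranteed to cross their desk.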

02

Chose modular prompt chains over a single monolithic prompt — enabling individual workflow steps to be reconfigured and improved without breaking the full pipeline.
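
A rough illustration of what 'modular prompt chains' meant in practice. The step names and the call_llm stand-in are mine for this sketch, not the platform's actual interfaces; each step owns its own prompt and writes one result back into a shared context.

```python
from typing import Callable

# Stand-in for the platform's model client; returns a canned string so the sketch runs.
def call_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:48]}...]"

# Each step is a small, independently testable unit.
Step = Callable[[dict], dict]

def classify_signal(ctx: dict) -> dict:
    ctx["category"] = call_llm(f"Classify this market signal: {ctx['raw_text']}")
    return ctx

def extract_entities(ctx: dict) -> dict:
    ctx["entities"] = call_llm(f"List the suppliers and deal terms in: {ctx['raw_text']}")
    return ctx

def summarise_for_dashboard(ctx: dict) -> dict:
    ctx["summary"] = call_llm(
        f"Summarise for a procurement dashboard: {ctx['category']} | {ctx['entities']}"
    )
    return ctx

def run_chain(steps: list[Step], ctx: dict) -> dict:
    # Swapping, reordering, or re-prompting one step never touches the others.
    for step in steps:
        ctx = step(ctx)
    return ctx

news_chain = [classify_signal, extract_entities, summarise_for_dashboard]
result = run_chain(news_chain, {"raw_text": "Supplier announces a mid-contract price increase"})
```

Because each step is just a function over the running context, a badly behaving classification prompt could be tuned or replaced without retesting extraction or summarisation.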

Architecture / System Design

Designed the agent system around modular, composable prompt chains that could be reconfigured per workflow without engineering rework. Established a feedback loop where analyst corrections on AI outputs were captured and used to refine model behavior. Built classification pipelines for market news ingestion using structured output schemas, and deployed a queue-based extraction system for async contract processing — ensuring the platform remained responsive for interactive use cases.
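
To make the last two pieces concrete, here is a stripped-down sketch of a classification output schema and the async extraction queue. The names, fields, and worker body are illustrative; the production system ran on the platform's own services rather than a single in-process thread.

```python
import queue
import threading
from dataclasses import dataclass

# Illustrative structured-output schema for a classified news item; the real field set was richer.
@dataclass
class NewsClassification:
    article_id: str
    category: str      # e.g. "supplier_risk", "pricing", "regulatory"
    relevance: float   # 0.0 to 1.0, used to rank items on the dashboard
    rationale: str     # short model-generated justification shown to the analyst

# Long-running contract extractions drain from a queue, so the interactive
# classification path never waits behind a long document.
contract_queue = queue.Queue()

def extraction_worker() -> None:
    while True:
        document_id = contract_queue.get()
        try:
            # run the extraction chain for document_id and persist the structured result
            pass
        finally:
            contract_queue.task_done()

threading.Thread(target=extraction_worker, daemon=True).start()
contract_queue.put("contract-2024-0117")  # enqueue without blocking the caller
```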

What I'd Do Differently

I'd have built the analyst correction feedback loop into the MVP rather than as a follow-on feature. The corrections analysts made in early weeks contained the richest signal for model improvement, and we lost some of that data before the loop was operational.
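
The correction record itself did not need to be complicated, which is why the delay stung. A sketch of the kind of signal I mean, with illustrative fields and a CSV sink standing in for whatever durable store the platform would actually use:

```python
import csv
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

# Illustrative shape of the correction signal lost in the early weeks.
@dataclass
class AnalystCorrection:
    document_id: str
    field_name: str
    model_value: str
    corrected_value: str
    model_confidence: float
    corrected_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_correction(correction: AnalystCorrection, path: str = "corrections.csv") -> None:
    """Append every analyst override so it can feed prompt and model tuning later."""
    row = asdict(correction)
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=row.keys())
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow(row)
```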

Outcomes

Improved analyst research productivity by 40%. Reduced research and reporting time by 35%. Automated market news classification and structured data extraction enabled analysts to shift from data gathering to strategic decision-making.