Case Study: GrantFlow

TL;DR

Evidence

Project state (self-reported)

Context / Constraint

The next operator running a grant proposal cycle may be an AI agent, not a person clicking through a dashboard. That agent still needs operational controls: discovery, typed contracts, auth, idempotency, preflight gates, review checkpoints, audit events, and deterministic smoke tests.
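To make those controls concrete, here is a minimal sketch of what a typed, idempotent generation contract could look like. This is a sketch under assumptions: the Pydantic-style Python, the model names, the fields, and the template identifiers are all illustrative, not GrantFlow's actual schema.

```python
# Minimal sketch of an agent-facing typed contract. All names and
# fields here are illustrative assumptions, not GrantFlow's schema.
from enum import Enum
from uuid import UUID

from pydantic import BaseModel, Field


class DonorTemplate(str, Enum):
    EU_HORIZON = "eu_horizon"  # assumed template identifiers
    UN_OCHA = "un_ocha"


class GenerateProposalRequest(BaseModel):
    tenant_id: UUID                              # tenanted access
    donor_template: DonorTemplate                # selects preflight gates
    idempotency_key: str = Field(min_length=16)  # makes agent retries safe
    source_document_ids: list[UUID]              # grounding inputs


class GenerateProposalResponse(BaseModel):
    proposal_id: UUID
    review_state: str     # e.g. "pending_review": the human checkpoint
    audit_event_id: UUID  # every mutation emits a traceable audit event
```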

A single LLM endpoint with a "draft a proposal" prompt is not a workflow: it is an unbounded text generator without traceability, governance, or export-ready outputs.

Donor reviewers and audit teams need traceable evidence; agent runtimes need stable contracts and bounded retries; NGO operators need human checkpoints and review SLAs. All three have to live in one API.
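One illustration of how the agent-runtime requirement could be met is bounded, idempotent retries on the client side. The route and header name below are assumptions; the pattern (one idempotency key per logical request, capped attempts, backoff) is what matters.

```python
# Sketch of bounded, idempotent retries from an agent runtime.
# The route and header name are assumptions; the pattern is the point:
# one idempotency key per logical request, capped attempts, backoff.
import time
import uuid

import httpx

MAX_ATTEMPTS = 3  # bounded: the agent never retries forever


def generate_with_retries(client: httpx.Client, payload: dict) -> dict:
    key = str(uuid.uuid4())  # one key reused across every retry
    for attempt in range(MAX_ATTEMPTS):
        try:
            resp = client.post(
                "/v1/proposals:generate",  # assumed route
                json=payload,
                headers={"Idempotency-Key": key},
                timeout=30.0,
            )
            if resp.status_code < 500:
                # 4xx raises immediately (client errors are not retryable);
                # 2xx returns. A server that dedupes on the key returns the
                # same proposal for a retried success, not a duplicate.
                resp.raise_for_status()
                return resp.json()
        except httpx.TransportError:
            pass  # network fault: safe to retry with the same key
        time.sleep(2 ** attempt)  # exponential backoff between attempts
    raise RuntimeError("generation failed after bounded retries")
```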

Problem

Most agent-assisted proposal workflows are wrappers around a chat model. They produce fluent text but struggle with the operational shape of real grant work: tenanted access, idempotent generation across retries, donor-specific preflight gates, structured review states, audit events, grounding inspection, and exportable evidence packs.
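As a sketch of what "structured review states" means in practice, the review flow can be modeled as an explicit state machine rather than a free-text status field. The states and transitions below are assumptions about a plausible grant review flow, not GrantFlow's actual model.

```python
# Illustrative state machine for proposal review. States and
# transitions are assumptions, not GrantFlow's actual model.
from enum import Enum


class ReviewState(str, Enum):
    DRAFT = "draft"
    PREFLIGHT_FAILED = "preflight_failed"  # donor-specific gates
    PENDING_REVIEW = "pending_review"      # human checkpoint
    CHANGES_REQUESTED = "changes_requested"
    APPROVED = "approved"
    EXPORTED = "exported"                  # evidence pack produced


# Only these transitions are legal; anything else is rejected, and the
# rejection itself can be recorded as an audit event.
ALLOWED = {
    ReviewState.DRAFT: {ReviewState.PREFLIGHT_FAILED, ReviewState.PENDING_REVIEW},
    ReviewState.PREFLIGHT_FAILED: {ReviewState.DRAFT},
    ReviewState.PENDING_REVIEW: {ReviewState.CHANGES_REQUESTED, ReviewState.APPROVED},
    ReviewState.CHANGES_REQUESTED: {ReviewState.DRAFT},
    ReviewState.APPROVED: {ReviewState.EXPORTED},
}


def transition(current: ReviewState, target: ReviewState) -> ReviewState:
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target
```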

Buyers cannot ship that into an EU or UN review process. Agent runtimes cannot orchestrate it reliably. Operations teams cannot audit it after the fact.

Actions

What it does now

What it is not

Tech stack

Companion projects

Relevance

This project demonstrates how I think about practical infrastructure for agent-driven nonprofit operations: typed contracts before chat UI, governed credentials before "trust the agent", deterministic generation before clever prompting, HITL and audit events before "just ship it", and exports that hold up in front of an EU or UN reviewer.
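A hedged sketch of the "deterministic generation before clever prompting" principle: pin every input that affects output and fingerprint them, so a generation can be replayed and checked during an audit. The field names below are illustrative assumptions, not GrantFlow's implementation.

```python
# Sketch: fingerprint all generation inputs so the same request can be
# replayed and verified. Field names are illustrative assumptions.
import hashlib
import json


def generation_fingerprint(
    template_id: str,
    template_version: str,
    source_doc_hashes: list[str],
    model: str,
    temperature: float = 0.0,  # pinned: no sampling variance
    seed: int = 0,             # pinned where the provider supports it
) -> str:
    payload = {
        "template_id": template_id,
        "template_version": template_version,
        "source_doc_hashes": sorted(source_doc_hashes),
        "model": model,
        "temperature": temperature,
        "seed": seed,
    }
    # Canonical JSON (sorted keys, no whitespace) so the hash is stable.
    canonical = json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Storing this fingerprint alongside the audit event is one way an exported evidence pack could tie a proposal back to exactly the inputs that produced it.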

The honest scope is also part of the design: customer pilot data stays out of the public repository, no production adoption is claimed, and the README's "Buyer Proof" section names the strongest current donor-template paths rather than customer references.

Project links

Author: Vassiliy Lakhonin