
Case Study: Global Think Tank Analyst

TL;DR

A small, strict skill layer that turns fluent LLM geopolitical commentary into decision-ready intelligence: every memo must frame the decision, label its evidence, weigh actor incentives and options, name concrete watch-next indicators, and state its confidence.

Evidence

Metrics

No production-usage, adoption, or benchmark numbers are claimed.

Context / Constraint

LLMs are good at summarizing geopolitical events. They are weak at turning them into decision-ready intelligence.

The common failure mode: confident-sounding regional commentary, vague "monitor closely" advice, no decision frame, no actor incentives, no triggers, no evidence boundaries.

A skill layer needed to be small enough to attach to any agent and strict enough to actually change the output — without becoming a framework or runtime.
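As a rough illustration of the "small enough to attach to any agent" constraint, a skill of this shape can travel as a single instruction file prepended to an agent's system prompt. The sketch below is mine, not the repo's actual interface; the SKILL.md file name, the function, and the prompt layout are assumptions.

```python
# Minimal sketch of attaching a contract-style skill to an agent.
# Assumptions: the skill ships as one instruction file (here SKILL.md)
# and the agent accepts a plain system prompt. Names are illustrative.
from pathlib import Path


def build_system_prompt(base_prompt: str, skill_path: str = "SKILL.md") -> str:
    """Prepend the skill's output contract to an existing agent prompt."""
    skill_text = Path(skill_path).read_text(encoding="utf-8")
    # The skill is a layer, not a framework or runtime:
    # attaching it is plain string composition.
    return f"{skill_text}\n\n{base_prompt}"
```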

Problem

Most AI-generated strategic-risk analysis is fluent but decision-light. It rarely says what decision is being supported, separates facts from assessments, states confidence honestly, or names the indicators that would update the judgment.

That is fine for background reading. It is weak for compliance, risk committees, sanctions-exposure decisions, regulatory planning, or any operating decision that has to be defensible.

Actions

Built a compact skill that routes every response through a fixed memo contract (a hypothetical sketch of the contract follows this list):

Question / Decision / Audience / Time horizon / Evidence mode
Fact / Assessment / Assumption / Scenario / Unknown
Actor incentives and leverage
Options with trade-offs
Watch-next indicators (concrete, observable)
Confidence and key unknowns
What evidence would change the judgment
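One way to picture that contract is as a typed record the memo must fill in. The dataclass below is purely illustrative; it mirrors the section names above and is not code that ships with the skill.

```python
# Illustrative only: the memo contract as a typed record.
# Field names mirror the skill's required sections.
from dataclasses import dataclass, field


@dataclass
class MemoContract:
    # Decision frame
    question: str
    decision: str          # the decision this memo supports
    audience: str
    time_horizon: str
    evidence_mode: str     # e.g. "live-source-backed"

    # Evidence discipline: every claim lands in exactly one bucket
    facts: list[str] = field(default_factory=list)
    assessments: list[str] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    scenarios: list[str] = field(default_factory=list)
    unknowns: list[str] = field(default_factory=list)

    # Decision support
    actor_incentives: list[str] = field(default_factory=list)
    options_with_tradeoffs: list[str] = field(default_factory=list)
    watch_next_indicators: list[str] = field(default_factory=list)  # concrete, observable
    confidence: str = ""
    evidence_that_would_change_judgment: list[str] = field(default_factory=list)
```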

What it does now

Attaches to any capable agent and reshapes its output into a contract-shaped memo: a framed decision, labeled evidence, actor incentives, options with trade-offs, concrete watch-next indicators, and stated confidence with key unknowns.

What it is not

Not a framework, not a runtime, and not a retrieval or verification layer. Sourcing and fact-checking belong to a source-backed workflow or to Agenda Intelligence MD.

Portfolio context

Global Think Tank Analyst is the horizontal domain skill in a three-repo portfolio designed to compose with its two neighbors.

This repo does not duplicate either neighbor. Vertical depth lives in vertical-specialist repos; validation tooling lives in Agenda Intelligence MD.

Why this version is better

The skill is small enough to attach to any capable agent, and strict enough to change the shape of the output. The contract does not ask the model to sound smarter; it asks the model to frame the decision, label its evidence, and name what to watch next.

That is the part most generic geopolitical analysis misses.

Before / after (illustrative)

Excerpt from a live-source-backed example in the repo, condensed for this page. Full memo with sources, scenarios, options, and watch-next indicators: examples/live-source-backed-eu-ai-act-simplification.md. Evidence mode: live-source-backed.

User question: "What does the May 7, 2026 EU Council–Parliament provisional agreement on AI Act simplification (Omnibus VII) change for our compliance roadmap, and how should we adjust delivery over the next 6 months?"

Before — generic strategic-risk commentary:

The provisional agreement clarifies certain AI Act obligations and indicates a more pragmatic approach to compliance. Companies should monitor the formal adoption process, review their compliance roadmap, and adjust resourcing as needed.

Summarizes the news but does not support a decision. No frame, no evidence boundary, no scenarios, no triggers.

After, with the Global Think Tank Analyst skill attached: the memo opens by framing the decision (question, decision supported, audience, time horizon, evidence mode), separates facts from assessments and assumptions, walks through scenarios with actor incentives and options, and closes with concrete watch-next indicators, stated confidence, and the evidence that would change the judgment. The full contract-shaped memo is in the repo example linked above.

Note the division of labor: the skill does not retrieve sources or verify facts; that is the job of a source-backed workflow or Agenda Intelligence MD. It asks the agent to frame the decision, label its evidence, and name what would change the view.
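A hedged sketch of that division of labor, continuing the Python illustration above: retrieval and verification sit upstream, the skill is only a prompt layer, and the agent call sits downstream. Every function here is a placeholder I made up, not the portfolio's real API.

```python
# Hypothetical composition: verified retrieval upstream, the analyst
# skill as a prompt layer, the agent downstream. All names are stubs.

def fetch_verified_sources(question: str) -> str:
    """Stand-in for a source-backed workflow (the Agenda Intelligence MD role)."""
    return "verified source excerpts go here"


def run_agent(prompt: str) -> str:
    """Stand-in for whatever agent the skill is attached to."""
    return "contract-shaped memo comes back here"


def analyze(question: str, skill_text: str) -> str:
    sources = fetch_verified_sources(question)
    # The skill shapes reasoning over evidence it is handed;
    # it never retrieves or verifies anything itself.
    prompt = f"{skill_text}\n\nSources:\n{sources}\n\nQuestion: {question}"
    return run_agent(prompt)
```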

Tech stack

Relevance

This project demonstrates how I think about useful agent infrastructure: small reusable layers, explicit reasoning contracts, low context cost, honest evidence discipline, and outputs that improve decisions rather than just sounding polished. It composes cleanly with vertical specialist skills and a separate infrastructure layer instead of bundling everything into one repo.

Project links

Author: Vassiliy Lakhonin