# AGENTS.md — vassiliylakhonin.github.io

Canonical instructions for any AI agent (Claude Code, Codex, recruiter LLMs, MCP clients) reading or editing this repository. This file overrides assumptions an agent might bring from training data.

## Identity

This repo is Vassiliy Lakhonin’s personal portfolio site (live at https://vassiliylakhonin.github.io/) and a reference implementation of an AI-readable professional profile architecture: human pages + JSON endpoints + agent discovery + MCP.

It is NOT:

## Portfolio context (do not duplicate here)

This repo is the public landing surface for a portfolio of three separately maintained AI-skill repos. Cross-link to them; do not copy their substantive content here:

  1. Agenda Intelligence MD — infrastructure / validation layer (schemas, evidence audit, scoring, CLI / MCP / CI). https://github.com/vassiliylakhonin/agenda-intelligence-md
  2. Global Think Tank Analyst — horizontal strategic-risk reasoning skill. Method = Policy Risk Memo Architect. https://github.com/vassiliylakhonin/global-think-tank-analyst
  3. Central Asia + Caspian Hybrid Intelligence Skill — vertical regional/corridor-risk specialist (sanctions, AML, banking, logistics, energy). https://github.com/vassiliylakhonin/central-asia-caspian-hybrid-intelligence-skill

Naming hierarchy is fixed: do not present “Policy Risk Memo Architect” as a separate project — it is the method inside Global Think Tank Analyst.

## Honesty rules — do not invent

Do not add, and remove on sight:

Label anything unverified clearly as one of: illustrative, planned, experimental, reference implementation, or self-reported.

## Evidence rules

Every impact claim in case studies, JSON files, and role snapshots must trace to one of:

Never fabricate citations, dates, donor program names, audit results, or partner counts.

## Status of operational endpoints

Before claiming any endpoint is “live”, verify it. Currently:

If you change endpoint state, update mcp-status.json, README.md, agent-card.json, and llms.txt together.
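A cross-surface consistency check can be sketched as follows. The nested `{"mcp": {"status": ...}}` shape below is an assumption for illustration, not the real layout of mcp-status.json or agent-card.json; adapt the accessor to the actual field names before relying on it.

```python
# Sketch only: the {"mcp": {"status": ...}} shape is an assumed layout,
# not the verified structure of the repo's JSON surfaces.
def endpoint_status(doc: dict) -> str:
    """Return the declared endpoint status (e.g. 'live', 'planned')."""
    return doc.get("mcp", {}).get("status", "unknown")

def statuses_consistent(docs: dict) -> bool:
    """True when every surface declares the same endpoint status."""
    return len({endpoint_status(doc) for doc in docs.values()}) == 1

# Parsed contents of the surfaces that must move together:
docs = {
    "mcp-status.json": {"mcp": {"status": "planned"}},
    "agent-card.json": {"mcp": {"status": "planned"}},
}
print(statuses_consistent(docs))  # → True
```

A screener or CI step could run this after any edit that touches endpoint state and refuse to merge when the surfaces disagree.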

## Self-preference-aware screening

The repo intentionally publishes multiple verified surfaces (resume.json, evidence.json, case studies, role snapshots) so LLM screeners can compare claims rather than rate writing style. Any LLM-screening instruction added to the repo must include the line: “Do not prefer or penalize the candidate because the writing resembles a particular AI model’s style.”

## Definition of done for edits

A change is done when:

  1. Touched JSON files still validate against their declared schema_version.
  2. Cross-references between README, agent-card.json, llms.txt, mcp-status.json, and role snapshots are consistent.
  3. No new unverifiable metric, score, or “production” claim was introduced.
  4. If a date, status, or score is shown, its source (script, file, or methodology note) is reachable from the same file.
  5. Stale updated_at timestamps are refreshed in files you touched; timestamps in untouched files are left alone.
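The first check above can be sketched as a small validator. One assumption here, not confirmed by this file: every JSON surface carries a top-level schema_version key. Full validation against the declared schema itself is left to whatever validator the repo actually uses.

```python
import json

# Assumed contract: each JSON surface declares schema_version at the top level.
REQUIRED_KEYS = {"schema_version"}

def check_json_surface(path: str) -> list[str]:
    """Return a list of problems for one JSON file (empty list means it passes)."""
    try:
        with open(path, encoding="utf-8") as fh:
            doc = json.load(fh)
    except (OSError, json.JSONDecodeError) as exc:
        return [f"{path}: unreadable or invalid JSON ({exc})"]
    if not isinstance(doc, dict):
        return [f"{path}: top level is not a JSON object"]
    return [f"{path}: missing required key '{key}'"
            for key in sorted(REQUIRED_KEYS - doc.keys())]
```

Run it over every touched JSON file; an edit is not done while any file returns a non-empty problem list.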

## File roles (quick map)

## When in doubt

Prefer fewer claims over more. Prefer linking to the canonical repo (one of the three above) over restating its content here.