
AI Agent Skills at Scale: What Building 170 Skills Across 9 Domains Taught Me About Portability

The AI skills ecosystem is converging in theory and fragmenting in practice. A practical account of what works, what breaks, and what the zero-dependency constraint actually means.

12 min read · Mar 13, 2026

The skill worked perfectly in Claude Code. A marketing content auditor that checked brand voice consistency, flagged cliche phrases, and suggested improvements — tested across three client projects over two weeks. Then one of my engineers tried loading it in Codex CLI.

[Image: AI agent skills architecture showing a central SKILL.md file connected to nine domain categories with varying levels of portability | Generated with Gemini]

Note: AI tools helped refine this piece. The skills, testing, architecture decisions, and limitations described are from my direct experience building and maintaining the claude-skills repository.

Nothing broke loudly. The skill loaded. It ran. But the auto-triggering based on file type did not work. The progressive disclosure — loading reference files only when needed — fell back to dumping everything into context at once. Token consumption tripled.

The output quality dropped because the model was drowning in irrelevant instructions meant for other scenarios.
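To make the failure mode concrete, here is a minimal sketch of the difference between progressive disclosure and the eager fallback described above. The file names, skill contents, and the four-characters-per-token estimate are illustrative assumptions, not any runtime's actual loader:

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token (assumption)."""
    return max(1, len(text) // 4)

# A hypothetical skill bundle: core instructions plus optional reference files.
SKILL = {
    "SKILL.md": "Audit marketing copy for brand voice consistency.",
    "references/brand-voice.md": "Voice guide... " * 200,
    "references/cliches.md": "Cliche phrase list... " * 200,
    "references/seo.md": "SEO checklist... " * 200,
}

def load_progressive(skill: dict, needed: list[str]) -> str:
    """Progressive disclosure: core instructions plus only the
    reference files the current task actually needs."""
    parts = [skill["SKILL.md"]] + [skill[name] for name in needed]
    return "\n\n".join(parts)

def load_eager(skill: dict) -> str:
    """Fallback in a runtime without progressive disclosure:
    dump every file into context at once."""
    return "\n\n".join(skill.values())

progressive = load_progressive(SKILL, ["references/cliches.md"])
eager = load_eager(SKILL)

# The eager path costs several times more tokens for the same task.
ratio = estimate_tokens(eager) / estimate_tokens(progressive)
print(f"eager context is {ratio:.1f}x larger")
```

The exact multiple depends on how many reference files a skill carries; the point is that every file a runtime cannot defer is paid for on every invocation, whether or not the task needs it.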


Written by Reza Rezvani

CTO & AI builder based in Berlin. Writing about Claude Code, agentic workflows, and shipping real products with AI. 20+ years of turning ideas into products.

Responses (4)


Thanks for this, I was able to take your idea from the ORCHESTRATION.md and add it to a project I am working on. Helped my marketplace work more effectively.


Love the zero-dependency constraint, but I'm wondering if strict portability is worth the tradeoff vs leaning into provider-specific features for your top 10% of skills


I understand this as skill context bloat. I would call it neural drift. You could try a recursive skill distiller that learns and sculpts the text to its principal eigenvector. The models then see the skill as a concept and not a prompt. Here is…
