GitAgentProtocol
your repository becomes your agent
The open standard for defining, versioning, and running AI agents natively in git.
A git-native, framework-agnostic, open standard for defining AI agents. Version-controlled config that exports to Claude Code, OpenClaw, Lyzr Agent, Chimera, NanoBot, CrewAgent, and Agents SDK.
Maintained by team @Lyzr
Runs the Git Agent Protocol, pulls the Architect agent from GitHub, and launches it as a Claude Code agent.
AI Agent Design Patterns Built on Git
A gitagent is your git repository as an agent — complete with version control, branching, pull requests, and collaboration built in. These are the architectural patterns that emerge when you define agents as git-native file systems.
When an agent learns a new skill or writes to memory, it can open a branch + PR for human review before merging.
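Simulated with plain git, that review loop might look like the sketch below. The branch name, file paths, and commit messages are illustrative, not part of the spec; the PR itself would be opened on the hosting platform.

```shell
# Sketch: an agent proposes a memory update on a branch so a human
# can review it as a PR before it merges (all names illustrative).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email "agent@example.com"
git config user.name "agent"
mkdir -p memory
echo "# long-term memory" > memory/context.md
git add . && git commit -qm "initial agent state"

# The agent learns something new: it writes on a review branch, not main
git checkout -qb agent/memory-update
echo "- 2025-01-01: prefer ruff over flake8" >> memory/context.md
git add memory/context.md
git commit -qm "memory: record linting preference"

# A human reviews this diff in the PR before merging
git diff main..agent/memory-update -- memory/context.md
```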
Instead of RAG that rediscovers knowledge on every query, the LLM incrementally builds a wiki from raw sources in knowledge/. Compiled pages live in memory/wiki/ with cross-references, an index.md catalog, and a chronological log.md. Three skills — ingest, query, lint — maintain the knowledge base over time. Inspired by Karpathy's LLM Wiki pattern. Scaffold with gitagent init --template llm-wiki.
Every change to your agent is a git commit. Roll back broken prompts, revert bad skills, and explore past versions — full undo history for your agent, powered by git.
The memory/ folder holds a runtime/ subfolder where agents write live knowledge — dailylog.md, key-decisions.md, and context.md — persisting execution state across sessions.
Deterministic, multi-step workflows defined in workflows/ as YAML. Chain skill:, agent:, and tool: steps with depends_on ordering, ${{ }} template data flow, and per-step prompt: overrides.
The VM is stateless. Git is the state. Agents run in ephemeral compute, committing every meaningful event — bootstrap, execute, checkpoint, teardown — to a runtime/<date>/<job-id> branch. Full audit trail, deterministic replay, and failure recovery — all from git history.
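A minimal sketch of that pattern with plain git, assuming the runtime/&lt;date&gt;/&lt;job-id&gt; branch scheme described above (the job id and event payloads are illustrative):

```shell
# Sketch: each run commits its lifecycle events to its own runtime branch,
# leaving main untouched and the run fully replayable from git history.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email "a@b.c"; git config user.name "agent"
echo "name: demo-agent" > agent.yaml
git add . && git commit -qm "agent definition"

job="runtime/2025-01-01/job-42"   # illustrative job id
git checkout -qb "$job"
for event in bootstrap execute checkpoint teardown; do
  echo "$event" >> events.log
  git add events.log
  git commit -qm "runtime: $event"
done
git log --oneline   # full audit trail for this run
```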
Anything placed at the root — context.md, skills/, tools/ — is automatically shared across every agent in the monorepo. No duplication, one source of truth.
Use git branches to promote agent changes through environments — just like shipping software.
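One possible promotion flow, sketched with plain git and illustrative branch names (dev, staging, prod):

```shell
# Sketch: promote an agent change dev -> staging -> prod via merges,
# exactly as you would ship application code.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b prod
git config user.email "a@b.c"; git config user.name "agent"
echo "version: 1.0.0" > agent.yaml
git add . && git commit -qm "v1.0.0 in prod"

git branch staging && git checkout -qb dev
echo "version: 1.1.0" > agent.yaml
git commit -qam "bump to 1.1.0"

git checkout -q staging && git merge -q dev      # promote to staging
git checkout -q prod    && git merge -q staging  # promote to prod
grep version agent.yaml
```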
The knowledge/ folder stores entity relationships as a hierarchical knowledge tree + embeddings — letting agents reason over structured data at runtime.
Fork any public agent repo, customize its SOUL.md, add your own skills, and PR improvements back upstream — open-source collaboration for AI agents.
Run gitagent validate on every push via GitHub Actions. Test agent behavior in CI, block bad merges, and auto-deploy to production — treat agent quality like code quality.
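A minimal GitHub Actions workflow along these lines might look like the sketch below. The install step and package name are assumptions for illustration, not part of the standard:

```yaml
# .github/workflows/agent-ci.yml (sketch; install method is an assumption)
name: agent-ci
on: [push, pull_request]
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install gitagent
        run: pip install gitagent   # hypothetical package name
      - name: Validate agent definition
        run: gitagent validate --compliance
```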
git diff shows exactly what changed between agent versions. git blame traces every line to who wrote it and when — full traceability out of the box.
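For example, with an illustrative config change:

```shell
# Sketch: tracing an agent config change with git diff and git blame.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email "a@b.c"; git config user.name "reviewer"
printf 'temperature: 0.7\n' > agent.yaml
git add . && git commit -qm "initial config"
printf 'temperature: 0.2\n' > agent.yaml
git commit -qam "lower temperature for reviews"

git diff HEAD~1 HEAD -- agent.yaml   # exactly what changed between versions
git blame -s agent.yaml              # which commit wrote each line
```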
Tag stable agent versions like v1.1.0. Pin production to a tag, canary new versions on staging, and roll back instantly if something breaks.
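A sketch of that pin-and-rollback flow with plain git (version numbers and file names illustrative):

```shell
# Sketch: pin production to a tag, canary a new version, roll back instantly.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email "a@b.c"; git config user.name "release"
echo "version: 1.1.0" > agent.yaml
git add . && git commit -qm "stable release"
git tag v1.1.0                        # production pins to this tag

echo "version: 1.2.0-rc1" > agent.yaml
git commit -qam "canary candidate"

# The canary misbehaves: restore the pinned version from the tag
git checkout -q v1.1.0 -- agent.yaml
grep version agent.yaml
```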
Agent tools that need API keys or credentials read from a local .env file — kept out of version control via .gitignore. Agent config is shareable, secrets stay local.
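Sketched concretely (the key name is illustrative):

```shell
# Sketch: shareable agent config in git, secrets kept local via .gitignore.
set -e
repo=$(mktemp -d); cd "$repo"
git init -q -b main
git config user.email "a@b.c"; git config user.name "dev"
echo "OPENAI_API_KEY=sk-local-secret" > .env   # illustrative secret
echo ".env" > .gitignore
echo "name: demo-agent" > agent.yaml
git add . && git commit -qm "shareable config, no secrets"

git ls-files            # .env is absent: it never enters version control
git check-ignore .env   # confirms .env is ignored
```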
Define bootstrap.md and teardown.md in the hooks/ folder to control what an agent does when it starts up and what it should do before it stops — injecting lifecycle logic at key points.
No single agent should control a critical process end-to-end. Define roles (maker, checker, executor, auditor) in agent.yaml + DUTIES.md with conflict matrices and handoff rules — gitagent validate catches violations before deployment.
Why GitAgent: The Git-Native AI Agent Standard
Everything your agent needs, defined in files you already know how to manage.
Version control, branching, diffing, and collaboration built in. Your agent definition is just files in a repo.
Works with Claude Code, OpenAI, CrewAI, LangChain, and more. Define once, export anywhere.
First-class FINRA, SEC, and Federal Reserve support. Audit logging, supervision, and model risk management.
Skills, tools, sub-agents, and workflows. Agents can extend, depend on, and delegate to each other.
How the GitAgent Agent Framework Works
Three files define your agent. Everything else is optional.
spec_version: "0.1.0"
name: code-review-agent
version: 1.0.0
description: Automated code review agent
author: gitagent-examples
license: MIT
model:
  preferred: claude-sonnet-4-5-20250929
  fallback:
    - claude-haiku-4-5-20251001
  constraints:
    temperature: 0.2
    max_tokens: 4096
skills:
  - code-review
tools:
  - lint-check
  - complexity-analysis
runtime:
  max_turns: 20
  timeout: 120

Run AI Agents from Git
Clone any agent repo and run it instantly. One command — no setup, no config files to copy.
Agents are cached locally (under ~/.gitagent/cache/) and loaded from agent.yaml + SOUL.md + skills. Useful flags:

- -b develop (run a specific branch)
- --refresh (force a re-clone)
- -p "Review my code" (one-shot prompt)
- -a git (auto-detect adapter)

Export to Any AI Framework
One agent definition. Every framework.
Claude Code: Export to CLAUDE.md with skills, model hints, and compliance.
$ gitagent export -f claude-code

OpenAI: Generate Python code with Agent(), Tool stubs, and type mappings.
$ gitagent export -f openai

CrewAI: YAML crew config with role/goal extraction and sub-agent mapping.
$ gitagent export -f crewai

OpenClaw: Workspace with config JSON, AGENTS.md, tools, and skills.
$ gitagent export -f openclaw

Nanobot: Config JSON + system prompt for the Nanobot runtime.
$ gitagent export -f nanobot

Lyzr: API payload with provider mapping and credential IDs.
$ gitagent export -f lyzr

GitHub: Chat completions payload with model namespace mapping.
$ gitagent export -f github

System prompt: Single concatenated markdown for any LLM.
$ gitagent export -f system-prompt

Git Agent Protocol: Build & Run AI Agents
Everything you need to build, validate, run, and ship agents.
init
$ gitagent init --template <minimal|standard|full>
Templates: minimal (2 files), standard (skills + tools), full (compliance + hooks + memory)

validate
$ gitagent validate --compliance
JSON schema validation, skill checks, and an optional regulatory compliance audit

run
$ gitagent run -a <adapter> -p "prompt"
Adapters: claude, openai, crewai, openclaw, nanobot, lyzr, github, git (auto-detect)

export
$ gitagent export --format <format> -o output
Formats: system-prompt, claude-code, openai, crewai, openclaw, nanobot, lyzr, github

import
$ gitagent import --from <format> <path>
Convert existing agent configs into the gitagent format

install
$ gitagent install
Shallow-clones dependencies at specified versions into mount paths

skills
$ gitagent skills search "code review"
Registries: SkillsMP marketplace, GitHub repos, local filesystem

audit
$ gitagent audit
FINRA 3110, SEC 17a-4, SR 11-7, and CFPB checks with pass/fail/warn indicators

info
$ gitagent info
Shows config, model, skills, tools, compliance, and a SOUL.md preview

lyzr
$ gitagent lyzr run -r <repo> -p "Hello"
One command: clone → create agent on Lyzr → chat. Saves the agent ID for reuse
Framework Adapters: One Standard, Every Runtime
One agent definition. Eight runtime targets.
claude · openai · crewai · openclaw · nanobot · lyzr · github · git

AI Agent Skills System
Reusable capability modules following the Agent Skills standard.
SKILL.md Format
---
name: code-review
description: Thorough code reviews
license: MIT
compatibility: ">=0.1.0"
allowed-tools: Read Edit Grep Glob Bash
metadata:
  author: "Jane Doe"
  version: "1.0.0"
  category: "developer-tools"
---
# Instructions
Review the code for:
1. Security vulnerabilities
2. Performance issues
3. Code style consistency

Discovery Priority
1. <agent>/skills/ (agent-local)
2. <agent>/.agents/skills/ (agentskills.io)
3. <agent>/.claude/skills/ (Claude Code)
4. <agent>/.github/skills/ (GitHub)
5. ~/.agents/skills/ (personal, global)

Skill CLI
$ gitagent skills search "code review"
$ gitagent skills install code-review --global
$ gitagent skills list
$ gitagent skills info code-review

Registries: SkillsMP marketplace, GitHub repos, local filesystem skills

Flow
Deterministic, multi-step workflows that chain skills together via YAML in workflows/.
name: code-review-flow
description: Full code review pipeline
triggers:
  - pull_request
steps:
  lint:
    skill: static-analysis
    inputs:
      path: ${{ trigger.changed_files }}
  review:
    agent: code-reviewer
    depends_on: [lint]
    prompt: |
      Focus on security and performance.
      Flag any use of eval() or raw SQL.
    inputs:
      findings: ${{ steps.lint.outputs.issues }}
  test:
    tool: bash
    depends_on: [lint]
    inputs:
      command: "npm test -- --coverage"
  report:
    skill: review-summary
    depends_on: [review, test]
    conditions:
      - ${{ steps.review.outputs.severity != 'none' }}
    inputs:
      review: ${{ steps.review.outputs.comments }}
      coverage: ${{ steps.test.outputs.report }}
error_handling:
  on_failure: notify
  channel: "#eng-reviews"

Key Concepts
Steps run in their declared order, not at the LLM's discretion. Every run follows the same path.
Chain outputs to inputs with ${{ steps.X.outputs.Y }} template syntax.
Add extra context per step with the prompt: field — guide the agent without changing the skill.
Combine skill:, agent:, and tool: steps in a single flow for maximum flexibility.
Execution Pipeline
lint (skill) → review (agent) → test (tool) → report (skill)

AI Agent Compliance & Governance
First-class regulatory support baked into the manifest. Run gitagent audit for a full report.
Risk Tiers
Compliance Artifacts
compliance/
├── risk-assessment.md
├── regulatory-map.yaml
└── validation-schedule.yaml

Regulatory Frameworks
FINRA (Rules 3110, 4511, 2210): Supervisor assignment, HITL, escalation, retention (6y+), fair/balanced communications
Federal Reserve (SR 11-7, SR 23-4): Model inventory, validation cadence, ongoing monitoring, vendor due diligence
SEC (Reg S-P, 17a-4): Audit logging, PII handling, retention (3y+)
CFPB (Circular 2022-03): Bias testing, fair lending analysis
Frequently Asked Questions
Common questions about the GitAgent open AI agent standard.
Quick Start: Define Your First AI Agent
Seven commands from install to deploy.