📦 Skill is the new package format for AI
How reusable, markdown-based instructions are becoming the npm of AI agent development
Something interesting is happening in AI tooling. Across different platforms — OpenClaw, Claude, Cursor — I’m seeing the same pattern emerge: reusable, portable instructions packaged as markdown-based “skills.”
It reminds me of the early days of npm. Before package managers, we copied code between projects. No standards, no discovery, no versioning. Then npm came along and changed everything. Now we’re seeing the same transformation happen for AI agents.
Skills are becoming the package format for AI — and I think this standardization is going to unlock something powerful.
🤔 The Problem: Every Agent Starts From Zero
Working with AI agents today feels like web development before package managers. Each time you set up a new agent or switch platforms, you start from scratch. You write the same instructions again. You explain the same patterns. You recreate the same workflows.
I work with multiple AI tools daily — OpenClaw for personal automation, Claude for coding, Cursor for development. Each has its own way of handling context and instructions. The knowledge I build in one tool doesn’t transfer to another.
Here’s what I mean: I’ve spent months training my OpenClaw assistant to understand my real estate search criteria. Detailed instructions on scoring properties, connecting to various APIs, formatting notifications. It’s incredibly valuable knowledge — but it’s locked into one system.
The same frustration applies to development workflows. I have a systematic approach to code reviews, deployment processes, and debugging patterns. But when I switch from Claude to Cursor, I have to explain it all again.
The core issue: We’re building sophisticated AI workflows, but we can’t package, share, or reuse the knowledge that makes them effective.
🧠 The Insight: Skills as Portable Instructions
The solution is emerging organically across multiple platforms: skills — reusable, markdown-based packages that contain instructions, context, and workflows that AI agents can discover and use.
Think of it like this: instead of writing a library of functions, you’re writing a library of guidance. A skill packages up domain expertise in a format that any compatible AI agent can understand and apply.
The Anatomy of a Skill
Every skill I’ve encountered follows the same basic structure:
```md
---
name: property-search
description: Orchestrator for automated property search across France and Belgium
triggers: ["veille", "recherche bien", "property search", "nouveaux biens"]
---

# Property Search

Automated property search orchestrating all source skills...

## Search Criteria
[Detailed instructions for search parameters]

## Workflow
[Step-by-step process]

## Scripts
[Bundled automation code]
```
This format is remarkably consistent across platforms. OpenClaw calls them Skills, Claude calls them Agent Skills, and even Cursor rules follow similar patterns. The universal components are:
- YAML frontmatter — metadata that helps the agent discover when to use this skill
- Markdown instructions — human-readable guidance the agent follows
- Optional resources — scripts, templates, or reference data
- Triggers — keywords or patterns that activate the skill
It’s like a package.json for AI instructions.
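To make the anatomy concrete, here is a minimal sketch of how a loader might split a SKILL.md file into metadata and instructions. The parse_skill helper is hypothetical, and it uses simple string splitting rather than a real YAML parser — enough to show the shape of the format:

```python
# Hypothetical sketch: split a SKILL.md string into frontmatter metadata
# and a markdown body. A real loader would use a proper YAML parser.

def parse_skill(text: str) -> dict:
    """Split '---\\n<metadata>\\n---\\n<body>' into meta dict and body."""
    _, frontmatter, body = text.split("---", 2)
    meta = {}
    for line in frontmatter.strip().splitlines():
        key, _, value = line.partition(":")  # naive 'key: value' parsing
        meta[key.strip()] = value.strip()
    return {"meta": meta, "body": body.strip()}

skill = parse_skill("""---
name: property-search
description: Orchestrator for automated property search
---
# Property Search
Automated property search...""")

print(skill["meta"]["name"])          # property-search
print(skill["body"].splitlines()[0])  # # Property Search
```

The frontmatter gives the agent something cheap to index, while the body stays opaque until the skill is actually needed.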
📱 Real-World Examples: Skills in Action
Let me show you how this works in practice with examples from my own workspace.
Property Search Orchestration
I have a property-search skill that coordinates real estate hunting across multiple sources. Instead of explaining the same search criteria to different tools, I defined it once:
```md
## Search Criteria

| Criterion | France | Belgium |
|---|---|---|
| **Budget** | <= 150k EUR | <= 150k EUR |
| **Surface min** | 1.5 ha | 1.5 ha |
| **Types** | ruine, ferme à rénover, terrain constructible | ... |
| **Regions** | See detailed list below | Wallonie complete |
```
The skill bundles:
- Instructions — detailed search and scoring logic
- Scripts — Python scrapers for different property sites
- Triggers — ["veille", "recherche bien", "property search"]
When I type “veille immobiliere” to my assistant, it automatically loads this skill and runs the complete workflow. No need to re-explain what I’m looking for or how to score properties.
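The trigger match itself can be sketched in a few lines. This is an assumption about the mechanism — real platforms may use description matching or embeddings instead — and matches and normalize are hypothetical helpers. Accent stripping is included so "immobilière" and "immobiliere" both activate the skill:

```python
# Hypothetical sketch of trigger matching: does a user message activate
# a skill? Real agents may use smarter matching; this is plain substrings.
import unicodedata

def normalize(text: str) -> str:
    """Lowercase and strip accents so 'immobilière' matches 'immobiliere'."""
    decomposed = unicodedata.normalize("NFKD", text.lower())
    return "".join(c for c in decomposed if not unicodedata.combining(c))

def matches(message: str, triggers: list[str]) -> bool:
    """True if any trigger phrase appears in the normalized message."""
    msg = normalize(message)
    return any(normalize(t) in msg for t in triggers)

triggers = ["veille", "recherche bien", "property search", "nouveaux biens"]
print(matches("veille immobilière ce matin", triggers))  # True
print(matches("review this pull request", triggers))     # False
```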
Blog Writing Workflow
I have a write-blog-article skill that captures my entire editorial process:
```md
---
name: write-blog-article
description: Full blog article workflow for smartsdlc.dev (MDX, frontmatter, style)
triggers: [manual]
---
```
This skill contains:
- Style guidelines — voice, tone, and formatting rules
- Technical patterns — MDX structure, frontmatter schema
- Editorial workflow — research → structure → write → review
- Quality checklist — verification steps before publishing
Every time I write an article, the AI loads these instructions automatically. Consistent output without repeating guidance.
🔄 The Convergence: Different Tools, Same Pattern
What’s fascinating is how different AI platforms are converging on the same approach, even though they developed independently.
OpenClaw Skills
OpenClaw pioneered the SKILL.md format with YAML frontmatter. Skills live in .ai/skills/ directories and get automatically discovered by the agent. The metadata defines triggers, requirements, and installation instructions.
```
.ai/skills/property-search/
├── SKILL.md    # Main instructions
├── scripts/    # Python automation
└── data/       # Reference files
```
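Filesystem discovery can be sketched like this — a hypothetical discover_skills helper that assumes the OpenClaw-style .ai/skills/ layout, written against a temporary directory so it runs anywhere:

```python
# Sketch of filesystem-based skill discovery, assuming one directory per
# skill with a SKILL.md inside. discover_skills is a hypothetical helper.
import tempfile
from pathlib import Path

def discover_skills(root: Path) -> list[Path]:
    """Find every SKILL.md under root/.ai/skills/<name>/."""
    return sorted((root / ".ai" / "skills").glob("*/SKILL.md"))

with tempfile.TemporaryDirectory() as tmp:
    # Create a fake workspace with one skill.
    skill_dir = Path(tmp) / ".ai" / "skills" / "property-search"
    skill_dir.mkdir(parents=True)
    (skill_dir / "SKILL.md").write_text("---\nname: property-search\n---\n")

    found = discover_skills(Path(tmp))
    print([p.parent.name for p in found])  # ['property-search']
```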
Claude Agent Skills
Anthropic recently announced Agent Skills using nearly identical structure. Same YAML frontmatter, same markdown instructions, same progressive loading model.
The key insight from Claude’s approach: progressive disclosure. Only load the metadata initially, then pull in full instructions when triggered. This keeps context windows manageable while providing unlimited reference material.
Cursor Rules and CLAUDE.md
Even Cursor IDE has evolved toward this pattern with .cursor/ rules and project-specific AI instructions. The community has standardized on CLAUDE.md files for providing context to AI coding assistants.
The Emerging Standard
Despite being developed independently, these systems are remarkably similar:
| Feature | OpenClaw | Claude Skills | Cursor Rules |
|---|---|---|---|
| File format | SKILL.md | SKILL.md | .cursorrules |
| Metadata | YAML frontmatter | YAML frontmatter | Comment headers |
| Instructions | Markdown body | Markdown body | Markdown body |
| Discovery | Filesystem scanning | API/filesystem | Project scanning |
| Triggers | Keywords/patterns | Description matching | File patterns |
The convergence isn’t accidental. They’re all solving the same fundamental problem: how to package and reuse AI expertise.
🌐 ClawHub: The npm Registry for Skills
Just as npm made JavaScript packages discoverable and shareable, ClawHub is emerging as the registry for AI skills. You can search, install, and publish skills just like npm packages.
```bash
# Search for skills
clawhub search "property search"

# Install a skill
clawhub install property-search-france

# Publish your own
clawhub publish ./my-skill/
```
The parallel is striking:
- npm → JavaScript packages → reusable code
- ClawHub → AI skills → reusable instructions
The registry solves the same discovery and distribution problems that made npm essential for web development.
💡 Why This Matters: The Network Effects
Package formats succeed because of network effects. The more people use npm, the more valuable it becomes. The same dynamic is starting with skills.
Standardization Reduces Friction
When skills follow a consistent format, they become portable. A skill written for OpenClaw can work in Claude with minimal changes. This portability encourages investment in building high-quality skills.
Knowledge Becomes Composable
Skills can reference other skills, creating composition patterns. My property search skill uses multiple source-specific skills under the hood. Each focuses on one data source, but they combine into a powerful orchestrator.
Community Development
A standard format enables community contribution. Instead of everyone building property scrapers from scratch, we can share and improve common patterns. The best approaches win through usage and refinement.
Marketplace Potential
Standardized skills create marketplace opportunities. Just as developers sell npm packages or WordPress plugins, specialized AI skills could become a business model. Legal review skills, financial analysis skills, domain-specific expertise packaged for reuse.
🔧 Technical Architecture: How Skills Work
Under the hood, skills use a clever technical approach that makes them both human-readable and machine-executable.
Progressive Loading
The Claude team nailed this with their three-tier loading model:
Level 1: Metadata (always loaded)
- Just the YAML frontmatter with name/description
- Helps the AI decide when to use the skill
- Minimal context window impact
Level 2: Instructions (loaded when triggered)
- The markdown body with detailed guidance
- Loaded via filesystem reads when needed
- Moderate context usage
Level 3: Resources (loaded on demand)
- Scripts, templates, reference data
- Executed without loading into context
- Unlimited bundled content
This architecture solves the context window problem while providing unlimited depth.
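The first two tiers could be implemented along these lines — a sketch under the assumption that metadata is parsed at construction while the full body is read from disk only on first access. LazySkill is a hypothetical class, not a platform API:

```python
# Hypothetical sketch of progressive loading: Level 1 (metadata) loads
# eagerly and cheaply; Level 2 (the markdown body) only on first access.
import tempfile
from pathlib import Path

class LazySkill:
    def __init__(self, path: Path):
        self.path = path
        # Level 1: parse only the frontmatter up front.
        frontmatter = path.read_text().split("---", 2)[1]
        self.meta = {}
        for line in frontmatter.strip().splitlines():
            key, _, value = line.partition(":")
            self.meta[key.strip()] = value.strip()
        self._body = None  # Level 2: deliberately not loaded yet

    @property
    def body(self) -> str:
        if self._body is None:
            # Level 2: pull full instructions only when the skill triggers.
            self._body = self.path.read_text().split("---", 2)[2].strip()
        return self._body

with tempfile.TemporaryDirectory() as tmp:
    p = Path(tmp) / "SKILL.md"
    p.write_text("---\nname: code-review\n---\n# Checklist\n...")
    skill = LazySkill(p)
    print(skill.meta["name"])           # code-review  (always loaded)
    print(skill.body.splitlines()[0])   # # Checklist  (loaded only now)
```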
Human-Readable, Machine-Executable
The genius of markdown-based skills is that they’re simultaneously:
- Human-readable — you can browse, edit, and understand them
- Machine-discoverable — YAML metadata enables search and filtering
- Version-controllable — they’re just text files in git
- Cross-platform — work with any tool that can read markdown
Compare this to traditional API integrations or config files. Skills are much more approachable and maintainable.
No Runtime Required
Unlike software packages that need specific language runtimes, skills are just instructions. They work with any AI agent that understands the format. This universality is key to adoption.
🚀 Practical Implementation: Building Your First Skill
Let me walk through creating a simple but useful skill to show how this works in practice.
Example: Code Review Skill
Here’s a skill I use for consistent code reviews:
```md
---
name: code-review-checklist
description: Systematic code review focusing on maintainability, performance, and security
triggers: ["code review", "pr review", "review code"]
---

# Code Review Checklist

## Pre-Review Setup

1. Understand the context: What problem is this solving?
2. Check the scope: Is this focused or trying to do too much?
3. Review tests first: Do they clearly express the intended behavior?

## Review Categories

### Architecture & Design
- [ ] Does this fit the existing patterns?
- [ ] Are abstractions at the right level?
- [ ] Is the data flow clear and predictable?

### Code Quality
- [ ] Are variable and function names descriptive?
- [ ] Is complex logic explained with comments?
- [ ] Are error cases handled appropriately?

### Performance
- [ ] Any obvious inefficiencies?
- [ ] Database queries optimized?
- [ ] Large data structures handled efficiently?

### Security
- [ ] Input validation on user data?
- [ ] No secrets in code?
- [ ] Authentication/authorization correct?

## Output Format

Provide feedback in this structure:

1. **Overall Assessment**: Approve/Request Changes/Comment
2. **Strengths**: What's done well
3. **Areas for Improvement**: Specific, actionable feedback
4. **Questions**: Things that need clarification
```
How to Use It
With this skill installed, I can say “review this pull request” and the AI will:
- Load the skill automatically (trigger match)
- Follow the systematic checklist
- Provide consistent, structured feedback
The skill captures my code review expertise and makes it reusable across projects and team members.
Once saved as a SKILL.md file in your skills directory, the agent discovers it automatically. Want to share it? Run clawhub publish code-review-checklist/ and it's available to anyone.
⚡ Advanced Patterns: Composition and Orchestration
The real power emerges when skills compose. My property search skill orchestrates multiple specialized sub-skills — one per data source, plus a scoring skill. Each focuses on one concern, but they combine into a powerful workflow.
Skills can also define multi-step processes. My blog publishing skill chains research → outline → draft → review → assets → publish. The AI follows this workflow automatically, loading the right instructions at each stage.
And some skills are pure context — domain knowledge that gets loaded when working in specific areas. My real estate domain skill explains SAFER organizations, notaire processes, and DPE ratings. No workflow, just expertise the agent can draw on.
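The workflow-chaining pattern above can be sketched as a simple pipeline, where each sub-skill transforms the previous stage's output. The stage functions here are hypothetical placeholders, not a real platform API:

```python
# Hypothetical sketch of skill composition: an orchestrator runs
# sub-skills as a pipeline (research -> outline -> draft), each stage
# consuming the previous stage's output.
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[str], str]]], topic: str) -> str:
    artifact = topic
    for name, stage in stages:
        artifact = stage(artifact)  # each sub-skill transforms the artifact
        print(f"[{name}] done")
    return artifact

stages = [
    ("research", lambda t: f"notes on {t}"),
    ("outline",  lambda notes: f"outline from {notes}"),
    ("draft",    lambda outline: f"draft based on {outline}"),
]
result = run_pipeline(stages, "AI skills")
print(result)  # draft based on outline from notes on AI skills
```

Each stage stays small and focused, which is exactly what makes the composed workflow easy to maintain and share.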
🔮 What’s Coming Next
The format is still evolving. The obvious next steps mirror npm’s evolution: versioning (semver for skills), multi-agent collaboration (specialized agents loading specialized skills), and IDE integration (install skills like you install extensions).
The most interesting one is skill marketplaces. Experts will package domain knowledge — legal review, financial analysis, industry-specific automation — as shareable, monetizable skills. The same way WordPress plugins created an ecosystem, AI skills will too.
⚠️ Challenges
Skills aren’t perfect. Security is the biggest concern — skills can execute code and access data, so we need sandboxing, permissions, and trust mechanisms. Quality control matters too: the npm ecosystem proved that low-quality packages create real problems. Community review, automated testing, and rating systems will be essential.
Platform fragmentation is real but shrinking. OpenClaw and Claude skill formats are remarkably similar; cross-platform compatibility improves with each release. And context window limits remain a constraint, though progressive loading makes this manageable.
🙂 Last Thoughts
Software packages contain code. Skills contain knowledge. They capture not just what to do, but how to think about problems. They’re closer to mentoring than programming.
The same patterns are emerging across platforms because they solve real problems. And like all good standards, they’re growing organically from actual use cases rather than being imposed from above.
The best AI agents won’t be the ones with the biggest models — they’ll be the ones with the richest skill libraries.
Stay Tuned 🚀
Want to go further?
If you're looking to improve your Developer Experience, build AI Frameworks, or optimize your CI/CD pipelines with Nx, feel free to reach out — I'm always happy to share real-world setups and practical tips.
Build on Foundations,
Scale with AI.
I design AI-powered workflows and engineering foundations that make teams faster, happier, and more consistent.