Teaching Your AI: How Agent Skills Bridge the Knowledge Gap

Your AI coding assistant is brilliant. It knows React, understands TypeScript, can scaffold a NestJS backend in minutes. But ask it about Sequelize v7, and suddenly you’re in a battle.

“Let me help you set up Sequelize,” Claude says confidently, before generating code that imports from sequelize instead of @sequelize/core, uses Model.init() instead of decorators, and helpfully suggests downgrading to v6 because “v7 is still in alpha and may have stability issues.”

The assistant isn’t wrong about the alpha tag. It’s wrong about everything else. I’ve been running Sequelize v7 in production for over two years. I personally know one of the core developers. The “alpha” label is a versioning artifact, not a stability warning. But Claude doesn’t know that. Claude can’t know that. Its training data has a cutoff date, and the nuances of my production experience aren’t in any public dataset.

This is the knowledge gap problem, and it’s about to get solved.

The Cutoff Curse

Every large language model has a training cutoff. The data used to train it stops at some point in the past. For most daily tasks, this doesn’t matter. JavaScript fundamentals haven’t changed. SQL syntax is SQL syntax. The core patterns persist.

But software development doesn’t stand still. Frameworks release major versions that rewrite entire APIs. New tools emerge that solve problems in fundamentally different ways. Your company builds internal libraries that no public model has ever seen.

The gap shows up in predictable ways:

  • Major version changes: Sequelize v7, React Server Components, Next.js App Router — the AI knows the old patterns, not the new ones
  • Recent releases: Tools that launched after the training cutoff are invisible
  • Internal tooling: Your company’s custom APIs, conventions, and patterns exist nowhere in the training data
  • Domain expertise: Specialized knowledge that experts carry in their heads

The traditional workaround? Copy-paste documentation into the context window. Every. Single. Time. Explain the same API changes. Correct the same outdated patterns. Fight the same “helpful” suggestions to use the old approach.

It works, technically. It’s also exhausting.

The Emergence of Agent Skills

Agent Skills are an open standard for giving AI agents new capabilities and expertise. Think of them as plugins for your AI assistant’s brain — modular knowledge packages that agents can discover and load on demand.

The concept is simple: instead of cramming everything into a system prompt, skills let agents pull in specialized knowledge only when they need it. A skill is just a folder containing:

skill-name/
├── SKILL.md        # Metadata and core instructions
└── references/     # Detailed documentation loaded on demand

The magic is in the three-phase process:

  1. Discovery: The agent scans available skills, reading only the name and description from each SKILL.md
  2. Activation: When a task matches a skill’s domain, the full instructions are loaded into context
  3. Execution: The agent follows the skill’s guidance, pulling reference files as needed

This is just-in-time knowledge loading. The agent stays lean until it needs expertise, then loads exactly what’s relevant. No bloated prompts. No wasted context. No constant re-explaining.
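To make the lifecycle concrete, here's a minimal TypeScript sketch of the three phases. This is a hypothetical model, not any agent's real API — the `Skill` shape, `discover`, `activate`, and `loadReference` are illustrative names:

```typescript
// Hypothetical model of the skill lifecycle (not a real agent API).
interface Skill {
  name: string;
  description: string;
  instructions: string; // full body of SKILL.md
  references: Record<string, string>; // filename -> content, loaded on demand
}

// Phase 1: Discovery — only name + description enter the context.
function discover(skills: Skill[]): string[] {
  return skills.map((s) => `${s.name}: ${s.description}`);
}

// Phase 2: Activation — load full instructions when the task matches.
function activate(skills: Skill[], task: string): Skill | undefined {
  return skills.find((s) => task.toLowerCase().includes(s.name));
}

// Phase 3: Execution — pull a reference file only when it's needed.
function loadReference(skill: Skill, file: string): string | undefined {
  return skill.references[file];
}

const skills: Skill[] = [
  {
    name: "sequelize-7",
    description: "Sequelize v7 TypeScript-first ORM with decorator-based models",
    instructions: "Use @sequelize/core, not sequelize...",
    references: { "models.md": "Decorator-based model definitions..." },
  },
];

const summaries = discover(skills); // cheap: one line per skill
const active = activate(skills, "Define a sequelize-7 model");
const detail = active && loadReference(active, "models.md");
```

The point of the sketch: the expensive parts (`instructions`, `references`) never touch the context until a task actually triggers them.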

Why Skills Beat the Alternatives

There are other approaches to the knowledge gap. None of them hit the sweet spot quite like skills.

System prompts can contain custom instructions, but they’re limited in size and not modular. You can’t easily share them, version them, or combine them for different projects.

Fine-tuning lets you bake knowledge directly into a model, but it’s expensive, requires significant data, and isn’t practical for individual developers or small teams. Every update means retraining.

RAG (Retrieval Augmented Generation) can fetch relevant documentation dynamically, but it requires infrastructure — vector databases, embedding pipelines, retrieval tuning. It’s powerful but complex.

Skills are just files. Markdown files in folders. You can version control them with git, share them on GitHub, install them with a single command. They work across multiple AI agents — Claude Code, Cursor, GitHub Copilot, Gemini CLI, and many more. No infrastructure required.

The simplicity is the feature.

The Sequelize v7 Problem

Let me tell you about my breaking point.

Over the past few months, I’ve been building several TypeScript projects. Backend services, APIs, data pipelines — all using Sequelize v7 with the new decorator-based model definitions. It’s a cleaner API, properly TypeScript-first, and I’ve had zero production issues.

But every time I opened a new chat with Claude, the fight began.

Claude would see Sequelize in my package.json and start “helping.” Import from the wrong package. Use Model.init() instead of decorators. Suggest patterns that were deprecated in v7.

And then, inevitably: “I notice you’re using Sequelize v7 which is still in alpha. For production use, I’d recommend the stable v6 release.”

No. Stop. I know what I’m doing. v7 works. I’ve proven it works. Please stop trying to downgrade me.

Here’s what Claude would generate without the skill:

// What Claude thinks I want (v6 style)
import { Model, DataTypes } from 'sequelize';

class User extends Model {
  declare id: number;
  declare email: string;
}

User.init({
  id: { type: DataTypes.INTEGER, primaryKey: true, autoIncrement: true },
  email: { type: DataTypes.STRING, allowNull: false }
}, { sequelize, modelName: 'user' });

Here’s what I actually need:

// What I actually use (v7 style)
import { DataTypes, Model, InferAttributes, InferCreationAttributes } from '@sequelize/core';
import { Attribute, PrimaryKey, AutoIncrement, NotNull } from '@sequelize/core/decorators-legacy';

class User extends Model<InferAttributes<User>, InferCreationAttributes<User>> {
  @Attribute(DataTypes.INTEGER)
  @PrimaryKey
  @AutoIncrement
  declare id: number;

  @Attribute(DataTypes.STRING)
  @NotNull
  declare email: string;
}

Different package imports. Different class structure. Different decorators. Every single session, I was teaching the same lesson.

Building the Skill

So I built a skill. The sequelize-7 skill captures everything Claude needs to know about v7:

---
name: sequelize-7
description: Sequelize v7 (alpha) TypeScript-first ORM with decorator-based models
version: 1.0.0
---

# Sequelize v7 Skill

Use this skill when working with Sequelize v7 (`@sequelize/core`).

## Critical v7 Differences

- Package is `@sequelize/core`, NOT `sequelize`
- Dialects are separate packages: `@sequelize/postgres`, `@sequelize/mysql`, etc.
- Use decorator-based model definitions (recommended)
- Constructor accepts only options object, no URL string shorthand
- CLS transactions enabled by default
- Association foreign keys default to camelCase

## Reference Files

- [getting-started.md](references/getting-started.md) - Installation and setup
- [models.md](references/models.md) - Decorator-based model definitions
- [querying.md](references/querying.md) - CRUD operations and operators
- [associations.md](references/associations.md) - Relationships and eager loading
- [advanced.md](references/advanced.md) - Transactions, hooks, migrations

The reference files contain detailed documentation that gets pulled in as needed. The skill doesn’t bloat the context — it provides targeted knowledge exactly when relevant.

Now when I work on a Sequelize project, Claude knows:

  • Use @sequelize/core, not sequelize
  • Use decorators, not Model.init()
  • Stop suggesting downgrades — v7 is production-ready
  • Follow the v7 patterns from the reference docs

One skill. Permanent fix. No more daily re-education.

The Open Standard

The Agent Skills specification was originally developed by Anthropic and released as an open standard. It’s been adopted across the ecosystem — over 25 agent products now support it:

  • Claude Code, Cursor, GitHub Copilot, VS Code
  • OpenAI Codex, Gemini CLI, Amp, Goose
  • Roo Code, Databricks, Factory, and many more

This means a skill you write works everywhere. Build once, use across all your tools.

Installation is straightforward:

# Install from a GitHub repo
npx skills add totophe/skills

# Or install a specific skill
npx skills add totophe/skills --skill sequelize-7

Skills can be scoped:

  • Personal: ~/.claude/skills/ — applies to all your projects
  • Project: .claude/skills/ — shared with your team via git
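Scaffolding a project-scoped skill by hand is a couple of commands — here's a sketch following the folder convention above (the frontmatter fields mirror the sequelize-7 example earlier):

```shell
# Create a project-scoped skill folder, ready to commit to git
mkdir -p .claude/skills/sequelize-7/references

# Minimal SKILL.md: frontmatter plus a one-line instruction
cat > .claude/skills/sequelize-7/SKILL.md <<'EOF'
---
name: sequelize-7
description: Sequelize v7 (alpha) TypeScript-first ORM with decorator-based models
---

Use this skill when working with Sequelize v7 (`@sequelize/core`).
EOF
```

From there, drop detailed docs into `references/` and the agent picks them up on its next discovery pass.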

My skills are published at skills.sh/totophe/skills and the source is on GitHub.

Democratizing Expert Knowledge

Here’s what excites me about skills: anyone can create and share expertise.

You’re an expert in some domain. Maybe it’s a framework, maybe it’s your company’s architecture, maybe it’s a niche tool that most developers never touch. That expertise lives in your head, and every time you work with an AI assistant, you’re manually transferring it through conversation.

Skills let you document once and benefit forever. Write down what you know, structure it as a skill, and suddenly every AI interaction has that context. Share it publicly, and other developers benefit too.

The open standard means your work isn’t locked to any single vendor. The version control means your skills evolve with the tools they describe. The community means someone might have already solved your problem.

This is knowledge transfer at scale.

Another Layer of Abstraction

In my previous post, I wrote about how AI-assisted development represents the next layer of abstraction in software history. We’ve moved from assembly to C to high-level languages to frameworks to AI assistants.

Skills are the next step in that progression. They’re not just abstracting code — they’re abstracting knowledge itself. The expertise of framework authors, the patterns of experienced developers, the conventions of entire organizations — all packaged into portable, shareable, version-controlled files.

The AI becomes a vessel for collective knowledge. Not just what’s in the training data, but what the community actively teaches it.

Getting Started

If you’re hitting knowledge gaps with your AI tools, here’s how to start:

  1. Use existing skills: Check skills.sh for skills that match your stack
  2. Install the ones you need: npx skills add <repo>
  3. Create your own: Document the expertise you keep re-explaining
  4. Share them: Push to GitHub, let others benefit

The specification is straightforward. A skill is just a SKILL.md file with optional reference documents. If you can write markdown, you can write a skill.

The Shift

I used to spend significant mental energy teaching Claude the same things every session. Now I spend that energy building.

The knowledge gap hasn’t disappeared — new frameworks will keep emerging, APIs will keep changing, my projects will keep accumulating custom patterns. But the gap is now a solvable problem. When I encounter something my AI doesn’t know, I can teach it once and never repeat myself.

That’s the shift. From constant re-education to accumulated knowledge. From ephemeral context to persistent expertise. From fighting the AI’s outdated assumptions to upgrading its understanding permanently.

Skills bridge the knowledge gap. And once you start using them, you’ll wonder how you ever worked without them.


The sequelize-7 skill is live at skills.sh/totophe/skills/sequelize-7. Claude finally knows that v7 is production-ready. My sanity thanks me.

About the Author

totophe

Creative Mind, Digital Development Strategist, and Web & Marketing Technologies Consultant in Brussels, Belgium