Stop Re-Teaching Your AI: How We Built a Shared Skill System

Ethan Nagel

How we used the new Model Context Protocol to make reusable "skills" that teach any LLM our development patterns.

AI · MCP · LLM · development · Concise Consulting

The problem: re-teaching your AI, again and again

If you’ve spent any time working with large language models, you know the pattern. You teach the AI how your project works — naming conventions, architecture, tone — and then, a week later, you start a new session and do it all over again.

At Concise Consulting, we work across multiple projects that share a common DNA: the Concise MVVM pattern, our preferred Svelte 5 runes conventions, and a repeatable way of setting up mobile and backend stacks. Each time we fired up a new LLM session, we had to remind it of the same things. It felt like onboarding an intern every morning.

Eventually, we decided to stop re-teaching.


Discovering “skills”

Anthropic introduced something that caught our attention: Agent Skills — simple, self-contained folders that package knowledge or workflows into a format an LLM can load automatically.

A skill might describe:

  • how to structure a Flutter app with services and ViewModels,
  • how to handle authentication with Supabase, or
  • your team’s unique pull-request template and coding standards.

Skills are just text and resources, defined by a simple spec. Drop one into the right folder, and the LLM knows how to use it.
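As a rough illustration, a skill is typically a folder containing a `SKILL.md` file whose frontmatter carries the name and description the LLM uses to decide when to load it. This particular skill (and its body text) is hypothetical, but the shape follows Anthropic's spec:

```markdown
---
name: concise-mvvm
description: Conventions for the Concise MVVM pattern — naming, file layout, and data flow. Use when creating or reviewing ViewModels.
---

# Concise MVVM

ViewModels live alongside their views and own all data access.
Views stay declarative; services handle side effects.
```

The description matters more than it looks: it's what the model scans to decide whether the skill is relevant to the task at hand.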

It’s clean, but there was a catch: it only worked for Anthropic’s Claude, inside environments that support their specific folder structure.

We wanted the same concept — but portable. Something that would work no matter which model or IDE we were using.


Enter the Model Context Protocol

That’s where the Model Context Protocol (MCP) comes in. It’s an emerging open standard that lets LLM environments talk to external “servers” — sources of knowledge, tools, or workflows. MCP is quickly becoming the universal plug that lets AIs access external context safely and consistently.

Someone had already done the hard part: a project called skills-mcp. It turns the Anthropic skills format into an MCP-compatible server. Install it once, and any MCP-aware client (like Cursor, Claude Desktop, or Copilot in the near future) can access your skills on demand.
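Conceptually, what such a server does at discovery time is simple: scan the skills folder and surface each skill's frontmatter metadata as something the client can list. skills-mcp handles this for us; the standalone sketch below (with a simplified frontmatter parser, not skills-mcp's actual code) just shows the shape of that step:

```python
from pathlib import Path


def parse_frontmatter(text: str) -> dict:
    """Parse simple `key: value` frontmatter from the top of a SKILL.md.

    Only handles flat key/value pairs, which is all the skill
    metadata (name, description) needs for discovery.
    """
    meta = {}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return meta
    for line in lines[1:]:
        if line.strip() == "---":
            break  # end of frontmatter block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta


def list_skills(skills_dir: str) -> list[dict]:
    """Discover skills: every subfolder of skills_dir with a SKILL.md."""
    skills = []
    for skill_file in sorted(Path(skills_dir).glob("*/SKILL.md")):
        meta = parse_frontmatter(skill_file.read_text(encoding="utf-8"))
        skills.append({
            "name": meta.get("name", skill_file.parent.name),
            "description": meta.get("description", ""),
            "path": str(skill_file.parent),
        })
    return skills
```

An MCP server then wraps `list_skills` (and a companion "load this skill's content" call) as tools the client invokes over the protocol.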

It was exactly what we needed: a way to teach our AIs once, and reuse that teaching everywhere.


How we set it up

We started simple.

  1. Created a repo with a skills/ folder and pulled in the “skill-creator” skill from Anthropic’s open library. We trimmed out the packaging steps we didn’t need and added our own documentation generator.
  2. Installed the skills-mcp server and pointed it at that repo. On Cursor, installation took minutes.
  3. Added a mandatory init step to make sure every session loaded those skills before doing anything else.
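For step 2, registering an MCP server in Cursor means adding an entry to its MCP config (`.cursor/mcp.json` in the project, or the global equivalent). The `mcpServers` shape below is the standard format; the exact command, arguments, and flag names depend on how skills-mcp is distributed, so treat those as placeholders:

```json
{
  "mcpServers": {
    "skills-mcp": {
      "command": "npx",
      "args": ["-y", "skills-mcp", "--skills-dir", "/path/to/our-skills-repo/skills"]
    }
  }
}
```

Claude Desktop uses the same `mcpServers` structure in its own config file, which is part of what makes the setup portable across clients.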

Here’s what that looks like in our .cursorrules file:

## MANDATORY FIRST ACTION - SKILLS INITIALIZATION

**CRITICAL:** Before performing ANY actions (reading files, making changes, executing commands), you MUST:

1. Call `mcp_skills-mcp_list_skills` to discover available skills
2. Acknowledge completion by saying "Skills Initialized!"

**This is not optional.** Starting work without initializing skills violates this requirement.

**Why:** Skills provide specialized workflows that are essential for this project. Missing this step leads to incomplete solutions.

With that in place, every new session begins with the AI effectively re-learning our world in seconds.


Why this matters

The obvious benefit is consistency. When the LLM knows our patterns, it stops hallucinating old syntax or reinventing structures we’ve already solved.

But the real payoff is portability. We can now:

  • Move from Claude to Cursor or vice versa without losing our customizations.
  • Maintain a single source of truth for how our codebase is meant to evolve.
  • Gradually build a library of reusable skills — small, composable slices of institutional knowledge.

Instead of project-specific prompt engineering, we’re building skills that travel with us. Each new project starts a little smarter.


A few examples

We now have early skills that:

  • Define our Concise MVVM architecture (naming conventions, file layout, data flow).
  • Provide up-to-date Svelte 5 runes syntax and usage examples to stop the LLM from reverting to Svelte 4 habits.
  • Automate base setup for a Flutter project with our preferred integrations — SQLite or Supabase, service layer, offline sync.

Each one started as a markdown guide inside a project. Now they’re shared, versioned, and instantly available to any agent we use.


The bigger idea

This pattern scales beyond coding. Imagine internal documentation, customer tone guides, or compliance workflows encoded as skills. Any organization could maintain a warehouse of expertise that its AIs can instantly adopt — without retraining, vendor lock-in, or a mess of prompts.

We’re just scratching the surface, but it’s already changed the way we work. Instead of fighting to make each LLM session remember our preferences, we’ve externalized that knowledge into a simple, discoverable system.

Reusable intelligence. Predictable outputs. Less overhead.

Not bad for a weekend experiment.


If you’re experimenting with MCP or building your own reusable AI workflows, I’d love to compare notes. It’s early days for this ecosystem, but it’s already showing how open standards can make AIs not just smarter — but more collaborative.

© 2025 Concise Consulting