How to Run a Cursor Workshop That Actually Changes Developer Habits
Generic AI training doesn't stick. Here's how we run Cursor + Claude Code workshops that turn "I tried it once" into daily practice.
Most AI coding tool workshops follow the same script: someone shares their screen, codes a todo app, and everyone leaves thinking "cool, but what about my work?"
That's not training. That's a demo.
If you want your developers to actually adopt Cursor (or Claude Code, Copilot, etc.), you need to run a workshop that builds muscle memory with their code.
Here's how we do it at OpenClaw Labs.
When you demo AI with a todo app or a trivial refactor, nothing connects to the work developers actually do. The result: they try the tool once, get mediocre output, and revert to old habits.
- Pick a real repo (ideally the team's main codebase)
- Identify 3-5 tasks that map to daily work
- Prep a branch (optional: seed it with a ticket or issue)
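The branch-prep step can be sketched in a few commands. Everything here is a placeholder: the repo is a throwaway local one standing in for the team's codebase, and the ticket ID is made up.

```shell
# Hypothetical prep: a fresh local repo stands in for the team's real codebase
git init -q workshop-repo
git -C workshop-repo config user.email "workshop@example.com"  # assumed identity
git -C workshop-repo config user.name "Workshop Prep"
git -C workshop-repo checkout -q -b workshop/cursor-session

# Seed the branch with a task so pairs have something concrete (TICKET-123 is invented)
printf 'TICKET-123: add input validation to the signup form\n' > workshop-repo/WORKSHOP_TASKS.md
git -C workshop-repo add WORKSHOP_TASKS.md
git -C workshop-repo commit -q -m "Seed workshop task with TICKET-123"
```

In a real session you would branch off the team's actual default branch instead of `git init`.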
Part 1: Context rules (15 mins)
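This segment is where a concrete `.cursorignore` tends to land. A minimal sketch for a Next.js repo; the paths are assumptions, so adapt them to your build output:

```
# Keep generated and vendored code out of Cursor's context
node_modules/
.next/
dist/
coverage/
*.min.js

# Never send secrets to the model
.env*
```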
We walk through `.cursorrules` and `.cursorignore` for the repo.
Part 2: Live coding (45 mins)
We pick 2-3 tasks from the pre-work list and live-code them.
Key: Use your codebase. If your backend is Django, code Django. If it's React, code React. If you have a weird monorepo setup, code that.
Part 3: Pair practice (30 mins)
Developers pair up and pick a task to work through.
We circulate and help. The goal: they get comfortable prompting in their context.
Part 4: Playbook creation (15 mins)
We capture the patterns that emerged, including a `.cursorrules` file for the repo. These become team docs.
If you're sharing your screen for 90 minutes, you're doing it wrong. Developers need to write code.
"Let's build a calculator" doesn't teach someone how to refactor a React component with Redux. Use real tasks.
AI is bad at architecture decisions, security-critical code, and novel algorithms. If you don't teach people these failure modes, they'll lose trust fast.
One workshop doesn't change habits. You need:
After a good workshop:
We also track:
A workshop is the entry point, but real adoption needs sustained follow-through. We run this as a 4-week pilot: Week 1 is the workshop, Weeks 2-4 are building automations and rollout plans.
We run these workshops for engineering teams across Australia and globally. If you want to level up your team's AI skills with your codebase, let's talk.
Bonus: Our Cursor .cursorrules Template
Here's a starter template we use for TypeScript/React teams:
```
# Project Context
- Stack: Next.js 14, TypeScript, Tailwind CSS
- Style: Functional components, hooks, server components where possible
- Testing: Jest + React Testing Library

# Code Style
- Use TypeScript strict mode
- Prefer `const` over `let`
- Use named exports
- Keep components under 200 lines
- Write tests for business logic

# AI Instructions
- When generating components, include TypeScript types
- When writing tests, cover happy path + 2 edge cases
- When refactoring, explain breaking changes in comments
- Default to server components unless client interactivity is needed
```
Adapt it to your stack and iterate.
We run hands-on workshops and ship workflow automations for engineering and ops teams.
Book a 30-min discovery call →