Master Cursor Custom Rules to Align Gen AI with Your Code
Use Cursor’s Custom Rules to enforce coding standards, project structure, and test coverage. Make AI generate code that follows your team’s best practices.
Originally published on GeekyAnts Blog · By Ajinkya Vinayak Palaskar, Software Engineer III at GeekyAnts · May 29, 2025

Generative AI tools like Cursor are changing the way developers write code — but let's be honest, the default AI behaviour doesn't always match how you or your team builds software. Whether it's naming conventions, project structure, or the way you wire up API calls, out-of-the-box AI can feel like working with a junior dev who doesn't quite get the vibe yet.
That's where Cursor's Custom Rules come in. Instead of adapting your codebase to fit AI's suggestions, you can flip the script and make AI generate code that follows your standards, your structure, and your expectations.
In this blog, we'll dive into what Cursor's rules are, how they work, and how you can use them to bring structure, consistency, and actual team alignment to your AI-powered workflow — with real, dev-focused examples that help you go from "AI that kind of helps" to "AI that codes like your best junior dev (who doesn't forget lint rules)."
Understanding Cursor's Custom Rules
At its core, Cursor's Custom Rules feature gives developers a way to shape the behaviour of the AI assistant so it actually respects your coding preferences. It's not just about giving it tips — it's about creating structured, repeatable rules that get applied every time you prompt it in certain files or folders.
Cursor supports two types of rules:
- Project Rules — Workspace-wide rules stored in `.cursor/rules/` and committed to version control. They apply to everyone working in the repo — super useful for enforcing team-wide patterns, boilerplate structures, or naming conventions.
- User Rules — These live locally and are scoped only to your Cursor environment. Think of them as your personal tweaks or power-ups for one-off or experimental cases. Cursor stores them in your user settings directory.
To sum it up: Project rules = team alignment. User rules = personal productivity boosts.
How Do These Rules Actually Work?
Cursor rules work by defining:
- File matchers (globs) — Decide which files the rule applies to.
- Prompts/Instructions — Tell Cursor what to do when editing those files.
- Referenced files (optional) — Provide extra context, like shared types, utility functions, or templates.
So instead of writing the same prompt over and over again like "Please generate a Zod schema and name everything in camelCase", you just write the rule once, and Cursor applies it automatically in the right places.
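To build intuition for how a glob like `**/*.service.ts` selects files, here is a deliberately simplified sketch of glob-to-regex translation. This is an illustration only — not Cursor's actual matcher — and it handles just `*` and `**`:

```typescript
// Simplified glob matcher: "**" crosses directory boundaries, "*" stays
// within one path segment. Illustrative only — not Cursor's implementation.
export function globToRegExp(glob: string): RegExp {
  const pattern = glob
    .replace(/[.+^${}()|[\]\\]/g, "\\$&") // escape regex metacharacters
    .replace(/\*\*\//g, "(?:.*/)?")       // "**/" matches zero or more directories
    .replace(/\*\*/g, ".*")               // bare "**" matches anything
    .replace(/(?<!\.)\*/g, "[^/]*");      // "*" never crosses a "/"
  return new RegExp(`^${pattern}$`);
}

export function matchesGlob(glob: string, filePath: string): boolean {
  return globToRegExp(glob).test(filePath);
}
```

With this, `matchesGlob("**/*.service.ts", "src/api/user.service.ts")` is `true`, while a plain `src/api/user.ts` is left alone — which is exactly the scoping behaviour you rely on when writing `globs`.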
What About .cursorrules?
If you've used Cursor in the past, you might've seen the old .cursorrules file in the root of your project. That format is now deprecated in favour of the new .cursor/rules/*.mdc system, which is much more flexible and easier to manage across teams.
Setting Up Custom Rules in Cursor
The easiest way to get started is the built-in New Cursor Rule command: open the command palette (Cmd/Ctrl + Shift + P) and run New Cursor Rule.
This will prompt you for a name and create a Markdown file at:
.cursor/rules/your-rule-name.mdc
Here's what a full rule looks like:
```
---
description: Enforce Zod validation in all service files
globs:
  - "**/*.service.ts"
alwaysApply: true
referencedFiles:
  - src/types/shared.ts
---
Always use Zod for request and response validation in service files.

- Import `z` from 'zod'
- Define input and output schemas before the function body
- Name schemas using camelCase with a `Schema` suffix (e.g., `getUserSchema`)
- Export schemas alongside the function
- Never use `any` types
```
Let's break this down:
| Field | Purpose |
|---|---|
| `description` | Short explanation of what the rule does |
| `globs` | File patterns where the rule applies (e.g., `**/*.ts`) |
| `alwaysApply` | If true, the rule is used without needing manual selection |
| `referencedFiles` | (Optional) Files used as examples or context for better AI responses |
| Body | The actual instructions shown to the AI when working in matching files |
To see which rules are active or to edit them:
- Go to Settings → Rules
- You'll see both User Rules and Project Rules
- Toggle, delete, or update them as needed
Real-World Use Cases
Use Case 1: Enforcing Consistent API Validation with Zod
The Problem
Your team uses Zod for request/response validation across all service files, but devs often forget to include schemas or name things consistently.
Rule File: .cursor/rules/zod-validation.mdc
```
---
description: Enforce Zod validation in service files
globs:
  - "**/*.service.ts"
alwaysApply: true
---
When generating or editing service files:

- Always import `z` from 'zod' at the top of the file
- Define a Zod schema for every function's input and output
- Name schemas with camelCase + `Schema` suffix (e.g., `createUserInputSchema`)
- Export all schemas alongside their corresponding functions
- Never use TypeScript `any` — always infer types from Zod schemas using `z.infer<>`
- Validate all incoming data at the function boundary before any business logic
```
Example output Cursor generates:
```ts
import { z } from 'zod';

export const createUserInputSchema = z.object({
  name: z.string().min(1),
  email: z.string().email(),
});

export type CreateUserInput = z.infer<typeof createUserInputSchema>;

export async function createUser(input: CreateUserInput) {
  const validated = createUserInputSchema.parse(input);
  // business logic here
}
```
This makes the AI generate pre-validated, strongly typed service methods every time — no reminders or rework needed.
Use Case 2: Enforcing Consistent File Structure for Features
The Problem
Every new feature in your app should follow a clean structure with index.tsx, hooks.ts, types.ts, and api.ts — but different devs often structure things differently.
Rule File: .cursor/rules/feature-structure.mdc
```
---
description: Enforce standard file structure for feature modules
globs:
  - "src/features/**"
alwaysApply: true
---
When scaffolding a new feature module, always create these four files:

- `index.tsx` — main component, default export only
- `hooks.ts` — all custom hooks for the feature, prefixed with `use`
- `types.ts` — all TypeScript interfaces and types for the feature
- `api.ts` — all API calls, using the project's fetch wrapper, never raw fetch

Never mix API logic inside components.
Never define types inline in component files.
Always import types from `./types` and hooks from `./hooks`.
```
Example scaffold Cursor generates:
```
src/features/user-profile/
├── index.tsx   ← main component
├── hooks.ts    ← useUserProfile, useUpdateProfile
├── types.ts    ← UserProfile, UpdateProfilePayload
└── api.ts      ← fetchUserProfile, updateUserProfile
```
With this rule, Cursor automatically scaffolds the correct file layout and encourages modular, maintainable code across your repo.
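If you want the same layout outside of Cursor — say, in a quick CLI script — the structure the rule describes is easy to scaffold programmatically. The helper below is purely illustrative (it is not part of Cursor; the file list simply mirrors the rule above):

```typescript
// Hypothetical scaffolding helper mirroring the feature-structure rule.
// Creates the four standard files for a new feature module.
import * as fs from "node:fs";
import * as path from "node:path";

const FEATURE_FILES = ["index.tsx", "hooks.ts", "types.ts", "api.ts"] as const;

export function scaffoldFeature(root: string, name: string): string[] {
  const dir = path.join(root, "src", "features", name);
  fs.mkdirSync(dir, { recursive: true });
  return FEATURE_FILES.map((file) => {
    const target = path.join(dir, file);
    // Don't clobber existing work — only write placeholder content once.
    if (!fs.existsSync(target)) {
      fs.writeFileSync(target, `// ${name}/${file}\n`);
    }
    return target;
  });
}
```

Running `scaffoldFeature(".", "user-profile")` would produce exactly the tree shown above.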
Use Case 3: Enforcing Unit Test Coverage with Vitest
The Problem
You're using Vitest, and every utility function should have a corresponding test file. Devs often skip writing them.
Rule File: .cursor/rules/vitest-coverage.mdc
```
---
description: Auto-generate Vitest tests alongside utility functions
globs:
  - "src/utils/**/*.ts"
alwaysApply: true
---
For every utility function created or modified:

- Always generate a corresponding test file at `src/utils/__tests__/[filename].test.ts`
- Use `describe` blocks to group related tests
- Cover: happy path, edge cases (empty input, null, undefined), and error cases
- Use `vi.fn()` for mocks — never jest globals
- Import the function under test using relative paths
- Never skip writing tests — if the logic is simple, write at least one smoke test
```
Example test Cursor generates:
```ts
import { describe, it, expect } from 'vitest';
import { formatCurrency } from '../formatCurrency';

describe('formatCurrency', () => {
  it('formats a positive number correctly', () => {
    expect(formatCurrency(1000)).toBe('$1,000.00');
  });

  it('handles zero', () => {
    expect(formatCurrency(0)).toBe('$0.00');
  });

  it('handles negative numbers', () => {
    expect(formatCurrency(-500)).toBe('-$500.00');
  });

  it('throws on non-numeric input', () => {
    expect(() => formatCurrency(NaN)).toThrow();
  });
});
```
With this rule in place, AI writes tests alongside your utilities — improving test coverage and helping junior devs not skip QA steps.
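The tests above imply a `formatCurrency` utility without showing it. One plausible sketch that would satisfy them uses `Intl.NumberFormat` — an assumption on our part, since your actual formatter may be hand-rolled or locale-aware:

```typescript
// Hypothetical formatCurrency utility matching the Vitest tests above.
// Formats a number as US dollars, e.g. 1000 -> "$1,000.00".
const usd = new Intl.NumberFormat("en-US", {
  style: "currency",
  currency: "USD",
});

export function formatCurrency(amount: number): string {
  // Reject NaN and non-numbers so bad input fails loudly, as the tests expect.
  if (typeof amount !== "number" || Number.isNaN(amount)) {
    throw new TypeError("formatCurrency expects a finite number");
  }
  return usd.format(amount);
}
```

Having both files side by side is exactly what the rule nudges Cursor to maintain: change the utility, and the matching test file is expected to change with it.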
Use Case 4: Auto-Wiring RPC Handlers with tRPC
The Problem
You're using tRPC, and you want every new route to follow a specific handler format with proper input/output typing.
Rule File: .cursor/rules/trpc-handlers.mdc
```
---
description: Enforce tRPC handler patterns for all router files
globs:
  - "src/server/routers/**/*.ts"
alwaysApply: true
referencedFiles:
  - src/server/trpc.ts
---
When creating new tRPC procedures:

- Always use `publicProcedure` or `protectedProcedure` from the shared trpc file
- Define input validation with Zod inline in `.input()`
- Define output type with Zod inline in `.output()` where applicable
- Use `.query()` for read operations and `.mutation()` for write operations
- Never use `any` in input or output schemas
- Keep handler logic thin — delegate to a service function, not inline
- Name procedures in camelCase (e.g., `getUserById`, `createPost`)
```
Example output Cursor generates:
```ts
import { z } from 'zod';
import { router, protectedProcedure } from '../trpc';
import { getUserById } from '../../services/user.service';

export const userRouter = router({
  getUserById: protectedProcedure
    .input(z.object({ id: z.string().uuid() }))
    .output(z.object({ id: z.string(), name: z.string(), email: z.string() }))
    .query(async ({ input }) => {
      return getUserById(input.id);
    }),
});
```
With this, AI consistently generates boilerplate that aligns with your tRPC config — without you needing to manually adjust every time.
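The "keep handler logic thin" guideline means the service layer does the real work. Here is a sketch of what a `getUserById` service behind that router might look like — the in-memory `Map` is purely a stand-in for your actual data layer, and the seeded user is hypothetical:

```typescript
// Hypothetical user.service.ts backing the router above. The tRPC handler
// stays thin because lookup and error logic live here instead.
export interface User {
  id: string;
  name: string;
  email: string;
}

// Stand-in for a real database; illustrative data only.
const users = new Map<string, User>([
  ["42", { id: "42", name: "Ada", email: "ada@example.com" }],
]);

export async function getUserById(id: string): Promise<User> {
  const user = users.get(id);
  if (!user) {
    // tRPC would surface this rejection as an error response to the client.
    throw new Error(`User ${id} not found`);
  }
  return user;
}
```

Because the procedure only validates and delegates, swapping the `Map` for Prisma, Drizzle, or raw SQL later touches the service file alone — the router stays untouched.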
Best Practices & Tips
Getting started with Custom Rules is easy, but getting them to stick and scale well with your team takes a bit of finesse. Here are some battle-tested tips:
1. Make Rules Iterative, Not Perfect
Don't try to write the ultimate prompt on Day 1. Start small — even a one-liner like "Use Zod in all services" can go a long way. Watch how Cursor responds and improve the rule over time.
Think of rules like code: ship early, refine often.
2. Be Specific in the Prompt
Don't just say "use tests" — say "use Vitest and place tests in the `__tests__` folder next to the source file." The more specific the language, the more reliable the output.
Use phrasing like:
- "Always start with..."
- "Never skip..."
- "Follow the pattern from..."
3. Review Rules as a Team
If you're on a team, treat your rules like coding conventions. Do quick async reviews and align on when a rule should apply (alwaysApply: true vs. manual). This helps avoid confusion or AI overreach.
4. Keep It Human
Don't over-engineer your prompts. Write it like you're telling a junior dev sitting next to you. That's usually the sweet spot.
5. Use referencedFiles for Context-Rich Rules
When your rule depends on shared patterns — like a base API client, a design system component, or a shared type file — reference them explicitly. This gives Cursor real context instead of guessing.
```
referencedFiles:
  - src/lib/apiClient.ts
  - src/types/global.d.ts
```
Final Thoughts
Custom Rules in Cursor are low-effort, high-impact. With just a few Markdown files, you can guide AI to follow your team's coding style, enforce structure, and even auto-suggest best practices — all without writing extra logic or docs.
Whether you're working solo or on a big team, these rules let you offload the repetitive reminders and keep your codebase clean and consistent.
If you've ever wished AI could "just know how we do things around here" — this is how you get there.
Want to build AI-powered engineering workflows that actually fit your team? Talk to GeekyAnts.


