Context Lizard: A Git-Inspired System for AI Coding Standards
AI coding assistants have revolutionized development workflows, offering unprecedented speed and efficiency. However, these tools face a critical limitation: they lack persistent memory of your coding standards, architectural preferences, and team conventions. This "digital amnesia" creates a significant productivity drain as developers repeatedly teach the same patterns across sessions and projects.
Project-Level Configuration: The Initial Solution
To address this challenge, I developed a structured `.contextlizard.json` file that captures my entire development philosophy, from basic formatting preferences to complex architectural patterns:
```json
{
  "reactRules": {
    "conditionalRendering": {
      "rule": "Use double negation (!!) for optional values and omit parentheses for single-line JSX",
      "good": "{!!optionalValue && <Component />}",
      "bad": "{optionalValue && (<Component />)}",
      "explanation": "Double negation ensures boolean type, and parentheses are unnecessary for single-line JSX"
    },
    "mapSyntax": {
      "rule": "Use concise arrow function syntax without parentheses for simple map returns",
      "good": "array.map(item => <Component key={item.id} {...item} />)",
      "bad": "array.map((item) => (<Component key={item.id} {...item} />))"
    }
  },
  "errorHandling": {
    "rule": "Avoid try-catch blocks wherever possible",
    "explanation": "Prefer returning error objects or using higher-order error handling patterns",
    "good": [
      "const result = await someFunction()",
      "if (result.status === 'failure') {",
      "  return { error: result.error }",
      "}"
    ],
    "bad": [
      "try {",
      "  await someFunction()",
      "} catch (error) {",
      "  console.error(error)",
      "}"
    ],
    "operationResult": {
      "rule": "Always return an OperationResult object instead of throwing errors",
      "example": [
        "type OperationResult<T> = {",
        "  status: 'success' | 'failure'",
        "  data?: T",
        "  error?: string",
        "}"
      ]
    }
  },
  "sequelizeModels": {
    "modelCreation": {
      "rule": "Create Sequelize models using a class that extends Model",
      "good": [
        "export class SubscriptionModel extends Model {",
        "  declare subscriptionId: string",
        "  declare userId: string",
        "}",
        "SubscriptionModel.init({",
        "  subscriptionId: {",
        "    type: DataTypes.STRING,",
        "    primaryKey: true",
        "  }",
        "}, {",
        "  sequelize,",
        "  tableName: 'subscriptions'",
        "})"
      ]
    }
  }
}
```
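To show the errorHandling rules in action end to end, here is a minimal sketch of the pattern they describe. The `fetchUser` function and `User` type are hypothetical stand-ins for illustration, not part of the configuration itself:

```typescript
// OperationResult, as defined in the rules above: failures are values,
// so callers branch on `status` instead of wrapping calls in try-catch.
type OperationResult<T> = {
  status: 'success' | 'failure'
  data?: T
  error?: string
}

type User = { userId: string; name: string }

// Hypothetical data access function following the rule.
async function fetchUser(userId: string): Promise<OperationResult<User>> {
  const response = await fetch(`/api/users/${userId}`)
  if (!response.ok) {
    return { status: 'failure', error: `Request failed: ${response.status}` }
  }
  return { status: 'success', data: (await response.json()) as User }
}

// Caller code stays flat, mirroring the "good" snippet in the config.
const result = await fetchUser('user-1')
if (result.status === 'failure') {
  console.error(result.error)
}
```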
Once I placed this file in my project root and referenced it when instructing AI tools, the quality of generated code improved dramatically. The AI consistently followed my preferred patterns and architectural decisions.
Beyond Formatting: Recipes for Complex Features
While style rules provide consistency, the true power of Context Lizard comes from recipes—templates for entire feature implementations that encode complex architectural patterns:
"recipes": {
"dataModule": {
"description": "Complete implementation of a new data type with database, API, and client components",
"requiredInputs": ["entityName", "attributes", "relationships"],
"components": {
"sequelizeModel": {
"description": "Sequelize model definition",
"template": "export class {{EntityName}}Model extends Model {
declare {{entityName}}Id: string
{{#each attributes}}
declare {{name}}: {{type}}
{{/each}}
}
{{EntityName}}Model.init({
{{entityName}}Id: {
type: DataTypes.STRING,
primaryKey: true,
allowNull: false
},
{{#each attributes}}
{{name}}: {
type: DataTypes.{{typeToSequelize type}},
allowNull: {{#if required}}false{{else}}true{{/if}}
},
{{/each}}
}, {
sequelize,
tableName: '{{entityNamePlural}}',
timestamps: true
})"
},
"serverFunctions": {
"description": "Server-side CRUD operations",
"template": "export async function create{{EntityName}}(input: Create{{EntityName}}Input): Promise> {
try {
const {{entityName}} = await {{EntityName}}Model.create(input)
return {
status: 'success',
data: {{entityName}}.toJSON() as {{EntityName}}
}
} catch (error) {
console.log('CREATE_{{ENTITY_NAME_UPPER}}_ERROR:\n', error)
return {
status: 'failure',
error: 'Failed to create {{entityName}}'
}
}
}"
}
}
}
}
With this recipe system, I can simply instruct: "Create a Subscription module with planType (string, required), startDate (date, required), and status (string, required) that belongs to User." The AI then generates complete implementations across database models, server functions, API routes, and client wrappers—all following my established architectural patterns.
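For a sense of what the AI produces, here is roughly what the sequelizeModel template expands to for that instruction. The exact TypeScript types depend on how `typeToSequelize` maps attribute types, and the `belongsTo` association comes from the relationships input (its template is omitted above), so treat this as an approximation:

```typescript
import { Model, DataTypes } from 'sequelize'
import { sequelize } from '../db' // hypothetical shared Sequelize instance
import { UserModel } from './UserModel'

export class SubscriptionModel extends Model {
  declare subscriptionId: string
  declare planType: string
  declare startDate: Date
  declare status: string
}

SubscriptionModel.init({
  subscriptionId: {
    type: DataTypes.STRING,
    primaryKey: true,
    allowNull: false
  },
  planType: { type: DataTypes.STRING, allowNull: false },
  startDate: { type: DataTypes.DATE, allowNull: false },
  status: { type: DataTypes.STRING, allowNull: false }
}, {
  sequelize,
  tableName: 'subscriptions',
  timestamps: true
})

// From the "belongs to User" relationship input.
SubscriptionModel.belongsTo(UserModel, { foreignKey: 'userId' })
```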
Cursor's .mdc Format: Leveraging Native Capabilities
After developing my JSON-based system, I discovered Cursor's native rules approach, which uses `.mdc` files in a `.cursor/rules` directory:
Rule Name: conditional-rendering
Description: Guidelines for conditional rendering in React components
Bad:
```jsx
{optionalValue && (<Component />)}
```
Good:
```jsx
{!!optionalValue && <Component />}
```
Use double negation (!!) for optional values to ensure boolean conversion.
Omit unnecessary parentheses for single-line JSX elements.
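The boolean conversion is not just cosmetic. With numeric values, the bare `&&` form can leak the value itself into the rendered output; a quick illustration with a hypothetical `Badge` component:

```tsx
const Badge = ({ value }: { value: number }) => <span>{value}</span>

function CartIcon({ count }: { count: number }) {
  return (
    <div>
      {/* Bad: when count is 0, React renders a literal "0" here */}
      {count && <Badge value={count} />}
      {/* Good: !! coerces 0 to false, so nothing is rendered */}
      {!!count && <Badge value={count} />}
    </div>
  )
}
```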
Similarly, recipes can be defined in the .mdc format:
Rule Name: data-module-recipe
Description: Complete implementation of a new data type with database, API, and client components
Required Inputs:
- entityName: The name of the entity (e.g., "Subscription")
- attributes: Array of { name, type, required } for entity attributes
- relationships: Array of { type, relatedEntity } for model associations
Components to Create:
1. Sequelize Model (models/{entityName}Model.ts):
```typescript
export class {{EntityName}}Model extends Model {
declare {{entityName}}Id: string
{{#each attributes}}
declare {{name}}: {{type}}
{{/each}}
}
```
Usage Example:
Simply say: "Create a Subscription module with these attributes: planType (string, required), startDate (date, required), endDate (date), status (string, required). It should belong to User."
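Run through the serverFunctions template, that instruction would produce a create function along these lines (`OperationResult` and `SubscriptionModel` are as defined earlier; the `Subscription` and `CreateSubscriptionInput` shapes are assumptions about what the recipe would generate alongside it):

```typescript
// Assumed generated types for the Subscription entity.
type Subscription = {
  subscriptionId: string
  planType: string
  startDate: Date
  status: string
}
type CreateSubscriptionInput = Omit<Subscription, 'subscriptionId'>

export async function createSubscription(
  input: CreateSubscriptionInput
): Promise<OperationResult<Subscription>> {
  try {
    const subscription = await SubscriptionModel.create(input)
    return {
      status: 'success',
      data: subscription.toJSON() as Subscription
    }
  } catch (error) {
    console.log('CREATE_SUBSCRIPTION_ERROR:\n', error)
    return {
      status: 'failure',
      error: 'Failed to create subscription'
    }
  }
}
```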
I'm currently using AI to translate my comprehensive `.contextlizard.json` into individual `.mdc` files to leverage Cursor's native capabilities.
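A translation like that can also be scripted. Here is a hypothetical sketch, assuming the `.contextlizard.json` shape shown earlier; rule fields or nesting that deviate from that shape would need extra handling:

```typescript
import { readFileSync, writeFileSync, mkdirSync } from 'node:fs'
import { join } from 'node:path'

type Rule = {
  rule: string
  explanation?: string
  good?: string | string[]
  bad?: string | string[]
}

// Render a snippet (string or array of lines) as a fenced block.
const fence = (code: string | string[]) =>
  '```\n' + (Array.isArray(code) ? code.join('\n') : code) + '\n```'

const config = JSON.parse(readFileSync('.contextlizard.json', 'utf8'))
mkdirSync('.cursor/rules', { recursive: true })

for (const [category, group] of Object.entries<Record<string, Rule>>(config)) {
  for (const [name, rule] of Object.entries(group)) {
    // Skip category-level string fields like errorHandling's own "rule".
    if (typeof rule !== 'object' || !rule.rule) continue
    const mdc = [
      `Rule Name: ${category}-${name}`,
      `Description: ${rule.rule}`,
      ...(rule.bad ? ['Bad:', fence(rule.bad)] : []),
      ...(rule.good ? ['Good:', fence(rule.good)] : []),
      ...(rule.explanation ? [rule.explanation] : [])
    ].join('\n')
    writeFileSync(join('.cursor/rules', `${category}-${name}.mdc`), mdc)
  }
}
```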
The Multi-Project Challenge: Where Single-Project Solutions Fall Short
While this approach works well for individual projects, it introduces several challenges when managing multiple codebases:
- Copy/paste errors - Rules duplicated between projects inevitably drift apart
- Tech stack mismatch - Different projects require different rule subsets
- No versioning - As standards evolve, there's no systematic way to update all projects
- Duplication of effort - Team members create redundant rule sets
For example, when working on both a React/NextJS/Sequelize project and a React/Vite/Supabase project, I found myself manually copying relevant rules between projects and carefully filtering out inappropriate patterns. This process was time-consuming and error-prone.
Context Lizard: A Git-Inspired System for AI Rules
The solution is an external repository system for AI coding standards that functions like Git:
- Central repository - Rules and recipes stored in a version-controlled system
- Rule categorization - Organized by technology rather than project
- Composable configurations - Select the technologies relevant to your project
- Branch and merge - Experiment with standards and merge successful approaches
- Pull and push - Sync standards between projects and the central repository
This approach allows teams to build a comprehensive library of standards that evolves with their experience and can be selectively applied to different projects.
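As a sketch of how such a system might be modeled (hypothetical types and function, not an existing implementation):

```typescript
// Hypothetical core types for the central repository.
type StoredRule = {
  id: string           // e.g. "react.conditionalRendering"
  description: string
  good?: string
  bad?: string
  version: number      // bumped on push so projects can pull updates
}

type RuleSet = {
  technology: string   // e.g. "react", "sequelize", "auth0"
  rules: StoredRule[]
}

// Composing a project config selects only the rule sets in its stack.
function composeConfig(repository: RuleSet[], stack: string[]): StoredRule[] {
  return repository
    .filter(set => stack.includes(set.technology))
    .flatMap(set => set.rules)
}

// A React/NextJS/Sequelize project would pull only what applies to it:
// composeConfig(centralRepository, ['react', 'nextjs', 'sequelize'])
```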
The Ideal Workflow: Practical Implementation
The workflow simplifies to these core commands:
```bash
# Initialize with specific tech stack
contextlizard init my-project --stack react,nextjs,sequelize,auth0
# Project is configured with relevant rules

# Later, pull latest updates to rules
contextlizard pull my-project

# Contribute improvements back to the central repository
contextlizard push --rule react.conditionalRendering
```
This workflow ensures that standards remain consistent across projects while allowing for continuous improvement. When more effective patterns emerge, they can be updated once and pulled into all relevant projects, maintaining consistency across the entire development ecosystem.
Future Directions: Premium Rule Collections
As AI integrates further into development workflows, high-quality rule collections might emerge as valuable intellectual property. Companies could potentially offer curated rule sets embodying industry best practices and architectural patterns.
Consider the possibility of accessing a "NextJS Enterprise Architecture" rule set that encodes established patterns for building scalable applications, or "React Performance Optimization" rules to guide AI toward generating efficient component structures.
Conclusion: Transforming AI from Tool to Partner
Context Lizard represents a strategic shift in AI-assisted development—from repetitive correction to centralized standards management. By creating a Git-like system for coding standards, teams can maintain consistency across projects while continuously refining their development philosophy.
The approach amplifies developer judgment rather than replacing it, creating a multiplier effect that transforms AI tools from occasional assistants to genuine productivity partners.
What makes this approach powerful isn't simply formatting consistency; it's the encoding of architectural thinking patterns, design philosophies, and decision frameworks. When properly implemented, Context Lizard helps AI internalize your team's problem-solving approach, not just its syntactic preferences.
For teams looking to maximize AI productivity, investing in centralized standards management is the crucial next step. Whether using Context Lizard or building your own system, the goal is the same: teaching AI to think like your development team, applying consistent architectural patterns and design decisions across your entire project ecosystem.