# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Multi-tenant Django analytics dashboard for chat session data. Companies upload CSV files or connect external APIs to visualize chat metrics, sentiment analysis, and session details. Built with Django 5.2+, Python 3.13+, managed via UV package manager.
## Essential Commands

### Development Server

```bash
# Start Django dev server (port 8001)
make run
# or
cd dashboard_project && uv run python manage.py runserver 8001
```
### Database Operations

```bash
# Create migrations after model changes
make makemigrations

# Apply migrations
make migrate

# Reset database (flush + migrate)
make reset-db

# Create superuser
make superuser
```
### Background Tasks (Celery)

```bash
# Start Celery worker (separate terminal)
make celery
# or
cd dashboard_project && uv run celery -A dashboard_project worker --loglevel=info

# Start Celery Beat scheduler (separate terminal)
make celery-beat
# or
cd dashboard_project && uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler

# Start all services (web + celery + beat) with foreman
make procfile
```
### Testing & Quality

```bash
# Run tests
make test
# or
uv run -m pytest

# Run a single test
cd dashboard_project && uv run -m pytest path/to/test_file.py::test_function

# Linting
make lint             # Python only
npm run lint:py       # Ruff check
npm run lint:py:fix   # Auto-fix Python issues

# Formatting
make format           # Ruff + Black
npm run format        # Prettier (templates) + Python
npm run format:check  # Verify formatting

# JavaScript linting
npm run lint:js
npm run lint:js:fix

# Markdown linting
npm run lint:md
npm run lint:md:fix

# Type checking
npm run typecheck:py  # Python with ty
npm run typecheck:js  # JavaScript with oxlint
```
### Dependency Management (UV)

```bash
# Install all dependencies
uv pip install -e ".[dev]"

# Add a new package
uv pip install <package-name>
# Then manually update pyproject.toml dependencies

# Update lockfile
make lock  # or: uv pip freeze > requirements.lock
```
### Docker

```bash
make docker-build
make docker-up
make docker-down
```
## Architecture
### Three-App Structure

- **accounts** - Authentication & multi-tenancy
  - `CustomUser` extends `AbstractUser` with `company` FK and `is_company_admin` flag
  - `Company` model is the top-level organizational unit
  - All users belong to exactly one Company
- **dashboard** - Core analytics
  - `DataSource` - CSV uploads or external API links, owned by Company
  - `ChatSession` - Parsed chat data from CSVs/APIs, linked to DataSource
  - `Dashboard` - Custom dashboard configs with M2M to DataSources
  - Views: dashboard display, CSV upload, data export (CSV/JSON/Excel), search
- **data_integration** - External API data fetching
  - `ExternalDataSource` - API credentials and endpoints
  - `ChatSession` & `ChatMessage` - API-fetched data models (parallel to `dashboard.ChatSession`)
  - Celery tasks for async API data fetching via `tasks.py`
### Multi-Tenancy Model

```text
Company (root isolation)
├── CustomUser (employees, one is_company_admin)
├── DataSource (CSV files or API links)
│   └── ChatSession (parsed data)
└── Dashboard (M2M to DataSources)
```
**Critical:** All views must filter by `request.user.company` to enforce data isolation.
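The isolation rule above can be sketched with plain-Python stand-ins. In the real views this is the Django ORM (e.g. filtering a queryset by `request.user.company`); `Row` and `for_company` below are illustrative names, not the codebase API.

```python
from dataclasses import dataclass


@dataclass
class Row:
    """Stand-in for any company-owned record (DataSource, ChatSession, ...)."""

    company_id: int
    name: str


def for_company(rows, company_id):
    """Scope every lookup to the requesting user's company."""
    return [r for r in rows if r.company_id == company_id]


rows = [Row(1, "acme-uploads"), Row(2, "globex-api"), Row(1, "acme-api")]
# Company 1 never sees company 2's data:
assert [r.name for r in for_company(rows, 1)] == ["acme-uploads", "acme-api"]
```

Because tenancy is enforced per query rather than per database, any view that forgets this filter leaks cross-company data, which is why it is called out as critical.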
### Data Flow

**CSV Upload:**

1. User uploads CSV via `dashboard/views.py:upload_data`
2. CSV parsed, creates DataSource + multiple ChatSession records
3. Dashboard aggregates ChatSessions for visualization

**External API:**

1. Admin configures ExternalDataSource with API credentials
2. Celery task (`data_integration/tasks.py`) fetches data periodically
3. Creates ChatSession + ChatMessage records in the `data_integration` app
4. Optionally synced to the `dashboard` app for unified analytics
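A hedged sketch of the periodic fetch-and-store loop performed by the Celery task in `data_integration/tasks.py`; `fetch_remote_sessions` is a stub standing in for the real authenticated API call, and all names here are illustrative.

```python
def fetch_remote_sessions():
    """Stub for the external API call.

    The real task authenticates with EXTERNAL_API_USERNAME /
    EXTERNAL_API_PASSWORD and returns session payloads.
    """
    return [
        {"session_id": "r-1", "messages": ["hi", "hello"]},
        {"session_id": "r-2", "messages": ["help"]},
    ]


def sync_sessions(store):
    """Upsert fetched sessions into a store keyed by session_id.

    Keying by session_id keeps the periodic task idempotent: re-fetching
    the same session updates it instead of duplicating it.
    """
    for payload in fetch_remote_sessions():
        store[payload["session_id"]] = payload
    return store


db = sync_sessions({})
```

The upsert-by-key shape matters because Celery Beat re-runs the task on a schedule; a naive append would duplicate sessions on every tick.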
### Key Design Patterns

- **Multi-tenant isolation**: Every query filtered by Company FK
- **Role-based access**: `is_staff` (Django admin), `is_company_admin` (company management), regular user (view only)
- **Dual ChatSession models**: `dashboard.ChatSession` (CSV-based) and `data_integration.ChatSession` (API-based) exist separately
- **Async processing**: Celery handles long-running API fetches, uses Redis or SQLite backend
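The three access tiers can be summarized in a minimal sketch, assuming only the two flags named above on `accounts.CustomUser`; the `User` dataclass here is a stand-in, not the real model.

```python
from dataclasses import dataclass


@dataclass
class User:
    """Stand-in carrying just the two role flags from CustomUser."""

    is_staff: bool = False
    is_company_admin: bool = False


def access_level(user: User) -> str:
    """Map role flags to the three tiers described above."""
    if user.is_staff:
        return "django-admin"   # full Django admin access
    if user.is_company_admin:
        return "company-admin"  # manage own company's users and data
    return "viewer"             # read-only dashboards
```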
## Configuration Notes

### Settings (`dashboard_project/settings.py`)

- Uses `python-dotenv` for environment variables
- Multi-app: accounts, dashboard, data_integration
- Celery configured in `dashboard_project/celery.py`
- Custom user model: `AUTH_USER_MODEL = "accounts.CustomUser"`
### Environment Variables

Create `.env` from `.env.sample`:

- `DJANGO_SECRET_KEY` - Generate for production
- `DJANGO_DEBUG` - Set False in production
- `EXTERNAL_API_USERNAME` / `EXTERNAL_API_PASSWORD` - For the data_integration API
- `CELERY_BROKER_URL` - Redis URL or SQLite fallback
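A hypothetical `.env` illustrating the variables listed above; all values are placeholders, not the contents of the repository's `.env.sample`.

```dotenv
DJANGO_SECRET_KEY=change-me-in-production
DJANGO_DEBUG=True
EXTERNAL_API_USERNAME=api-user
EXTERNAL_API_PASSWORD=api-pass
CELERY_BROKER_URL=redis://localhost:6379/0
```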
### Template Formatting

- Prettier configured for Django templates via `prettier-plugin-jinja-template`
- Pre-commit hook auto-formats HTML templates
- Run manually: `npm run format`
## Common Patterns

### Adding a New Model

1. Edit `models.py` in the appropriate app
2. `make makemigrations`
3. `make migrate`
4. Register in `admin.py` if needed
5. Update views to filter by company
### CSV Upload Field Mapping

Expected CSV columns (see README.md for full schema):

- `session_id`, `start_time`, `end_time`, `ip_address`, `country`, `language`
- `messages_sent`, `sentiment`, `escalated`, `forwarded_hr`
- `full_transcript`, `avg_response_time`, `tokens`, `tokens_eur`
- `category`, `initial_msg`, `user_rating`
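A hedged sketch of how these columns might be parsed into per-session dicts; the real logic lives in `dashboard/views.py:upload_data`, and the sample rows below are invented, showing only a few of the schema columns.

```python
import csv
import io

# Invented sample upload using a subset of the expected columns.
SAMPLE = """session_id,start_time,country,sentiment,messages_sent
s-001,2024-01-01T10:00,NL,positive,4
s-002,2024-01-01T11:00,DE,negative,9
"""


def parse_sessions(text):
    """Parse CSV text into one dict per chat session."""
    rows = []
    for row in csv.DictReader(io.StringIO(text)):
        row["messages_sent"] = int(row["messages_sent"])  # coerce numeric field
        rows.append(row)
    return rows


sessions = parse_sessions(SAMPLE)
```

In the real upload path each parsed row becomes a ChatSession record attached to a new DataSource, with the remaining numeric columns (`tokens`, `avg_response_time`, etc.) coerced the same way.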
### Testing Celery Tasks

```bash
cd dashboard_project
uv run python manage.py test_celery
```
### Creating Sample Data

```bash
cd dashboard_project
uv run python manage.py create_sample_data
```

Creates admin user (admin/admin123), 3 companies with users, sample dashboards.
## Development Workflow

1. Before starting: `uv venv && source .venv/bin/activate && uv sync`
2. Run migrations: `make migrate`
3. Start services: Terminal 1: `make run`, Terminal 2: `make celery`, Terminal 3: `make celery-beat`
4. Make changes: Edit code, test locally
5. Test: `make test` and `make lint`
6. Format: `make format && bun run format`
7. Commit: Pre-commit hooks run automatically
Script definitions (`yq -r '.scripts' package.json`):

```json
{
  "format": "prettier --write .; bun format:py",
  "format:check": "prettier --check .; bun format:py -- --check",
  "format:py": "uvx ruff format",
  "lint:js": "oxlint",
  "lint:js:fix": "bun lint:js -- --fix",
  "lint:js:strict": "oxlint --import-plugin -D correctness -W suspicious",
  "lint:md": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.{node_modules,trunk,grit,venv,opencode,github/chatmodes,claude/agents}\"",
  "lint:md:fix": "bun lint:md -- --fix",
  "lint:py": "uvx ruff check",
  "lint:py:fix": "uvx ruff check --fix",
  "typecheck:js": "oxlint --type-aware",
  "typecheck:js:fix": "bun typecheck:js -- --fix",
  "typecheck:py": "uvx ty check"
}
```
## Important Context

- Django 5.2+ specific features may be in use
- UV package manager preferred over pip for speed
- Celery required for background tasks, needs Redis or SQLite backend
- Multi-tenancy is enforced at query level, not database level
- Bootstrap 5 + Plotly.js for frontend
- Working directory: All Django commands run from the `dashboard_project/` subdirectory
## File Organization

- Django apps: `dashboard_project/{accounts,dashboard,data_integration}/`
- Settings: `dashboard_project/dashboard_project/settings.py`
- Static files: `dashboard_project/static/`
- Templates: `dashboard_project/templates/`
- Uploaded CSVs: `dashboard_project/media/data_sources/`
- Scripts: `dashboard_project/scripts/` (cleanup, data fixes)
- Examples: `examples/` (sample CSV files)
## Testing Notes

- pytest configured via `pyproject.toml`
- Test discovery: `test_*.py` files in `dashboard_project/`
- Django settings: `DJANGO_SETTINGS_MODULE = "dashboard_project.settings"`
- Run a specific test: `cd dashboard_project && uv run -m pytest path/to/test.py::TestClass::test_method`