Compare commits

3 Commits

Author SHA1 Message Date
pre-commit-ci[bot]
ed7923fc1c [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-05-18 22:05:54 +00:00
dependabot[bot]
41bd7d984e Bump numpy from 2.2.5 to 2.2.6
Bumps [numpy](https://github.com/numpy/numpy) from 2.2.5 to 2.2.6.
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](https://github.com/numpy/numpy/compare/v2.2.5...v2.2.6)

---
updated-dependencies:
- dependency-name: numpy
  dependency-version: 2.2.6
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-05-18 22:05:40 +00:00
0ee64050e5 Enhance development environment setup and security scanning workflows
Some checks failed
Bandit / bandit (push) Has been cancelled
Codacy Security Scan / Codacy Security Scan (push) Has been cancelled
2025-05-18 22:04:15 +00:00
88 changed files with 3119 additions and 5714 deletions

View File

@@ -1,61 +0,0 @@
---
name: playwright-test-generator
description: Use this agent when you need to create automated browser tests using Playwright. Examples: <example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example><example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>
tools: Glob, Grep, Read, mcp__playwright-test__browser_click, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_verify_element_visible, mcp__playwright-test__browser_verify_list_visible, mcp__playwright-test__browser_verify_text_visible, mcp__playwright-test__browser_verify_value, mcp__playwright-test__browser_wait_for, mcp__playwright-test__generator_read_log, mcp__playwright-test__generator_setup_page, mcp__playwright-test__generator_write_test
model: sonnet
color: blue
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specifications
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
  - Use the Playwright tools to execute it manually in real time.
  - Use the step description as the intent for each Playwright tool call.
- Retrieve the generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
  - The file should contain a single test
  - The file name must be an fs-friendly version of the scenario name
  - The test must be placed in a describe block matching the top-level test plan item
  - The test title must match the scenario name
  - Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
    multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>
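
For illustration only, here is a minimal sketch of what the skeleton above might look like once the elided calls are filled in. It assumes the public TodoMVC demo as the page under test; the URL, placeholder text, and counter wording are assumptions, not taken from files in this diff.

```ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
import { test, expect } from '@playwright/test';

test.describe('Adding New Todos', () => {
  test('Add Valid Todo', async ({ page }) => {
    // Stand-in for the seed/setup step; the URL is an assumption.
    await page.goto('https://demo.playwright.dev/todomvc/');

    // 1. Click in the "What needs to be done?" input field
    const input = page.getByPlaceholder('What needs to be done?'); // assumed placeholder text
    await input.click();

    // 2. Type "Buy groceries"
    await input.fill('Buy groceries');

    // 3. Press Enter key
    await input.press('Enter');

    // Expected: the todo appears and the counter updates (assumed wording).
    await expect(page.getByText('Buy groceries')).toBeVisible();
    await expect(page.getByText('1 item left')).toBeVisible();
  });
});
```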

View File

@@ -1,47 +0,0 @@
---
name: playwright-test-healer
description: Use this agent when you need to debug and fix failing Playwright tests. Examples: <example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example><example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>
tools: Glob, Grep, Read, Write, Edit, MultiEdit, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_generate_locator, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_snapshot, mcp__playwright-test__test_debug, mcp__playwright-test__test_list, mcp__playwright-test__test_run
model: sonnet
color: red
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the `test_run` tool to identify failing tests
2. **Debug failed tests**: For each failing test, run `test_debug`.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
  so that it is skipped during execution. Add a comment before the failing step explaining what happens instead of the
  expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs
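
To make the last few principles concrete, here is a small, hypothetical sketch of the kind of repair the healer might produce: a regex-based locator for dynamic text, a web-first assertion instead of waiting for networkidle, and a test.fixme() fallback with a comment describing the observed behavior. The URL, placeholder, and button names are assumptions for illustration, not taken from this repository.

```ts
import { test, expect } from '@playwright/test';

test.describe('Todo counter', () => {
  test('counter updates after adding a todo', async ({ page }) => {
    await page.goto('https://demo.playwright.dev/todomvc/'); // assumed app URL

    // Dynamic data: a regex locator survives "1 item left" vs "2 items left".
    const counter = page.getByText(/\d+ items? left/);

    await page.getByPlaceholder('What needs to be done?').fill('Buy groceries'); // assumed placeholder
    await page.keyboard.press('Enter');

    // Web-first assertion with auto-waiting, instead of waiting for networkidle.
    await expect(counter).toBeVisible();
  });

  // Fallback: a correct-looking test that still fails gets skipped with fixme plus an explanation.
  test.fixme('clears completed todos', async ({ page }) => {
    // Observed: the "Clear completed" button never becomes enabled, so this click times out.
    await page.getByRole('button', { name: 'Clear completed' }).click();
  });
});
```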

View File

@@ -1,98 +0,0 @@
---
name: playwright-test-planner
description: Use this agent when you need to create comprehensive test plan for a web application or website. Examples: <example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example><example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>
tools: Glob, Grep, Read, Write, mcp__playwright-test__browser_click, mcp__playwright-test__browser_close, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_navigate_back, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_take_screenshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_wait_for, mcp__playwright-test__planner_setup_page
model: sonnet
color: green
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use the browser_* tools to navigate and discover the interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
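
The plan format above references a seed spec without showing it. Purely as a hypothetical illustration (the real `tests/seed.spec.ts` is not included in this diff), a seed that guarantees the blank, fresh starting state the scenarios assume could look like this:

```ts
import { test as base, expect } from '@playwright/test';

// Hypothetical seed fixture: every scenario starts from a freshly loaded app with no stored todos.
export const test = base.extend({
  page: async ({ page }, use) => {
    await page.goto('https://demo.playwright.dev/todomvc/'); // assumed app URL
    await page.evaluate(() => localStorage.clear()); // assumes todos are persisted in localStorage
    await page.reload();
    await use(page);
  },
});

export { expect };
```

Scenario files would then import `test` and `expect` from this seed instead of `@playwright/test`, which keeps scenarios independent and runnable in any order.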

View File

@@ -1,53 +0,0 @@
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
.venv/
venv/
env/
ENV/
# Django
*.log
db.sqlite3
db.sqlite3-journal
/static/
/media/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# Testing
.pytest_cache/
.coverage
htmlcov/
# Git
.git/
.gitignore
# Docker
Dockerfile
docker-compose.yml
.dockerignore
# Documentation
*.md
docs/
# CI/CD
.github/
# Playwright
.playwright-mcp/
# Other
*.bak
*.tmp
node_modules/

View File

@@ -27,13 +27,12 @@ indent_size = 2
# CSS, JavaScript, and JSON files
[*.{css,scss,js,json}]
indent_style = space
indent_size = 2
indent_style = tab
indent_size = 4
# Markdown files
[*.md]
trim_trailing_whitespace = false
indent_size = 4
# YAML files
[*.{yml,yaml}]

View File

@@ -1,20 +1,8 @@
# .env.sample - rename to .env and update with actual credentials
# Django settings
# Generate secret with e.g. `openssl rand -hex 32`
DJANGO_SECRET_KEY=your-secure-secret-key
DJANGO_DEBUG=True
# Database configuration (optional - local development uses SQLite by default)
# Uncomment these to use PostgreSQL locally:
# DATABASE_URL=postgresql://postgres:postgres@localhost:5432/dashboard_db
# POSTGRES_DB=dashboard_db
# POSTGRES_USER=postgres
# POSTGRES_PASSWORD=postgres
# POSTGRES_HOST=localhost
# POSTGRES_PORT=5432
#
# Note: Docker Compose automatically uses PostgreSQL via docker-compose.yml environment variables
# External API credentials
EXTERNAL_API_USERNAME=your-api-username
EXTERNAL_API_PASSWORD=your-api-password
@@ -22,7 +10,7 @@ EXTERNAL_API_PASSWORD=your-api-password
# Redis settings for Celery
REDIS_URL=redis://localhost:6379/0
CELERY_BROKER_URL=redis://localhost:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://localhost:6379/0
# Celery Task Schedule (in seconds)
CHAT_DATA_FETCH_INTERVAL=3600

View File

@@ -1,97 +0,0 @@
---
description: Use this agent when you need to create comprehensive test plan for a web application or website.
tools: ['edit/createFile', 'edit/createDirectory', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_close', 'playwright-test/browser_console_messages', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_navigate_back', 'playwright-test/browser_network_requests', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_take_screenshot', 'playwright-test/browser_type', 'playwright-test/browser_wait_for', 'playwright-test/planner_setup_page']
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use the browser_* tools to navigate and discover the interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
<example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example>
<example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>

View File

@@ -1,61 +0,0 @@
---
description: Use this agent when you need to create automated browser tests using Playwright.
tools: ['search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_type', 'playwright-test/browser_verify_element_visible', 'playwright-test/browser_verify_list_visible', 'playwright-test/browser_verify_text_visible', 'playwright-test/browser_verify_value', 'playwright-test/browser_wait_for', 'playwright-test/generator_read_log', 'playwright-test/generator_setup_page', 'playwright-test/generator_write_test']
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specifications
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
  - Use the Playwright tools to execute it manually in real time.
  - Use the step description as the intent for each Playwright tool call.
- Retrieve the generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
  - The file should contain a single test
  - The file name must be an fs-friendly version of the scenario name
  - The test must be placed in a describe block matching the top-level test plan item
  - The test title must match the scenario name
  - Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
    multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>
<example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example>
<example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>

View File

@@ -1,46 +0,0 @@
---
description: Use this agent when you need to debug and fix failing Playwright tests.
tools: ['edit/createFile', 'edit/createDirectory', 'edit/editFiles', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_console_messages', 'playwright-test/browser_evaluate', 'playwright-test/browser_generate_locator', 'playwright-test/browser_network_requests', 'playwright-test/browser_snapshot', 'playwright-test/test_debug', 'playwright-test/test_list', 'playwright-test/test_run']
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the `test_run` tool to identify failing tests
2. **Debug failed tests**: For each failing test, run `test_debug`.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
  so that it is skipped during execution. Add a comment before the failing step explaining what happens instead of the
  expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs
<example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example>
<example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>

View File

@@ -11,8 +11,7 @@
## uv
UV is a fast Python package and project manager written in Rust. Use UV to manage dependencies, virtual environments,
and run Python scripts with improved performance.
UV is a fast Python package and project manager written in Rust. Use UV to manage dependencies, virtual environments, and run Python scripts with improved performance.
### Running Python Scripts
@@ -154,8 +153,7 @@ and run Python scripts with improved performance.
## Project Structure
This section provides a comprehensive overview of the LiveGraphsDjango project structure and the function of each key
file. Please update this section whenever there are noteworthy changes to the structure or to a file's function.
This section provides a comprehensive overview of the LiveGraphsDjango project structure and the function of each key file. Please update this section whenever there are noteworthy changes to the structure or to a file's function.
```tree
LiveGraphsDjango/

View File

@@ -1,37 +1,22 @@
# To get started with Dependabot version updates, you'll need to specify which
# package ecosystems to update and where the package manifests are located.
# Please see the documentation for all configuration options:
# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
# Please see the documentation for more information:
# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
# https://containers.dev/guide/dependabot
version: 2
multi-ecosystem-groups:
all-dependencies:
schedule:
interval: "monthly"
updates:
- package-ecosystem: "uv"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
- package-ecosystem: "bun"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
- package-ecosystem: "github-actions"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
- package-ecosystem: "docker"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
- package-ecosystem: "docker-compose"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
- package-ecosystem: "devcontainers"
directory: "/"
patterns: ["*"]
multi-ecosystem-group: "all-dependencies"
schedule:
interval: "weekly"
day: "tuesday"
time: "03:00"
timezone: "Europe/Amsterdam"
- package-ecosystem: "uv"
directory: "/"
schedule:
interval: "weekly"
day: "tuesday"
time: "03:00"
timezone: "Europe/Amsterdam"

View File

@@ -29,7 +29,7 @@ jobs:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v5
- uses: actions/checkout@v4
- name: Bandit Scan
uses: shundor/python-bandit-scan@ab1d87dfccc5a0ffab88be3aaac6ffe35c10d6cd
with: # optional arguments

View File

@@ -1,77 +0,0 @@
name: Claude Code Review
on:
pull_request:
types: [opened, synchronize]
# Optional: Only run on specific file changes
# paths:
# - "src/**/*.ts"
# - "src/**/*.tsx"
# - "src/**/*.js"
# - "src/**/*.jsx"
jobs:
claude-review:
# Optional: Filter by PR author
# if: |
# github.event.pull_request.user.login == 'external-contributor' ||
# github.event.pull_request.user.login == 'new-developer' ||
# github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
issues: write
id-token: write
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install pre-commit
run: uv tool install pre-commit
- name: Run pre-commit hooks
run: pre-commit install --install-hooks
- name: Run Claude Code Review
id: claude-review
uses: anthropics/claude-code-action@v1
with:
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
prompt: |
REPO: ${{ github.repository }}
PR NUMBER: ${{ github.event.pull_request.number }}
Please review this pull request and provide feedback on:
- Code quality and best practices
- Potential bugs or issues
- Performance considerations
- Security concerns
- Test coverage
Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.
Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
claude_args: |
--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"
--allowedTools Bash,Edit,NotebookEdit,SlashCommand,WebFetch,WebSearch,Write
--max-turns 1000
--model claude-sonnet-4-5
# --mcp-config '{ "mcpServers": { "bun-docs": { "type": "http", "url": "https://bun.com/docs/mcp" } } }'
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
actions: write

View File

@@ -1,73 +0,0 @@
name: Claude Code
on:
issue_comment:
types: [created]
pull_request_review_comment:
types: [created]
issues:
types: [opened, assigned]
pull_request_review:
types: [submitted]
jobs:
claude:
if: |
(github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
(github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
(github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
runs-on: ubuntu-latest
permissions:
contents: write
pull-requests: write
issues: write
id-token: write
actions: write # Required for Claude to read CI results on PRs
steps:
- name: Checkout repository
uses: actions/checkout@v4
with:
fetch-depth: 1
- name: Set up Python
uses: actions/setup-python@v5
- name: Install uv
uses: astral-sh/setup-uv@v6
- name: Install pre-commit
run: uv tool install pre-commit
- name: Run pre-commit hooks
run: pre-commit install --install-hooks
- name: Run Claude Code
id: claude
uses: anthropics/claude-code-action@v1
with:
settings: |
{
"includeCoAuthoredBy": false,
"alwaysThinkingEnabled": true,
"permissions": {"defaultMode": "bypassPermissions"}
}
claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
claude_args: |
--allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"
--allowedTools Bash,Edit,NotebookEdit,SlashCommand,WebFetch,WebSearch,Write
--max-turns 1000
--model claude-sonnet-4-5
# --mcp-config '{ "mcpServers": { "bun-docs": { "type": "http", "url": "https://bun.com/docs/mcp" } } }'
# This is an optional setting that allows Claude to read CI results on PRs
additional_permissions: |
actions: write
# Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
# prompt: 'Update the pull request description to include a summary of changes.'
# Optional: Add claude_args to customize behavior and configuration
# See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
# or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
# claude_args: '--allowed-tools Bash(gh pr:*)'

View File

@@ -36,11 +36,11 @@ jobs:
steps:
# Checkout the repository to the GitHub Actions runner
- name: Checkout code
uses: actions/checkout@v5
uses: actions/checkout@v4
# Execute Codacy Analysis CLI and generate a SARIF output with the security issues identified during the analysis
- name: Run Codacy Analysis CLI
uses: codacy/codacy-analysis-cli-action@master
uses: codacy/codacy-analysis-cli-action@d840f886c4bd4edc059706d09c6a1586111c540b
with:
# Check https://github.com/codacy/codacy-analysis-cli#project-token to get your project token from your Codacy repository
# You can also omit the token and run the tools that support default configurations
@@ -56,6 +56,6 @@ jobs:
# Upload the SARIF file generated in the previous step
- name: Upload SARIF results file
uses: github/codeql-action/upload-sarif@v4
uses: github/codeql-action/upload-sarif@v3
with:
sarif_file: results.sarif

.gitignore (vendored): 3 deletions
View File

@@ -421,6 +421,3 @@ package-lock.json
# Local database files
*.rdb
*.sqlite
# playwright
.playwright-mcp/

View File

@@ -1,73 +0,0 @@
// Configuration for markdownlint-cli2
{
"$schema": "https://cdn.jsdelivr.net/gh/DavidAnson/markdownlint-cli2@refs/heads/main/schema/markdownlint-cli2-config-schema.json",
"config": {
"inline-html": false,
"code-block-style": {
"style": "fenced"
},
"code-fence-style": {
"style": "backtick"
},
"emphasis-style": {
"style": "asterisk"
},
"extended-ascii": {
"ascii-only": true
},
"fenced-code-language": {
"allowed_languages": [
"bash",
"sh",
"html",
"javascript",
"json",
"markdown",
"text",
"python",
"csv",
"tree"
],
"language_only": true
},
"heading-style": {
"style": "atx"
},
"hr-style": {
"style": "---"
},
"MD013": false,
"link-image-style": {
"collapsed": false,
"shortcut": false,
"url_inline": false
},
"no-duplicate-heading": {
"siblings_only": true
},
"ol-prefix": {
"style": "ordered"
},
"proper-names": {
"code_blocks": false,
"names": [
"Cake.Markdownlint",
"CommonMark",
"JavaScript",
"Markdown",
"markdown-it",
"markdownlint",
"Node.js"
]
},
"reference-links-images": {
"shortcut_syntax": true
},
"strong-style": {
"style": "asterisk"
},
"ul-style": {
"style": "dash"
}
}
}

.markdownlint.json (new file): 17 additions
View File

@@ -0,0 +1,17 @@
{
"default": true,
"MD007": {
"indent": 4,
"start_indented": false,
"start_indent": 4
},
"MD013": false,
"MD029": false,
"MD030": {
"ul_single": 3,
"ol_single": 2,
"ul_multi": 3,
"ol_multi": 2
},
"MD033": false
}

View File

@@ -1,56 +0,0 @@
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specifications
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
  - Use the Playwright tools to execute it manually in real time.
  - Use the step description as the intent for each Playwright tool call.
- Retrieve the generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
  - The file should contain a single test
  - The file name must be an fs-friendly version of the scenario name
  - The test must be placed in a describe block matching the top-level test plan item
  - The test title must match the scenario name
  - Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
    multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>
<example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example>
<example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>

View File

@@ -1,42 +0,0 @@
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the `test_run` tool to identify failing tests
2. **Debug failed tests**: For each failing test, run `test_debug`.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
  so that it is skipped during execution. Add a comment before the failing step explaining what happens instead of the
  expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs
<example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example>
<example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>

View File

@@ -1,93 +0,0 @@
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use the browser_* tools to navigate and discover the interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
<example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example>
<example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>

View File

@@ -4,34 +4,26 @@
# - post-merge
# - post-rewrite
ci:
skip: [prettier-jinja, prettier-all, tombi-format, tombi-lint] # django-check, django-check-migrations
default_language_version:
node: 22.15.1
python: python3.13
repos:
- repo: https://github.com/adamchainz/django-upgrade
rev: 1.29.1
rev: 1.25.0
hooks:
- id: django-upgrade
# uv hooks for dependency management
- repo: https://github.com/astral-sh/uv-pre-commit
rev: 0.9.7
rev: 0.7.5
hooks:
# Update the uv lockfile
- id: uv-lock
# Update the requirements.txt
- id: uv-export
# Standard pre-commit hooks
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v6.0.0
rev: v5.0.0
hooks:
- id: trailing-whitespace
args: [--markdown-linebreak-ext=md]
- id: end-of-file-fixer
- id: check-yaml
# - id: check-json
@@ -45,21 +37,28 @@ repos:
- id: mixed-line-ending
args: [--fix=lf]
- repo: local
# - repo: https://github.com/psf/black
# rev: 22.10.0
# hooks:
# - id: black
# # HTML/Django template linting
# - repo: https://github.com/rtts/djhtml
# rev: 3.0.7
# hooks:
# - id: djhtml
# entry: djhtml --tabwidth 4
- repo: https://github.com/pre-commit/mirrors-prettier
rev: v3.1.0
hooks:
- id: prettier-jinja
name: Prettier Jinja
language: node
- id: prettier
types_or: [javascript, jsx, ts, tsx, css, scss, html, json, yaml, markdown]
additional_dependencies:
- prettier
- prettier-plugin-jinja-template
types_or: [html, jinja]
entry: bunx prettier --plugin=prettier-plugin-jinja-template --parser=jinja-template --write
- id: prettier-all
name: Prettier All
language: node
types_or: [javascript, jsx, ts, tsx, css, scss, json, yaml, markdown]
entry: bunx prettier --write
# types_or: [javascript, jsx, ts, tsx, css, scss, json, yaml, markdown]
# exclude: '.*\.html$'
- repo: https://github.com/DavidAnson/markdownlint-cli2
rev: v0.18.1
@@ -69,54 +68,37 @@ repos:
# Ruff for linting and formatting
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.3
rev: v0.11.10
hooks:
# Run the linter.
- id: ruff-check
- id: ruff
args: [--fix]
# Run the formatter.
- id: ruff-format
- repo: https://github.com/oxc-project/mirrors-oxlint
rev: v1.25.0 # change to the latest version
# Django-specific hooks
- repo: local
hooks:
- id: oxlint
verbose: true
- id: django-check
name: Django Check
entry: uv run python dashboard_project/manage.py check
language: system
pass_filenames: false
types: [python]
always_run: true
# # Django-specific hooks
# - repo: local
# hooks:
# - id: django-check
# name: Django Check
# entry: uv run python dashboard_project/manage.py check
# language: python
# pass_filenames: false
# types: [python]
# always_run: true
# additional_dependencies: [uv]
# - id: django-check-migrations
# name: Django Check Migrations
# entry: uv run python dashboard_project/manage.py makemigrations --check --dry-run
# language: python
# pass_filenames: false
# types: [python]
# additional_dependencies: [uv]
- id: django-check-migrations
name: Django Check Migrations
entry: uv run python dashboard_project/manage.py makemigrations --check --dry-run
language: system
pass_filenames: false
types: [python]
# Security checks
- repo: https://github.com/pycqa/bandit
rev: 1.8.6
rev: 1.8.3
hooks:
- id: bandit
args: [-c, pyproject.toml, -r, dashboard_project]
# additional_dependencies: ["bandit[toml]"]
# TOML formatting and linting
- repo: https://github.com/tombi-toml/tombi-pre-commit
rev: v0.6.40
hooks:
- id: tombi-format
- id: tombi-lint
additional_dependencies: ["bandit[toml]"]
# # Type checking
# - repo: https://github.com/pre-commit/mirrors-mypy

View File

@@ -1,5 +1,4 @@
{
"$schema": "https://json.schemastore.org/prettierrc.json",
"arrowParens": "always",
"bracketSpacing": true,
"embeddedLanguageFormatting": "auto",
@@ -28,13 +27,7 @@
"proseWrap": "preserve",
"printWidth": 100
}
},
{
"files": ["*.jsonc"],
"options": {
"trailingComma": "none"
}
}
],
"plugins": ["prettier-plugin-jinja-template", "prettier-plugin-packagejson"]
"plugins": ["prettier-plugin-jinja-template"]
}

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Run linting, formatting and type checking
echo "Running Ruff linter..."

View File

@@ -1,4 +1,4 @@
#!/usr/bin/env bash
#!/bin/bash
# Run tests with coverage
echo "Running tests with coverage..."

.uv: 10 changes
View File

@@ -5,11 +5,17 @@ keep-lockfile = true
# Cache compiled bytecode for dependencies
compile-bytecode = true
# Use a local cache directory
local-cache = true
# Verbosity of output
verbosity = "minimal"
; # Define which part of the environment to check
; environment-checks = ["python", "dependencies"]
# Define which part of the environment to check
environment-checks = ["python", "dependencies"]
# How to resolve dependencies not specified with exact versions
dependency-resolution = "strict"
# If the cache and target directories are on different filesystems, hardlinking may not be supported.
link-mode = "copy"

View File

@@ -1,42 +0,0 @@
{
"auto_install_extensions": { "ty": true },
"languages": {
"Python": {
"language_servers": [
// Disable basedpyright and enable Ty, and otherwise
// use the default configuration.
"ty",
"!basedpyright",
"..."
]
},
"TypeScript": {
"language_servers": [
// Enable oxc for TypeScript files.
"oxc",
"..."
]
},
"JavaScript": {
"language_servers": [
// Enable oxc for JavaScript files.
"oxc",
"..."
]
}
},
"lsp": {
"oxc": {
"initialization_options": {
"options": {
"run": "onType",
"configPath": null,
"tsConfigPath": null,
"unusedDisableDirectives": "allow",
"typeAware": true,
"flags": {}
}
}
}
}
}

CLAUDE.md: 276 deletions
View File

@@ -1,276 +0,0 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Multi-tenant Django analytics dashboard for chat session data. Companies upload CSV files or connect external APIs to visualize chat metrics, sentiment analysis, and session details. Built with Django 5.2+, Python 3.13+, managed via UV package manager.
## Essential Commands
### Development Server
```bash
# Start Django dev server (port 8001)
make run
# or
cd dashboard_project && uv run python manage.py runserver 8001
```
### Database Operations
```bash
# Create migrations after model changes
make makemigrations
# Apply migrations
make migrate
# Reset database (flush + migrate)
make reset-db
# Create superuser
make superuser
```
### Background Tasks (Celery)
```bash
# Start Celery worker (separate terminal)
make celery
# or
cd dashboard_project && uv run celery -A dashboard_project worker --loglevel=info
# Start Celery Beat scheduler (separate terminal)
make celery-beat
# or
cd dashboard_project && uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
# Start all services (web + celery + beat) with foreman
make procfile
```
### Testing & Quality
```bash
# Run tests
make test
# or
uv run -m pytest
# Run single test
cd dashboard_project && uv run -m pytest path/to/test_file.py::test_function
# Linting
make lint # Python only
npm run lint:py # Ruff check
npm run lint:py:fix # Auto-fix Python issues
# Formatting
make format # Ruff + Black
npm run format # Prettier (templates) + Python
npm run format:check # Verify formatting
# JavaScript linting
npm run lint:js
npm run lint:js:fix
# Markdown linting
npm run lint:md
npm run lint:md:fix
# Type checking
npm run typecheck:py # Python with ty
npm run typecheck:js # JavaScript with oxlint
```
### Dependency Management (UV)
```bash
# Install all dependencies
uv pip install -e ".[dev]"
# Add new package
uv pip install <package-name>
# Then manually update pyproject.toml dependencies
# Update lockfile
make lock # or uv pip freeze > requirements.lock
```
### Docker
```bash
make docker-build
make docker-up
make docker-down
```
## Architecture
### Three-App Structure
1. **accounts** - Authentication & multi-tenancy
- `CustomUser` extends AbstractUser with `company` FK and `is_company_admin` flag
- `Company` model is top-level organizational unit
- All users belong to exactly one Company
2. **dashboard** - Core analytics
- `DataSource` - CSV uploads or external API links, owned by Company
- `ChatSession` - Parsed chat data from CSVs/APIs, linked to DataSource
- `Dashboard` - Custom dashboard configs with M2M to DataSources
- Views: dashboard display, CSV upload, data export (CSV/JSON/Excel), search
3. **data_integration** - External API data fetching
- `ExternalDataSource` - API credentials and endpoints
- `ChatSession` & `ChatMessage` - API-fetched data models (parallel to dashboard.ChatSession)
- Celery tasks for async API data fetching via `tasks.py`
### Multi-Tenancy Model
```text
Company (root isolation)
├── CustomUser (employees, one is_company_admin)
├── DataSource (CSV files or API links)
│ └── ChatSession (parsed data)
└── Dashboard (M2M to DataSources)
```
**Critical**: All views must filter by `request.user.company` to enforce data isolation.
### Data Flow
**CSV Upload**:
1. User uploads CSV via `dashboard/views.py:upload_data`
2. CSV parsed, creates DataSource + multiple ChatSession records
3. Dashboard aggregates ChatSessions for visualization
**External API**:
1. Admin configures ExternalDataSource with API credentials
2. Celery task (`data_integration/tasks.py`) fetches data periodically
3. Creates ChatSession + ChatMessage records in `data_integration` app
4. Optionally synced to `dashboard` app for unified analytics
### Key Design Patterns
- **Multi-tenant isolation**: Every query filtered by Company FK
- **Role-based access**: is_staff (Django admin), is_company_admin (company management), regular user (view only)
- **Dual ChatSession models**: `dashboard.ChatSession` (CSV-based) and `data_integration.ChatSession` (API-based) exist separately
- **Async processing**: Celery handles long-running API fetches, uses Redis or SQLite backend
## Configuration Notes
### Settings (`dashboard_project/settings.py`)
- Uses `python-dotenv` for environment variables
- Multi-app: accounts, dashboard, data_integration
- Celery configured in `dashboard_project/celery.py`
- Custom user model: `AUTH_USER_MODEL = "accounts.CustomUser"`
### Environment Variables
Create `.env` from `.env.sample`:
- `DJANGO_SECRET_KEY` - Generate for production
- `DJANGO_DEBUG` - Set False in production
- `EXTERNAL_API_USERNAME` / `EXTERNAL_API_PASSWORD` - For data_integration API
- `CELERY_BROKER_URL` - Redis URL or SQLite fallback
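A minimal `.env` could look like the following; every value here is a placeholder, not a real credential or recommended setting.

```bash
# .env — placeholder values for local development only
DJANGO_SECRET_KEY=change-me
DJANGO_DEBUG=True
EXTERNAL_API_USERNAME=api-user
EXTERNAL_API_PASSWORD=api-pass
CELERY_BROKER_URL=redis://localhost:6379/0
```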
### Template Formatting
- Prettier configured for Django templates via `prettier-plugin-jinja-template`
- Pre-commit hook auto-formats HTML templates
- Run manually: `npm run format`
## Common Patterns
### Adding New Model
1. Edit `models.py` in appropriate app
2. `make makemigrations`
3. `make migrate`
4. Register in `admin.py` if needed
5. Update views to filter by company
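A hedged sketch of steps 1 and 4: the `Report` model below is invented for illustration, and only the Company FK pattern reflects the project conventions.

```python
# dashboard/models.py — illustrative only; "Report" is a hypothetical model.
from accounts.models import Company
from django.db import models


class Report(models.Model):
    """Every tenant-owned model carries a Company FK for isolation."""

    company = models.ForeignKey(Company, on_delete=models.CASCADE, related_name="reports")
    name = models.CharField(max_length=200)
    created_at = models.DateTimeField(auto_now_add=True)

    def __str__(self) -> str:
        return self.name


# Step 4 (dashboard/admin.py), if the model should appear in Django admin:
# from django.contrib import admin
# from .models import Report
# admin.site.register(Report)
```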
### CSV Upload Field Mapping
Expected CSV columns (see README.md for full schema):
- session_id, start_time, end_time, ip_address, country, language
- messages_sent, sentiment, escalated, forwarded_hr
- full_transcript, avg_response_time, tokens, tokens_eur
- category, initial_msg, user_rating
### Testing Celery Tasks
```bash
cd dashboard_project
uv run python manage.py test_celery
```
### Creating Sample Data
```bash
cd dashboard_project
uv run python manage.py create_sample_data
```
Creates an admin user (`admin` / `admin123`), three companies with users, and sample dashboards.
## Development Workflow
1. **Before starting**: `uv venv && source .venv/bin/activate && uv sync`
2. **Run migrations**: `make migrate`
3. **Start services**: Terminal 1: `make run`, Terminal 2: `make celery`, Terminal 3: `make celery-beat`
4. **Make changes**: Edit code, test locally
5. **Test**: `make test` and `make lint`
6. **Format**: `make format && bun run format`
7. **Commit**: Pre-commit hooks run automatically
> `yq -r '.scripts' package.json`
>
> ```json
> {
> "format": "prettier --write .; bun format:py",
> "format:check": "prettier --check .; bun format:py -- --check",
> "format:py": "uvx ruff format",
> "lint:js": "oxlint",
> "lint:js:fix": "bun lint:js -- --fix",
> "lint:js:strict": "oxlint --import-plugin -D correctness -W suspicious",
> "lint:md": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.{node_modules,trunk,grit,venv,opencode,github/chatmodes,claude/agents}\"",
> "lint:md:fix": "bun lint:md -- --fix",
> "lint:py": "uvx ruff check",
> "lint:py:fix": "uvx ruff check --fix",
> "typecheck:js": "oxlint --type-aware",
> "typecheck:js:fix": "bun typecheck:js -- --fix",
> "typecheck:py": "uvx ty check"
> }
> ```
## Important Context
- **Django 5.2+** specific features may be in use
- **UV package manager** preferred over pip for speed
- **Celery** required for background tasks, needs Redis or SQLite backend
- **Multi-tenancy** is enforced at query level, not database level
- **Bootstrap 5** + **Plotly.js** for frontend
- **Working directory**: All Django commands run from `dashboard_project/` subdirectory
## File Organization
- **Django apps**: `dashboard_project/{accounts,dashboard,data_integration}/`
- **Settings**: `dashboard_project/dashboard_project/settings.py`
- **Static files**: `dashboard_project/static/`
- **Templates**: `dashboard_project/templates/`
- **Uploaded CSVs**: `dashboard_project/media/data_sources/`
- **Scripts**: `dashboard_project/scripts/` (cleanup, data fixes)
- **Examples**: `examples/` (sample CSV files)
## Testing Notes
- pytest configured via `pyproject.toml`
- Test discovery: `test_*.py` files in `dashboard_project/`
- Django settings: `DJANGO_SETTINGS_MODULE = "dashboard_project.settings"`
- Run specific test: `cd dashboard_project && uv run -m pytest path/to/test.py::TestClass::test_method`
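For illustration, a company-isolation test under this configuration might look like the sketch below; fixtures and field names are assumptions, and pytest-django's `django_db` marker is presumed available.

```python
# dashboard_project/dashboard/tests/test_multitenancy.py — illustrative sketch
import pytest
from accounts.models import Company, CustomUser
from dashboard.models import DataSource


@pytest.mark.django_db
def test_data_sources_are_scoped_to_company():
    acme = Company.objects.create(name="Acme")
    other = Company.objects.create(name="Other")
    CustomUser.objects.create_user(username="alice", password="pw", company=acme)

    DataSource.objects.create(company=acme, name="Acme CSV")
    DataSource.objects.create(company=other, name="Other CSV")

    # Mirrors the view-layer rule: every query is narrowed to one company.
    assert DataSource.objects.filter(company=acme).count() == 1
```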

View File

@@ -1,58 +1,34 @@
# Dockerfile
# Use a Python image with uv pre-installed
FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim
# Setup a non-root user
RUN groupadd --system --gid 999 nonroot \
&& useradd --system --gid 999 --uid 999 --create-home nonroot
FROM python:3.13-slim
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV DJANGO_SETTINGS_MODULE=dashboard_project.settings
# Change the working directory to the `app` directory
# Set work directory
WORKDIR /app
# Enable bytecode compilation
ENV UV_COMPILE_BYTECODE=1
# Install UV for Python package management
RUN pip install uv
# Copy from the cache instead of linking since it's a mounted volume
ENV UV_LINK_MODE=copy
# Copy project files
COPY pyproject.toml .
COPY uv.lock .
COPY . .
# Install dependencies (separate layer for caching)
RUN --mount=type=cache,target=/root/.cache/uv \
--mount=type=bind,source=uv.lock,target=uv.lock \
--mount=type=bind,source=pyproject.toml,target=pyproject.toml \
uv sync --frozen --no-install-project --no-dev
# Copy the project into the image
COPY . /app
# Sync the project (install the project itself)
RUN --mount=type=cache,target=/root/.cache/uv \
uv sync --frozen --no-dev
# Place executables in the environment at the front of the path
ENV PATH="/app/.venv/bin:$PATH"
# Install dependencies
RUN uv pip install -e .
# Change to the Django project directory
WORKDIR /app/dashboard_project
# Collect static files (runs as root)
RUN uv run manage.py collectstatic --noinput
# Fix ownership of dashboard_project directory for nonroot user
# This ensures db.sqlite3 and any files created during runtime are writable
RUN chown -R nonroot:nonroot /app/dashboard_project && \
chmod 775 /app/dashboard_project
# Collect static files
RUN python manage.py collectstatic --noinput
# Change back to the app directory
WORKDIR /app
# Use the non-root user to run our application
USER nonroot
# Run gunicorn via uv run to ensure it's in the environment
CMD ["uv", "run", "gunicorn", "dashboard_project.wsgi:application", "--bind", "0.0.0.0:8000", "--chdir", "dashboard_project"]
# Run gunicorn
CMD ["gunicorn", "dashboard_project.wsgi:application", "--bind", "0.0.0.0:8000"]

View File

@@ -1,43 +1,35 @@
.PHONY: venv install install-dev lint test format clean run migrate makemigrations superuser setup-node celery celery-beat docker-build docker-up docker-down reset-db setup-dev procfile
# Create a virtual environment
venv:
uv venv -p 3.13
# Install production dependencies
install:
uv pip install -e .
# Install development dependencies
install-dev:
uv pip install -e ".[dev]"
# Run linting
lint:
uv run -m ruff check dashboard_project
# Run tests
test:
uv run -m pytest
# Format Python code
format:
uv run -m ruff format dashboard_project
uv run -m black dashboard_project
# Setup Node.js dependencies
setup-node:
npm install --include=dev
# Clean Python cache files
clean:
find . -type d -name "__pycache__" -exec rm -rf {} +
find . -type f -name "*.pyc" -delete
@@ -56,52 +48,42 @@ clean:
rm -rf dist/
# Run the development server
run:
cd dashboard_project && uv run python manage.py runserver 8001
# Run Celery worker for background tasks
celery:
cd dashboard_project && uv run celery -A dashboard_project worker --loglevel=info
# Run Celery Beat for scheduled tasks
celery-beat:
cd dashboard_project && uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
# Apply migrations
migrate:
cd dashboard_project && uv run python manage.py migrate
# Create migrations
makemigrations:
cd dashboard_project && uv run python manage.py makemigrations
# Create a superuser
superuser:
cd dashboard_project && uv run python manage.py createsuperuser
# Update uv lock file
lock:
uv pip freeze > requirements.lock
# Setup pre-commit hooks
setup-pre-commit:
pre-commit install
# Run pre-commit on all files
lint-all:
pre-commit run --all-files
# Docker commands
docker-build:
docker-compose build
@@ -112,18 +94,15 @@ docker-down:
docker-compose down
# Initialize or reset the database in development
reset-db:
cd dashboard_project && uv run python manage.py flush --no-input
cd dashboard_project && uv run python manage.py migrate
# Start a Redis server in development (if not installed, fallback to SQLite)
run-redis:
redis-server || echo "Redis not installed, using SQLite fallback"
# Start all development services (web, redis, celery, celery-beat)
run-all:
foreman start
@@ -131,18 +110,15 @@ procfile:
foreman start
# Test Celery task
test-celery:
cd dashboard_project && uv run python manage.py test_celery
# Initialize data integration
init-data-integration:
cd dashboard_project && uv run python manage.py create_default_datasource
cd dashboard_project && uv run python manage.py create_default_datasource
cd dashboard_project && uv run python manage.py test_celery
# Setup development environment
setup-dev: venv install-dev migrate create_default_datasource
@echo "Development environment setup complete"

View File

@@ -1,13 +1,10 @@
# Chat Analytics Dashboard
A Django application that creates an analytics dashboard for chat session data. The application allows different
companies to have their own dashboards and view their own data.
A Django application that creates an analytics dashboard for chat session data. The application allows different companies to have their own dashboards and view their own data.
## Project Overview
This Django project creates a multi-tenant dashboard application for analyzing chat session data. Companies can upload
their chat data (in CSV format) and view analytics and metrics through an interactive dashboard. The application
supports user authentication, role-based access control, and separate data isolation for different companies.
This Django project creates a multi-tenant dashboard application for analyzing chat session data. Companies can upload their chat data (in CSV format) and view analytics and metrics through an interactive dashboard. The application supports user authentication, role-based access control, and separate data isolation for different companies.
### Project Structure
@@ -42,8 +39,8 @@ The project consists of two main Django apps:
1. Clone the repository:
```sh
git clone https://github.com/kjanat/livegraphs-django.git
cd livegraphs-django
git clone <repository-url>
cd LiveGraphsDjango
```
2. Install uv if you don't have it yet:
@@ -140,7 +137,7 @@ The project consists of two main Django apps:
python manage.py runserver
```
10. Access the application at `http://127.0.0.1:8000/`
10. Access the application at <http://127.0.0.1:8000/>
### Development Workflow with UV
@@ -190,8 +187,8 @@ UV offers several advantages over traditional pip, including faster dependency r
1. Clone the repository:
```sh
git clone https://github.com/kjanat/livegraphs-django.git
cd livegraphs-django
git clone <repository-url>
cd dashboard_project
```
2. Build and run with Docker Compose:
@@ -212,8 +209,7 @@ UV offers several advantages over traditional pip, including faster dependency r
### Prettier for Django Templates
This project uses Prettier with the `prettier-plugin-django-annotations` plugin to format HTML templates with Django
template syntax.
This project uses Prettier with the `prettier-plugin-django-annotations` plugin to format HTML templates with Django template syntax.
#### Prettier Configuration
@@ -346,7 +342,6 @@ This will create:
- Fill in the company details and save
3. **Create Users**:
- Go to Users > Add User
- Fill in user details
- Assign the user to a company
@@ -367,7 +362,6 @@ This will create:
- Click "Upload"
3. **Create a Dashboard**:
- Click on "New Dashboard" in the sidebar
- Fill in the dashboard details
- Select data sources to include
@@ -388,7 +382,6 @@ This will create:
- Use filters to refine results
3. **View Session Details**:
- In search results, click the eye icon for a session
- View complete session information and transcript
@@ -442,7 +435,7 @@ If your dashboard is empty:
The CSV file should contain the following columns:
| Column | Description |
|---------------------|--------------------------------------------------------|
| ------------------- | ------------------------------------------------------ |
| `session_id` | Unique identifier for the chat session |
| `start_time` | When the session started (datetime) |
| `end_time` | When the session ended (datetime) |
@@ -514,7 +507,6 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
- System-wide configuration
7. **Responsive Design**:
- Mobile-friendly interface using Bootstrap 5
- Consistent layout and navigation
- Accessible UI components
@@ -558,7 +550,6 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
- JSON serialization for frontend
3. **User Authentication**:
- Login/registration handling
- Session management
- Permission checks
@@ -599,7 +590,6 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
- Manages company users
3. **Regular Users**:
- View dashboards
- Search and explore chat data
- Analyze chat metrics
@@ -620,5 +610,4 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
## License
This project is unlicensed. Usage is restricted to personal and educational purposes only. For commercial use, please
contact the author.
This project is unlicensed. Usage is restricted to personal and educational purposes only. For commercial use, please contact the author.

View File

@@ -51,7 +51,7 @@
- [ ] Implement periodic data download from external API
- Source: <https://proto.notso.ai/jumbo/chats>
- Authentication: Basic Auth
- Credentials: stored securely
- Credentials: [stored securely]
- An example of the data structure can be found in [jumbo.csv](examples/jumbo.csv)
- The file that the endpoint returns is a CSV file, but the file is not a standard CSV file. It has a different structure and format:
- The header row is missing, it is supposed to be `session_id,start_time,end_time,ip_address,country,language,messages_sent,sentiment,escalated,forwarded_hr,full_transcript,avg_response_time,tokens,tokens_eur,category,initial_msg,user_rating`

265
bun.lock
View File

@@ -1,265 +0,0 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"devDependencies": {
"@playwright/test": "^1.56.1",
"@types/bun": "latest",
"markdownlint-cli2": "^0.18.1",
"oxlint": "^1.25.0",
"oxlint-tsgolint": "^0.5.0",
"prettier": "^3.6.2",
"prettier-plugin-jinja-template": "^2.1.0",
"prettier-plugin-packagejson": "^2.5.19",
},
"peerDependencies": {
"typescript": "^5",
},
},
},
"packages": {
"@nodelib/fs.scandir": ["@nodelib/fs.scandir@2.1.5", "", { "dependencies": { "@nodelib/fs.stat": "2.0.5", "run-parallel": "^1.1.9" } }, "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g=="],
"@nodelib/fs.stat": ["@nodelib/fs.stat@2.0.5", "", {}, "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A=="],
"@nodelib/fs.walk": ["@nodelib/fs.walk@1.2.8", "", { "dependencies": { "@nodelib/fs.scandir": "2.1.5", "fastq": "^1.6.0" } }, "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg=="],
"@oxlint-tsgolint/darwin-arm64": ["@oxlint-tsgolint/darwin-arm64@0.5.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-OjNuyBRlxqUgcvlc51Ab38tjRWN+gBSV6Z2004hgfbt6w7RoX6ITA6v3KYQzovCfa4Ne8l+XbhVf9y9PtLipgQ=="],
"@oxlint-tsgolint/darwin-x64": ["@oxlint-tsgolint/darwin-x64@0.5.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-KiKVKe8dd52O1vfnrE4ffyv/bJDG7RtkZiv5/lrMV0r9USL/VKN80qNtYgdzaf64WOUegAgRdWs++DZXGjYGbA=="],
"@oxlint-tsgolint/linux-arm64": ["@oxlint-tsgolint/linux-arm64@0.5.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-10Rd1GVW+Z8pd9zxpWBtRt8ur3pUXZf05iDw3Py9VyPYYpkupXLhAS0T42HLz62qHSmh79XKfDFvMOQqjVzTxQ=="],
"@oxlint-tsgolint/linux-x64": ["@oxlint-tsgolint/linux-x64@0.5.0", "", { "os": "linux", "cpu": "x64" }, "sha512-bj4YUqn7M3vLWELwQ9Y0mrdiV9Elvj/oVCs7meperiIV5FHM29of74ePupKzWp645iJB+gj7jA/OlhCvT4Exug=="],
"@oxlint-tsgolint/win32-arm64": ["@oxlint-tsgolint/win32-arm64@0.5.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-SqNClNZjFFQlnLFmzjKxTuEEJOCT0v7aamkaxtG4ek5uBRUKuN8sAwWesS06yQ3t5JMg0vPLB2fs9xJcM+VdmQ=="],
"@oxlint-tsgolint/win32-x64": ["@oxlint-tsgolint/win32-x64@0.5.0", "", { "os": "win32", "cpu": "x64" }, "sha512-pEsrzV6VM3Eo41AQavoLzNLE1nMWYNtHYUPkzBzkUZMinklWDsdLpRDYX2fw58W7mEyY0NR6T9a+Hn/qI8/aaQ=="],
"@oxlint/darwin-arm64": ["@oxlint/darwin-arm64@1.25.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-OLx4XyUv5SO7k8y5FzJIoTKan+iKK53T1Ws8fBIl4zblUIWI66ZIqSVG2A2rxOBA7XfINqCz8UipGzOW9yzKcg=="],
"@oxlint/darwin-x64": ["@oxlint/darwin-x64@1.25.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-srndNPiliA0rchYKqYfOdqA9kqyVQ6YChK3XJe9Lxo/YG8tTJ5K65g2A5SHTT2s1Nm5DnQa5AKZH7w+7KI/m8A=="],
"@oxlint/linux-arm64-gnu": ["@oxlint/linux-arm64-gnu@1.25.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-W9+DnHDbygprpGV586BolwWES+o2raOcSJv404nOFPQjWZ09efG24nuXrg/fpyoMQb4YoW2W1fvlnyMVU+ADcw=="],
"@oxlint/linux-arm64-musl": ["@oxlint/linux-arm64-musl@1.25.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-1tIMpQhKlItm7uKzs3lluG7KorZR5ItoNKd1iFYF/IPmZ+i0/iuZ7MVWXRjBcgQMhMYSdfZpSVEdFKcFz2HDxA=="],
"@oxlint/linux-x64-gnu": ["@oxlint/linux-x64-gnu@1.25.0", "", { "os": "linux", "cpu": "x64" }, "sha512-xVkmk/zkIulc5o0OUWY04DyBfKotnq9+60O9I5c0DpdKAELVLhZkLmct0apx3jAX6Z/3yYPzhc6Lw1Ia3jU3VQ=="],
"@oxlint/linux-x64-musl": ["@oxlint/linux-x64-musl@1.25.0", "", { "os": "linux", "cpu": "x64" }, "sha512-IeO10dZosJV58YzN0gckhRYac+FM9s5VCKUx2ghgbKR91z/bpSRcRl8Sy5cWTkcVwu3ZTikhK8aXC6j7XIqKNw=="],
"@oxlint/win32-arm64": ["@oxlint/win32-arm64@1.25.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-mpdiXZm2oNuSQAbTEPRDuSeR6v1DCD7Cl/xouR2ggHZu3AKZ4XYmm29hyrzIxrYVoQ/5j+182TGdOpGYn9xQJg=="],
"@oxlint/win32-x64": ["@oxlint/win32-x64@1.25.0", "", { "os": "win32", "cpu": "x64" }, "sha512-opoIACOkcFloWQO6dubBLbcWwW52ML8+3deFdr0WE0PeM9UXdLB0jRMuLsEnplmBoy9TRvmxDJ+Pw8xc2PsOfQ=="],
"@pkgr/core": ["@pkgr/core@0.2.9", "", {}, "sha512-QNqXyfVS2wm9hweSYD2O7F0G06uurj9kZ96TRQE5Y9hU7+tgdZwIkbAKc5Ocy1HxEY2kuDQa6cQ1WRs/O5LFKA=="],
"@playwright/test": ["@playwright/test@1.56.1", "", { "dependencies": { "playwright": "1.56.1" }, "bin": { "playwright": "cli.js" } }, "sha512-vSMYtL/zOcFpvJCW71Q/OEGQb7KYBPAdKh35WNSkaZA75JlAO8ED8UN6GUNTm3drWomcbcqRPFqQbLae8yBTdg=="],
"@sindresorhus/merge-streams": ["@sindresorhus/merge-streams@2.3.0", "", {}, "sha512-LtoMMhxAlorcGhmFYI+LhPgbPZCkgP6ra1YL604EeF6U98pLlQ3iWIGMdWSC+vWmPBWBNgmDBAhnAobLROJmwg=="],
"@types/bun": ["@types/bun@1.3.1", "", { "dependencies": { "bun-types": "1.3.1" } }, "sha512-4jNMk2/K9YJtfqwoAa28c8wK+T7nvJFOjxI4h/7sORWcypRNxBpr+TPNaCfVWq70tLCJsqoFwcf0oI0JU/fvMQ=="],
"@types/debug": ["@types/debug@4.1.12", "", { "dependencies": { "@types/ms": "*" } }, "sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ=="],
"@types/katex": ["@types/katex@0.16.7", "", {}, "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ=="],
"@types/ms": ["@types/ms@2.1.0", "", {}, "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="],
"@types/node": ["@types/node@24.10.0", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-qzQZRBqkFsYyaSWXuEHc2WR9c0a0CXwiE5FWUvn7ZM+vdy1uZLfCunD38UzhuB7YN/J11ndbDBcTmOdxJo9Q7A=="],
"@types/react": ["@types/react@19.2.2", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-6mDvHUFSjyT2B2yeNx2nUgMxh9LtOWvkhIU3uePn2I2oyNymUAX1NIsdgviM4CH+JSrp2D2hsMvJOkxY+0wNRA=="],
"@types/unist": ["@types/unist@2.0.11", "", {}, "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA=="],
"argparse": ["argparse@2.0.1", "", {}, "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="],
"braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="],
"bun-types": ["bun-types@1.3.1", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-NMrcy7smratanWJ2mMXdpatalovtxVggkj11bScuWuiOoXTiKIu2eVS1/7qbyI/4yHedtsn175n4Sm4JcdHLXw=="],
"character-entities": ["character-entities@2.0.2", "", {}, "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="],
"character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
"character-reference-invalid": ["character-reference-invalid@2.0.1", "", {}, "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw=="],
"commander": ["commander@8.3.0", "", {}, "sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"decode-named-character-reference": ["decode-named-character-reference@1.2.0", "", { "dependencies": { "character-entities": "^2.0.0" } }, "sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q=="],
"dequal": ["dequal@2.0.3", "", {}, "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="],
"detect-indent": ["detect-indent@7.0.2", "", {}, "sha512-y+8xyqdGLL+6sh0tVeHcfP/QDd8gUgbasolJJpY7NgeQGSZ739bDtSiaiDgtoicy+mtYB81dKLxO9xRhCyIB3A=="],
"detect-newline": ["detect-newline@4.0.1", "", {}, "sha512-qE3Veg1YXzGHQhlA6jzebZN2qVf6NX+A7m7qlhCGG30dJixrAQhYOsJjsnBjJkCSmuOPpCk30145fr8FV0bzog=="],
"devlop": ["devlop@1.1.0", "", { "dependencies": { "dequal": "^2.0.0" } }, "sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA=="],
"entities": ["entities@4.5.0", "", {}, "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="],
"fast-glob": ["fast-glob@3.3.3", "", { "dependencies": { "@nodelib/fs.stat": "^2.0.2", "@nodelib/fs.walk": "^1.2.3", "glob-parent": "^5.1.2", "merge2": "^1.3.0", "micromatch": "^4.0.8" } }, "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg=="],
"fastq": ["fastq@1.19.1", "", { "dependencies": { "reusify": "^1.0.4" } }, "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ=="],
"fdir": ["fdir@6.5.0", "", { "peerDependencies": { "picomatch": "^3 || ^4" }, "optionalPeers": ["picomatch"] }, "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg=="],
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
"fsevents": ["fsevents@2.3.2", "", { "os": "darwin" }, "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA=="],
"git-hooks-list": ["git-hooks-list@4.1.1", "", {}, "sha512-cmP497iLq54AZnv4YRAEMnEyQ1eIn4tGKbmswqwmFV4GBnAqE8NLtWxxdXa++AalfgL5EBH4IxTPyquEuGY/jA=="],
"glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
"globby": ["globby@14.1.0", "", { "dependencies": { "@sindresorhus/merge-streams": "^2.1.0", "fast-glob": "^3.3.3", "ignore": "^7.0.3", "path-type": "^6.0.0", "slash": "^5.1.0", "unicorn-magic": "^0.3.0" } }, "sha512-0Ia46fDOaT7k4og1PDW4YbodWWr3scS2vAr2lTbsplOt2WkKp0vQbkI9wKis/T5LV/dqPjO3bpS/z6GTJB82LA=="],
"ignore": ["ignore@7.0.5", "", {}, "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg=="],
"is-alphabetical": ["is-alphabetical@2.0.1", "", {}, "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ=="],
"is-alphanumerical": ["is-alphanumerical@2.0.1", "", { "dependencies": { "is-alphabetical": "^2.0.0", "is-decimal": "^2.0.0" } }, "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw=="],
"is-decimal": ["is-decimal@2.0.1", "", {}, "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A=="],
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
"is-hexadecimal": ["is-hexadecimal@2.0.1", "", {}, "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg=="],
"is-number": ["is-number@7.0.0", "", {}, "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="],
"is-plain-obj": ["is-plain-obj@4.1.0", "", {}, "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg=="],
"js-yaml": ["js-yaml@4.1.0", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA=="],
"jsonc-parser": ["jsonc-parser@3.3.1", "", {}, "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ=="],
"katex": ["katex@0.16.25", "", { "dependencies": { "commander": "^8.3.0" }, "bin": { "katex": "cli.js" } }, "sha512-woHRUZ/iF23GBP1dkDQMh1QBad9dmr8/PAwNA54VrSOVYgI12MAcE14TqnDdQOdzyEonGzMepYnqBMYdsoAr8Q=="],
"linkify-it": ["linkify-it@5.0.0", "", { "dependencies": { "uc.micro": "^2.0.0" } }, "sha512-5aHCbzQRADcdP+ATqnDuhhJ/MRIqDkZX5pyjFHRRysS8vZ5AbqGEoFIb6pYHPZ+L/OC2Lc+xT8uHVVR5CAK/wQ=="],
"markdown-it": ["markdown-it@14.1.0", "", { "dependencies": { "argparse": "^2.0.1", "entities": "^4.4.0", "linkify-it": "^5.0.0", "mdurl": "^2.0.0", "punycode.js": "^2.3.1", "uc.micro": "^2.1.0" }, "bin": { "markdown-it": "bin/markdown-it.mjs" } }, "sha512-a54IwgWPaeBCAAsv13YgmALOF1elABB08FxO9i+r4VFk5Vl4pKokRPeX8u5TCgSsPi6ec1otfLjdOpVcgbpshg=="],
"markdownlint": ["markdownlint@0.38.0", "", { "dependencies": { "micromark": "4.0.2", "micromark-core-commonmark": "2.0.3", "micromark-extension-directive": "4.0.0", "micromark-extension-gfm-autolink-literal": "2.1.0", "micromark-extension-gfm-footnote": "2.1.0", "micromark-extension-gfm-table": "2.1.1", "micromark-extension-math": "3.1.0", "micromark-util-types": "2.0.2" } }, "sha512-xaSxkaU7wY/0852zGApM8LdlIfGCW8ETZ0Rr62IQtAnUMlMuifsg09vWJcNYeL4f0anvr8Vo4ZQar8jGpV0btQ=="],
"markdownlint-cli2": ["markdownlint-cli2@0.18.1", "", { "dependencies": { "globby": "14.1.0", "js-yaml": "4.1.0", "jsonc-parser": "3.3.1", "markdown-it": "14.1.0", "markdownlint": "0.38.0", "markdownlint-cli2-formatter-default": "0.0.5", "micromatch": "4.0.8" }, "bin": { "markdownlint-cli2": "markdownlint-cli2-bin.mjs" } }, "sha512-/4Osri9QFGCZOCTkfA8qJF+XGjKYERSHkXzxSyS1hd3ZERJGjvsUao2h4wdnvpHp6Tu2Jh/bPHM0FE9JJza6ng=="],
"markdownlint-cli2-formatter-default": ["markdownlint-cli2-formatter-default@0.0.5", "", { "peerDependencies": { "markdownlint-cli2": ">=0.0.4" } }, "sha512-4XKTwQ5m1+Txo2kuQ3Jgpo/KmnG+X90dWt4acufg6HVGadTUG5hzHF/wssp9b5MBYOMCnZ9RMPaU//uHsszF8Q=="],
"mdurl": ["mdurl@2.0.0", "", {}, "sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w=="],
"merge2": ["merge2@1.4.1", "", {}, "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="],
"micromark": ["micromark@4.0.2", "", { "dependencies": { "@types/debug": "^4.0.0", "debug": "^4.0.0", "decode-named-character-reference": "^1.0.0", "devlop": "^1.0.0", "micromark-core-commonmark": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-combine-extensions": "^2.0.0", "micromark-util-decode-numeric-character-reference": "^2.0.0", "micromark-util-encode": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-resolve-all": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-subtokenize": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-zpe98Q6kvavpCr1NPVSCMebCKfD7CA2NqZ+rykeNhONIJBpc1tFKt9hucLGwha3jNTNI8lHpctWJWoimVF4PfA=="],
"micromark-core-commonmark": ["micromark-core-commonmark@2.0.3", "", { "dependencies": { "decode-named-character-reference": "^1.0.0", "devlop": "^1.0.0", "micromark-factory-destination": "^2.0.0", "micromark-factory-label": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-factory-title": "^2.0.0", "micromark-factory-whitespace": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-classify-character": "^2.0.0", "micromark-util-html-tag-name": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-resolve-all": "^2.0.0", "micromark-util-subtokenize": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-RDBrHEMSxVFLg6xvnXmb1Ayr2WzLAWjeSATAoxwKYJV94TeNavgoIdA0a9ytzDSVzBy2YKFK+emCPOEibLeCrg=="],
"micromark-extension-directive": ["micromark-extension-directive@4.0.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-factory-space": "^2.0.0", "micromark-factory-whitespace": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0", "parse-entities": "^4.0.0" } }, "sha512-/C2nqVmXXmiseSSuCdItCMho7ybwwop6RrrRPk0KbOHW21JKoCldC+8rFOaundDoRBUWBnJJcxeA/Kvi34WQXg=="],
"micromark-extension-gfm-autolink-literal": ["micromark-extension-gfm-autolink-literal@2.1.0", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw=="],
"micromark-extension-gfm-footnote": ["micromark-extension-gfm-footnote@2.1.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-core-commonmark": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw=="],
"micromark-extension-gfm-table": ["micromark-extension-gfm-table@2.1.1", "", { "dependencies": { "devlop": "^1.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-t2OU/dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg=="],
"micromark-extension-math": ["micromark-extension-math@3.1.0", "", { "dependencies": { "@types/katex": "^0.16.0", "devlop": "^1.0.0", "katex": "^0.16.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-lvEqd+fHjATVs+2v/8kg9i5Q0AP2k85H0WUOwpIVvUML8BapsMvh1XAogmQjOCsLpoKRCVQqEkQBB3NhVBcsOg=="],
"micromark-factory-destination": ["micromark-factory-destination@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA=="],
"micromark-factory-label": ["micromark-factory-label@2.0.1", "", { "dependencies": { "devlop": "^1.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg=="],
"micromark-factory-space": ["micromark-factory-space@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg=="],
"micromark-factory-title": ["micromark-factory-title@2.0.1", "", { "dependencies": { "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw=="],
"micromark-factory-whitespace": ["micromark-factory-whitespace@2.0.1", "", { "dependencies": { "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ=="],
"micromark-util-character": ["micromark-util-character@2.1.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q=="],
"micromark-util-chunked": ["micromark-util-chunked@2.0.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA=="],
"micromark-util-classify-character": ["micromark-util-classify-character@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q=="],
"micromark-util-combine-extensions": ["micromark-util-combine-extensions@2.0.1", "", { "dependencies": { "micromark-util-chunked": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg=="],
"micromark-util-decode-numeric-character-reference": ["micromark-util-decode-numeric-character-reference@2.0.2", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw=="],
"micromark-util-encode": ["micromark-util-encode@2.0.1", "", {}, "sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw=="],
"micromark-util-html-tag-name": ["micromark-util-html-tag-name@2.0.1", "", {}, "sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA=="],
"micromark-util-normalize-identifier": ["micromark-util-normalize-identifier@2.0.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q=="],
"micromark-util-resolve-all": ["micromark-util-resolve-all@2.0.1", "", { "dependencies": { "micromark-util-types": "^2.0.0" } }, "sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg=="],
"micromark-util-sanitize-uri": ["micromark-util-sanitize-uri@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-encode": "^2.0.0", "micromark-util-symbol": "^2.0.0" } }, "sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ=="],
"micromark-util-subtokenize": ["micromark-util-subtokenize@2.1.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-XQLu552iSctvnEcgXw6+Sx75GflAPNED1qx7eBJ+wydBb2KCbRZe+NwvIEEMM83uml1+2WSXpBAcp9IUCgCYWA=="],
"micromark-util-symbol": ["micromark-util-symbol@2.0.1", "", {}, "sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q=="],
"micromark-util-types": ["micromark-util-types@2.0.2", "", {}, "sha512-Yw0ECSpJoViF1qTU4DC6NwtC4aWGt1EkzaQB8KPPyCRR8z9TWeV0HbEFGTO+ZY1wB22zmxnJqhPyTpOVCpeHTA=="],
"micromatch": ["micromatch@4.0.8", "", { "dependencies": { "braces": "^3.0.3", "picomatch": "^2.3.1" } }, "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"oxlint": ["oxlint@1.25.0", "", { "optionalDependencies": { "@oxlint/darwin-arm64": "1.25.0", "@oxlint/darwin-x64": "1.25.0", "@oxlint/linux-arm64-gnu": "1.25.0", "@oxlint/linux-arm64-musl": "1.25.0", "@oxlint/linux-x64-gnu": "1.25.0", "@oxlint/linux-x64-musl": "1.25.0", "@oxlint/win32-arm64": "1.25.0", "@oxlint/win32-x64": "1.25.0" }, "peerDependencies": { "oxlint-tsgolint": ">=0.4.0" }, "optionalPeers": ["oxlint-tsgolint"], "bin": { "oxlint": "bin/oxlint", "oxc_language_server": "bin/oxc_language_server" } }, "sha512-O6iJ9xeuy9eQCi8/EghvsNO6lzSaUPs0FR1uLy51Exp3RkVpjvJKyPPhd9qv65KLnfG/BNd2HE/rH0NbEfVVzA=="],
"oxlint-tsgolint": ["oxlint-tsgolint@0.5.0", "", { "optionalDependencies": { "@oxlint-tsgolint/darwin-arm64": "0.5.0", "@oxlint-tsgolint/darwin-x64": "0.5.0", "@oxlint-tsgolint/linux-arm64": "0.5.0", "@oxlint-tsgolint/linux-x64": "0.5.0", "@oxlint-tsgolint/win32-arm64": "0.5.0", "@oxlint-tsgolint/win32-x64": "0.5.0" }, "bin": { "tsgolint": "bin/tsgolint.js" } }, "sha512-uRiGb48QVSY2PqPCgAOoYySZM8OKSXTTSHFuF0HeW3tUhefdj/wyHWeZzFfbIU+dSDgMEkG9HVE/WBeT1nc+bA=="],
"parse-entities": ["parse-entities@4.0.2", "", { "dependencies": { "@types/unist": "^2.0.0", "character-entities-legacy": "^3.0.0", "character-reference-invalid": "^2.0.0", "decode-named-character-reference": "^1.0.0", "is-alphanumerical": "^2.0.0", "is-decimal": "^2.0.0", "is-hexadecimal": "^2.0.0" } }, "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw=="],
"path-type": ["path-type@6.0.0", "", {}, "sha512-Vj7sf++t5pBD637NSfkxpHSMfWaeig5+DKWLhcqIYx6mWQz5hdJTGDVMQiJcw1ZYkhs7AazKDGpRVji1LJCZUQ=="],
"picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"playwright": ["playwright@1.56.1", "", { "dependencies": { "playwright-core": "1.56.1" }, "optionalDependencies": { "fsevents": "2.3.2" }, "bin": { "playwright": "cli.js" } }, "sha512-aFi5B0WovBHTEvpM3DzXTUaeN6eN0qWnTkKx4NQaH4Wvcmc153PdaY2UBdSYKaGYw+UyWXSVyxDUg5DoPEttjw=="],
"playwright-core": ["playwright-core@1.56.1", "", { "bin": { "playwright-core": "cli.js" } }, "sha512-hutraynyn31F+Bifme+Ps9Vq59hKuUCz7H1kDOcBs+2oGguKkWTU50bBWrtz34OUWmIwpBTWDxaRPXrIXkgvmQ=="],
"prettier": ["prettier@3.6.2", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="],
"prettier-plugin-jinja-template": ["prettier-plugin-jinja-template@2.1.0", "", { "peerDependencies": { "prettier": "^3.0.0" } }, "sha512-mzoCp2Oy9BDSug80fw3B3J4n4KQj1hRvoQOL1akqcDKBb5nvYxrik9zUEDs4AEJ6nK7QDTGoH0y9rx7AlnQ78Q=="],
"prettier-plugin-packagejson": ["prettier-plugin-packagejson@2.5.19", "", { "dependencies": { "sort-package-json": "3.4.0", "synckit": "0.11.11" }, "peerDependencies": { "prettier": ">= 1.16.0" }, "optionalPeers": ["prettier"] }, "sha512-Qsqp4+jsZbKMpEGZB1UP1pxeAT8sCzne2IwnKkr+QhUe665EXUo3BAvTf1kAPCqyMv9kg3ZmO0+7eOni/C6Uag=="],
"punycode.js": ["punycode.js@2.3.1", "", {}, "sha512-uxFIHU0YlHYhDQtV4R9J6a52SLx28BCjT+4ieh7IGbgwVJWO+km431c4yRlREUAsAmt/uMjQUyQHNEPf0M39CA=="],
"queue-microtask": ["queue-microtask@1.2.3", "", {}, "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="],
"reusify": ["reusify@1.1.0", "", {}, "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw=="],
"run-parallel": ["run-parallel@1.2.0", "", { "dependencies": { "queue-microtask": "^1.2.2" } }, "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA=="],
"semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
"slash": ["slash@5.1.0", "", {}, "sha512-ZA6oR3T/pEyuqwMgAKT0/hAv8oAXckzbkmR0UkUosQ+Mc4RxGoJkRmwHgHufaenlyAgE1Mxgpdcrf75y6XcnDg=="],
"sort-object-keys": ["sort-object-keys@1.1.3", "", {}, "sha512-855pvK+VkU7PaKYPc+Jjnmt4EzejQHyhhF33q31qG8x7maDzkeFhAAThdCYay11CISO+qAMwjOBP+fPZe0IPyg=="],
"sort-package-json": ["sort-package-json@3.4.0", "", { "dependencies": { "detect-indent": "^7.0.1", "detect-newline": "^4.0.1", "git-hooks-list": "^4.0.0", "is-plain-obj": "^4.1.0", "semver": "^7.7.1", "sort-object-keys": "^1.1.3", "tinyglobby": "^0.2.12" }, "bin": { "sort-package-json": "cli.js" } }, "sha512-97oFRRMM2/Js4oEA9LJhjyMlde+2ewpZQf53pgue27UkbEXfHJnDzHlUxQ/DWUkzqmp7DFwJp8D+wi/TYeQhpA=="],
"synckit": ["synckit@0.11.11", "", { "dependencies": { "@pkgr/core": "^0.2.9" } }, "sha512-MeQTA1r0litLUf0Rp/iisCaL8761lKAZHaimlbGK4j0HysC4PLfqygQj9srcs0m2RdtDYnF8UuYyKpbjHYp7Jw=="],
"tinyglobby": ["tinyglobby@0.2.15", "", { "dependencies": { "fdir": "^6.5.0", "picomatch": "^4.0.3" } }, "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ=="],
"to-regex-range": ["to-regex-range@5.0.1", "", { "dependencies": { "is-number": "^7.0.0" } }, "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="],
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"uc.micro": ["uc.micro@2.1.0", "", {}, "sha512-ARDJmphmdvUk6Glw7y9DQ2bFkKBHwQHLi2lsaH6PPmz/Ka9sFOBsBluozhDltWmnv9u/cF6Rt87znRTPV+yp/A=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"unicorn-magic": ["unicorn-magic@0.3.0", "", {}, "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA=="],
"tinyglobby/picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="],
}
}

View File

@@ -17,7 +17,7 @@ def main():
# Default to 'manage.py' if no specific command
if cmd_name == "__main__":
# When running as `python -m dashboard_project`, just pass control to manage.py
from dashboard_project.manage import main as manage_main # type: ignore[import-not-found]
from dashboard_project.manage import main as manage_main
manage_main()
return
@@ -48,32 +48,5 @@ def main():
execute_from_command_line(sys.argv)
def runserver():
"""Entrypoint for running Django development server."""
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
sys.argv = ["manage.py", "runserver", "8001"]
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
def migrate():
"""Entrypoint for running Django migrations."""
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
sys.argv = ["manage.py", "migrate"]
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
def shell():
"""Entrypoint for Django shell."""
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
sys.argv = ["manage.py", "shell"]
from django.core.management import execute_from_command_line
execute_from_command_line(sys.argv)
if __name__ == "__main__":
main()

View File

@@ -30,8 +30,7 @@ class CustomUserChangeForm(forms.ModelForm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
# Only staff members can change company and admin status
instance = kwargs.get("instance")
if not instance or not getattr(instance, "is_staff", False):
if not kwargs.get("instance") or not kwargs.get("instance").is_staff:
if "company" in self.fields:
self.fields["company"].disabled = True
if "is_company_admin" in self.fields:

View File

@@ -4,7 +4,7 @@ ASGI config for dashboard_project project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
<https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/>
https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/
"""
import os

View File

@@ -49,9 +49,7 @@ class DataSourceAdmin(admin.ModelAdmin):
@admin.display(description="External Data Status")
def get_external_data_status(self, obj):
if obj.external_source:
last_sync = obj.external_source.last_synced or "Never"
status = obj.external_source.get_status()
return f"Last synced: {last_sync} | Status: {status}"
return f"Last synced: {obj.external_source.last_synced or 'Never'} | Status: {obj.external_source.get_status()}"
return "No external data source linked"

View File

@@ -1,14 +1,7 @@
# dashboard/forms.py
from __future__ import annotations
from typing import TYPE_CHECKING
from django import forms
if TYPE_CHECKING:
pass
from .models import Dashboard, DataSource
@@ -44,9 +37,7 @@ class DashboardForm(forms.ModelForm):
super().__init__(*args, **kwargs)
if self.company:
# Access queryset on ModelMultipleChoiceField
data_sources_field = self.fields["data_sources"] # type: ignore[assignment]
data_sources_field.queryset = DataSource.objects.filter(company=self.company) # type: ignore[attr-defined]
self.fields["data_sources"].queryset = DataSource.objects.filter(company=self.company)
def save(self, commit=True):
instance = super().save(commit=False)

View File

@@ -1,3 +1,2 @@
# dashboard/management/__init__.py
# This file is intentionally left empty to mark the directory as a Python package

View File

@@ -1,3 +1,2 @@
# dashboard/management/commands/__init__.py
# This file is intentionally left empty to mark the directory as a Python package

View File

@@ -83,7 +83,7 @@ class Command(BaseCommand):
ChatSession.objects.all().delete()
# Parse sample CSV
with open(sample_path) as f:
with open(sample_path, "r") as f:
reader = csv.reader(f)
header = next(reader) # Skip header

View File

@@ -1,3 +1,2 @@
# dashboard/templatetags/__init__.py
# This file is intentionally left empty to mark the directory as a Python package

View File

@@ -1,13 +1,10 @@
# dashboard/utils.py
from __future__ import annotations
import contextlib
import numpy as np
import pandas as pd
from django.db import models
from django.db.models import functions
from django.utils.timezone import make_aware
from .models import ChatSession
@@ -140,7 +137,7 @@ def generate_dashboard_data(data_sources):
# Time series data (sessions per day)
time_series_query = (
chat_sessions.filter(start_time__isnull=False)
.annotate(date=functions.TruncDate("start_time")) # type: ignore[attr-defined]
.annotate(date=models.functions.TruncDate("start_time"))
.values("date")
.annotate(count=models.Count("id"))
.order_by("date")

View File

@@ -58,7 +58,7 @@ def dashboard_view(request):
if selected_dashboard_id:
selected_dashboard = get_object_or_404(Dashboard, id=selected_dashboard_id, company=company)
else:
selected_dashboard = dashboards.first() # type: ignore[assignment]
selected_dashboard = dashboards.first()
# Generate dashboard data
dashboard_data = generate_dashboard_data(selected_dashboard.data_sources.all())
@@ -200,10 +200,12 @@ def chat_session_detail_view(request, session_id):
# Check if this is an AJAX navigation request
if is_ajax_navigation(request):
html_content = render_to_string("dashboard/chat_session_detail.html", context, request=request)
return JsonResponse({
return JsonResponse(
{
"html": html_content,
"title": f"Chat Session {session_id} | Chat Analytics",
})
}
)
return render(request, "dashboard/chat_session_detail.html", context)
@@ -280,10 +282,12 @@ def edit_dashboard_view(request, dashboard_id):
# Check if this is an AJAX navigation request
if is_ajax_navigation(request):
html_content = render_to_string("dashboard/dashboard_form.html", context, request=request)
return JsonResponse({
return JsonResponse(
{
"html": html_content,
"title": f"Edit Dashboard: {dashboard.name} | Chat Analytics",
})
}
)
return render(request, "dashboard/dashboard_form.html", context)
@@ -345,8 +349,6 @@ def delete_data_source_view(request, data_source_id):
# API views for dashboard data
@login_required
def dashboard_data_api(request, dashboard_id):
"""API endpoint for dashboard data"""
@@ -448,7 +450,8 @@ def search_chat_sessions(request):
# Check if this is an AJAX pagination request
if request.headers.get("X-Requested-With") == "XMLHttpRequest":
return JsonResponse({
return JsonResponse(
{
"status": "success",
"html_data": render(request, "dashboard/partials/search_results_table.html", context).content.decode(
"utf-8"
@@ -465,7 +468,8 @@ def search_chat_sessions(request):
},
},
"query": query,
})
}
)
return render(request, "dashboard/search_results.html", context)
@@ -550,7 +554,8 @@ def data_view(request):
# Check if this is an AJAX pagination request
if request.headers.get("X-Requested-With") == "XMLHttpRequest":
return JsonResponse({
return JsonResponse(
{
"status": "success",
"html_data": render(request, "dashboard/partials/data_table.html", context).content.decode("utf-8"),
"page_obj": {
@@ -568,6 +573,7 @@ def data_view(request):
"avg_response_time": avg_response_time,
"avg_messages": avg_messages,
"escalation_rate": escalation_rate,
})
}
)
return render(request, "dashboard/data_view.html", context)

View File

@@ -91,7 +91,8 @@ def export_chats_csv(request):
writer = csv.writer(response)
# Write CSV header
writer.writerow([
writer.writerow(
[
"Session ID",
"Start Time",
"End Time",
@@ -109,11 +110,13 @@ def export_chats_csv(request):
"Category",
"Initial Message",
"User Rating",
])
]
)
# Write data rows
for session in sessions:
writer.writerow([
writer.writerow(
[
session.session_id,
session.start_time,
session.end_time,
@@ -131,7 +134,8 @@ def export_chats_csv(request):
session.category,
session.initial_msg,
session.user_rating,
])
]
)
return response

View File

@@ -4,7 +4,7 @@ ASGI config for dashboard_project project.
It exposes the ASGI callable as a module-level variable named ``application``.
For more information on this file, see
<https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/>
https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/
"""
import os

View File

@@ -2,24 +2,18 @@ import os
from celery import Celery
# Set the default Django settings module for the 'celery' program
# Set the default Django settings module for the 'celery' program.
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
app = Celery("dashboard_project")
# Using a string here means the worker doesn't have to serialize
# the configuration object to child processes
# the configuration object to child processes.
# - namespace='CELERY' means all celery-related configuration keys
# should have a `CELERY_` prefix
# should have a `CELERY_` prefix.
app.config_from_object("django.conf:settings", namespace="CELERY")
# Load task modules from all registered Django app configs
# Load task modules from all registered Django app configs.
app.autodiscover_tasks()

View File

@@ -7,7 +7,6 @@ from pathlib import Path
from django.core.management.utils import get_random_secret_key
# Load environment variables from .env file if present
try:
from dotenv import load_dotenv
@@ -15,36 +14,18 @@ try:
except ImportError:
pass
# Build paths inside the project like this: BASE_DIR / 'subdir'
# Build paths inside the project like this: BASE_DIR / 'subdir'.
BASE_DIR = Path(__file__).resolve().parent.parent
# SECURITY WARNING: keep the secret key used in production secret
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", get_random_secret_key())
# SECURITY WARNING: don't run with debug turned on in production
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = os.environ.get("DJANGO_DEBUG", "True") == "True"
# Allow localhost, Docker IPs, and entire private network ranges
ALLOWED_HOSTS = [
"localhost",
"127.0.0.1",
"0.0.0.0", # nosec B104
".localhost",
# Allow all 192.168.x.x addresses (private network)
"192.168.*.*",
# Allow all 10.x.x.x addresses (Docker default)
"10.*.*.*",
# Allow all 172.16-31.x.x addresses (Docker)
"172.*.*.*",
# Wildcard for any other IPs (development only)
"*",
]
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
@@ -99,22 +80,6 @@ TEMPLATES = [
WSGI_APPLICATION = "dashboard_project.wsgi.application"
# Database
# Use PostgreSQL when DATABASE_URL is set (Docker), otherwise SQLite (local dev)
if os.environ.get("DATABASE_URL"):
# PostgreSQL configuration for Docker
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": os.environ.get("POSTGRES_DB", "dashboard_db"),
"USER": os.environ.get("POSTGRES_USER", "postgres"),
"PASSWORD": os.environ.get("POSTGRES_PASSWORD", "postgres"),
"HOST": os.environ.get("POSTGRES_HOST", "db"),
"PORT": os.environ.get("POSTGRES_PORT", "5432"),
}
}
else:
# SQLite configuration for local development
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
@@ -123,7 +88,6 @@ else:
}
# Password validation
AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
@@ -140,14 +104,12 @@ AUTH_PASSWORD_VALIDATORS = [
]
# Internationalization
LANGUAGE_CODE = "en-US"
TIME_ZONE = "Europe/Amsterdam"
USE_I18N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
STATIC_URL = "static/"
STATICFILES_DIRS = [
os.path.join(BASE_DIR, "static"),
@@ -163,28 +125,23 @@ STORAGES = {
}
# Media files
MEDIA_URL = "/media/"
MEDIA_ROOT = os.path.join(BASE_DIR, "media")
# Default primary key field type
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
# Crispy Forms
CRISPY_ALLOWED_TEMPLATE_PACKS = "bootstrap5"
CRISPY_TEMPLATE_PACK = "bootstrap5"
# Authentication
AUTH_USER_MODEL = "accounts.CustomUser"
LOGIN_REDIRECT_URL = "dashboard"
LOGOUT_REDIRECT_URL = "login"
ACCOUNT_LOGOUT_ON_GET = True
# django-allauth
AUTHENTICATION_BACKENDS = [
"django.contrib.auth.backends.ModelBackend",
"allauth.account.auth_backends.AuthenticationBackend",
@@ -193,9 +150,7 @@ SITE_ID = 1
ACCOUNT_EMAIL_VERIFICATION = "none"
# Celery Configuration
# Check if Redis is available
try:
import redis
@@ -213,8 +168,8 @@ try:
logger.info("Using Redis for Celery broker and result backend")
except (
ImportError,
redis.exceptions.ConnectionError, # type: ignore[attr-defined]
redis.exceptions.TimeoutError, # type: ignore[attr-defined]
redis.exceptions.ConnectionError,
redis.exceptions.TimeoutError,
) as e:
# Redis is not available, use SQLite as fallback (works for development)
CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "sqla+sqlite:///celery.sqlite")
@@ -229,7 +184,6 @@ CELERY_TIMEZONE = TIME_ZONE
CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"
# Get schedule from environment variables or use defaults
CHAT_DATA_FETCH_INTERVAL = int(os.environ.get("CHAT_DATA_FETCH_INTERVAL", 3600)) # Default: 1 hour
CELERY_BEAT_SCHEDULE = {

View File

@@ -4,7 +4,7 @@ WSGI config for dashboard_project project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
<https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/>
https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/
"""
import os

View File

@@ -52,8 +52,10 @@ class ExternalDataSourceAdmin(admin.ModelAdmin):
status,
)
else:
style = "color: white; background-color: orange; padding: 3px 8px; border-radius: 10px;"
return format_html(f'<span style="{style}">{{}}</span>', status)
return format_html(
'<span style="color: white; background-color: orange; padding: 3px 8px; border-radius: 10px;">{}</span>',
status,
)
@admin.display(description="Actions")
def refresh_action(self, obj):

View File

@@ -56,8 +56,7 @@ class Command(BaseCommand):
)
elif col == "sync_interval":
cursor.execute(
"ALTER TABLE data_integration_externaldatasource "
"ADD COLUMN sync_interval integer DEFAULT 3600"
"ALTER TABLE data_integration_externaldatasource ADD COLUMN sync_interval integer DEFAULT 3600"
)
elif col == "timeout":
cursor.execute(

View File

@@ -1,107 +0,0 @@
"""
Management command to set up Jumbo company, users, and link existing data.
"""
from accounts.models import Company, CustomUser
from data_integration.models import ChatSession, ExternalDataSource
from django.core.management.base import BaseCommand
class Command(BaseCommand):
help = "Set up Jumbo company, create users, and link existing external data"
def handle(self, *_args, **_options):
self.stdout.write("Setting up Jumbo company and data...")
# 1. Create Jumbo company
jumbo_company, created = Company.objects.get_or_create(
name="Jumbo", defaults={"description": "Jumbo Supermarkets - External API Data"}
)
if created:
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo company"))
else:
self.stdout.write(" Jumbo company already exists")
# 2. Create admin user for Jumbo
admin_created = False
if not CustomUser.objects.filter(username="jumbo_admin").exists():
CustomUser.objects.create_user( # nosec B106
username="jumbo_admin",
email="admin@jumbo.nl",
password="jumbo123",
company=jumbo_company,
is_company_admin=True,
)
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo admin: jumbo_admin / jumbo123"))
admin_created = True
else:
self.stdout.write(" Jumbo admin already exists")
# 3. Create regular users for Jumbo
jumbo_users = [
{
"username": "jumbo_analyst",
"email": "analyst@jumbo.nl",
"password": "jumbo123",
"is_company_admin": False,
},
{
"username": "jumbo_manager",
"email": "manager@jumbo.nl",
"password": "jumbo123",
"is_company_admin": False,
},
]
users_created = 0
for user_data in jumbo_users:
if not CustomUser.objects.filter(username=user_data["username"]).exists():
CustomUser.objects.create_user(
username=user_data["username"],
email=user_data["email"],
password=user_data["password"],
company=jumbo_company,
is_company_admin=user_data["is_company_admin"],
)
users_created += 1
if users_created:
self.stdout.write(self.style.SUCCESS(f"✓ Created {users_created} Jumbo users"))
else:
self.stdout.write(" Jumbo users already exist")
# 4. Link External Data Source to Jumbo company
try:
jumbo_ext_source = ExternalDataSource.objects.get(name="Jumbo API")
if not jumbo_ext_source.company:
jumbo_ext_source.company = jumbo_company
jumbo_ext_source.save()
self.stdout.write(self.style.SUCCESS("✓ Linked Jumbo API data source to company"))
else:
self.stdout.write(" Jumbo API data source already linked")
except ExternalDataSource.DoesNotExist:
self.stdout.write(
self.style.WARNING("⚠ Jumbo API external data source not found. Create it in admin first.")
)
# 5. Link existing chat sessions to Jumbo company
unlinked_sessions = ChatSession.objects.filter(company__isnull=True)
if unlinked_sessions.exists():
count = unlinked_sessions.update(company=jumbo_company)
self.stdout.write(self.style.SUCCESS(f"✓ Linked {count} existing chat sessions to Jumbo company"))
else:
self.stdout.write(" All chat sessions already linked to companies")
# 6. Summary
total_sessions = ChatSession.objects.filter(company=jumbo_company).count()
total_users = CustomUser.objects.filter(company=jumbo_company).count()
self.stdout.write(
self.style.SUCCESS(
f"\n✓ Setup complete!"
f"\n Company: {jumbo_company.name}"
f"\n Users: {total_users} (including {1 if admin_created or CustomUser.objects.filter(username='jumbo_admin').exists() else 0} admin)"
f"\n Chat sessions: {total_sessions}"
)
)
self.stdout.write("\nLogin as jumbo_admin/jumbo123 to view the dashboard with Jumbo data.")

View File

@@ -1,69 +0,0 @@
"""
Management command to set up Jumbo API external data source and fetch data.
"""
import os
from accounts.models import Company
from data_integration.models import ChatSession, ExternalDataSource
from django.core.management.base import BaseCommand
class Command(BaseCommand):
help = "Set up Jumbo API external data source and fetch chat data"
def handle(self, **options): # noqa: ARG002
self.stdout.write("Setting up Jumbo API data source...")
# Get Jumbo company
try:
jumbo = Company.objects.get(name="Jumbo")
self.stdout.write(f"✓ Found Jumbo company (ID: {jumbo.id})")
except Company.DoesNotExist:
self.stdout.write(self.style.ERROR("✗ Jumbo company not found. Run setup_jumbo command first."))
return
# Create or get Jumbo API external data source
source, created = ExternalDataSource.objects.get_or_create(
name="Jumbo API",
defaults={
"company": jumbo,
"api_url": "https://proto.notso.ai/jumbo/chats",
"auth_username": os.environ.get("EXTERNAL_API_USERNAME", ""),
"auth_password": os.environ.get("EXTERNAL_API_PASSWORD", ""),
"is_active": True,
},
)
# Ensure company is set if already existed
if not created and not source.company:
source.company = jumbo
source.save()
self.stdout.write(self.style.SUCCESS(f"✓ Linked existing Jumbo API source to {jumbo.name}"))
elif created:
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo API external data source"))
else:
self.stdout.write("✓ Jumbo API source already exists")
self.stdout.write(
f"\nData source details:"
f"\n ID: {source.id}"
f"\n Company: {source.company.name if source.company else 'None'}"
f"\n Endpoint: {source.api_url}"
f"\n Active: {source.is_active}"
)
# Fetch data (call the task synchronously)
self.stdout.write("\nFetching Jumbo chat data...")
try:
# Use .apply() or direct function call to run synchronously
from data_integration.utils import fetch_and_store_chat_data
result = fetch_and_store_chat_data(source.id)
self.stdout.write(self.style.SUCCESS(f"✓ Data fetch completed: {result}"))
except Exception as e:
self.stdout.write(self.style.ERROR(f"✗ Error fetching data: {e}"))
# Show summary
session_count = ChatSession.objects.filter(company=jumbo).count()
self.stdout.write(self.style.SUCCESS(f"\n✓ Setup complete!\n Total Jumbo chat sessions: {session_count}"))

View File

@@ -1,106 +0,0 @@
"""
Management command to sync Jumbo API data to dashboard app with proper company linking.
"""
from accounts.models import Company, CustomUser
from dashboard.models import ChatSession, DataSource
from data_integration.models import ChatSession as ExtChatSession
from data_integration.models import ExternalDataSource
from django.core.management.base import BaseCommand
class Command(BaseCommand):
help = "Sync Jumbo API data to dashboard app with company linking"
def handle(self, *_args, **_options):
self.stdout.write("Starting Jumbo data sync to dashboard...")
# 1. Get or create Jumbo company
jumbo_company, created = Company.objects.get_or_create(
name="Jumbo", defaults={"description": "Jumbo Supermarkets - External API Data"}
)
if created:
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo company"))
else:
self.stdout.write(" Jumbo company already exists")
# 2. Get Jumbo external data source
try:
jumbo_ext_source = ExternalDataSource.objects.get(name="Jumbo API")
except ExternalDataSource.DoesNotExist:
self.stdout.write(
self.style.ERROR("✗ Jumbo API external data source not found. Please create it in admin first.")
)
return
# 3. Get or create DataSource linked to Jumbo company
jumbo_datasource, created = DataSource.objects.get_or_create(
name="Jumbo API Data",
company=jumbo_company,
defaults={
"description": "Chat sessions from Jumbo external API",
"external_source": jumbo_ext_source,
},
)
if created:
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo DataSource"))
else:
self.stdout.write(" Jumbo DataSource already exists")
# 4. Sync chat sessions from data_integration to dashboard
ext_sessions = ExtChatSession.objects.all()
synced_count = 0
skipped_count = 0
for ext_session in ext_sessions:
# Check if already synced
if ChatSession.objects.filter(data_source=jumbo_datasource, session_id=ext_session.session_id).exists():
skipped_count += 1
continue
# Create dashboard ChatSession
ChatSession.objects.create(
data_source=jumbo_datasource,
session_id=ext_session.session_id,
start_time=ext_session.start_time,
end_time=ext_session.end_time,
ip_address=ext_session.ip_address,
country=ext_session.country or "",
language=ext_session.language or "",
messages_sent=ext_session.messages_sent or 0,
sentiment=ext_session.sentiment or "",
escalated=ext_session.escalated or False,
forwarded_hr=ext_session.forwarded_hr or False,
full_transcript=ext_session.full_transcript_url or "",
avg_response_time=ext_session.avg_response_time,
tokens=ext_session.tokens or 0,
tokens_eur=ext_session.tokens_eur,
category=ext_session.category or "",
initial_msg=ext_session.initial_msg or "",
user_rating=str(ext_session.user_rating) if ext_session.user_rating else "",
)
synced_count += 1
self.stdout.write(
self.style.SUCCESS(f"✓ Synced {synced_count} chat sessions (skipped {skipped_count} existing)")
)
# 5. Create admin user for Jumbo company if needed
if not CustomUser.objects.filter(company=jumbo_company, is_company_admin=True).exists():
CustomUser.objects.create_user( # nosec B106
username="jumbo_admin",
email="admin@jumbo.nl",
password="jumbo123",
company=jumbo_company,
is_company_admin=True,
)
self.stdout.write(self.style.SUCCESS("✓ Created Jumbo admin user: jumbo_admin / jumbo123"))
else:
self.stdout.write(" Jumbo admin user already exists")
self.stdout.write(
self.style.SUCCESS(
f"\n✓ Sync complete! Jumbo company now has {ChatSession.objects.filter(data_source__company=jumbo_company).count()} chat sessions"
)
)
self.stdout.write("\nLogin as jumbo_admin to view the dashboard with Jumbo data.")

View File

@@ -59,7 +59,7 @@ class Command(BaseCommand):
redis_client.delete(test_key)
else:
self.stdout.write(self.style.ERROR("❌ Redis ping failed!"))
except redis.exceptions.ConnectionError as e: # type: ignore[attr-defined]
except redis.exceptions.ConnectionError as e:
self.stdout.write(self.style.ERROR(f"❌ Redis connection error: {e}"))
self.stdout.write("Celery will use SQLite fallback if configured.")
except ImportError:

View File

@@ -1,38 +0,0 @@
# Generated by Django 5.2.7 on 2025-11-05 18:20
import django.db.models.deletion
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
("accounts", "0001_initial"),
("data_integration", "0002_externaldatasource_error_count_and_more"),
]
operations = [
migrations.AddField(
model_name="chatsession",
name="company",
field=models.ForeignKey(
blank=True,
help_text="Company this session belongs to",
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="external_chat_sessions",
to="accounts.company",
),
),
migrations.AddField(
model_name="externaldatasource",
name="company",
field=models.ForeignKey(
blank=True,
help_text="Company this data source belongs to",
null=True,
on_delete=django.db.models.deletion.CASCADE,
related_name="external_data_sources",
to="accounts.company",
),
),
]

View File

@@ -1,19 +1,10 @@
import os
from accounts.models import Company
from django.db import models
class ChatSession(models.Model):
session_id = models.CharField(max_length=255, unique=True)
company = models.ForeignKey(
Company,
on_delete=models.CASCADE,
related_name="external_chat_sessions",
null=True,
blank=True,
help_text="Company this session belongs to",
)
start_time = models.DateTimeField()
end_time = models.DateTimeField()
ip_address = models.GenericIPAddressField(null=True, blank=True)
@@ -48,14 +39,6 @@ class ChatMessage(models.Model):
class ExternalDataSource(models.Model):
name = models.CharField(max_length=255, default="External API")
company = models.ForeignKey(
Company,
on_delete=models.CASCADE,
related_name="external_data_sources",
null=True,
blank=True,
help_text="Company this data source belongs to",
)
api_url = models.URLField(default="https://proto.notso.ai/jumbo/chats")
auth_username = models.CharField(max_length=255, blank=True, null=True)
auth_password = models.CharField(

View File

@@ -1 +1 @@
# Create your tests here
# Create your tests here.

View File

@@ -125,10 +125,7 @@ def fetch_and_store_chat_data(source_id=None):
# If we couldn't parse the dates, log an error and skip this row
if not start_time or not end_time:
error_msg = (
f"Could not parse date fields for session {data['session_id']}: "
f"start_time={data['start_time']}, end_time={data['end_time']}"
)
error_msg = f"Could not parse date fields for session {data['session_id']}: start_time={data['start_time']}, end_time={data['end_time']}"
logger.error(error_msg)
stats["errors"] += 1
continue
@@ -144,7 +141,6 @@ def fetch_and_store_chat_data(source_id=None):
session, created = ChatSession.objects.update_or_create(
session_id=data["session_id"],
defaults={
"company": source.company, # Link to the company from the data source
"start_time": start_time,
"end_time": end_time,
"ip_address": data.get("ip_address"),
@@ -368,8 +364,7 @@ def parse_and_store_transcript_messages(session, transcript_content):
# If no recognized patterns are found, try to intelligently split the transcript
if not has_recognized_patterns and len(lines) > 0:
logger.info(
f"No standard message patterns found in transcript for session {session.session_id}. "
f"Attempting intelligent split."
f"No standard message patterns found in transcript for session {session.session_id}. Attempting intelligent split."
)
# Try timestamp-based parsing if we have enough consistent timestamps

View File

@@ -7,7 +7,7 @@ from .models import ExternalDataSource
from .tasks import periodic_fetch_chat_data, refresh_specific_source
from .utils import fetch_and_store_chat_data
# Create your views here
# Create your views here.
def is_superuser(user):

View File

@@ -4,7 +4,6 @@ import os
import sys
# Add the project root to sys.path
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")

View File

@@ -1,5 +1,4 @@
#!/usr/bin/env python
# scripts/fix_dashboard_data.py
import os
@@ -16,13 +15,11 @@ from django.db import transaction
from django.utils.timezone import make_aware
# Set up Django environment
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
django.setup()
# SCRIPT CONFIG
CREATE_TEST_DATA = False # Set to True to create sample data if none exists
COMPANY_NAME = "Notso AI" # The company name to use

View File

@@ -1,77 +0,0 @@
#!/usr/bin/env python
"""
Script to create Jumbo API external data source and fetch data.
"""
import os
import sys
import django
# Setup Django
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
django.setup()
from accounts.models import Company # noqa: E402
from data_integration.models import ExternalDataSource # noqa: E402
from data_integration.tasks import refresh_specific_source # noqa: E402
def main():
print("Setting up Jumbo API data source...")
# Get Jumbo company
try:
jumbo = Company.objects.get(name="Jumbo")
print(f"✓ Found Jumbo company (ID: {jumbo.id})")
except Company.DoesNotExist:
print("✗ Jumbo company not found. Run setup_jumbo command first.")
return
# Create or get Jumbo API external data source
source, created = ExternalDataSource.objects.get_or_create(
name="Jumbo API",
defaults={
"company": jumbo,
"api_endpoint": "https://mijn.jumbo.com/api/chat/sessions",
"api_username": os.environ.get("EXTERNAL_API_USERNAME", ""),
"api_password": os.environ.get("EXTERNAL_API_PASSWORD", ""),
"is_active": True,
},
)
# Ensure company is set if already existed
if not created and not source.company:
source.company = jumbo
source.save()
print(f"✓ Linked existing Jumbo API source to {jumbo.name}")
elif created:
print("✓ Created Jumbo API external data source")
else:
print("✓ Jumbo API source already exists")
print("\nData source details:")
print(f" ID: {source.id}")
print(f" Company: {source.company.name if source.company else 'None'}")
print(f" Endpoint: {source.api_endpoint}")
print(f" Active: {source.is_active}")
# Fetch data
print("\nFetching Jumbo chat data...")
try:
result = refresh_specific_source(source.id)
print(f"✓ Data fetch completed: {result}")
except Exception as e:
print(f"✗ Error fetching data: {e}")
# Show summary
from data_integration.models import ChatSession
session_count = ChatSession.objects.filter(company=jumbo).count()
print("\n✓ Setup complete!")
print(f" Total Jumbo chat sessions: {session_count}")
if __name__ == "__main__":
main()

View File

@@ -1,5 +1,4 @@
/**
* dashboard.css - Styles specific to dashboard functionality
*/

View File

@@ -1,5 +1,4 @@
/**
* style.css - Global styles for the application
*/

View File

@@ -1,55 +1,14 @@
/**
* ajax-navigation.js - JavaScript for AJAX-based navigation across the entire application
*
* This script handles AJAX navigation between pages in the Chat Analytics Dashboard.
* It intercepts link clicks, loads content via AJAX, and updates the browser history.
*/
// Function to reload and execute scripts in new content
function reloadScripts(container) {
const scripts = container.getElementsByTagName("script");
for (let script of scripts) {
const newScript = document.createElement("script");
// Copy all attributes
Array.from(script.attributes).forEach((attr) => {
newScript.setAttribute(attr.name, attr.value);
});
// Copy inline script content
newScript.textContent = script.textContent;
// Replace old script with new one
script.parentNode.replaceChild(newScript, script);
}
}
// Function to initialize scripts needed for the new page content
function initializePageScripts() {
// Re-initialize any custom scripts that might be needed
if (typeof setupAjaxPagination === "function") {
setupAjaxPagination();
}
// Initialize Bootstrap tooltips, popovers, etc.
if (typeof bootstrap !== "undefined") {
// Initialize tooltips
const tooltipTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="tooltip"]'),
);
tooltipTriggerList.map(function (tooltipTriggerEl) {
return new bootstrap.Tooltip(tooltipTriggerEl);
});
// Initialize popovers
const popoverTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="popover"]'),
);
popoverTriggerList.map(function (popoverTriggerEl) {
return new bootstrap.Popover(popoverTriggerEl);
});
}
document.addEventListener("DOMContentLoaded", function () {
// Only initialize if AJAX navigation is enabled
if (typeof ENABLE_AJAX_NAVIGATION !== "undefined" && ENABLE_AJAX_NAVIGATION) {
setupAjaxNavigation();
}
// Function to set up AJAX navigation for the application
@@ -126,7 +85,8 @@ function setupAjaxNavigation() {
},
})
.then((response) => {
if (!response.ok) throw new Error(`Network response was not ok: ${response.status}`);
if (!response.ok)
throw new Error(`Network response was not ok: ${response.status}`);
return response.text();
})
.then((html) => {
@@ -162,6 +122,25 @@ function setupAjaxNavigation() {
});
}
// Function to reload and execute scripts in new content
function reloadScripts(container) {
const scripts = container.getElementsByTagName("script");
for (let script of scripts) {
const newScript = document.createElement("script");
// Copy all attributes
Array.from(script.attributes).forEach((attr) => {
newScript.setAttribute(attr.name, attr.value);
});
// Copy inline script content
newScript.textContent = script.textContent;
// Replace old script with new one
script.parentNode.replaceChild(newScript, script);
}
}
// Function to handle form submissions
function handleFormSubmission(form, e) {
e.preventDefault();
@@ -225,6 +204,33 @@ function setupAjaxNavigation() {
}
}
// Function to initialize scripts needed for the new page content
function initializePageScripts() {
// Re-initialize any custom scripts that might be needed
if (typeof setupAjaxPagination === "function") {
setupAjaxPagination();
}
// Initialize Bootstrap tooltips, popovers, etc.
if (typeof bootstrap !== "undefined") {
// Initialize tooltips
const tooltipTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="tooltip"]'),
);
tooltipTriggerList.map(function (tooltipTriggerEl) {
return new bootstrap.Tooltip(tooltipTriggerEl);
});
// Initialize popovers
const popoverTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="popover"]'),
);
popoverTriggerList.map(function (popoverTriggerEl) {
return new bootstrap.Popover(popoverTriggerEl);
});
}
}
// Function to attach event listeners to forms and links
function attachEventListeners() {
// Handle AJAX navigation links
@@ -265,10 +271,4 @@ function setupAjaxNavigation() {
}
});
}
document.addEventListener("DOMContentLoaded", function () {
// Only initialize if AJAX navigation is enabled
if (typeof ENABLE_AJAX_NAVIGATION !== "undefined" && ENABLE_AJAX_NAVIGATION) {
setupAjaxNavigation();
}
});

View File

@@ -5,6 +5,10 @@
* It intercepts pagination link clicks, loads content via AJAX, and updates the browser history.
*/
document.addEventListener("DOMContentLoaded", function () {
// Initialize AJAX pagination
setupAjaxPagination();
// Function to set up AJAX pagination for the entire application
function setupAjaxPagination() {
// Configuration - can be customized per page if needed
@@ -99,8 +103,4 @@ function setupAjaxPagination() {
}
});
}
document.addEventListener("DOMContentLoaded", function () {
// Initialize AJAX pagination
setupAjaxPagination();
});

View File

@@ -1,5 +1,4 @@
/**
* dashboard.js - JavaScript for the dashboard functionality
*
* This file handles the interactive features of the dashboard,
@@ -7,17 +6,20 @@
* customization.
*/
document.addEventListener("DOMContentLoaded", function () {
// Set up Plotly default config based on theme
function updatePlotlyTheme() {
// Force a fresh check of the current theme
const isDarkMode = document.documentElement.getAttribute("data-bs-theme") === "dark";
console.log("updatePlotlyTheme called - Current theme mode:", isDarkMode ? "dark" : "light");
console.log(
"updatePlotlyTheme called - Current theme mode:",
isDarkMode ? "dark" : "light",
);
window.plotlyDefaultLayout = {
font: {
color: isDarkMode ? "#f8f9fa" : "#212529",
family:
'-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
family: '-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
},
paper_bgcolor: isDarkMode ? "#343a40" : "#ffffff",
plot_bgcolor: isDarkMode ? "#343a40" : "#ffffff",
@@ -101,6 +103,26 @@ function updatePlotlyTheme() {
};
}
// Initialize theme setting
updatePlotlyTheme();
// Listen for theme changes
const observer = new MutationObserver(function (mutations) {
mutations.forEach(function (mutation) {
if (mutation.attributeName === "data-bs-theme") {
console.log(
"Theme changed detected by observer:",
document.documentElement.getAttribute("data-bs-theme"),
);
updatePlotlyTheme();
// Use a small delay to ensure styles have been applied
setTimeout(refreshAllCharts, 100);
}
});
});
observer.observe(document.documentElement, { attributes: true });
// Chart responsiveness
function resizeCharts() {
const charts = document.querySelectorAll(".chart-container");
@@ -114,6 +136,161 @@ function resizeCharts() {
});
}
// Refresh all charts with current theme
function refreshAllCharts() {
if (!window.Plotly) return;
const currentTheme = document.documentElement.getAttribute("data-bs-theme");
console.log("Refreshing charts with theme:", currentTheme);
// Update the theme settings
updatePlotlyTheme();
const charts = document.querySelectorAll(".chart-container");
charts.forEach(function (chart) {
if (chart.id) {
try {
// Safe way to check if element has a plot
const plotElement = document.getElementById(chart.id);
if (plotElement && plotElement._fullLayout) {
console.log("Updating chart theme for:", chart.id);
// Determine chart type to apply appropriate settings
let layoutUpdate = { ...window.plotlyDefaultLayout };
// Check if it's a bar chart
if (
plotElement.data &&
plotElement.data.some((trace) => trace.type === "bar")
) {
layoutUpdate = { ...window.plotlyBarConfig };
}
// Check if it's a pie chart
if (
plotElement.data &&
plotElement.data.some((trace) => trace.type === "pie")
) {
layoutUpdate = { ...window.plotlyPieConfig };
}
// Force paper and plot background colors based on current theme
// This ensures the chart background always matches the current theme
layoutUpdate.paper_bgcolor =
currentTheme === "dark" ? "#343a40" : "#ffffff";
layoutUpdate.plot_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
// Update font colors too
layoutUpdate.font.color = currentTheme === "dark" ? "#f8f9fa" : "#212529";
// Apply layout updates
Plotly.relayout(chart.id, layoutUpdate);
}
} catch (e) {
console.error("Error updating chart theme:", e);
}
}
});
}
// Make refreshAllCharts available globally
window.refreshAllCharts = refreshAllCharts;
// Handle window resize
window.addEventListener("resize", function () {
if (window.Plotly) {
resizeCharts();
}
});
// Call resizeCharts on initial load
if (window.Plotly) {
// Use a longer delay to ensure charts are fully loaded
setTimeout(function () {
updatePlotlyTheme();
refreshAllCharts();
}, 300);
}
// Apply theme to newly created charts
const originalPlotlyNewPlot = Plotly.newPlot;
Plotly.newPlot = function () {
const args = Array.from(arguments);
// Get the layout argument (3rd argument)
if (args.length >= 3 && typeof args[2] === "object") {
// Ensure plotlyDefaultLayout is up to date
updatePlotlyTheme();
// Apply current theme to new plot
args[2] = { ...window.plotlyDefaultLayout, ...args[2] };
}
return originalPlotlyNewPlot.apply(this, args);
};
// Time range filtering
const timeRangeDropdown = document.getElementById("timeRangeDropdown");
if (timeRangeDropdown) {
const timeRangeLinks = timeRangeDropdown.querySelectorAll(".dropdown-item");
timeRangeLinks.forEach((link) => {
link.addEventListener("click", function (e) {
const url = new URL(this.href);
const dashboardId = url.searchParams.get("dashboard_id");
const timeRange = url.searchParams.get("time_range");
// Fetch updated data via AJAX
if (dashboardId) {
fetchDashboardData(dashboardId, timeRange);
e.preventDefault();
}
});
});
}
// Function to fetch dashboard data
function fetchDashboardData(dashboardId, timeRange) {
const loadingOverlay = document.createElement("div");
loadingOverlay.className = "loading-overlay";
loadingOverlay.innerHTML =
'<div class="spinner-border text-primary" role="status"><span class="visually-hidden">Loading...</span></div>';
document.querySelector("main").appendChild(loadingOverlay);
fetch(`/dashboard/api/dashboard/${dashboardId}/data/?time_range=${timeRange || "all"}`)
.then((response) => {
if (!response.ok) {
throw new Error(`Network response was not ok: ${response.status}`);
}
return response.json();
})
.then((data) => {
console.log("Dashboard API response:", data);
updateDashboardStats(data);
updateDashboardCharts(data);
// Update URL without page reload
const url = new URL(window.location.href);
url.searchParams.set("dashboard_id", dashboardId);
if (timeRange) {
url.searchParams.set("time_range", timeRange);
}
window.history.pushState({}, "", url);
document.querySelector(".loading-overlay").remove();
})
.catch((error) => {
console.error("Error fetching dashboard data:", error);
document.querySelector(".loading-overlay").remove();
// Show error message
const alertElement = document.createElement("div");
alertElement.className = "alert alert-danger alert-dismissible fade show";
alertElement.setAttribute("role", "alert");
alertElement.innerHTML = `
Error loading dashboard data. Please try again.
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
`;
document.querySelector("main").prepend(alertElement);
});
}
// Function to update dashboard statistics
function updateDashboardStats(data) {
// Update total sessions
@@ -290,175 +467,6 @@ function updateDashboardCharts(data) {
}
}
document.addEventListener("DOMContentLoaded", function () {
// Initialize theme setting
updatePlotlyTheme();
// Listen for theme changes
const observer = new MutationObserver(function (mutations) {
mutations.forEach(function (mutation) {
if (mutation.attributeName === "data-bs-theme") {
console.log(
"Theme changed detected by observer:",
document.documentElement.getAttribute("data-bs-theme"),
);
updatePlotlyTheme();
// Use a small delay to ensure styles have been applied
setTimeout(refreshAllCharts, 100);
}
});
});
observer.observe(document.documentElement, { attributes: true });
// Refresh all charts with current theme
function refreshAllCharts() {
if (!window.Plotly) return;
const currentTheme = document.documentElement.getAttribute("data-bs-theme");
console.log("Refreshing charts with theme:", currentTheme);
// Update the theme settings
updatePlotlyTheme();
const charts = document.querySelectorAll(".chart-container");
charts.forEach(function (chart) {
if (chart.id) {
try {
// Safe way to check if element has a plot
const plotElement = document.getElementById(chart.id);
if (plotElement && plotElement._fullLayout) {
console.log("Updating chart theme for:", chart.id);
// Determine chart type to apply appropriate settings
let layoutUpdate = { ...window.plotlyDefaultLayout };
// Check if it's a bar chart
if (plotElement.data && plotElement.data.some((trace) => trace.type === "bar")) {
layoutUpdate = { ...window.plotlyBarConfig };
}
// Check if it's a pie chart
if (plotElement.data && plotElement.data.some((trace) => trace.type === "pie")) {
layoutUpdate = { ...window.plotlyPieConfig };
}
// Force paper and plot background colors based on current theme
// This ensures the chart background always matches the current theme
layoutUpdate.paper_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
layoutUpdate.plot_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
// Update font colors too
layoutUpdate.font.color = currentTheme === "dark" ? "#f8f9fa" : "#212529";
// Apply layout updates
Plotly.relayout(chart.id, layoutUpdate);
}
} catch (e) {
console.error("Error updating chart theme:", e);
}
}
});
}
// Make refreshAllCharts available globally
window.refreshAllCharts = refreshAllCharts;
// Handle window resize
window.addEventListener("resize", function () {
if (window.Plotly) {
resizeCharts();
}
});
// Call resizeCharts on initial load
if (window.Plotly) {
// Use a longer delay to ensure charts are fully loaded
setTimeout(function () {
updatePlotlyTheme();
refreshAllCharts();
}, 300);
}
// Apply theme to newly created charts
const originalPlotlyNewPlot = Plotly.newPlot;
Plotly.newPlot = function () {
const args = Array.from(arguments);
// Get the layout argument (3rd argument)
if (args.length >= 3 && typeof args[2] === "object") {
// Ensure plotlyDefaultLayout is up to date
updatePlotlyTheme();
// Apply current theme to new plot
args[2] = { ...window.plotlyDefaultLayout, ...args[2] };
}
return originalPlotlyNewPlot.apply(this, args);
};
// Time range filtering
const timeRangeDropdown = document.getElementById("timeRangeDropdown");
if (timeRangeDropdown) {
const timeRangeLinks = timeRangeDropdown.querySelectorAll(".dropdown-item");
timeRangeLinks.forEach((link) => {
link.addEventListener("click", function (e) {
const url = new URL(this.href);
const dashboardId = url.searchParams.get("dashboard_id");
const timeRange = url.searchParams.get("time_range");
// Fetch updated data via AJAX
if (dashboardId) {
fetchDashboardData(dashboardId, timeRange);
e.preventDefault();
}
});
});
}
// Function to fetch dashboard data
function fetchDashboardData(dashboardId, timeRange) {
const loadingOverlay = document.createElement("div");
loadingOverlay.className = "loading-overlay";
loadingOverlay.innerHTML =
'<div class="spinner-border text-primary" role="status"><span class="visually-hidden">Loading...</span></div>';
document.querySelector("main").appendChild(loadingOverlay);
fetch(`/dashboard/api/dashboard/${dashboardId}/data/?time_range=${timeRange || "all"}`)
.then((response) => {
if (!response.ok) {
throw new Error(`Network response was not ok: ${response.status}`);
}
return response.json();
})
.then((data) => {
console.log("Dashboard API response:", data);
updateDashboardStats(data);
updateDashboardCharts(data);
// Update URL without page reload
const url = new URL(window.location.href);
url.searchParams.set("dashboard_id", dashboardId);
if (timeRange) {
url.searchParams.set("time_range", timeRange);
}
window.history.pushState({}, "", url);
document.querySelector(".loading-overlay").remove();
})
.catch((error) => {
console.error("Error fetching dashboard data:", error);
document.querySelector(".loading-overlay").remove();
// Show error message
const alertElement = document.createElement("div");
alertElement.className = "alert alert-danger alert-dismissible fade show";
alertElement.setAttribute("role", "alert");
alertElement.innerHTML = `
Error loading dashboard data. Please try again.
<button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
`;
document.querySelector("main").prepend(alertElement);
});
}
// Dashboard selector
const dashboardSelector = document.querySelectorAll('a[href^="?dashboard_id="]');
dashboardSelector.forEach((link) => {

View File

@@ -1,73 +1,20 @@
/**
* main.js - Global JavaScript functionality
*
* This file contains general JavaScript functionality used across
* the entire application, including navigation, forms, and UI interactions.
*/
// Handle sidebar collapse on small screens
function handleSidebarOnResize() {
if (window.innerWidth < 768) {
document.querySelector(".sidebar")?.classList.remove("show");
}
}
// Theme toggling functionality
function setTheme(theme, isUserPreference = false) {
console.log("Setting theme to:", theme, "User preference:", isUserPreference);
// Update the HTML attribute that controls theme
document.documentElement.setAttribute("data-bs-theme", theme);
// Save the theme preference to localStorage
localStorage.setItem("theme", theme);
// If this was a user choice (from the toggle button), record that fact
if (isUserPreference) {
localStorage.setItem("userPreferredTheme", "true");
}
// Update toggle button icon
const themeToggle = document.getElementById("theme-toggle");
if (themeToggle) {
const icon = themeToggle.querySelector("i");
if (theme === "dark") {
icon.classList.remove("fa-moon");
icon.classList.add("fa-sun");
themeToggle.setAttribute("title", "Switch to light mode");
themeToggle.setAttribute("aria-label", "Switch to light mode");
} else {
icon.classList.remove("fa-sun");
icon.classList.add("fa-moon");
themeToggle.setAttribute("title", "Switch to dark mode");
themeToggle.setAttribute("aria-label", "Switch to dark mode");
}
}
// If we're on a page with charts, refresh them to match the theme
if (typeof window.refreshAllCharts === "function") {
console.log("Calling refresh charts from theme toggle");
// Add a small delay to ensure DOM updates have completed
setTimeout(() => window.refreshAllCharts(), 100);
}
}
// Check if the user has a system preference for dark mode
function getSystemPreference() {
return window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light";
}
document.addEventListener("DOMContentLoaded", function () {
// Initialize tooltips
var tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'));
tooltipTriggerList.map(function (tooltipTriggerEl) {
var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) {
return new bootstrap.Tooltip(tooltipTriggerEl);
});
// Initialize popovers
var popoverTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="popover"]'));
popoverTriggerList.map(function (popoverTriggerEl) {
var popoverList = popoverTriggerList.map(function (popoverTriggerEl) {
return new bootstrap.Popover(popoverTriggerEl);
});
@@ -127,7 +74,7 @@ document.addEventListener("DOMContentLoaded", function () {
// File input customization
const fileInputs = document.querySelectorAll(".custom-file-input");
fileInputs.forEach(function (input) {
input.addEventListener("change", function () {
input.addEventListener("change", function (e) {
const fileName = this.files[0]?.name || "Choose file";
const nextSibling = this.nextElementSibling;
if (nextSibling) {
@@ -188,13 +135,63 @@ document.addEventListener("DOMContentLoaded", function () {
const exportLinks = document.querySelectorAll("[data-export]");
exportLinks.forEach(function (link) {
link.addEventListener("click", function () {
link.addEventListener("click", function (e) {
// Handle export functionality if needed
console.log("Export requested:", this.dataset.export);
});
});
window.addEventListener("resize", handleSidebarOnResize);
// Handle sidebar collapse on small screens
function handleSidebarOnResize() {
if (window.innerWidth < 768) {
document.querySelector(".sidebar")?.classList.remove("show");
}
}
window.addEventListener("resize", handleSidebarOnResize); // Theme toggling functionality
function setTheme(theme, isUserPreference = false) {
console.log("Setting theme to:", theme, "User preference:", isUserPreference);
// Update the HTML attribute that controls theme
document.documentElement.setAttribute("data-bs-theme", theme);
// Save the theme preference to localStorage
localStorage.setItem("theme", theme);
// If this was a user choice (from the toggle button), record that fact
if (isUserPreference) {
localStorage.setItem("userPreferredTheme", "true");
}
// Update toggle button icon
const themeToggle = document.getElementById("theme-toggle");
if (themeToggle) {
const icon = themeToggle.querySelector("i");
if (theme === "dark") {
icon.classList.remove("fa-moon");
icon.classList.add("fa-sun");
themeToggle.setAttribute("title", "Switch to light mode");
themeToggle.setAttribute("aria-label", "Switch to light mode");
} else {
icon.classList.remove("fa-sun");
icon.classList.add("fa-moon");
themeToggle.setAttribute("title", "Switch to dark mode");
themeToggle.setAttribute("aria-label", "Switch to dark mode");
}
}
// If we're on a page with charts, refresh them to match the theme
if (typeof window.refreshAllCharts === "function") {
console.log("Calling refresh charts from theme toggle");
// Add a small delay to ensure DOM updates have completed
setTimeout(window.refreshAllCharts, 100);
}
}
// Check if the user has a system preference for dark mode
function getSystemPreference() {
return window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light";
}
// Initialize theme based on saved preference or system setting
function initializeTheme() {

View File

@@ -1,4 +1,3 @@
{% load crispy_forms_filters %}
<!-- templates/accounts/login.html -->
{% extends 'base.html' %} {% load crispy_forms_tags %}
{% block title %}

View File

@@ -4,7 +4,7 @@ WSGI config for dashboard_project project.
It exposes the WSGI callable as a module-level variable named ``application``.
For more information on this file, see
<https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/>
https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/
"""
import os

dev.sh
View File

@@ -1,13 +1,10 @@
#!/usr/bin/env bash
#!/bin/bash
# LiveGraphsDjango Development Helper Script
# Set UV_LINK_MODE to copy to avoid hardlink warnings
export UV_LINK_MODE=copy
# Function to print section header
print_header() {
echo "======================================"
echo "🚀 $1"
@@ -15,7 +12,6 @@ print_header() {
}
# Display help menu
if [[ $1 == "help" ]] || [[ $1 == "-h" ]] || [[ $1 == "--help" ]] || [[ -z $1 ]]; then
print_header "LiveGraphsDjango Development Commands"
echo "Usage: ./dev.sh COMMAND"
@@ -37,7 +33,6 @@ if [[ $1 == "help" ]] || [[ $1 == "-h" ]] || [[ $1 == "--help" ]] || [[ -z $1 ]]
fi
# Start Redis server
if [[ $1 == "redis-start" ]]; then
print_header "Starting Redis Server"
redis-server --daemonize yes
@@ -51,7 +46,6 @@ if [[ $1 == "redis-start" ]]; then
fi
# Test Redis connection
if [[ $1 == "redis-test" ]]; then
print_header "Testing Redis Connection"
cd dashboard_project && python manage.py test_redis
@@ -59,7 +53,6 @@ if [[ $1 == "redis-test" ]]; then
fi
# Stop Redis server
if [[ $1 == "redis-stop" ]]; then
print_header "Stopping Redis Server"
redis-cli shutdown
@@ -68,7 +61,6 @@ if [[ $1 == "redis-stop" ]]; then
fi
# Run migrations
if [[ $1 == "migrate" ]]; then
print_header "Running Migrations"
cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py migrate
@@ -76,7 +68,6 @@ if [[ $1 == "migrate" ]]; then
fi
# Make migrations
if [[ $1 == "makemigrations" ]]; then
print_header "Creating Migrations"
cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py makemigrations
@@ -84,7 +75,6 @@ if [[ $1 == "makemigrations" ]]; then
fi
# Create superuser
if [[ $1 == "superuser" ]]; then
print_header "Creating Superuser"
cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py createsuperuser
@@ -92,7 +82,6 @@ if [[ $1 == "superuser" ]]; then
fi
# Test Celery
if [[ $1 == "test-celery" ]]; then
print_header "Testing Celery"
cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py test_celery
@@ -100,7 +89,6 @@ if [[ $1 == "test-celery" ]]; then
fi
# View Celery logs
if [[ $1 == "logs-celery" ]]; then
print_header "Celery Worker Logs"
echo "Press Ctrl+C to exit"
@@ -109,7 +97,6 @@ if [[ $1 == "logs-celery" ]]; then
fi
# View Celery Beat logs
if [[ $1 == "logs-beat" ]]; then
print_header "Celery Beat Logs"
echo "Press Ctrl+C to exit"
@@ -118,7 +105,6 @@ if [[ $1 == "logs-beat" ]]; then
fi
# Django shell
if [[ $1 == "shell" ]]; then
print_header "Django Shell"
cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py shell
@@ -126,7 +112,6 @@ if [[ $1 == "shell" ]]; then
fi
# Start the application
if [[ $1 == "start" ]]; then
print_header "Starting LiveGraphsDjango Application"
./start.sh
@@ -134,7 +119,6 @@ if [[ $1 == "start" ]]; then
fi
# Invalid command
echo "❌ Unknown command: $1"
echo "Run './dev.sh help' to see available commands"
exit 1

View File

@@ -1,23 +1,22 @@
# docker-compose.yml
version: "3.8"
services:
web:
build: .
command: uv run gunicorn dashboard_project.wsgi:application --bind 0.0.0.0:8000 --chdir dashboard_project
command: gunicorn dashboard_project.wsgi:application --bind 0.0.0.0:8000
volumes:
- .:/app
- static_volume:/app/staticfiles
- media_volume:/app/media
ports:
- 8000:8000
env_file:
- .env
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
- POSTGRES_DB=dashboard_db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST=db
- POSTGRES_PORT=5432
- DEBUG=0
- SECRET_KEY=your_secret_key_here
- ALLOWED_HOSTS=localhost,127.0.0.1
- DJANGO_SETTINGS_MODULE=dashboard_project.settings
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
@@ -25,7 +24,7 @@ services:
- redis
db:
image: postgres:alpine
image: postgres:13
volumes:
- postgres_data:/var/lib/postgresql/data/
environment:
@@ -36,7 +35,7 @@ services:
- 5432:5432
redis:
image: redis:alpine
image: redis:7-alpine
ports:
- 6379:6379
volumes:
@@ -49,16 +48,12 @@ services:
celery:
build: .
command: uv run celery -A dashboard_project worker --loglevel=info --workdir dashboard_project
env_file:
- .env
command: celery -A dashboard_project worker --loglevel=info
volumes:
- .:/app
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
- POSTGRES_DB=dashboard_db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST=db
- POSTGRES_PORT=5432
- DEBUG=0
- DJANGO_SETTINGS_MODULE=dashboard_project.settings
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:
@@ -67,16 +62,12 @@ services:
celery-beat:
build: .
command: uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler --workdir dashboard_project
env_file:
- .env
command: celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
volumes:
- .:/app
environment:
- DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
- POSTGRES_DB=dashboard_db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST=db
- POSTGRES_PORT=5432
- DEBUG=0
- DJANGO_SETTINGS_MODULE=dashboard_project.settings
- CELERY_BROKER_URL=redis://redis:6379/0
- CELERY_RESULT_BACKEND=redis://redis:6379/0
depends_on:

View File

@@ -1,7 +1,6 @@
# Redis and Celery Configuration
This document explains how to set up and use Redis and Celery for background task processing in the LiveGraphs
application.
This document explains how to set up and use Redis and Celery for background task processing in the LiveGraphs application.
## Overview
@@ -73,8 +72,7 @@ Download and install from [microsoftarchive/redis](https://github.com/microsofta
### SQLite Fallback
If Redis is not available, the application will automatically fall back to using SQLite for Celery tasks. This works
well for development but is not recommended for production.
If Redis is not available, the application will automatically fall back to using SQLite for Celery tasks. This works well for development but is not recommended for production.
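For reference, a minimal sketch of how this probe-then-fall-back pattern can look in a Django `settings.py`, assuming the `redis` client is installed; the helper name, the default broker URL, and the result-backend path below are illustrative, not the project's exact values:

```python
import os


def _redis_available(url: str) -> bool:
    """Return True if a Redis server answers PING at the given URL."""
    try:
        import redis

        redis.Redis.from_url(url, socket_connect_timeout=2).ping()
        return True
    except Exception:  # ImportError, connection refused, timeout, ...
        return False


_broker_url = os.environ.get("CELERY_BROKER_URL", "redis://localhost:6379/0")

if _redis_available(_broker_url):
    CELERY_BROKER_URL = CELERY_RESULT_BACKEND = _broker_url
else:
    # SQLAlchemy transport backed by local SQLite files: fine for development only
    CELERY_BROKER_URL = "sqla+sqlite:///celery.sqlite"
    CELERY_RESULT_BACKEND = "db+sqlite:///celery_results.sqlite"
```

Probing with a short timeout keeps startup fast when Redis is down, and the Celery worker simply uses whichever broker URL the settings resolved to.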
## Configuration

View File

@@ -54,8 +54,7 @@ If this fails, check the following:
## Fixing CSV Data Processing Issues
If you see the error `zip() argument 2 is shorter than argument 1`, it means the data format doesn't match the expected
headers. We've implemented a fix that:
If you see the error `zip() argument 2 is shorter than argument 1`, it means the data format doesn't match the expected headers. We've implemented a fix that:
1. Pads shorter rows with empty strings
2. Uses more flexible date format parsing
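As a rough illustration of that fix, the sketch below pads short rows before `zip()` and tries a handful of date formats; the function names and the format list are assumptions for the example, not the project's actual helpers:

```python
import csv
import io
from datetime import datetime


def rows_to_dicts(csv_text: str) -> list[dict]:
    """Turn CSV text into dicts, padding short rows so zip() never drops columns."""
    reader = csv.reader(io.StringIO(csv_text))
    headers = next(reader)
    records = []
    for row in reader:
        # Pad shorter rows with empty strings so every header gets a value
        row = row + [""] * (len(headers) - len(row))
        records.append(dict(zip(headers, row)))
    return records


def parse_datetime(value: str) -> datetime | None:
    """Try several common formats instead of assuming a single one."""
    for fmt in ("%Y-%m-%d %H:%M:%S", "%d-%m-%Y %H:%M:%S", "%Y-%m-%dT%H:%M:%S"):
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None
```

Padded rows guarantee one value per header, and an unparseable date comes back as `None` so the caller can log the problem and skip the row, as the error-handling path in `fetch_and_store_chat_data` does.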

View File

@@ -1,96 +0,0 @@
{
"$schema": "https://opencode.ai/config.json",
"mcp": {
"playwright-test": {
"type": "local",
"command": ["npx", "playwright", "run-test-mcp-server"],
"enabled": true
}
},
"tools": {
"playwright*": false
},
"agent": {
"playwright-test-generator": {
"description": "Use this agent when you need to create automated browser tests using Playwright",
"mode": "subagent",
"prompt": "{file:.opencode/prompts/playwright-test-generator.md}",
"tools": {
"ls": true,
"glob": true,
"grep": true,
"read": true,
"playwright-test*browser_click": true,
"playwright-test*browser_drag": true,
"playwright-test*browser_evaluate": true,
"playwright-test*browser_file_upload": true,
"playwright-test*browser_handle_dialog": true,
"playwright-test*browser_hover": true,
"playwright-test*browser_navigate": true,
"playwright-test*browser_press_key": true,
"playwright-test*browser_select_option": true,
"playwright-test*browser_snapshot": true,
"playwright-test*browser_type": true,
"playwright-test*browser_verify_element_visible": true,
"playwright-test*browser_verify_list_visible": true,
"playwright-test*browser_verify_text_visible": true,
"playwright-test*browser_verify_value": true,
"playwright-test*browser_wait_for": true,
"playwright-test*generator_read_log": true,
"playwright-test*generator_setup_page": true,
"playwright-test*generator_write_test": true
}
},
"playwright-test-healer": {
"description": "Use this agent when you need to debug and fix failing Playwright tests",
"mode": "subagent",
"prompt": "{file:.opencode/prompts/playwright-test-healer.md}",
"tools": {
"ls": true,
"glob": true,
"grep": true,
"read": true,
"write": true,
"edit": true,
"playwright-test*browser_console_messages": true,
"playwright-test*browser_evaluate": true,
"playwright-test*browser_generate_locator": true,
"playwright-test*browser_network_requests": true,
"playwright-test*browser_snapshot": true,
"playwright-test*test_debug": true,
"playwright-test*test_list": true,
"playwright-test*test_run": true
}
},
"playwright-test-planner": {
"description": "Use this agent when you need to create comprehensive test plan for a web application or website",
"mode": "subagent",
"prompt": "{file:.opencode/prompts/playwright-test-planner.md}",
"tools": {
"ls": true,
"glob": true,
"grep": true,
"read": true,
"write": true,
"playwright-test*browser_click": true,
"playwright-test*browser_close": true,
"playwright-test*browser_console_messages": true,
"playwright-test*browser_drag": true,
"playwright-test*browser_evaluate": true,
"playwright-test*browser_file_upload": true,
"playwright-test*browser_handle_dialog": true,
"playwright-test*browser_hover": true,
"playwright-test*browser_navigate": true,
"playwright-test*browser_navigate_back": true,
"playwright-test*browser_network_requests": true,
"playwright-test*browser_press_key": true,
"playwright-test*browser_select_option": true,
"playwright-test*browser_snapshot": true,
"playwright-test*browser_take_screenshot": true,
"playwright-test*browser_type": true,
"playwright-test*browser_wait_for": true,
"playwright-test*planner_setup_page": true
}
}
}
}

View File

@@ -1,32 +1,35 @@
{
"name": "livegraphs-django",
"private": true,
"scripts": {
"format": "prettier --write .; bun format:py",
"format:check": "prettier --check .; bun format:py -- --check",
"format:py": "uvx ruff format",
"lint:js": "oxlint",
"lint:js:fix": "bun lint:js -- --fix",
"lint:js:strict": "oxlint --import-plugin -D correctness -W suspicious",
"lint:md": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.{node_modules,trunk,grit,venv,opencode,github/chatmodes,claude/agents}\"",
"lint:md:fix": "bun lint:md -- --fix",
"lint:py": "uvx ruff check",
"lint:py:fix": "uvx ruff check --fix",
"typecheck:js": "oxlint --type-aware",
"typecheck:js:fix": "bun typecheck:js -- --fix",
"typecheck:py": "uvx ty check"
},
"devDependencies": {
"@playwright/test": "^1.56.1",
"@types/bun": "latest",
"markdownlint-cli2": "^0.18.1",
"oxlint": "^1.25.0",
"oxlint-tsgolint": "^0.5.0",
"prettier": "^3.6.2",
"prettier-plugin-jinja-template": "^2.1.0",
"prettier-plugin-packagejson": "^2.5.19"
"prettier": "^3.5.3",
"prettier-plugin-jinja-template": "^2.1.0"
},
"peerDependencies": {
"typescript": "^5"
"scripts": {
"format": "prettier --write .",
"format:check": "prettier --check .",
"lint:md": "markdownlint-cli2 \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\"",
"lint:md:fix": "markdownlint-cli2 --fix \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\""
},
"markdownlint-cli2": {
"config": {
"MD007": {
"indent": 4,
"start_indented": false,
"start_indent": 4
},
"MD013": false,
"MD030": {
"ul_single": 3,
"ol_single": 2,
"ul_multi": 3,
"ol_multi": 2
},
"MD033": false
},
"ignores": [
"node_modules",
".git",
"*.json"
]
}
}

View File

@@ -4,127 +4,65 @@ version = "0.1.0"
description = "Live Graphs Django Dashboard"
readme = "README.md"
requires-python = ">=3.13"
license = { text = "MIT" }
authors = [{ name = "LiveGraphs Team" }]
license = { text = "MIT" }
classifiers = [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Framework :: Django",
"Framework :: Django :: 5.2",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
]
dependencies = [
"bleach[css]>=6.3.0",
"celery[sqlalchemy]>=5.5.3",
"crispy-bootstrap5>=2025.6",
"django>=5.2.7",
"django-allauth>=65.13.0",
"bleach[css]>=6.2.0",
"celery[sqlalchemy]>=5.5.2",
"crispy-bootstrap5>=2025.4",
"django>=5.2.1",
"django-allauth>=65.8.0",
"django-celery-beat>=2.8.1",
"django-crispy-forms>=2.4",
"gunicorn>=23.0.0",
"numpy>=2.3.4",
"pandas>=2.3.3",
"plotly>=6.4.0",
"psycopg2-binary>=2.9.11",
"python-dotenv>=1.2.1",
"redis>=7.0.1",
"requests>=2.32.5",
"sqlalchemy>=2.0.44",
"numpy>=2.2.5",
"pandas>=2.2.3",
"plotly>=6.1.0",
"python-dotenv>=1.1.0",
"redis>=6.1.0",
"requests>=2.32.3",
"sqlalchemy>=2.0.41",
"tinycss2>=1.4.0",
"whitenoise>=6.11.0",
"xlsxwriter>=3.2.9",
"whitenoise>=6.9.0",
"xlsxwriter>=3.2.3",
]
[project.urls]
"Bug Tracker" = "https://github.com/kjanat/livegraphsdjango/issues"
"Documentation" = "https://github.com/kjanat/livegraphsdjango#readme"
"Source" = "https://github.com/kjanat/livegraphsdjango"
[project.scripts]
# Django management commands
livegraphs-manage = "dashboard_project.manage:main"
livegraphs-migrate = "dashboard_project.__main__:migrate"
livegraphs-server = "dashboard_project.__main__:runserver"
livegraphs-shell = "dashboard_project.__main__:shell"
[dependency-groups]
dev = [
"bandit>=1.8.6",
"black>=25.9.0",
"coverage>=7.11.0",
"django-debug-toolbar>=6.1.0",
"django-stubs>=5.2.7",
"mypy>=1.18.2",
"pre-commit>=4.3.0",
"pytest>=8.4.2",
"bandit>=1.8.3",
"black>=25.1.0",
"coverage>=7.8.0",
"django-debug-toolbar>=5.2.0",
"django-stubs>=5.2.0",
"mypy>=1.15.0",
"pre-commit>=4.2.0",
"pytest>=8.3.5",
"pytest-django>=4.11.1",
"ruff>=0.14.3",
"ty>=0.0.1a25",
"ruff>=0.11.10",
]
[build-system]
requires = ["setuptools>=69.0.0", "wheel>=0.42.0"]
build-backend = "setuptools.build_meta"
[tool.bandit]
exclude_dirs = [
"tests",
"venv",
".venv",
".git",
"__pycache__",
"migrations",
"**/create_sample_data.py",
]
skips = ["B101"]
targets = ["dashboard_project"]
[tool.setuptools]
packages = ["dashboard_project"]
[tool.coverage.run]
source = ["dashboard_project"]
omit = [
"dashboard_project/manage.py",
"dashboard_project/*/migrations/*",
"dashboard_project/*/tests/*",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"pass",
"raise ImportError",
]
[tool.django-stubs]
django_settings_module = "dashboard_project.settings"
[tool.mypy]
python_version = "3.13"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
plugins = ["mypy_django_plugin.main"]
[[tool.mypy.overrides]]
module = ["django.*", "rest_framework.*"]
ignore_missing_imports = true
[tool.pytest.ini_options]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::PendingDeprecationWarning",
]
python_files = "test_*.py"
testpaths = ["dashboard_project"]
DJANGO_SETTINGS_MODULE = "dashboard_project.settings"
[tool.setuptools.package-data]
"dashboard_project" = ["static/**/*", "templates/**/*", "media/**/*"]
[tool.ruff]
preview = true
# Exclude a variety of commonly ignored directories
# Exclude a variety of commonly ignored directories.
exclude = [
".bzr",
".direnv",
@@ -153,9 +91,11 @@ exclude = [
"site-packages",
"venv",
]
# Same as Black
# Same as Black.
line-length = 120
indent-width = 4
# Assume Python 3.13
target-version = "py313"
@@ -170,13 +110,62 @@ quote-style = "double"
indent-style = "space"
line-ending = "lf"
[tool.setuptools]
packages = ["dashboard_project"]
[tool.setuptools.package-data]
"dashboard_project" = [
"static/**/*",
"templates/**/*",
"media/**/*",
"py.typed"
[tool.bandit]
exclude_dirs = [
"tests",
"venv",
".venv",
".git",
"__pycache__",
"migrations",
"**/create_sample_data.py",
]
skips = ["B101"]
targets = ["dashboard_project"]
[tool.mypy]
python_version = "3.13"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
plugins = ["mypy_django_plugin.main"]
[[tool.mypy.overrides]]
module = ["django.*", "rest_framework.*"]
ignore_missing_imports = true
[tool.django-stubs]
django_settings_module = "dashboard_project.settings"
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "dashboard_project.settings"
python_files = "test_*.py"
testpaths = ["dashboard_project"]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::PendingDeprecationWarning",
]
[tool.coverage.run]
source = ["dashboard_project"]
omit = [
"dashboard_project/manage.py",
"dashboard_project/*/migrations/*",
"dashboard_project/*/tests/*",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"pass",
"raise ImportError",
]
[project.urls]
"Documentation" = "https://github.com/kjanat/livegraphsdjango#readme"
"Source" = "https://github.com/kjanat/livegraphsdjango"
"Bug Tracker" = "https://github.com/kjanat/livegraphsdjango/issues"

View File

@@ -5,83 +5,65 @@ amqp==5.3.1 \
--hash=sha256:43b3319e1b4e7d1251833a93d672b4af1e40f3d632d479b98661a95f117880a2 \
--hash=sha256:cddc00c725449522023bad949f70fff7b48f0b1ade74d170a6f10ab044739432
# via kombu
asgiref==3.10.0 \
--hash=sha256:aef8a81283a34d0ab31630c9b7dfe70c812c95eba78171367ca8745e88124734 \
--hash=sha256:d89f2d8cd8b56dada7d52fa7dc8075baa08fb836560710d38c292a7a3f78c04e
asgiref==3.8.1 \
--hash=sha256:3e1e3ecc849832fe52ccf2cb6686b7a55f82bb1d6aee72a58826471390335e47 \
--hash=sha256:c343bd80a0bec947a9860adb4c432ffa7db769836c64238fc34bdc3fec84d590
# via
# django
# django-allauth
bandit==1.8.6 \
--hash=sha256:3348e934d736fcdb68b6aa4030487097e23a501adf3e7827b63658df464dddd0 \
--hash=sha256:dbfe9c25fc6961c2078593de55fd19f2559f9e45b99f1272341f5b95dea4e56b
billiard==4.2.2 \
--hash=sha256:4bc05dcf0d1cc6addef470723aac2a6232f3c7ed7475b0b580473a9145829457 \
--hash=sha256:e815017a062b714958463e07ba15981d802dc53d41c5b69d28c5a7c238f8ecf3
# django-stubs
bandit==1.8.3 \
--hash=sha256:28f04dc0d258e1dd0f99dee8eefa13d1cb5e3fde1a5ab0c523971f97b289bcd8 \
--hash=sha256:f5847beb654d309422985c36644649924e0ea4425c76dec2e89110b87506193a
billiard==4.2.1 \
--hash=sha256:12b641b0c539073fc8d3f5b8b7be998956665c4233c7c1fcd66a7e677c4fb36f \
--hash=sha256:40b59a4ac8806ba2c2369ea98d876bc6108b051c227baffd928c644d15d8f3cb
# via celery
black==25.9.0 \
--hash=sha256:0172a012f725b792c358d57fe7b6b6e8e67375dd157f64fa7a3097b3ed3e2175 \
--hash=sha256:0474bca9a0dd1b51791fcc507a4e02078a1c63f6d4e4ae5544b9848c7adfb619 \
--hash=sha256:3bec74ee60f8dfef564b573a96b8930f7b6a538e846123d5ad77ba14a8d7a64f \
--hash=sha256:474b34c1342cdc157d307b56c4c65bce916480c4a8f6551fdc6bf9b486a7c4ae \
--hash=sha256:846d58e3ce7879ec1ffe816bb9df6d006cd9590515ed5d17db14e17666b2b357 \
--hash=sha256:b756fc75871cb1bcac5499552d771822fd9db5a2bb8db2a7247936ca48f39831
bleach==6.3.0 \
--hash=sha256:6f3b91b1c0a02bb9a78b5a454c92506aa0fdf197e1d5e114d2e00c6f64306d22 \
--hash=sha256:fe10ec77c93ddf3d13a73b035abaac7a9f5e436513864ccdad516693213c65d6
black==25.1.0 \
--hash=sha256:030b9759066a4ee5e5aca28c3c77f9c64789cdd4de8ac1df642c40b708be6171 \
--hash=sha256:33496d5cd1222ad73391352b4ae8da15253c5de89b93a80b3e2c8d9a19ec2666 \
--hash=sha256:8f0b18a02996a836cc9c9c78e5babec10930862827b1b724ddfe98ccf2f2fe4f \
--hash=sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717 \
--hash=sha256:a22f402b410566e2d1c950708c77ebf5ebd5d0d88a6a2e87c86d9fb48afa0d18 \
--hash=sha256:afebb7098bfbc70037a053b91ae8437c3857482d3a690fefc03e9ff7aa9a5fd3
bleach==6.2.0 \
--hash=sha256:117d9c6097a7c3d22fd578fcd8d35ff1e125df6736f554da4e432fdd63f31e5e \
--hash=sha256:123e894118b8a599fd80d3ec1a6d4cc7ce4e5882b1317a7e1ba69b56e95f991f
# via livegraphsdjango
celery==5.5.3 \
--hash=sha256:0b5761a07057acee94694464ca482416b959568904c9dfa41ce8413a7d65d525 \
--hash=sha256:6c972ae7968c2b5281227f01c3a3f984037d21c5129d07bf3550cc2afc6b10a5
celery==5.5.2 \
--hash=sha256:4d6930f354f9d29295425d7a37261245c74a32807c45d764bedc286afd0e724e \
--hash=sha256:54425a067afdc88b57cd8d94ed4af2ffaf13ab8c7680041ac2c4ac44357bdf4c
# via
# django-celery-beat
# livegraphsdjango
certifi==2025.10.5 \
--hash=sha256:0f212c2744a9bb6de0c56639a6f68afe01ecd92d91f14ae897c4fe7bbeeef0de \
--hash=sha256:47c09d31ccf2acf0be3f701ea53595ee7e0b8fa08801c6624be771df09ae7b43
certifi==2025.4.26 \
--hash=sha256:0a816057ea3cdefcef70270d2c515e4506bbc954f417fa5ade2021213bb8f0c6 \
--hash=sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3
# via requests
cfgv==3.4.0 \
--hash=sha256:b7265b1f29fd3316bfcd2b330d63d024f2bfd8bcb8b0272f8e19a504856c48f9 \
--hash=sha256:e52591d4c5f5dead8e0f673fb16db7949d2cfb3f7da4582893288f0ded8fe560
# via pre-commit
charset-normalizer==3.4.4 \
--hash=sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152 \
--hash=sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72 \
--hash=sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e \
--hash=sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c \
--hash=sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2 \
--hash=sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44 \
--hash=sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede \
--hash=sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed \
--hash=sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133 \
--hash=sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e \
--hash=sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14 \
--hash=sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828 \
--hash=sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f \
--hash=sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328 \
--hash=sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090 \
--hash=sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c \
--hash=sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb \
--hash=sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a \
--hash=sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec \
--hash=sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc \
--hash=sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac \
--hash=sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894 \
--hash=sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14 \
--hash=sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1 \
--hash=sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3 \
--hash=sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e \
--hash=sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6 \
--hash=sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191 \
--hash=sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd \
--hash=sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2 \
--hash=sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794 \
--hash=sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838 \
--hash=sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490 \
--hash=sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9
charset-normalizer==3.4.2 \
--hash=sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0 \
--hash=sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b \
--hash=sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff \
--hash=sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e \
--hash=sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148 \
--hash=sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63 \
--hash=sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c \
--hash=sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0 \
--hash=sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0 \
--hash=sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1 \
--hash=sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980 \
--hash=sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7 \
--hash=sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691 \
--hash=sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf \
--hash=sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b
# via requests
click==8.3.0 \
--hash=sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc \
--hash=sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4
click==8.2.0 \
--hash=sha256:6b303f0b2aa85f1cb4e5303078fadcbcd4e476f114fab9b5007005711839325c \
--hash=sha256:f5452aeddd9988eefa20f90f05ab66f17fce1ee2a36907fd30b05bbb5953814d
# via
# black
# celery
@@ -92,9 +74,9 @@ click-didyoumean==0.3.1 \
--hash=sha256:4f82fdff0dbe64ef8ab2279bd6aa3f6a99c3b28c05aa09cbfc07c9d7fbb5a463 \
--hash=sha256:5c4bb6007cfea5f2fd6583a2fb6701a22a41eb98957e63d0fac41c10e7c3117c
# via celery
click-plugins==1.1.1.2 \
--hash=sha256:008d65743833ffc1f5417bf0e78e8d2c23aab04d9745ba817bd3e71b0feb6aa6 \
--hash=sha256:d7af3984a99d243c131aa1a828331e7630f4a88a9741fd05c927b204bcf92261
click-plugins==1.1.1 \
--hash=sha256:46ab999744a9d831159c3411bb0c79346d94a444df9a3a3742e9ed63645f264b \
--hash=sha256:5d262006d3222f5057fd81e1623d4443e41dcda5dc815c06b442aa3c02889fc8
# via celery
click-repl==0.3.0 \
--hash=sha256:17849c23dba3d667247dc4defe1757fff98694e90fe37474f3feebb69ced26a9 \
@@ -107,76 +89,44 @@ colorama==0.4.6 ; sys_platform == 'win32' \
# bandit
# click
# pytest
coverage==7.11.0 \
--hash=sha256:05791e528a18f7072bf5998ba772fe29db4da1234c45c2087866b5ba4dea710e \
--hash=sha256:0efa742f431529699712b92ecdf22de8ff198df41e43aeaaadf69973eb93f17a \
--hash=sha256:10ad04ac3a122048688387828b4537bc9cf60c0bf4869c1e9989c46e45690b82 \
--hash=sha256:167bd504ac1ca2af7ff3b81d245dfea0292c5032ebef9d66cc08a7d28c1b8050 \
--hash=sha256:269bfe913b7d5be12ab13a95f3a76da23cf147be7fa043933320ba5625f0a8de \
--hash=sha256:2727d47fce3ee2bac648528e41455d1b0c46395a087a229deac75e9f88ba5a05 \
--hash=sha256:314c24e700d7027ae3ab0d95fbf8d53544fca1f20345fd30cd219b737c6e58d3 \
--hash=sha256:3d4ba9a449e9364a936a27322b20d32d8b166553bfe63059bd21527e681e2fad \
--hash=sha256:4036cc9c7983a2b1f2556d574d2eb2154ac6ed55114761685657e38782b23f52 \
--hash=sha256:424538266794db2861db4922b05d729ade0940ee69dcf0591ce8f69784db0e11 \
--hash=sha256:4b7589765348d78fb4e5fb6ea35d07564e387da2fc5efff62e0222971f155f68 \
--hash=sha256:4c1eeb3fb8eb9e0190bebafd0462936f75717687117339f708f395fe455acc73 \
--hash=sha256:587c38849b853b157706407e9ebdca8fd12f45869edb56defbef2daa5fb0812b \
--hash=sha256:59a6e5a265f7cfc05f76e3bb53eca2e0dfe90f05e07e849930fecd6abb8f40b4 \
--hash=sha256:5a03eaf7ec24078ad64a07f02e30060aaf22b91dedf31a6b24d0d98d2bba7f48 \
--hash=sha256:5ef83b107f50db3f9ae40f69e34b3bd9337456c5a7fe3461c7abf8b75dd666a2 \
--hash=sha256:630d0bd7a293ad2fc8b4b94e5758c8b2536fdf36c05f1681270203e463cbfa9b \
--hash=sha256:695340f698a5f56f795b2836abe6fb576e7c53d48cd155ad2f80fd24bc63a040 \
--hash=sha256:6fbcee1a8f056af07ecd344482f711f563a9eb1c2cad192e87df00338ec3cdb0 \
--hash=sha256:73feb83bb41c32811973b8565f3705caf01d928d972b72042b44e97c71fd70d1 \
--hash=sha256:7ab934dd13b1c5e94b692b1e01bd87e4488cb746e3a50f798cb9464fd128374b \
--hash=sha256:7db53b5cdd2917b6eaadd0b1251cf4e7d96f4a8d24e174bdbdf2f65b5ea7994d \
--hash=sha256:8badf70446042553a773547a61fecaa734b55dc738cacf20c56ab04b77425e43 \
--hash=sha256:8c934bd088eed6174210942761e38ee81d28c46de0132ebb1801dbe36a390dcc \
--hash=sha256:9516add7256b6713ec08359b7b05aeff8850c98d357784c7205b2e60aa2513fa \
--hash=sha256:9ed43fa22c6436f7957df036331f8fe4efa7af132054e1844918866cd228af6c \
--hash=sha256:a09c1211959903a479e389685b7feb8a17f59ec5a4ef9afde7650bd5eabc2777 \
--hash=sha256:a386c1061bf98e7ea4758e4313c0ab5ecf57af341ef0f43a0bf26c2477b5c268 \
--hash=sha256:a3d0e2087dba64c86a6b254f43e12d264b636a39e88c5cc0a01a7c71bcfdab7e \
--hash=sha256:b56efee146c98dbf2cf5cffc61b9829d1e94442df4d7398b26892a53992d3547 \
--hash=sha256:b5c2705afa83f49bd91962a4094b6b082f94aef7626365ab3f8f4bd159c5acf3 \
--hash=sha256:b971bdefdd75096163dd4261c74be813c4508477e39ff7b92191dea19f24cd37 \
--hash=sha256:bab7ec4bb501743edc63609320aaec8cd9188b396354f482f4de4d40a9d10721 \
--hash=sha256:c6f31f281012235ad08f9a560976cc2fc9c95c17604ff3ab20120fe480169bca \
--hash=sha256:c770885b28fb399aaf2a65bbd1c12bf6f307ffd112d6a76c5231a94276f0c497 \
--hash=sha256:c9f08ea03114a637dab06cedb2e914da9dc67fa52c6015c018ff43fdde25b9c2 \
--hash=sha256:cacb29f420cfeb9283b803263c3b9a068924474ff19ca126ba9103e1278dfa44 \
--hash=sha256:cc3f49e65ea6e0d5d9bd60368684fe52a704d46f9e7fc413918f18d046ec40e1 \
--hash=sha256:cdbcd376716d6b7fbfeedd687a6c4be019c5a5671b35f804ba76a4c0a778cba4 \
--hash=sha256:ce37f215223af94ef0f75ac68ea096f9f8e8c8ec7d6e8c346ee45c0d363f0479 \
--hash=sha256:ce9f3bde4e9b031eaf1eb61df95c1401427029ea1bfddb8621c1161dcb0fa02e \
--hash=sha256:cee6291bb4fed184f1c2b663606a115c743df98a537c969c3c64b49989da96c2 \
--hash=sha256:d06f4fc7acf3cabd6d74941d53329e06bab00a8fe10e4df2714f0b134bfc64ef \
--hash=sha256:dadbcce51a10c07b7c72b0ce4a25e4b6dcb0c0372846afb8e5b6307a121eb99f \
--hash=sha256:dbbf012be5f32533a490709ad597ad8a8ff80c582a95adc8d62af664e532f9ca \
--hash=sha256:df01d6c4c81e15a7c88337b795bb7595a8596e92310266b5072c7e301168efbd \
--hash=sha256:e4dc07e95495923d6fd4d6c27bf70769425b71c89053083843fd78f378558996 \
--hash=sha256:e89641f5175d65e2dbb44db15fe4ea48fade5d5bbb9868fdc2b4fce22f4a469d \
--hash=sha256:e9570ad567f880ef675673992222746a124b9595506826b210fbe0ce3f0499cd \
--hash=sha256:eb92e47c92fcbcdc692f428da67db33337fa213756f7adb6a011f7b5a7a20740 \
--hash=sha256:f39ae2f63f37472c17b4990f794035c9890418b1b8cca75c01193f3c8d3e01be \
--hash=sha256:f413ce6e07e0d0dc9c433228727b619871532674b45165abafe201f200cc215f \
--hash=sha256:f91f927a3215b8907e214af77200250bb6aae36eca3f760f89780d13e495388d \
--hash=sha256:f9ea02ef40bb83823b2b04964459d281688fe173e20643870bb5d2edf68bc836
crispy-bootstrap5==2025.6 \
--hash=sha256:a343aa128b4383f35f00295b94de2b10862f2a4f24eda21fa6ead45234c07050 \
--hash=sha256:f1bde7cac074c650fc82f31777d4a4cfd0df2512c68bc4128f259c75d3daada4
coverage==7.8.0 \
--hash=sha256:04bfec25a8ef1c5f41f5e7e5c842f6b615599ca8ba8391ec33a9290d9d2db3a3 \
--hash=sha256:18c5ae6d061ad5b3e7eef4363fb27a0576012a7447af48be6c75b88494c6cf25 \
--hash=sha256:2e4b6b87bb0c846a9315e3ab4be2d52fac905100565f4b92f02c445c8799e257 \
--hash=sha256:379fe315e206b14e21db5240f89dc0774bdd3e25c3c58c2c733c99eca96f1ada \
--hash=sha256:42421e04069fb2cbcbca5a696c4050b84a43b05392679d4068acbe65449b5c64 \
--hash=sha256:554fec1199d93ab30adaa751db68acec2b41c5602ac944bb19187cb9a41a8067 \
--hash=sha256:581a40c7b94921fffd6457ffe532259813fc68eb2bdda60fa8cc343414ce3733 \
--hash=sha256:5aaeb00761f985007b38cf463b1d160a14a22c34eb3f6a39d9ad6fc27cb73008 \
--hash=sha256:5ac46d0c2dd5820ce93943a501ac5f6548ea81594777ca585bf002aa8854cacd \
--hash=sha256:771eb7587a0563ca5bb6f622b9ed7f9d07bd08900f7589b4febff05f469bea00 \
--hash=sha256:7a3d62b3b03b4b6fd41a085f3574874cf946cb4604d2b4d3e8dca8cd570ca501 \
--hash=sha256:95aa6ae391a22bbbce1b77ddac846c98c5473de0372ba5c463480043a07bff42 \
--hash=sha256:a9abbccd778d98e9c7e85038e35e91e67f5b520776781d9a1e2ee9d400869487 \
--hash=sha256:ad80e6b4a0c3cb6f10f29ae4c60e991f424e6b14219d46f1e7d442b938ee68a4 \
--hash=sha256:b87eb6fc9e1bb8f98892a2458781348fa37e6925f35bb6ceb9d4afd54ba36c73 \
--hash=sha256:d1ba00ae33be84066cfbe7361d4e04dec78445b2b88bdb734d0d1cbab916025a \
--hash=sha256:d766a4f0e5aa1ba056ec3496243150698dc0481902e2b8559314368717be82b1 \
--hash=sha256:dbf364b4c5e7bae9250528167dfe40219b62e2d573c854d74be213e1e52069f7 \
--hash=sha256:dd19608788b50eed889e13a5d71d832edc34fc9dfce606f66e8f9f917eef910d \
--hash=sha256:e013b07ba1c748dacc2a80e69a46286ff145935f260eb8c72df7185bf048f502 \
--hash=sha256:f319bae0321bc838e205bf9e5bc28f0a3165f30c203b610f17ab5552cff90323 \
--hash=sha256:f3c38e4e5ccbdc9198aecc766cedbb134b2d89bf64533973678dfcf07effd883
crispy-bootstrap5==2025.4 \
--hash=sha256:51efa19c7d40e339774a6fe23407e83b95b7634cad6de70fd1f1093131bea1d9 \
--hash=sha256:d675ea7e245048905077dfe16bf1fa1ee16842f52fe88164ccc8a5e2d11119b3
# via livegraphsdjango
cron-descriptor==2.0.6 \
--hash=sha256:3a1c0d837c0e5a32e415f821b36cf758eb92d510e6beff8fbfe4fa16573d93d6 \
--hash=sha256:e39d2848e1d8913cfb6e3452e701b5eec662ee18bea8cc5aa53ee1a7bb217157
cron-descriptor==1.4.5 \
--hash=sha256:736b3ae9d1a99bc3dbfc5b55b5e6e7c12031e7ba5de716625772f8b02dcd6013 \
--hash=sha256:f51ce4ffc1d1f2816939add8524f206c376a42c87a5fca3091ce26725b3b1bca
# via django-celery-beat
distlib==0.4.0 \
--hash=sha256:9659f7d87e46584a30b5780e43ac7a2143098441670ff0a49d5f9034c54a6c16 \
--hash=sha256:feec40075be03a04501a973d81f633735b4b69f98b05450592310c0f401a4e0d
distlib==0.3.9 \
--hash=sha256:47f8c22fd27c27e25a65601af709b38e4f0a45ea4fc2e710f65755fa8caaaf87 \
--hash=sha256:a60f20dea646b8a33f3e7772f74dc0b2d0772d2837ee1342a00645c81edf9403
# via virtualenv
django==5.2.8 \
--hash=sha256:23254866a5bb9a2cfa6004e8b809ec6246eba4b58a7589bc2772f1bcc8456c7f \
--hash=sha256:37e687f7bd73ddf043e2b6b97cfe02fcbb11f2dbb3adccc6a2b18c6daa054d7f
django==5.2.1 \
--hash=sha256:57fe1f1b59462caed092c80b3dd324fd92161b620d59a9ba9181c34746c97284 \
--hash=sha256:a9b680e84f9a0e71da83e399f1e922e1ab37b2173ced046b541c72e1589a5961
# via
# crispy-bootstrap5
# django-allauth
@@ -187,9 +137,8 @@ django==5.2.8 \
# django-stubs-ext
# django-timezone-field
# livegraphsdjango
django-allauth==65.13.0 \
--hash=sha256:119c0cf1cc2e0d1a0fe2f13588f30951d64989256084de2d60f13ab9308f9fa0 \
--hash=sha256:7d7b7e7ad603eb3864c142f051e2cce7be2f9a9c6945a51172ec83d48c6c843b
django-allauth==65.8.0 \
--hash=sha256:9da589d99d412740629333a01865a90c95c97e0fae0cde789aa45a8fda90e83b
# via livegraphsdjango
django-celery-beat==2.8.1 \
--hash=sha256:da2b1c6939495c05a551717509d6e3b79444e114a027f7b77bf3727c2a39d171 \
@@ -201,150 +150,117 @@ django-crispy-forms==2.4 \
# via
# crispy-bootstrap5
# livegraphsdjango
django-debug-toolbar==6.1.0 \
--hash=sha256:e214dea4494087e7cebdcea84223819c5eb97f9de3110a3665ad673f0ba98413 \
--hash=sha256:e962ec350c9be8bdba918138e975a9cdb193f60ec396af2bb71b769e8e165519
django-stubs==5.2.7 \
--hash=sha256:2864e74b56ead866ff1365a051f24d852f6ed02238959664f558a6c9601c95bf \
--hash=sha256:2a07e47a8a867836a763c6bba8bf3775847b4fd9555bfa940360e32d0ee384a1
django-stubs-ext==5.2.7 \
--hash=sha256:0466a7132587d49c5bbe12082ac9824d117a0dedcad5d0ada75a6e0d3aca6f60 \
--hash=sha256:b690655bd4cb8a44ae57abb314e0995dc90414280db8f26fff0cb9fb367d1cac
django-debug-toolbar==5.2.0 \
--hash=sha256:15627f4c2836a9099d795e271e38e8cf5204ccd79d5dbcd748f8a6c284dcd195 \
--hash=sha256:9e7f0145e1a1b7d78fcc3b53798686170a5b472d9cf085d88121ff823e900821
django-stubs==5.2.0 \
--hash=sha256:07e25c2d3cbff5be540227ff37719cc89f215dfaaaa5eb038a75b01bbfbb2722 \
--hash=sha256:cd52da033489afc1357d6245f49e3cc57bf49015877253fb8efc6722ea3d2d2b
django-stubs-ext==5.2.0 \
--hash=sha256:00c4ae307b538f5643af761a914c3f8e4e3f25f4e7c6d7098f1906c0d8f2aac9 \
--hash=sha256:b27ae0aab970af4894ba4e9b3fcd3e03421dc8731516669659ee56122d148b23
# via django-stubs
django-timezone-field==7.1 \
--hash=sha256:93914713ed882f5bccda080eda388f7006349f25930b6122e9b07bf8db49c4b4 \
--hash=sha256:b3ef409d88a2718b566fabe10ea996f2838bc72b22d3a2900c0aa905c761380c
# via django-celery-beat
filelock==3.20.0 \
--hash=sha256:339b4732ffda5cd79b13f4e2711a31b0365ce445d95d243bb996273d072546a2 \
--hash=sha256:711e943b4ec6be42e1d4e6690b48dc175c822967466bb31c0c293f34334c13f4
filelock==3.18.0 \
--hash=sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2 \
--hash=sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de
# via virtualenv
greenlet==3.2.4 ; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64' \
--hash=sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b \
--hash=sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681 \
--hash=sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735 \
--hash=sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d \
--hash=sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31 \
--hash=sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671 \
--hash=sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269 \
--hash=sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f \
--hash=sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337 \
--hash=sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0 \
--hash=sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b \
--hash=sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b \
--hash=sha256:710638eb93b1fa52823aa91bf75326f9ecdfd5e0466f00789246a5280f4ba0fc \
--hash=sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1 \
--hash=sha256:c17b6b34111ea72fc5a4e4beec9711d2226285f0386ea83477cbb97c30a3f3a5 \
--hash=sha256:c5111ccdc9c88f423426df3fd1811bfc40ed66264d35aa373420a34377efc98a \
--hash=sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929 \
--hash=sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945 \
--hash=sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae \
--hash=sha256:d76383238584e9711e20ebe14db6c88ddcedc1829a9ad31a584389463b5aa504 \
--hash=sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01
greenlet==3.2.2 ; (python_full_version < '3.14' and platform_machine == 'AMD64') or (python_full_version < '3.14' and platform_machine == 'WIN32') or (python_full_version < '3.14' and platform_machine == 'aarch64') or (python_full_version < '3.14' and platform_machine == 'amd64') or (python_full_version < '3.14' and platform_machine == 'ppc64le') or (python_full_version < '3.14' and platform_machine == 'win32') or (python_full_version < '3.14' and platform_machine == 'x86_64') \
--hash=sha256:02a98600899ca1ca5d3a2590974c9e3ec259503b2d6ba6527605fcd74e08e207 \
--hash=sha256:055916fafad3e3388d27dd68517478933a97edc2fc54ae79d3bec827de2c64c4 \
--hash=sha256:1919cbdc1c53ef739c94cf2985056bcc0838c1f217b57647cbf4578576c63825 \
--hash=sha256:1e76106b6fc55fa3d6fe1c527f95ee65e324a13b62e243f77b48317346559708 \
--hash=sha256:2593283bf81ca37d27d110956b79e8723f9aa50c4bcdc29d3c0543d4743d2763 \
--hash=sha256:2dc5c43bb65ec3669452af0ab10729e8fdc17f87a1f2ad7ec65d4aaaefabf6bf \
--hash=sha256:3885f85b61798f4192d544aac7b25a04ece5fe2704670b4ab73c2d2c14ab740d \
--hash=sha256:3ab7194ee290302ca15449f601036007873028712e92ca15fc76597a0aeb4c59 \
--hash=sha256:45f9f4853fb4cc46783085261c9ec4706628f3b57de3e68bae03e8f8b3c0de51 \
--hash=sha256:6fadd183186db360b61cb34e81117a096bff91c072929cd1b529eb20dd46e6c5 \
--hash=sha256:85f3e248507125bf4af607a26fd6cb8578776197bd4b66e35229cdf5acf1dfbf \
--hash=sha256:89c69e9a10670eb7a66b8cef6354c24671ba241f46152dd3eed447f79c29fb5b \
--hash=sha256:9ea5231428af34226c05f927e16fc7f6fa5e39e3ad3cd24ffa48ba53a47f4240 \
--hash=sha256:ad053d34421a2debba45aa3cc39acf454acbcd025b3fc1a9f8a0dee237abd485 \
--hash=sha256:b50a8c5c162469c3209e5ec92ee4f95c8231b11db6a04db09bbe338176723bb8 \
--hash=sha256:ba30e88607fb6990544d84caf3c706c4b48f629e18853fc6a646f82db9629418 \
--hash=sha256:decb0658ec19e5c1f519faa9a160c0fc85a41a7e6654b3ce1b44b939f8bf1325 \
--hash=sha256:fe46d4f8e94e637634d54477b0cfabcf93c53f29eedcbdeecaf2af32029b4421
# via sqlalchemy
gunicorn==23.0.0 \
--hash=sha256:ec400d38950de4dfd418cff8328b2c8faed0edb0d517d3394e457c317908ca4d \
--hash=sha256:f014447a0101dc57e294f6c18ca6b40227a4c90e9bdb586042628030cba004ec
# via livegraphsdjango
identify==2.6.15 \
--hash=sha256:1181ef7608e00704db228516541eb83a88a9f94433a8c80bb9b5bd54b1d81757 \
--hash=sha256:e4f4864b96c6557ef2a1e1c951771838f4edc9df3a72ec7118b338801b11c7bf
identify==2.6.10 \
--hash=sha256:45e92fd704f3da71cc3880036633f48b4b7265fd4de2b57627cb157216eb7eb8 \
--hash=sha256:5f34248f54136beed1a7ba6a6b5c4b6cf21ff495aac7c359e1ef831ae3b8ab25
# via pre-commit
idna==3.11 \
--hash=sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea \
--hash=sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902
idna==3.10 \
--hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \
--hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3
# via requests
iniconfig==2.3.0 \
--hash=sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730 \
--hash=sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12
iniconfig==2.1.0 \
--hash=sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7 \
--hash=sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760
# via pytest
kombu==5.5.4 \
--hash=sha256:886600168275ebeada93b888e831352fe578168342f0d1d5833d88ba0d847363 \
--hash=sha256:a12ed0557c238897d8e518f1d1fdf84bd1516c5e305af2dacd85c2015115feb8
kombu==5.5.3 \
--hash=sha256:021a0e11fcfcd9b0260ef1fb64088c0e92beb976eb59c1dfca7ddd4ad4562ea2 \
--hash=sha256:5b0dbceb4edee50aa464f59469d34b97864be09111338cfb224a10b6a163909b
# via celery
markdown-it-py==4.0.0 \
--hash=sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147 \
--hash=sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3
markdown-it-py==3.0.0 \
--hash=sha256:355216845c60bd96232cd8d8c40e8f9765cc86f46880e43a8fd22dc1a1a8cab1 \
--hash=sha256:e3f60a94fa066dc52ec76661e37c851cb232d92f9886b15cb560aaada2df8feb
# via rich
mdurl==0.1.2 \
--hash=sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8 \
--hash=sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba
# via markdown-it-py
mypy==1.18.2 \
--hash=sha256:06a398102a5f203d7477b2923dda3634c36727fa5c237d8f859ef90c42a9924b \
--hash=sha256:07b8b0f580ca6d289e69209ec9d3911b4a26e5abfde32228a288eb79df129fcc \
--hash=sha256:0e2785a84b34a72ba55fb5daf079a1003a34c05b22238da94fcae2bbe46f3544 \
--hash=sha256:20c02215a080e3a2be3aa50506c67242df1c151eaba0dcbc1e4e557922a26075 \
--hash=sha256:22a1748707dd62b58d2ae53562ffc4d7f8bcc727e8ac7cbc69c053ddc874d47e \
--hash=sha256:62f0e1e988ad41c2a110edde6c398383a889d95b36b3e60bcf155f5164c4fdce \
--hash=sha256:6ca1e64b24a700ab5ce10133f7ccd956a04715463d30498e64ea8715236f9c9c \
--hash=sha256:749b5f83198f1ca64345603118a6f01a4e99ad4bf9d103ddc5a3200cc4614adf \
--hash=sha256:7ab28cc197f1dd77a67e1c6f35cd1f8e8b73ed2217e4fc005f9e6a504e46e7ba \
--hash=sha256:8795a039bab805ff0c1dfdb8cd3344642c2b99b8e439d057aba30850b8d3423d \
--hash=sha256:a431a6f1ef14cf8c144c6b14793a23ec4eae3db28277c358136e79d7d062f62d \
--hash=sha256:c3ad2afadd1e9fea5cf99a45a822346971ede8685cc581ed9cd4d42eaf940986 \
--hash=sha256:d924eef3795cc89fecf6bedc6ed32b33ac13e8321344f6ddbf8ee89f706c05cb \
--hash=sha256:ed4482847168439651d3feee5833ccedbf6657e964572706a2adb1f7fa4dfe2e
mypy==1.15.0 \
--hash=sha256:404534629d51d3efea5c800ee7c42b72a6554d6c400e6a79eafe15d11341fd43 \
--hash=sha256:5469affef548bd1895d86d3bf10ce2b44e33d86923c29e4d675b3e323437ea3e \
--hash=sha256:811aeccadfb730024c5d3e326b2fbe9249bb7413553f15499a4050f7c30e801d \
--hash=sha256:93faf3fdb04768d44bf28693293f3904bbb555d076b781ad2530214ee53e3445 \
--hash=sha256:98b7b9b9aedb65fe628c62a6dc57f6d5088ef2dfca37903a7d9ee374d03acca5 \
--hash=sha256:b9378e2c00146c44793c98b8d5a61039a048e31f429fb0eb546d93f4b000bedf \
--hash=sha256:baefc32840a9f00babd83251560e0ae1573e2f9d1b067719479bfb0e987c6357 \
--hash=sha256:c43a7682e24b4f576d93072216bf56eeff70d9140241f9edec0c104d0c515036
mypy-extensions==1.1.0 \
--hash=sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505 \
--hash=sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558
# via
# black
# mypy
narwhals==2.10.2 \
--hash=sha256:059cd5c6751161b97baedcaf17a514c972af6a70f36a89af17de1a0caf519c43 \
--hash=sha256:ff738a08bc993cbb792266bec15346c1d85cc68fdfe82a23283c3713f78bd354
narwhals==1.39.1 \
--hash=sha256:68d0f29c760f1a9419ada537f35f21ff202b0be1419e6d22135a0352c6d96deb \
--hash=sha256:cf15389e6f8c5321e8cd0ca8b5bace3b1aea5f5622fa59dfd64821998741d836
# via plotly
nodeenv==1.9.1 \
--hash=sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f \
--hash=sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9
# via pre-commit
numpy==2.3.4 \
--hash=sha256:035796aaaddfe2f9664b9a9372f089cfc88bd795a67bd1bfe15e6e770934cf64 \
--hash=sha256:043885b4f7e6e232d7df4f51ffdef8c36320ee9d5f227b380ea636722c7ed12e \
--hash=sha256:04a69abe45b49c5955923cf2c407843d1c85013b424ae8a560bba16c92fe44a0 \
--hash=sha256:0f2bcc76f1e05e5ab58893407c63d90b2029908fa41f9f1cc51eecce936c3365 \
--hash=sha256:15eea9f306b98e0be91eb344a94c0e630689ef302e10c2ce5f7e11905c704f9c \
--hash=sha256:15fb27364ed84114438fff8aaf998c9e19adbeba08c0b75409f8c452a8692c52 \
--hash=sha256:22758999b256b595cf0b1d102b133bb61866ba5ceecf15f759623b64c020c9ec \
--hash=sha256:2ec646892819370cf3558f518797f16597b4e4669894a2ba712caccc9da53f1f \
--hash=sha256:3634093d0b428e6c32c3a69b78e554f0cd20ee420dcad5a9f3b2a63762ce4197 \
--hash=sha256:3da3491cee49cf16157e70f607c03a217ea6647b1cea4819c4f48e53d49139b9 \
--hash=sha256:40cc556d5abbc54aabe2b1ae287042d7bdb80c08edede19f0c0afb36ae586f37 \
--hash=sha256:4ee6a571d1e4f0ea6d5f22d6e5fbd6ed1dc2b18542848e1e7301bd190500c9d7 \
--hash=sha256:56209416e81a7893036eea03abcb91c130643eb14233b2515c90dcac963fe99d \
--hash=sha256:5e199c087e2aa71c8f9ce1cb7a8e10677dc12457e7cc1be4798632da37c3e86e \
--hash=sha256:62b2198c438058a20b6704351b35a1d7db881812d8512d67a69c9de1f18ca05f \
--hash=sha256:6d9cd732068e8288dbe2717177320723ccec4fb064123f0caf9bbd90ab5be868 \
--hash=sha256:7c26b0b2bf58009ed1f38a641f3db4be8d960a417ca96d14e5b06df1506d41ff \
--hash=sha256:817e719a868f0dacde4abdfc5c1910b301877970195db9ab6a5e2c4bd5b121f7 \
--hash=sha256:81c3e6d8c97295a7360d367f9f8553973651b76907988bb6066376bc2252f24e \
--hash=sha256:838f045478638b26c375ee96ea89464d38428c69170360b23a1a50fa4baa3562 \
--hash=sha256:84f01a4d18b2cc4ade1814a08e5f3c907b079c847051d720fad15ce37aa930b6 \
--hash=sha256:85597b2d25ddf655495e2363fe044b0ae999b75bc4d630dc0d886484b03a5eb0 \
--hash=sha256:85d9fb2d8cd998c84d13a79a09cc0c1091648e848e4e6249b0ccd7f6b487fa26 \
--hash=sha256:85e071da78d92a214212cacea81c6da557cab307f2c34b5f85b628e94803f9c0 \
--hash=sha256:863e3b5f4d9915aaf1b8ec79ae560ad21f0b8d5e3adc31e73126491bb86dee1d \
--hash=sha256:86966db35c4040fdca64f0816a1c1dd8dbd027d90fca5a57e00e1ca4cd41b879 \
--hash=sha256:8b5a9a39c45d852b62693d9b3f3e0fe052541f804296ff401a72a1b60edafb29 \
--hash=sha256:8dc20bde86802df2ed8397a08d793da0ad7a5fd4ea3ac85d757bf5dd4ad7c252 \
--hash=sha256:962064de37b9aef801d33bc579690f8bfe6c5e70e29b61783f60bcba838a14d6 \
--hash=sha256:9cb177bc55b010b19798dc5497d540dea67fd13a8d9e882b2dae71de0cf09eb3 \
--hash=sha256:9d729d60f8d53a7361707f4b68a9663c968882dd4f09e0d58c044c8bf5faee7b \
--hash=sha256:a13fc473b6db0be619e45f11f9e81260f7302f8d180c49a22b6e6120022596b3 \
--hash=sha256:a700a4031bc0fd6936e78a752eefb79092cecad2599ea9c8039c548bc097f9bc \
--hash=sha256:a7d018bfedb375a8d979ac758b120ba846a7fe764911a64465fd87b8729f4a6a \
--hash=sha256:b6c231c9c2fadbae4011ca5e7e83e12dc4a5072f1a1d85a0a7b3ed754d145a40 \
--hash=sha256:bd0c630cf256b0a7fd9d0a11c9413b42fef5101219ce6ed5a09624f5a65392c7 \
--hash=sha256:c090d4860032b857d94144d1a9976b8e36709e40386db289aaf6672de2a81966 \
--hash=sha256:d5e081bc082825f8b139f9e9fe42942cb4054524598aaeb177ff476cc76d09d2 \
--hash=sha256:d7315ed1dab0286adca467377c8381cd748f3dc92235f22a7dfc42745644a96a \
--hash=sha256:e1708fac43ef8b419c975926ce1eaf793b0c13b7356cfab6ab0dc34c0a02ac0f \
--hash=sha256:e73d63fd04e3a9d6bc187f5455d81abfad05660b212c8804bf3b407e984cd2bc \
--hash=sha256:e8370eb6925bb8c1c4264fec52b0384b44f675f191df91cbe0140ec9f0955646 \
--hash=sha256:ecb63014bb7f4ce653f8be7f1df8cbc6093a5a2811211770f6606cc92b5a78fd \
--hash=sha256:fc8a63918b04b8571789688b2780ab2b4a33ab44bfe8ccea36d3eba51228c953 \
--hash=sha256:fea80f4f4cf83b54c3a051f2f727870ee51e22f0248d3114b8e755d160b38cfb
numpy==2.2.6 \
--hash=sha256:038613e9fb8c72b0a41f025a7e4c3f0b7a1b5d768ece4796b674c8f3fe13efff \
--hash=sha256:0811bb762109d9708cca4d0b13c4f67146e3c3b7cf8d34018c722adb2d957c84 \
--hash=sha256:0bca768cd85ae743b2affdc762d617eddf3bcf8724435498a1e80132d04879e6 \
--hash=sha256:1bc23a79bfabc5d056d106f9befb8d50c31ced2fbc70eedb8155aec74a45798f \
--hash=sha256:287cc3162b6f01463ccd86be154f284d0893d2b3ed7292439ea97eafa8170e0b \
--hash=sha256:389d771b1623ec92636b0786bc4ae56abafad4a4c513d36a55dce14bd9ce8571 \
--hash=sha256:55a4d33fa519660d69614a9fad433be87e5252f4b03850642f88993f7b2ca566 \
--hash=sha256:5bd4fc3ac8926b3819797a7c0e2631eb889b4118a9898c84f585a54d475b7e40 \
--hash=sha256:5beb72339d9d4fa36522fc63802f469b13cdbe4fdab4a288f0c441b74272ebfd \
--hash=sha256:6031dd6dfecc0cf9f668681a37648373bddd6421fff6c66ec1624eed0180ee06 \
--hash=sha256:8e9ace4a37db23421249ed236fdcdd457d671e25146786dfc96835cd951aa7c1 \
--hash=sha256:b0544343a702fa80c95ad5d3d608ea3599dd54d4632df855e4c8d24eb6ecfa1c \
--hash=sha256:b4f13750ce79751586ae2eb824ba7e1e8dba64784086c98cdbbcc6a42112ce0d \
--hash=sha256:e1dda9c7e08dc141e0247a5b8f49cf05984955246a327d4c48bda16821947b2f \
--hash=sha256:e29554e2bef54a90aa5cc07da6ce955accb83f21ab5de01a62c8478897b264fd \
--hash=sha256:e3143e4451880bed956e706a3220b4e5cf6172ef05fcc397f6f36a550b1dd868 \
--hash=sha256:f1372f041402e37e5e633e586f62aa53de2eac8d98cbfb822806ce4bbefcb74d \
--hash=sha256:f447e6acb680fd307f40d3da4852208af94afdfab89cf850986c3ca00562f4fa \
--hash=sha256:f92729c95468a2f4f15e9bb94c432a9229d0d50de67304399627a943201baa2f \
--hash=sha256:fc0c5673685c508a142ca65209b4e79ed6740a4ed6b2267dbba90f34b0b3cfda \
--hash=sha256:fee4236c876c4e8369388054d02d0e9bb84821feb1a64dd59e137e6511a551f8
# via
# livegraphsdjango
# pandas
@@ -354,106 +270,67 @@ packaging==25.0 \
# via
# black
# gunicorn
# kombu
# plotly
# pytest
pandas==2.3.3 \
--hash=sha256:0242fe9a49aa8b4d78a4fa03acb397a58833ef6199e9aa40a95f027bb3a1b6e7 \
--hash=sha256:1611aedd912e1ff81ff41c745822980c49ce4a7907537be8692c8dbc31924593 \
--hash=sha256:1b07204a219b3b7350abaae088f451860223a52cfb8a6c53358e7948735158e5 \
--hash=sha256:2462b1a365b6109d275250baaae7b760fd25c726aaca0054649286bcfbb3e8ec \
--hash=sha256:2e3ebdb170b5ef78f19bfb71b0dc5dc58775032361fa188e814959b74d726dd5 \
--hash=sha256:318d77e0e42a628c04dc56bcef4b40de67918f7041c2b061af1da41dcff670ac \
--hash=sha256:3869faf4bd07b3b66a9f462417d0ca3a9df29a9f6abd5d0d0dbab15dac7abe87 \
--hash=sha256:4e0a175408804d566144e170d0476b15d78458795bb18f1304fb94160cabf40c \
--hash=sha256:56851a737e3470de7fa88e6131f41281ed440d29a9268dcbf0002da5ac366713 \
--hash=sha256:6253c72c6a1d990a410bc7de641d34053364ef8bcd3126f7e7450125887dffe3 \
--hash=sha256:6435cb949cb34ec11cc9860246ccb2fdc9ecd742c12d3304989017d53f039a78 \
--hash=sha256:6d2cefc361461662ac48810cb14365a365ce864afe85ef1f447ff5a1e99ea81c \
--hash=sha256:74ecdf1d301e812db96a465a525952f4dde225fdb6d8e5a521d47e1f42041e21 \
--hash=sha256:75ea25f9529fdec2d2e93a42c523962261e567d250b0013b16210e1d40d7c2e5 \
--hash=sha256:900f47d8f20860de523a1ac881c4c36d65efcb2eb850e6948140fa781736e110 \
--hash=sha256:93c2d9ab0fc11822b5eece72ec9587e172f63cff87c00b062f6e37448ced4493 \
--hash=sha256:a21d830e78df0a515db2b3d2f5570610f5e6bd2e27749770e8bb7b524b89b450 \
--hash=sha256:a45c765238e2ed7d7c608fc5bc4a6f88b642f2f01e70c0c23d2224dd21829d86 \
--hash=sha256:bdcd9d1167f4885211e401b3036c0c8d9e274eee67ea8d0758a256d60704cfe8 \
--hash=sha256:c46467899aaa4da076d5abc11084634e2d197e9460643dd455ac3db5856b24d6 \
--hash=sha256:c4fc4c21971a1a9f4bdb4c73978c7f7256caa3e62b323f70d6cb80db583350bc \
--hash=sha256:d051c0e065b94b7a3cea50eb1ec32e912cd96dba41647eb24104b6c6c14c5788 \
--hash=sha256:e05e1af93b977f7eafa636d043f9f94c7ee3ac81af99c13508215942e64c993b \
--hash=sha256:e32e7cc9af0f1cc15548288a51a3b681cc2a219faa838e995f7dc53dbab1062d \
--hash=sha256:ee15f284898e7b246df8087fc82b87b01686f98ee67d85a17b7ab44143a3a9a0 \
--hash=sha256:ee67acbbf05014ea6c763beb097e03cd629961c8a632075eeb34247120abcb4b \
--hash=sha256:f8bfc0e12dc78f777f323f55c58649591b2cd0c43534e8355c51d3fede5f4dee
pandas==2.2.3 \
--hash=sha256:15c0e1e02e93116177d29ff83e8b1619c93ddc9c49083f237d4312337a61165d \
--hash=sha256:1db71525a1538b30142094edb9adc10be3f3e176748cd7acc2240c2f2e5aa3a4 \
--hash=sha256:22a9d949bfc9a502d320aa04e5d02feab689d61da4e7764b62c30b991c42c5f0 \
--hash=sha256:3508d914817e153ad359d7e069d752cdd736a247c322d932eb89e6bc84217f28 \
--hash=sha256:38cf8125c40dae9d5acc10fa66af8ea6fdf760b2714ee482ca691fc66e6fcb18 \
--hash=sha256:3b71f27954685ee685317063bf13c7709a7ba74fc996b84fc6821c59b0f06468 \
--hash=sha256:4f18ba62b61d7e192368b84517265a99b4d7ee8912f8708660fb4a366cc82667 \
--hash=sha256:61c5ad4043f791b61dd4752191d9f07f0ae412515d59ba8f005832a532f8736d \
--hash=sha256:6374c452ff3ec675a8f46fd9ab25c4ad0ba590b71cf0656f8b6daa5202bca3fb \
--hash=sha256:800250ecdadb6d9c78eae4990da62743b857b470883fa27f652db8bdde7f6659 \
--hash=sha256:ad5b65698ab28ed8d7f18790a0dc58005c7629f227be9ecc1072aa74c0c1d43a \
--hash=sha256:ba96630bc17c875161df3818780af30e43be9b166ce51c9a18c1feae342906c2 \
--hash=sha256:f00d1345d84d8c86a63e476bb4955e46458b304b9575dcf71102b5c705320015 \
--hash=sha256:f3a255b2c19987fbbe62a9dfd6cff7ff2aa9ccab3fc75218fd4b7530f01efa24
# via livegraphsdjango
pathspec==0.12.1 \
--hash=sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08 \
--hash=sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712
# via
# black
# mypy
platformdirs==4.5.0 \
--hash=sha256:70ddccdd7c99fc5942e9fc25636a8b34d04c24b335100223152c2803e4063312 \
--hash=sha256:e578a81bb873cbb89a41fcc904c7ef523cc18284b7e3b3ccf06aca1403b7ebd3
# via black
pbr==6.1.1 \
--hash=sha256:38d4daea5d9fa63b3f626131b9d34947fd0c8be9b05a29276870580050a25a76 \
--hash=sha256:93ea72ce6989eb2eed99d0f75721474f69ad88128afdef5ac377eb797c4bf76b
# via stevedore
platformdirs==4.3.8 \
--hash=sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc \
--hash=sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4
# via
# black
# virtualenv
plotly==6.4.0 \
--hash=sha256:68c6db2ed2180289ef978f087841148b7efda687552276da15a6e9b92107052a \
--hash=sha256:a1062eafbdc657976c2eedd276c90e184ccd6c21282a5e9ee8f20efca9c9a4c5
plotly==6.1.0 \
--hash=sha256:a29d3ed523c9d7960095693af1ee52689830df0f9c6bae3e5e92c20c4f5684c3 \
--hash=sha256:f13f497ccc2d97f06f771a30b27fab0cbd220f2975865f4ecbc75057135521de
# via livegraphsdjango
pluggy==1.6.0 \
--hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \
--hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746
# via pytest
pre-commit==4.3.0 \
--hash=sha256:2b0747ad7e6e967169136edffee14c16e148a778a54e4f967921aa1ebf2308d8 \
--hash=sha256:499fe450cc9d42e9d58e606262795ecb64dd05438943c62b66f6a8673da30b16
prompt-toolkit==3.0.52 \
--hash=sha256:28cde192929c8e7321de85de1ddbe736f1375148b02f2e17edd840042b1be855 \
--hash=sha256:9aac639a3bbd33284347de5ad8d68ecc044b91a762dc39b7c21095fcd6a19955
pre-commit==4.2.0 \
--hash=sha256:601283b9757afd87d40c4c4a9b2b5de9637a8ea02eaff7adc2d0fb4e04841146 \
--hash=sha256:a009ca7205f1eb497d10b845e52c838a98b6cdd2102a6c8e4540e94ee75c58bd
prompt-toolkit==3.0.51 \
--hash=sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07 \
--hash=sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed
# via click-repl
psycopg2-binary==2.9.11 \
--hash=sha256:04195548662fa544626c8ea0f06561eb6203f1984ba5b4562764fbeb4c3d14b1 \
--hash=sha256:32770a4d666fbdafab017086655bcddab791d7cb260a16679cc5a7338b64343b \
--hash=sha256:366df99e710a2acd90efed3764bb1e28df6c675d33a7fb40df9b7281694432ee \
--hash=sha256:4012c9c954dfaccd28f94e84ab9f94e12df76b4afb22331b1f0d3154893a6316 \
--hash=sha256:47f212c1d3be608a12937cc131bd85502954398aaa1320cb4c14421a0ffccf4c \
--hash=sha256:5c6ff3335ce08c75afaed19e08699e8aacf95d4a260b495a4a8545244fe2ceb3 \
--hash=sha256:84011ba3109e06ac412f95399b704d3d6950e386b7994475b231cf61eec2fc1f \
--hash=sha256:8c55b385daa2f92cb64b12ec4536c66954ac53654c7f15a203578da4e78105c0 \
--hash=sha256:92e3b669236327083a2e33ccfa0d320dd01b9803b3e14dd986a4fc54aa00f4e1 \
--hash=sha256:9b52a3f9bb540a3e4ec0f6ba6d31339727b2950c9772850d6545b7eae0b9d7c5 \
--hash=sha256:9bd81e64e8de111237737b29d68039b9c813bdf520156af36d26819c9a979e5f \
--hash=sha256:b31e90fdd0f968c2de3b26ab014314fe814225b6c324f770952f7d38abf17e3c \
--hash=sha256:b6aed9e096bf63f9e75edf2581aa9a7e7186d97ab5c177aa6c87797cd591236c \
--hash=sha256:b8fb3db325435d34235b044b199e56cdf9ff41223a4b9752e8576465170bb38c \
--hash=sha256:ba34475ceb08cccbdd98f6b46916917ae6eeb92b5ae111df10b544c3a4621dc4 \
--hash=sha256:c0377174bf1dd416993d16edc15357f6eb17ac998244cca19bc67cdc0e2e5766 \
--hash=sha256:c3cb3a676873d7506825221045bd70e0427c905b9c8ee8d6acd70cfcbd6e576d \
--hash=sha256:d526864e0f67f74937a8fce859bd56c979f5e2ec57ca7c627f5f1071ef7fee60 \
--hash=sha256:db4fd476874ccfdbb630a54426964959e58da4c61c9feba73e6094d51303d7d8 \
--hash=sha256:e0deeb03da539fa3577fcb0b3f2554a97f7e5477c246098dbb18091a4a01c16f \
--hash=sha256:e35b7abae2b0adab776add56111df1735ccc71406e56203515e228a8dc07089f \
--hash=sha256:efff12b432179443f54e230fdf60de1f6cc726b6c832db8701227d089310e8aa \
--hash=sha256:fcf21be3ce5f5659daefd2b3b3b6e4727b028221ddc94e6c1523425579664747
# via livegraphsdjango
pygments==2.19.2 \
--hash=sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887 \
--hash=sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b
# via
# pytest
# rich
pytest==8.4.2 \
--hash=sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01 \
--hash=sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79
pygments==2.19.1 \
--hash=sha256:61c16d2a8576dc0649d9f39e089b5f02bcd27fba10d8fb4dcc28173f7a45151f \
--hash=sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c
# via rich
pytest==8.3.5 \
--hash=sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820 \
--hash=sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845
# via pytest-django
pytest-django==4.11.1 \
--hash=sha256:1b63773f648aa3d8541000c26929c1ea63934be1cfa674c76436966d73fe6a10 \
--hash=sha256:a949141a1ee103cb0e7a20f1451d355f83f5e4a5d07bdd4dcfdd1fd0ff227991
python-crontab==3.3.0 \
--hash=sha256:007c8aee68dddf3e04ec4dce0fac124b93bd68be7470fc95d2a9617a15de291b \
--hash=sha256:739a778b1a771379b75654e53fd4df58e5c63a9279a63b5dfe44c0fcc3ee7884
python-crontab==3.2.0 \
--hash=sha256:40067d1dd39ade3460b2ad8557c7651514cd3851deffff61c5c60e1227c5c36b \
--hash=sha256:82cb9b6a312d41ff66fd3caf3eed7115c28c195bfb50711bc2b4b9592feb9fe5
# via django-celery-beat
python-dateutil==2.9.0.post0 \
--hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \
@@ -461,100 +338,81 @@ python-dateutil==2.9.0.post0 \
# via
# celery
# pandas
python-dotenv==1.2.1 \
--hash=sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6 \
--hash=sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61
# python-crontab
python-dotenv==1.1.0 \
--hash=sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5 \
--hash=sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d
# via livegraphsdjango
pytokens==0.3.0 \
--hash=sha256:2f932b14ed08de5fcf0b391ace2642f858f1394c0857202959000b68ed7a458a \
--hash=sha256:95b2b5eaf832e469d141a378872480ede3f251a5a5041b8ec6e581d3ac71bbf3
# via black
pytz==2025.2 \
--hash=sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3 \
--hash=sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00
# via pandas
pyyaml==6.0.3 \
--hash=sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c \
--hash=sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3 \
--hash=sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6 \
--hash=sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65 \
--hash=sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1 \
--hash=sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310 \
--hash=sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac \
--hash=sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9 \
--hash=sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7 \
--hash=sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35 \
--hash=sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb \
--hash=sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065 \
--hash=sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c \
--hash=sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c \
--hash=sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764 \
--hash=sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac \
--hash=sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8 \
--hash=sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3 \
--hash=sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5 \
--hash=sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702 \
--hash=sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788 \
--hash=sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba \
--hash=sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5 \
--hash=sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26 \
--hash=sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f \
--hash=sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b \
--hash=sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be \
--hash=sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c \
--hash=sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6
pyyaml==6.0.2 \
--hash=sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133 \
--hash=sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484 \
--hash=sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc \
--hash=sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1 \
--hash=sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652 \
--hash=sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5 \
--hash=sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563 \
--hash=sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183 \
--hash=sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e \
--hash=sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba
# via
# bandit
# pre-commit
redis==7.0.1 \
--hash=sha256:4977af3c7d67f8f0eb8b6fec0dafc9605db9343142f634041fb0235f67c0588a \
--hash=sha256:c949df947dca995dc68fdf5a7863950bf6df24f8d6022394585acc98e81624f1
redis==6.1.0 \
--hash=sha256:3b72622f3d3a89df2a6041e82acd896b0e67d9f54e9bcd906d091d23ba5219f6 \
--hash=sha256:c928e267ad69d3069af28a9823a07726edf72c7e37764f43dc0123f37928c075
# via livegraphsdjango
requests==2.32.5 \
--hash=sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6 \
--hash=sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf
requests==2.32.3 \
--hash=sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760 \
--hash=sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6
# via livegraphsdjango
rich==14.2.0 \
--hash=sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4 \
--hash=sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd
rich==14.0.0 \
--hash=sha256:1c9491e1951aac09caffd42f448ee3d04e58923ffe14993f6e83068dc395d7e0 \
--hash=sha256:82f1bc23a6a21ebca4ae0c45af9bdbc492ed20231dcb63f297d6d1021a9d5725
# via bandit
ruff==0.14.3 \
--hash=sha256:0e2f8a0bbcffcfd895df39c9a4ecd59bb80dca03dc43f7fb63e647ed176b741e \
--hash=sha256:1ec1ac071e7e37e0221d2f2dbaf90897a988c531a8592a6a5959f0603a1ecf5e \
--hash=sha256:26eb477ede6d399d898791d01961e16b86f02bc2486d0d1a7a9bb2379d055dc1 \
--hash=sha256:3d6bc90307c469cb9d28b7cfad90aaa600b10d67c6e22026869f585e1e8a2db0 \
--hash=sha256:469e35872a09c0e45fecf48dd960bfbce056b5db2d5e6b50eca329b4f853ae20 \
--hash=sha256:4ff876d2ab2b161b6de0aa1f5bd714e8e9b4033dc122ee006925fbacc4f62153 \
--hash=sha256:678fdd7c7d2d94851597c23ee6336d25f9930b460b55f8598e011b57c74fd8c5 \
--hash=sha256:71ff6edca490c308f083156938c0c1a66907151263c4abdcb588602c6e696a14 \
--hash=sha256:786ee3ce6139772ff9272aaf43296d975c0217ee1b97538a98171bf0d21f87ed \
--hash=sha256:7bfc42f81862749a7136267a343990f865e71fe2f99cf8d2958f684d23ce3dfa \
--hash=sha256:876b21e6c824f519446715c1342b8e60f97f93264012de9d8d10314f8a79c371 \
--hash=sha256:a497ec0c3d2c88561b6d90f9c29f5ae68221ac00d471f306fa21fa4264ce5fcd \
--hash=sha256:a65e448cfd7e9c59fae8cf37f9221585d3354febaad9a07f29158af1528e165f \
--hash=sha256:afcdc4b5335ef440d19e7df9e8ae2ad9f749352190e96d481dc501b753f0733e \
--hash=sha256:b6fd8c79b457bedd2abf2702b9b472147cd860ed7855c73a5247fa55c9117654 \
--hash=sha256:cd6291d0061811c52b8e392f946889916757610d45d004e41140d81fb6cd5ddc \
--hash=sha256:d7b7006ac0756306db212fd37116cce2bd307e1e109375e1c6c106002df0ae5f \
--hash=sha256:e231e1be58fc568950a04fbe6887c8e4b85310e7889727e2b81db205c45059eb \
--hash=sha256:f3d91857d023ba93e14ed2d462ab62c3428f9bbf2b4fbac50a03ca66d31991f7
ruff==0.11.10 \
--hash=sha256:1067245bad978e7aa7b22f67113ecc6eb241dca0d9b696144256c3a879663bca \
--hash=sha256:2f071b0deed7e9245d5820dac235cbdd4ef99d7b12ff04c330a241ad3534319f \
--hash=sha256:3afead355f1d16d95630df28d4ba17fb2cb9c8dfac8d21ced14984121f639bad \
--hash=sha256:4a60e3a0a617eafba1f2e4186d827759d65348fa53708ca547e384db28406a0b \
--hash=sha256:5a94acf798a82db188f6f36575d80609072b032105d114b0f98661e1679c9125 \
--hash=sha256:5b6a9cc5b62c03cc1fea0044ed8576379dbaf751d5503d718c973d5418483641 \
--hash=sha256:5cc725fbb4d25b0f185cb42df07ab6b76c4489b4bfb740a175f3a59c70e8a224 \
--hash=sha256:607ecbb6f03e44c9e0a93aedacb17b4eb4f3563d00e8b474298a201622677947 \
--hash=sha256:7b3a522fa389402cd2137df9ddefe848f727250535c70dafa840badffb56b7a4 \
--hash=sha256:859a7bfa7bc8888abbea31ef8a2b411714e6a80f0d173c2a82f9041ed6b50f58 \
--hash=sha256:8b4564e9f99168c0f9195a0fd5fa5928004b33b377137f978055e40008a082c5 \
--hash=sha256:968220a57e09ea5e4fd48ed1c646419961a0570727c7e069842edd018ee8afed \
--hash=sha256:d522fb204b4959909ecac47da02830daec102eeb100fb50ea9554818d47a5fa6 \
--hash=sha256:da8ec977eaa4b7bf75470fb575bea2cb41a0e07c7ea9d5a0a97d13dbca697bf2 \
--hash=sha256:dc061a98d32a97211af7e7f3fa1d4ca2fcf919fb96c28f39551f35fc55bdbc19 \
--hash=sha256:ddf8967e08227d1bd95cc0851ef80d2ad9c7c0c5aab1eba31db49cf0a7b99523 \
--hash=sha256:ef69637b35fb8b210743926778d0e45e1bffa850a7c61e428c6b971549b5f5d1 \
--hash=sha256:f4854fd09c7aed5b1590e996a81aeff0c9ff51378b084eb5a0b9cd9518e6cff2
setuptools==80.7.1 \
--hash=sha256:ca5cc1069b85dc23070a6628e6bcecb3292acac802399c7f8edc0100619f9009 \
--hash=sha256:f6ffc5f0142b1bd8d0ca94ee91b30c0ca862ffd50826da1ea85258a06fd94552
# via pbr
six==1.17.0 \
--hash=sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274 \
--hash=sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81
# via python-dateutil
sqlalchemy==2.0.44 \
--hash=sha256:0ae7454e1ab1d780aee69fd2aae7d6b8670a581d8847f2d1e0f7ddfbf47e5a22 \
--hash=sha256:0b1af8392eb27b372ddb783b317dea0f650241cea5bd29199b22235299ca2e45 \
--hash=sha256:15f3326f7f0b2bfe406ee562e17f43f36e16167af99c4c0df61db668de20002d \
--hash=sha256:19de7ca1246fbef9f9d1bff8f1ab25641569df226364a0e40457dc5457c54b05 \
--hash=sha256:1e77faf6ff919aa8cd63f1c4e561cac1d9a454a191bb864d5dd5e545935e5a40 \
--hash=sha256:2b61188657e3a2b9ac4e8f04d6cf8e51046e28175f79464c67f2fd35bceb0976 \
--hash=sha256:b87e7b91a5d5973dda5f00cd61ef72ad75a1db73a386b62877d4875a8840959c \
--hash=sha256:c1c80faaee1a6c3428cecf40d16a2365bcf56c424c92c2b6f0f9ad204b899e9e \
--hash=sha256:ee51625c2d51f8baadf2829fae817ad0b66b140573939dd69284d2ba3553ae73 \
--hash=sha256:ff486e183d151e51b1d694c7aa1695747599bb00b9f5f604092b54b74c64a8e1
sqlalchemy==2.0.41 \
--hash=sha256:4eeb195cdedaf17aab6b247894ff2734dcead6c08f748e617bfe05bd5a218443 \
--hash=sha256:4f67766965996e63bb46cfbf2ce5355fc32d9dd3b8ad7e536a920ff9ee422e23 \
--hash=sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576 \
--hash=sha256:82ca366a844eb551daff9d2e6e7a9e5e76d2612c8564f58db6c19a726869c1df \
--hash=sha256:a62448526dd9ed3e3beedc93df9bb6b55a436ed1474db31a2af13b313a70a7e1 \
--hash=sha256:bfc9064f6658a3d1cadeaa0ba07570b83ce6801a1314985bf98ec9b95d74e15f \
--hash=sha256:c153265408d18de4cc5ded1941dcd8315894572cddd3c58df5d5b5705b3fa28d \
--hash=sha256:d4ae769b9c1c7757e4ccce94b0641bc203bbdf43ba7a2413ab2523d8d047d8dc \
--hash=sha256:dc56c9788617b8964ad02e8fcfeed4001c1f8ba91a9e1f31483c0dffb207002a \
--hash=sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9
# via
# kombu
# celery
# livegraphsdjango
sqlparse==0.5.3 \
--hash=sha256:09f67787f56a0b16ecdbde1bfc7f5d9c3371ca683cfeaa8e6ff60b4807ec9272 \
@@ -562,9 +420,9 @@ sqlparse==0.5.3 \
# via
# django
# django-debug-toolbar
stevedore==5.5.0 \
--hash=sha256:18363d4d268181e8e8452e71a38cd77630f345b2ef6b4a8d5614dac5ee0d18cf \
--hash=sha256:d31496a4f4df9825e1a1e4f1f74d19abb0154aff311c3b376fcc89dae8fccd73
stevedore==5.4.1 \
--hash=sha256:3135b5ae50fe12816ef291baff420acb727fcd356106e3e9cbfa9e5985cd6f4b \
--hash=sha256:d10a31c7b86cba16c1f6e8d15416955fc797052351a56af15e608ad20811fcfe
# via bandit
tinycss2==1.4.0 \
--hash=sha256:10c0972f6fc0fbee87c3edb76549357415e94548c1ae10ebccdea16fb404a9b7 \
@@ -572,34 +430,14 @@ tinycss2==1.4.0 \
# via
# bleach
# livegraphsdjango
ty==0.0.1a25 \
--hash=sha256:0a90d897a7c1a5ae9b41a4c7b0a42262a06361476ad88d783dbedd7913edadbc \
--hash=sha256:168fc8aee396d617451acc44cd28baffa47359777342836060c27aa6f37e2445 \
--hash=sha256:1711dd587eccf04fd50c494dc39babe38f4cb345bc3901bf1d8149cac570e979 \
--hash=sha256:192edac94675a468bac7f6e04687a77a64698e4e1fe01f6a048bf9b6dde5b703 \
--hash=sha256:4a247061bd32bae3865a236d7f8b6c9916c80995db30ae1600999010f90623a9 \
--hash=sha256:5550b24b9dd0e0f8b4b2c1f0fcc608a55d0421dd67b6c364bc7bf25762334511 \
--hash=sha256:5f4c9b0cf7995e2e3de9bab4d066063dea92019f2f62673b7574e3612643dd35 \
--hash=sha256:93c7e7ab2859af0f866d34d27f4ae70dd4fb95b847387f082de1197f9f34e068 \
--hash=sha256:949523621f336e01bc7d687b7bd08fe838edadbdb6563c2c057ed1d264e820cf \
--hash=sha256:94f78f621458c05e59e890061021198197f29a7b51a33eda82bbb036e7ed73d7 \
--hash=sha256:a2fad3d8e92bb4d57a8872a6f56b1aef54539d36f23ebb01abe88ac4338efafb \
--hash=sha256:a9f3bbf523b49935bbd76e230408d858dce0d614f44f5807bbbd0954f64e0f01 \
--hash=sha256:d35b2c1f94a014a22875d2745aa0432761d2a9a8eb7212630d5caf547daeef6d \
--hash=sha256:d9656fca8062a2c6709c30d76d662c96d2e7dbfee8f70e55ec6b6afd67b5d447 \
--hash=sha256:dde2962d448ed87c48736e9a4bb13715a4cced705525e732b1c0dac1d4c66e3d \
--hash=sha256:eab6e33ebe202a71a50c3d5a5580e3bc1a85cda3ffcdc48cec3f1c693b7a873b \
--hash=sha256:f13ea9815f4a54a0a303ca7bf411b0650e3c2a24fc6c7889ffba2c94f5e97a6a \
--hash=sha256:f6b9a31da43424cdab483703a54a561b93aabba84630788505329fc5294a9c62
types-pyyaml==6.0.12.20250915 \
--hash=sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3 \
--hash=sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6
types-pyyaml==6.0.12.20250516 \
--hash=sha256:8478208feaeb53a34cb5d970c56a7cd76b72659442e733e268a94dc72b2d0530 \
--hash=sha256:9f21a70216fc0fa1b216a8176db5f9e0af6eb35d2f2932acb87689d03a5bf6ba
# via django-stubs
typing-extensions==4.15.0 \
--hash=sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466 \
--hash=sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548
typing-extensions==4.13.2 \
--hash=sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c \
--hash=sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef
# via
# cron-descriptor
# django-stubs
# django-stubs-ext
# mypy
@@ -612,9 +450,9 @@ tzdata==2025.2 \
# django-celery-beat
# kombu
# pandas
urllib3==2.5.0 \
--hash=sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760 \
--hash=sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc
urllib3==2.4.0 \
--hash=sha256:414bc6535b787febd7567804cc015fee39daab8ad86268f1310a9250697de466 \
--hash=sha256:4e16665048960a0900c702d4a66415956a584919c03361cac9f1df5c5dd7e813
# via requests
vine==5.1.0 \
--hash=sha256:40fdf3c48b2cfe1c38a49e9ae2da6fda88e4794c810050a728bd7413811fb1dc \
@@ -623,13 +461,13 @@ vine==5.1.0 \
# amqp
# celery
# kombu
virtualenv==20.35.4 \
--hash=sha256:643d3914d73d3eeb0c552cbb12d7e82adf0e504dbf86a3182f8771a153a1971c \
--hash=sha256:c21c9cede36c9753eeade68ba7d523529f228a403463376cf821eaae2b650f1b
virtualenv==20.31.2 \
--hash=sha256:36efd0d9650ee985f0cad72065001e66d49a6f24eb44d98980f630686243cf11 \
--hash=sha256:e10c0a9d02835e592521be48b332b6caee6887f332c111aa79a09b9e79efc2af
# via pre-commit
wcwidth==0.2.14 \
--hash=sha256:4d478375d31bc5395a3c55c40ccdf3354688364cd61c4f6adacaa9215d0b3605 \
--hash=sha256:a7bb560c8aee30f9957e5f9895805edd20602f2d7f720186dfd906e82b4982e1
wcwidth==0.2.13 \
--hash=sha256:3da69048e4540d84af32131829ff948f1e022c1c6bdb8d6102117aac784f6859 \
--hash=sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5
# via prompt-toolkit
webencodings==0.5.1 \
--hash=sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78 \
@@ -637,11 +475,11 @@ webencodings==0.5.1 \
# via
# bleach
# tinycss2
whitenoise==6.11.0 \
--hash=sha256:0f5bfce6061ae6611cd9396a8231e088722e4fc67bc13a111be74c738d99375f \
--hash=sha256:b2aeb45950597236f53b5342b3121c5de69c8da0109362aee506ce88e022d258
whitenoise==6.9.0 \
--hash=sha256:8c4a7c9d384694990c26f3047e118c691557481d624f069b7f7752a2f735d609 \
--hash=sha256:c8a489049b7ee9889617bb4c274a153f3d979e8f51d2efd0f5b403caf41c57df
# via livegraphsdjango
xlsxwriter==3.2.9 \
--hash=sha256:254b1c37a368c444eac6e2f867405cc9e461b0ed97a3233b2ac1e574efb4140c \
--hash=sha256:9a5db42bc5dff014806c58a20b9eae7322a134abb6fce3c92c181bfb275ec5b3
xlsxwriter==3.2.3 \
--hash=sha256:593f8296e8a91790c6d0378ab08b064f34a642b3feb787cf6738236bd0a4860d \
--hash=sha256:ad6fd41bdcf1b885876b1f6b7087560aecc9ae5a9cc2ba97dcac7ab2e210d3d5
# via livegraphsdjango
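
Every pin above carries --hash entries, so the file is meant to be installed in hash-checking mode and regenerated by a resolver rather than edited by hand. A hedged sketch, since the file name and the exact compile command are not shown in this diff:

    # Install with hash verification (pip enforces hash-checking once any pin carries hashes)
    pip install --require-hashes -r requirements.txt

    # Regenerate after bumping a dependency (assuming the file is produced with uv)
    uv pip compile pyproject.toml -o requirements.txt --generate-hashes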

View File

@@ -1,7 +0,0 @@
import { test } from "@playwright/test";

test.describe("Test group", () => {
  test("seed", async ({ page: _page }) => {
    // generate code here.
  });
});
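
A seed spec like this is a placeholder for the generator to fill in; it runs through the standard Playwright CLI. A sketch, assuming the usual Node-based tooling:

    npx playwright test            # runs the spec files picked up by the Playwright config
    npx playwright test --headed   # same run with a visible browser, handy while generating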

View File

@@ -1,18 +1,14 @@
#!/usr/bin/env bash
#!/bin/bash
# Set UV_LINK_MODE to copy to avoid hardlink warnings
export UV_LINK_MODE=copy
# Check if Redis is running
if ! redis-cli ping >/dev/null 2>&1; then
echo "Starting Redis server..."
redis-server --daemonize yes
sleep 1
# Verify Redis is now running
if redis-cli ping >/dev/null 2>&1; then
echo "✅ Redis server is now running"
else
@@ -26,7 +22,6 @@ else
fi
# Set environment variables for Redis if it's running
if redis-cli ping >/dev/null 2>&1; then
    export CELERY_BROKER_URL=redis://localhost:6379/0
    export CELERY_RESULT_BACKEND=redis://localhost:6379/0
@@ -38,5 +33,4 @@ else
fi
# Start the application using foreman
foreman start
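
Once the exports above are in place, a quick smoke test confirms that the broker the Celery workers will use is actually reachable (a sketch; the URL simply mirrors the one exported by the script):

    redis-cli ping   # expect PONG
    python -c "import redis; print(redis.Redis.from_url('redis://localhost:6379/0').ping())"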

View File

@@ -1,29 +0,0 @@
{
  "compilerOptions": {
    // Environment setup & latest features
    "lib": ["ESNext"],
    "target": "ESNext",
    "module": "Preserve",
    "moduleDetection": "force",
    "jsx": "react-jsx",
    "allowJs": true,

    // Bundler mode
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "noEmit": true,

    // Best practices
    "strict": true,
    "skipLibCheck": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,

    // Some stricter flags (disabled by default)
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "noPropertyAccessFromIndexSignature": false
  }
}
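
With "noEmit": true this tsconfig is type-check-only; a sketch of how such a config is typically exercised (the package runner is an assumption, the config file itself is picked up automatically):

    npx tsc   # type-checks against tsconfig.json and emits nothing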

ty.toml (26 lines changed)
View File

@@ -1,26 +0,0 @@
# ty Type Checker Configuration

[environment]
# Django project root for first-party module resolution
root = ["dashboard_project"]
# Python version (matches pyproject.toml requires-python)
python-version = "3.13"

[src]
# Include only the Django project directory
include = ["dashboard_project"]
# Exclude migrations, cache, and generated files
exclude = [
    "dashboard_project/migrations",
    "dashboard_project/*/migrations",
    "dashboard_project/**/__pycache__",
    "dashboard_project/**/*.pyc"
]
# Respect .gitignore files
respect-ignore-files = true

[terminal]
# Use concise output for cleaner CI/CD logs
output-format = "concise"
# Treat warnings as errors in CI
error-on-warning = false
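
Because [src] already scopes the checker to the project directory, no extra flags should be needed when invoking it; a sketch, assuming ty is run through uv as pinned in the requirements above:

    uv run ty check   # reads ty.toml; the "concise" output format keeps CI logs short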

uv.lock (generated, 836 lines changed)

File diff suppressed because it is too large.