31 Commits

a3fcc42656 feat(ci): add Claude Code GitHub workflows (#36)
Adds AI-powered workflows for automated PR reviews and on-demand assistance via @claude in issues and PR comments
Configures permissions and gh tooling, sets model/max turns, and runs pre-commit hooks via uv for consistent checks
Improves review quality, speeds feedback, and streamlines collaboration
2025-11-05 20:53:24 +01:00
d82be93da2 feat(data-integration): add Jumbo API setup tools
Adds a management command and helper script to configure the Jumbo API external data source and fetch chat sessions.
Ensures idempotent creation and linkage to the Jumbo company while reading API credentials from environment variables.
Prints data source details and a post-fetch summary to ease setup and verification.
2025-11-05 20:43:35 +01:00
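The setup flow this commit describes can be exercised roughly like so; the management-command name is hypothetical, while the credential variables match the project's sample env:

```bash
# Credentials are read from the environment, not passed on the command line
export EXTERNAL_API_USERNAME=your-api-username
export EXTERNAL_API_PASSWORD=your-api-password

# Hypothetical command name: idempotently creates/links the Jumbo data source,
# fetches chat sessions, then prints the data source details and a summary
uv run python manage.py setup_jumbo_api
```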
45f003eafa refactor(test): clean up unused import and param
Removes an unused assertion import and aliases the unused test parameter to suppress linter warnings.
Keeps the test scaffold clean without changing behavior.
2025-11-05 20:27:27 +01:00
2236eeb9a5 feat: Add uv Docker, Postgres, and company linking
Introduces a uv-based Docker workflow with a non-root runtime, cached installs, and `uv run` for web and Celery. Updates docker-compose to Postgres + Redis, loads .env, and removes the source bind mount for reproducible builds.

Switches settings to use Postgres when the relevant env vars are present, with SQLite as a fallback; broadens allowed hosts for containerized development. Adds psycopg2-binary and updates the sample env for Redis in Docker.

Adds company scoping to external data models and links sessions during ingestion; provides management commands to seed a Jumbo company/users and sync external chat data into the dashboard.

Includes .dockerignore, TypeScript config and typings, and minor template/docs tweaks.

Requires database migration.
2025-11-05 20:22:07 +01:00
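A minimal sketch of the docker-compose shape this commit describes (service names, image tags, and the Celery app module are assumptions; the Postgres credentials mirror the sample env shown further down):

```yaml
services:
  web:
    build: .
    command: uv run python manage.py runserver 0.0.0.0:8000
    env_file: .env
    depends_on: [db, redis]
  worker:
    build: .
    command: uv run celery -A config worker -l info # app module assumed
    env_file: .env
    depends_on: [db, redis]
  db:
    image: postgres:16 # tag assumed
    environment:
      POSTGRES_DB: dashboard_db
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
  redis:
    image: redis:7 # tag assumed
```

Note there is deliberately no source bind mount on web/worker, matching the reproducible-build goal above.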
81d1469e18 feat(qa): add Playwright MCP test agents & config
Introduces Playwright testing agents (planner, generator, healer) powered by MCP to plan, generate, and heal end-to-end tests.

Configures the MCP server and integrates agent workflows into OpenCode and GitHub chat modes to enable AI-assisted testing.

Adds Playwright test dependency and updates lockfile; adjusts markdown lint ignores to reduce noise.

Adds contributor guidance for Claude Code to streamline local development.

Normalizes shell script shebangs to use /usr/bin/env bash for portability.

Enables automated browser testing workflows and resilient test maintenance within AI-enabled tooling.
2025-11-05 17:25:01 +01:00
00bb994160 fix: skip prettier and tombi hooks in pre-commit.ci
- Skip prettier-jinja and prettier-all (they require bunx, which is not available in CI)
- Skip tombi-format and tombi-lint (network issues fetching schema catalog)
- Hooks still run locally where dependencies are available
- Resolves pre-commit.ci failures
2025-11-05 17:05:44 +01:00
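For reference, pre-commit.ci skips are declared in a top-level `ci:` block of `.pre-commit-config.yaml`; with the hook ids named above, the change looks like this:

```yaml
ci:
  skip:
    - prettier-jinja
    - prettier-all
    - tombi-format
    - tombi-lint
```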
5ab48e9b19 fix: use proper multi-ecosystem-groups configuration
- Add top-level 'multi-ecosystem-groups' section
- Move schedule to group level for single consolidated schedule
- Add 'patterns: ["*"]' at ecosystem level (required)
- All ecosystems assigned to 'all-dependencies' group
- Results in single monthly PR with updates across all ecosystems
2025-11-05 16:59:54 +01:00
1adf44f29a fix: correct dependabot.yml groups configuration
- Replace 'multi-ecosystem-group' with proper 'groups' syntax
- Add required 'patterns' property with wildcard matcher
- Group all dependencies per ecosystem for cleaner PRs
- Fixes dependabot configuration validation error
2025-11-05 16:59:19 +01:00
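The standard per-ecosystem `groups` syntax this fix adopts looks roughly like the following (the group name is illustrative):

```yaml
version: 2
updates:
  - package-ecosystem: "uv"
    directory: "/"
    schedule:
      interval: "monthly"
    groups:
      all-uv-dependencies:
        patterns: ["*"]
```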
aec574bf62 chore(lint): update markdownlint-cli2 schema ref
Points the config to the correct CLI2 schema and removes the outdated nested schema key.
Improves editor validation and IntelliSense by aligning with the current tool schema.
2025-11-05 16:58:22 +01:00
4c5cffc786 chore(deps): Adds monthly Dependabot updates
Enables automated dependency updates across uv, bun, GitHub Actions, Docker, Compose, and devcontainers
Groups updates across ecosystems to reduce PR noise
Schedules monthly runs to balance freshness and maintenance effort
2025-11-05 16:57:37 +01:00
fe847a3d4e fix: configure markdownlint-cli2 properly
- Wrap config in 'config' key for markdownlint-cli2
- Use MD013 rule name instead of 'line-length' alias
- Disable MD013 line-length checks
- Add allowed languages: sh, python, csv, tree
- Fix broken link reference in TODO.md
- All markdown linting now passes (37 errors -> 0)
2025-11-05 16:51:23 +01:00
04705bdcb2 chore(deps): bump minimum versions, refresh lock
Updates minimum versions for core runtime deps (Django, Celery, NumPy/Pandas, Plotly, Requests, Redis, SQLAlchemy, etc.).
Refreshes lockfile to align with pyproject.
Targets recent security/bugfix releases and improves compatibility with latest Python/Django.
No functional changes expected.
2025-11-05 15:25:01 +01:00
b70535a2a8 lint: Enable oxc for JS/TS and add strict lint
Configure oxc LSP initialization options: run set to "onType", typeAware
enabled, unusedDisableDirectives set to "allow", and
configPath/tsConfigPath left null.
2025-11-05 15:17:57 +01:00
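Expressed as JSON, the initialization options described above would be (keys taken from the commit message; the exact editor wiring varies):

```json
{
  "run": "onType",
  "typeAware": true,
  "unusedDisableDirectives": "allow",
  "configPath": null,
  "tsConfigPath": null
}
```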
e67dd629d9 refactor: move functions to outer scope for better performance
- Move setupAjaxPagination to outer scope in ajax-pagination.js
- Move setupAjaxNavigation, reloadScripts, initializePageScripts to outer scope in ajax-navigation.js
- Move updatePlotlyTheme, resizeCharts, updateDashboardStats, updateDashboardCharts to outer scope in dashboard.js
- Move handleSidebarOnResize, setTheme, getSystemPreference to outer scope in main.js
- Avoid recreating functions on every DOMContentLoaded call
- All oxlint strict checks now pass (11 warnings -> 0)
2025-11-05 15:09:55 +01:00
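The shape of the refactor, using one of the function names above (body elided):

```javascript
// Before: the helper was recreated on every DOMContentLoaded dispatch
// document.addEventListener("DOMContentLoaded", () => {
//   function setupAjaxPagination() { /* ... */ }
//   setupAjaxPagination();
// });

// After: defined once at module scope, only called from the handler
function setupAjaxPagination() {
  /* attach click handlers to pagination links */
}

document.addEventListener("DOMContentLoaded", () => {
  setupAjaxPagination();
});
```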
dc6fc35b06 chore: add oxlint pre-commit hook
- Add oxc-project/mirrors-oxlint hook
- Enable verbose output for better debugging
- Complements existing ruff checks for JavaScript linting
2025-11-05 15:06:15 +01:00
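A sketch of the corresponding `.pre-commit-config.yaml` entry (the `rev` pin and hook id are assumptions, not the actual values):

```yaml
- repo: https://github.com/oxc-project/mirrors-oxlint
  rev: v0.15.0 # illustrative pin
  hooks:
    - id: oxlint
      verbose: true
```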
e47df43337 fix: wrap setTimeout callback in arrow function
- Fix typescript-eslint(no-implied-eval) warning
- Use arrow function instead of direct function reference
- All oxlint type-aware checks now pass (1 warning -> 0)
2025-11-05 15:03:35 +01:00
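An illustration of the pattern (the callback name is hypothetical):

```javascript
function hideAlert() {
  document.querySelector(".alert")?.remove();
}

// Before, flagged as typescript-eslint(no-implied-eval) under type-aware linting:
// setTimeout(hideAlert, 3000);

// After: the arrow function makes the argument unambiguously a function value
setTimeout(() => hideAlert(), 3000);
```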
8ca7ad14d5 fix: remove unused variables in main.js
- Remove unused tooltipList and popoverList variables
- Remove unused event parameters in change/click handlers
- All oxlint checks now pass (4 warnings -> 0)
2025-11-05 15:03:00 +01:00
fdcec7eb84 feat: add ty type checking support and fix type issues
- Add ty.toml configuration with Django project root
- Add py.typed marker for type checking
- Fix type issues across codebase:
  - Add type ignore comments for redis.exceptions imports
  - Fix django.db.models.functions imports in utils
  - Fix getattr usage in accounts/forms
  - Remove unnecessary type annotations in dashboard/forms
- Configure ty to exclude migrations and respect ignore files
- All ty checks now pass (29 diagnostics -> 0)
2025-11-05 14:56:13 +01:00
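The "type ignore comments" referenced above are the standard PEP 484 form; a minimal sketch for the redis import mentioned in the message:

```python
# Suppress the import diagnostic raised for redis's exception module
from redis.exceptions import ConnectionError as RedisConnectionError  # type: ignore
```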
6e0ea8943d ci(pre-commit): disable CI skip configuration
- ci: comment out pre-commit.ci skip directives
- Django check hooks already disabled in hook definitions
- Removes redundant CI-level skip configuration
2025-11-05 14:45:10 +01:00
239fb01292 ci: upgrade GitHub Actions to latest versions
- ci(bandit): upgrade actions/checkout v4 → v5
- ci(codacy): upgrade actions/checkout v4 → v5
- ci(codacy): upgrade codacy-analysis-cli-action from pinned SHA to @master
- ci(codacy): upgrade codeql-action/upload-sarif v3 → v4
2025-11-05 14:38:10 +01:00
c106792e78 chore(deps): update pre-commit config and apply bulk formatting
- build(pre-commit): upgrade hooks (django-upgrade 1.29.1, uv 0.9.7, ruff 0.14.3, bandit 1.8.6)
- build(pre-commit): add uv-lock hook, tombi TOML formatter, prettier-plugin-packagejson
- build(pre-commit): disable Django check hooks (commented out)
- build(pre-commit): switch npx → bunx for prettier execution
- build(node): add bun.lock, update prettier config with schema + packagejson plugin
- style: apply ruff format to all Python files (comments, spacing, imports)
- style: apply prettier format to all JS/CSS files (comment styles, spacing)
- style: apply tombi format to pyproject.toml (reordered sections, consistent formatting)
- chore: remove emoji from bash script comments for consistency

BREAKING CHANGE: Django check and migration check hooks disabled in pre-commit config
2025-11-05 14:34:08 +01:00
b1b5207888 Delete .github/dependabot.yml 2025-11-05 11:57:41 +01:00
c236b048ed Merge pull request #14 from kjanat/dependabot/uv/uv-f3b7ca0eb5
Bump the uv group with 19 updates
2025-06-13 04:44:04 +02:00
pre-commit-ci[bot]
b179b69c05 [pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
2025-06-13 02:34:57 +00:00
dependabot[bot]
efa3370e0f Bump the uv group with 19 updates
Bumps the uv group with 19 updates:

| Package | From | To |
| --- | --- | --- |
| [crispy-bootstrap5](https://github.com/django-crispy-forms/crispy-bootstrap5) | `2025.4` | `2025.6` |
| [django](https://github.com/django/django) | `5.2.1` | `5.2.3` |
| [django-allauth](https://github.com/sponsors/pennersr) | `65.8.0` | `65.9.0` |
| [numpy](https://github.com/numpy/numpy) | `2.2.5` | `2.3.0` |
| [pandas](https://github.com/pandas-dev/pandas) | `2.2.3` | `2.3.0` |
| [plotly](https://github.com/plotly/plotly.py) | `6.1.0` | `6.1.2` |
| [redis](https://github.com/redis/redis-py) | `6.1.0` | `6.2.0` |
| [requests](https://github.com/psf/requests) | `2.32.3` | `2.32.4` |
| [setuptools](https://github.com/pypa/setuptools) | `80.7.1` | `80.9.0` |
| [celery](https://github.com/celery/celery) | `5.5.2` | `5.5.3` |
| [click](https://github.com/pallets/click) | `8.2.0` | `8.2.1` |
| [coverage](https://github.com/nedbat/coveragepy) | `7.8.0` | `7.9.0` |
| [identify](https://github.com/pre-commit/identify) | `2.6.10` | `2.6.12` |
| [kombu](https://github.com/celery/kombu) | `5.5.3` | `5.5.4` |
| [mypy](https://github.com/python/mypy) | `1.15.0` | `1.16.0` |
| [narwhals](https://github.com/narwhals-dev/narwhals) | `1.39.1` | `1.42.1` |
| [pytest](https://github.com/pytest-dev/pytest) | `8.3.5` | `8.4.0` |
| [ruff](https://github.com/astral-sh/ruff) | `0.11.10` | `0.11.13` |
| [typing-extensions](https://github.com/python/typing_extensions) | `4.13.2` | `4.14.0` |


Updates `crispy-bootstrap5` from 2025.4 to 2025.6
- [Release notes](https://github.com/django-crispy-forms/crispy-bootstrap5/releases)
- [Changelog](https://github.com/django-crispy-forms/crispy-bootstrap5/blob/main/CHANGELOG.md)
- [Commits](https://github.com/django-crispy-forms/crispy-bootstrap5/compare/2025.4...2025.6)

Updates `django` from 5.2.1 to 5.2.3
- [Commits](https://github.com/django/django/compare/5.2.1...5.2.3)

Updates `django-allauth` from 65.8.0 to 65.9.0
- [Commits](https://github.com/sponsors/pennersr/commits)

Updates `numpy` from 2.2.5 to 2.3.0
- [Release notes](https://github.com/numpy/numpy/releases)
- [Changelog](https://github.com/numpy/numpy/blob/main/doc/RELEASE_WALKTHROUGH.rst)
- [Commits](https://github.com/numpy/numpy/compare/v2.2.5...v2.3.0)

Updates `pandas` from 2.2.3 to 2.3.0
- [Release notes](https://github.com/pandas-dev/pandas/releases)
- [Commits](https://github.com/pandas-dev/pandas/compare/v2.2.3...v2.3.0)

Updates `plotly` from 6.1.0 to 6.1.2
- [Release notes](https://github.com/plotly/plotly.py/releases)
- [Changelog](https://github.com/plotly/plotly.py/blob/main/CHANGELOG.md)
- [Commits](https://github.com/plotly/plotly.py/compare/v6.1.0...v6.1.2)

Updates `redis` from 6.1.0 to 6.2.0
- [Release notes](https://github.com/redis/redis-py/releases)
- [Changelog](https://github.com/redis/redis-py/blob/master/CHANGES)
- [Commits](https://github.com/redis/redis-py/compare/v6.1.0...v6.2.0)

Updates `requests` from 2.32.3 to 2.32.4
- [Release notes](https://github.com/psf/requests/releases)
- [Changelog](https://github.com/psf/requests/blob/main/HISTORY.md)
- [Commits](https://github.com/psf/requests/compare/v2.32.3...v2.32.4)

Updates `setuptools` from 80.7.1 to 80.9.0
- [Release notes](https://github.com/pypa/setuptools/releases)
- [Changelog](https://github.com/pypa/setuptools/blob/main/NEWS.rst)
- [Commits](https://github.com/pypa/setuptools/compare/v80.7.1...v80.9.0)

Updates `celery` from 5.5.2 to 5.5.3
- [Release notes](https://github.com/celery/celery/releases)
- [Changelog](https://github.com/celery/celery/blob/main/Changelog.rst)
- [Commits](https://github.com/celery/celery/compare/v5.5.2...v5.5.3)

Updates `click` from 8.2.0 to 8.2.1
- [Release notes](https://github.com/pallets/click/releases)
- [Changelog](https://github.com/pallets/click/blob/main/CHANGES.rst)
- [Commits](https://github.com/pallets/click/compare/8.2.0...8.2.1)

Updates `coverage` from 7.8.0 to 7.9.0
- [Release notes](https://github.com/nedbat/coveragepy/releases)
- [Changelog](https://github.com/nedbat/coveragepy/blob/master/CHANGES.rst)
- [Commits](https://github.com/nedbat/coveragepy/compare/7.8.0...7.9.0)

Updates `identify` from 2.6.10 to 2.6.12
- [Commits](https://github.com/pre-commit/identify/compare/v2.6.10...v2.6.12)

Updates `kombu` from 5.5.3 to 5.5.4
- [Release notes](https://github.com/celery/kombu/releases)
- [Changelog](https://github.com/celery/kombu/blob/main/Changelog.rst)
- [Commits](https://github.com/celery/kombu/compare/v5.5.3...v5.5.4)

Updates `mypy` from 1.15.0 to 1.16.0
- [Changelog](https://github.com/python/mypy/blob/master/CHANGELOG.md)
- [Commits](https://github.com/python/mypy/compare/v1.15.0...v1.16.0)

Updates `narwhals` from 1.39.1 to 1.42.1
- [Release notes](https://github.com/narwhals-dev/narwhals/releases)
- [Commits](https://github.com/narwhals-dev/narwhals/compare/v1.39.1...v1.42.1)

Updates `pytest` from 8.3.5 to 8.4.0
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](https://github.com/pytest-dev/pytest/compare/8.3.5...8.4.0)

Updates `ruff` from 0.11.10 to 0.11.13
- [Release notes](https://github.com/astral-sh/ruff/releases)
- [Changelog](https://github.com/astral-sh/ruff/blob/main/CHANGELOG.md)
- [Commits](https://github.com/astral-sh/ruff/compare/0.11.10...0.11.13)

Updates `typing-extensions` from 4.13.2 to 4.14.0
- [Release notes](https://github.com/python/typing_extensions/releases)
- [Changelog](https://github.com/python/typing_extensions/blob/main/CHANGELOG.md)
- [Commits](https://github.com/python/typing_extensions/compare/4.13.2...4.14.0)

---
updated-dependencies:
- dependency-name: crispy-bootstrap5
  dependency-version: '2025.6'
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: django
  dependency-version: 5.2.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: django-allauth
  dependency-version: 65.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: numpy
  dependency-version: 2.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: pandas
  dependency-version: 2.3.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: plotly
  dependency-version: 6.1.2
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: redis
  dependency-version: 6.2.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: requests
  dependency-version: 2.32.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: setuptools
  dependency-version: 80.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: celery
  dependency-version: 5.5.3
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: click
  dependency-version: 8.2.1
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: coverage
  dependency-version: 7.9.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: identify
  dependency-version: 2.6.12
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: kombu
  dependency-version: 5.5.4
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: mypy
  dependency-version: 1.16.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: narwhals
  dependency-version: 1.42.1
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: pytest
  dependency-version: 8.4.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
- dependency-name: ruff
  dependency-version: 0.11.13
  dependency-type: direct:production
  update-type: version-update:semver-patch
  dependency-group: uv
- dependency-name: typing-extensions
  dependency-version: 4.14.0
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: uv
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-13 02:31:52 +00:00
995a687a57 Merge pull request #12 from kjanat/dependabot/github_actions/actions-8d3c319a5f
Bump codacy/codacy-analysis-cli-action from 1.1.0 to 4.4.5 in the actions group
2025-06-13 04:27:17 +02:00
af710d3964 Merge pull request #4 from kjanat/pre-commit-ci-update-config
[pre-commit.ci] pre-commit autoupdate
2025-06-13 04:26:09 +02:00
dependabot[bot]
9277eabe64 Bump codacy/codacy-analysis-cli-action in the actions group
Bumps the actions group with 1 update: [codacy/codacy-analysis-cli-action](https://github.com/codacy/codacy-analysis-cli-action).


Updates `codacy/codacy-analysis-cli-action` from 1.1.0 to 4.4.5
- [Release notes](https://github.com/codacy/codacy-analysis-cli-action/releases)
- [Commits](d840f886c4...97bf5df3c0)

---
updated-dependencies:
- dependency-name: codacy/codacy-analysis-cli-action
  dependency-version: 4.4.5
  dependency-type: direct:production
  update-type: version-update:semver-major
  dependency-group: actions
...

Signed-off-by: dependabot[bot] <support@github.com>
2025-06-13 02:25:11 +00:00
897159739b Update dependabot.yml 2025-06-13 04:24:36 +02:00
pre-commit-ci[bot]
8befec4c5d [pre-commit.ci] pre-commit autoupdate
updates:
- [github.com/astral-sh/uv-pre-commit: 0.7.5 → 0.7.12](https://github.com/astral-sh/uv-pre-commit/compare/0.7.5...0.7.12)
- [github.com/astral-sh/ruff-pre-commit: v0.11.10 → v0.11.13](https://github.com/astral-sh/ruff-pre-commit/compare/v0.11.10...v0.11.13)
2025-06-09 20:32:21 +00:00
c049061c7b Enhances the ship with security and automation!
Adds Dependabot for automatic dependency updates to keep the vessel sea-worthy and updates pre-commit hooks.

Integrates Bandit and Codacy for automated security scans, ensuring a well-defended treasure hold.

Updates devcontainer settings for smoother sailing in the development environment.

Now uses foreman for development, so all processes can run together.

Let's keep this ship safe and sound, savvy?
2025-05-19 01:00:22 +02:00
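A Procfile along these lines is what running "all processes in development" under foreman implies (process commands are assumptions based on the Django + Celery + Redis stack):

```text
web: python manage.py runserver 0.0.0.0:8000
worker: celery -A config worker -l info
beat: celery -A config beat -l info
```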
89 changed files with 5821 additions and 3111 deletions


@@ -0,0 +1,61 @@
---
name: playwright-test-generator
description: Use this agent when you need to create automated browser tests using Playwright. Examples: <example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example><example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>
tools: Glob, Grep, Read, mcp__playwright-test__browser_click, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_verify_element_visible, mcp__playwright-test__browser_verify_list_visible, mcp__playwright-test__browser_verify_text_visible, mcp__playwright-test__browser_verify_value, mcp__playwright-test__browser_wait_for, mcp__playwright-test__generator_read_log, mcp__playwright-test__generator_setup_page, mcp__playwright-test__generator_write_test
model: sonnet
color: blue
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specification
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
- Use Playwright tool to manually execute it in real-time.
- Use the step description as the intent for each Playwright tool call.
- Retrieve generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
- The file should contain a single test
- The file name must be an fs-friendly scenario name
- The test must be placed in a describe matching the top-level test plan item
- The test title must match the scenario name
- Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
  multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>


@@ -0,0 +1,47 @@
---
name: playwright-test-healer
description: Use this agent when you need to debug and fix failing Playwright tests. Examples: <example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example><example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>
tools: Glob, Grep, Read, Write, Edit, MultiEdit, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_generate_locator, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_snapshot, mcp__playwright-test__test_debug, mcp__playwright-test__test_list, mcp__playwright-test__test_run
model: sonnet
color: red
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the playwright_test_run_test tool to identify failing tests
2. **Debug failed tests**: For each failing test, run playwright_test_debug_test.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
so that it is skipped during execution. Add a comment before the failing step explaining what happens instead
of the expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs


@@ -0,0 +1,98 @@
---
name: playwright-test-planner
description: Use this agent when you need to create comprehensive test plan for a web application or website. Examples: <example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example><example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>
tools: Glob, Grep, Read, Write, mcp__playwright-test__browser_click, mcp__playwright-test__browser_close, mcp__playwright-test__browser_console_messages, mcp__playwright-test__browser_drag, mcp__playwright-test__browser_evaluate, mcp__playwright-test__browser_file_upload, mcp__playwright-test__browser_handle_dialog, mcp__playwright-test__browser_hover, mcp__playwright-test__browser_navigate, mcp__playwright-test__browser_navigate_back, mcp__playwright-test__browser_network_requests, mcp__playwright-test__browser_press_key, mcp__playwright-test__browser_select_option, mcp__playwright-test__browser_snapshot, mcp__playwright-test__browser_take_screenshot, mcp__playwright-test__browser_type, mcp__playwright-test__browser_wait_for, mcp__playwright-test__planner_setup_page
model: sonnet
color: green
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use browser_* tools to navigate and discover interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.


```diff
@@ -1,5 +1,7 @@
 #!/usr/bin/env bash
+export UV_LINK_MODE=copy;
+
 sudo apt update
 sudo apt full-upgrade -y
 sudo apt autoremove -y;
@@ -72,6 +74,7 @@ fi
 if [ -f ~/.cache/oh-my-posh-completion.bash ]; then
   source ~/.cache/oh-my-posh-completion.bash
 fi
+
 export UV_LINK_MODE=copy;
 EOF
```

.dockerignore (new file, 53 lines)

@@ -0,0 +1,53 @@
```text
# Python
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
.venv/
venv/
env/
ENV/

# Django
*.log
db.sqlite3
db.sqlite3-journal
/static/
/media/

# IDE
.vscode/
.idea/
*.swp
*.swo
*~

# Testing
.pytest_cache/
.coverage
htmlcov/

# Git
.git/
.gitignore

# Docker
Dockerfile
docker-compose.yml
.dockerignore

# Documentation
*.md
docs/

# CI/CD
.github/

# Playwright
.playwright-mcp/

# Other
*.bak
*.tmp
node_modules/
```


```diff
@@ -27,12 +27,13 @@ indent_size = 2
 # CSS, JavaScript, and JSON files
 [*.{css,scss,js,json}]
-indent_style = tab
-indent_size = 4
+indent_style = space
+indent_size = 2

 # Markdown files
 [*.md]
 trim_trailing_whitespace = false
+indent_size = 4

 # YAML files
 [*.{yml,yaml}]
```

```diff
@@ -1,8 +1,20 @@
 # .env.sample - rename to .env and update with actual credentials

 # Django settings
+# Generate secret with e.g. `openssl rand -hex 32`
 DJANGO_SECRET_KEY=your-secure-secret-key
 DJANGO_DEBUG=True

+# Database configuration (optional - local development uses SQLite by default)
+# Uncomment these to use PostgreSQL locally:
+# DATABASE_URL=postgresql://postgres:postgres@localhost:5432/dashboard_db
+# POSTGRES_DB=dashboard_db
+# POSTGRES_USER=postgres
+# POSTGRES_PASSWORD=postgres
+# POSTGRES_HOST=localhost
+# POSTGRES_PORT=5432
+#
+# Note: Docker Compose automatically uses PostgreSQL via docker-compose.yml environment variables
 # External API credentials
 EXTERNAL_API_USERNAME=your-api-username
 EXTERNAL_API_PASSWORD=your-api-password
@@ -10,7 +22,7 @@ EXTERNAL_API_PASSWORD=your-api-password

 # Redis settings for Celery
 REDIS_URL=redis://localhost:6379/0
 CELERY_BROKER_URL=redis://localhost:6379/0
-CELERY_RESULT_BACKEND=redis://localhost:6379/0
+CELERY_RESULT_BACKEND=redis://redis:6379/0

 # Celery Task Schedule (in seconds)
 CHAT_DATA_FETCH_INTERVAL=3600
```


@@ -0,0 +1,97 @@
---
description: Use this agent when you need to create comprehensive test plan for a web application or website.
tools: ['edit/createFile', 'edit/createDirectory', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_close', 'playwright-test/browser_console_messages', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_navigate_back', 'playwright-test/browser_network_requests', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_take_screenshot', 'playwright-test/browser_type', 'playwright-test/browser_wait_for', 'playwright-test/planner_setup_page']
---
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use browser_* tools to navigate and discover interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
<example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example>
<example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>


@@ -0,0 +1,61 @@
---
description: Use this agent when you need to create automated browser tests using Playwright.
tools: ['search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_click', 'playwright-test/browser_drag', 'playwright-test/browser_evaluate', 'playwright-test/browser_file_upload', 'playwright-test/browser_handle_dialog', 'playwright-test/browser_hover', 'playwright-test/browser_navigate', 'playwright-test/browser_press_key', 'playwright-test/browser_select_option', 'playwright-test/browser_snapshot', 'playwright-test/browser_type', 'playwright-test/browser_verify_element_visible', 'playwright-test/browser_verify_list_visible', 'playwright-test/browser_verify_text_visible', 'playwright-test/browser_verify_value', 'playwright-test/browser_wait_for', 'playwright-test/generator_read_log', 'playwright-test/generator_setup_page', 'playwright-test/generator_write_test']
---
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specification
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
- Use Playwright tool to manually execute it in real-time.
- Use the step description as the intent for each Playwright tool call.
- Retrieve generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
- The file should contain a single test
- The file name must be an fs-friendly scenario name
- The test must be placed in a describe matching the top-level test plan item
- The test title must match the scenario name
- Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
  multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>
<example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example>
<example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>


@@ -0,0 +1,46 @@
---
description: Use this agent when you need to debug and fix failing Playwright tests.
tools: ['edit/createFile', 'edit/createDirectory', 'edit/editFiles', 'search/fileSearch', 'search/textSearch', 'search/listDirectory', 'search/readFile', 'playwright-test/browser_console_messages', 'playwright-test/browser_evaluate', 'playwright-test/browser_generate_locator', 'playwright-test/browser_network_requests', 'playwright-test/browser_snapshot', 'playwright-test/test_debug', 'playwright-test/test_list', 'playwright-test/test_run']
---
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the playwright_test_run_test tool to identify failing tests
2. **Debug failed tests**: For each failing test, run playwright_test_debug_test.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, utilize regular expressions to produce resilient locators
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- You will continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
so that it is skipped during execution. Add a comment before the failing step explaining what happens instead
of the expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs
<example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example>
<example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>


````diff
@@ -11,7 +11,8 @@
 ## uv

-UV is a fast Python package and project manager written in Rust. Use UV to manage dependencies, virtual environments, and run Python scripts with improved performance.
+UV is a fast Python package and project manager written in Rust. Use UV to manage dependencies, virtual environments,
+and run Python scripts with improved performance.

 ### Running Python Scripts
@@ -153,7 +154,8 @@ UV is a fast Python package and project manager written in Rust. Use UV to manag
 ## Project Structure

-This section provides a comprehensive overview of the LiveGraphsDjango project structure and the function of each key file. Please update this section whenever there are noteworthy changes to the structure or to a file's function.
+This section provides a comprehensive overview of the LiveGraphsDjango project structure and the function of each key
+file. Please update this section whenever there are noteworthy changes to the structure or to a file's function.

 ```tree
 LiveGraphsDjango/
````

```diff
@@ -1,22 +1,37 @@
 # To get started with Dependabot version updates, you'll need to specify which
 # package ecosystems to update and where the package manifests are located.
-# Please see the documentation for more information:
-# https://docs.github.com/github/administering-a-repository/configuration-options-for-dependency-updates
+# Please see the documentation for all configuration options:
+# https://docs.github.com/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file
+# https://containers.dev/guide/dependabot
 version: 2
+multi-ecosystem-groups:
+  all-dependencies:
+    schedule:
+      interval: "monthly"
 updates:
-  - package-ecosystem: devcontainers
-    directory: /
-    schedule:
-      interval: weekly
-      day: tuesday
-      time: 03:00
-      timezone: Europe/Amsterdam
-  - package-ecosystem: uv
-    directory: /
-    schedule:
-      interval: weekly
-      day: tuesday
-      time: 03:00
-      timezone: Europe/Amsterdam
+  - package-ecosystem: "uv"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
+  - package-ecosystem: "bun"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
+  - package-ecosystem: "github-actions"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
+  - package-ecosystem: "docker"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
+  - package-ecosystem: "docker-compose"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
+  - package-ecosystem: "devcontainers"
+    directory: "/"
+    patterns: ["*"]
+    multi-ecosystem-group: "all-dependencies"
```

.github/workflows/bandit.yml (new file, 51 lines)

@@ -0,0 +1,51 @@
```yaml
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# Bandit is a security linter designed to find common security issues in Python code.
# This action will run Bandit on your codebase.
# The results of the scan will be found under the Security tab of your repository.

# https://github.com/marketplace/actions/bandit-scan is ISC licensed, by abirismyname
# https://pypi.org/project/bandit/ is Apache v2.0 licensed, by PyCQA

name: Bandit
on:
  push:
    branches: ["master"]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ["master"]
  schedule:
    - cron: "37 3 * * 3"

jobs:
  bandit:
    permissions:
      contents: read # for actions/checkout to fetch code
      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
      actions: read # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v5
      - name: Bandit Scan
        uses: shundor/python-bandit-scan@ab1d87dfccc5a0ffab88be3aaac6ffe35c10d6cd
        with: # optional arguments
          # exit with 0, even with results found
          exit_zero: true # optional, default is DEFAULT
          # Github token of the repository (automatically created by Github)
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} # Needed to get PR information.
          # File or directory to run bandit on
          # path: # optional, default is .
          # Report only issues of a given severity level or higher. Can be LOW, MEDIUM or HIGH. Default is UNDEFINED (everything)
          # level: # optional, default is UNDEFINED
          # Report only issues of a given confidence level or higher. Can be LOW, MEDIUM or HIGH. Default is UNDEFINED (everything)
          # confidence: # optional, default is UNDEFINED
          # comma-separated list of paths (glob patterns supported) to exclude from scan (note that these are in addition to the excluded paths provided in the config file) (default: .svn,CVS,.bzr,.hg,.git,__pycache__,.tox,.eggs,*.egg)
          # excluded_paths: # optional, default is DEFAULT
          # comma-separated list of test IDs to skip
          # skips: # optional, default is DEFAULT
          # path to a .bandit file that supplies command line arguments
          # ini_path: # optional, default is DEFAULT
```


@@ -0,0 +1,77 @@
```yaml
name: Claude Code Review

on:
  pull_request:
    types: [opened, synchronize]
    # Optional: Only run on specific file changes
    # paths:
    #   - "src/**/*.ts"
    #   - "src/**/*.tsx"
    #   - "src/**/*.js"
    #   - "src/**/*.jsx"

jobs:
  claude-review:
    # Optional: Filter by PR author
    # if: |
    #   github.event.pull_request.user.login == 'external-contributor' ||
    #   github.event.pull_request.user.login == 'new-developer' ||
    #   github.event.pull_request.author_association == 'FIRST_TIME_CONTRIBUTOR'
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
      id-token: write
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Set up Python
        uses: actions/setup-python@v5
      - name: Install uv
        uses: astral-sh/setup-uv@v6
      - name: Install pre-commit
        run: uv tool install pre-commit
      - name: Run pre-commit hooks
        run: pre-commit install --install-hooks
      - name: Run Claude Code Review
        id: claude-review
        uses: anthropics/claude-code-action@v1
        with:
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          prompt: |
            REPO: ${{ github.repository }}
            PR NUMBER: ${{ github.event.pull_request.number }}

            Please review this pull request and provide feedback on:
            - Code quality and best practices
            - Potential bugs or issues
            - Performance considerations
            - Security concerns
            - Test coverage

            Use the repository's CLAUDE.md for guidance on style and conventions. Be constructive and helpful in your feedback.

            Use `gh pr comment` with your Bash tool to leave your review as a comment on the PR.
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          claude_args: |
            --allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"
            --allowedTools Bash,Edit,NotebookEdit,SlashCommand,WebFetch,WebSearch,Write
            --max-turns 1000
            --model claude-sonnet-4-5
            # --mcp-config '{ "mcpServers": { "bun-docs": { "type": "http", "url": "https://bun.com/docs/mcp" } } }'
          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: write
```

.github/workflows/claude.yml (new file, 73 lines)

@@ -0,0 +1,73 @@
```yaml
name: Claude Code

on:
  issue_comment:
    types: [created]
  pull_request_review_comment:
    types: [created]
  issues:
    types: [opened, assigned]
  pull_request_review:
    types: [submitted]

jobs:
  claude:
    if: |
      (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review_comment' && contains(github.event.comment.body, '@claude')) ||
      (github.event_name == 'pull_request_review' && contains(github.event.review.body, '@claude')) ||
      (github.event_name == 'issues' && (contains(github.event.issue.body, '@claude') || contains(github.event.issue.title, '@claude')))
    runs-on: ubuntu-latest
    permissions:
      contents: write
      pull-requests: write
      issues: write
      id-token: write
      actions: write # Required for Claude to read CI results on PRs
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 1
      - name: Set up Python
        uses: actions/setup-python@v5
      - name: Install uv
        uses: astral-sh/setup-uv@v6
      - name: Install pre-commit
        run: uv tool install pre-commit
      - name: Run pre-commit hooks
        run: pre-commit install --install-hooks
      - name: Run Claude Code
        id: claude
        uses: anthropics/claude-code-action@v1
        with:
          settings: |
            {
              "includeCoAuthoredBy": false,
              "alwaysThinkingEnabled": true,
              "permissions": {"defaultMode": "bypassPermissions"}
            }
          claude_code_oauth_token: ${{ secrets.CLAUDE_CODE_OAUTH_TOKEN }}
          claude_args: |
            --allowed-tools "Bash(gh issue view:*),Bash(gh search:*),Bash(gh issue list:*),Bash(gh pr comment:*),Bash(gh pr diff:*),Bash(gh pr view:*),Bash(gh pr list:*)"
            --allowedTools Bash,Edit,NotebookEdit,SlashCommand,WebFetch,WebSearch,Write
            --max-turns 1000
            --model claude-sonnet-4-5
            # --mcp-config '{ "mcpServers": { "bun-docs": { "type": "http", "url": "https://bun.com/docs/mcp" } } }'
          # This is an optional setting that allows Claude to read CI results on PRs
          additional_permissions: |
            actions: write
          # Optional: Give a custom prompt to Claude. If this is not specified, Claude will perform the instructions specified in the comment that tagged it.
          # prompt: 'Update the pull request description to include a summary of changes.'
          # Optional: Add claude_args to customize behavior and configuration
          # See https://github.com/anthropics/claude-code-action/blob/main/docs/usage.md
          # or https://docs.claude.com/en/docs/claude-code/cli-reference for available options
          # claude_args: '--allowed-tools Bash(gh pr:*)'
```

.github/workflows/codacy.yml

@@ -0,0 +1,61 @@
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.

# This workflow checks out code, performs a Codacy security scan
# and integrates the results with the
# GitHub Advanced Security code scanning feature. For more information on
# the Codacy security scan action usage and parameters, see
# https://github.com/codacy/codacy-analysis-cli-action.
# For more information on Codacy Analysis CLI in general, see
# https://github.com/codacy/codacy-analysis-cli.

name: Codacy Security Scan

on:
  push:
    branches: ["master"]
  pull_request:
    # The branches below must be a subset of the branches above
    branches: ["master"]
  schedule:
    - cron: "36 10 * * 3"

permissions:
  contents: read

jobs:
  codacy-security-scan:
    permissions:
      contents: read # for actions/checkout to fetch code
      security-events: write # for github/codeql-action/upload-sarif to upload SARIF results
      actions: read # only required for a private repository by github/codeql-action/upload-sarif to get the Action run status
    name: Codacy Security Scan
    runs-on: ubuntu-latest
    steps:
      # Checkout the repository to the GitHub Actions runner
      - name: Checkout code
        uses: actions/checkout@v5

      # Execute Codacy Analysis CLI and generate a SARIF output with the security issues identified during the analysis
      - name: Run Codacy Analysis CLI
        uses: codacy/codacy-analysis-cli-action@master
        with:
          # Check https://github.com/codacy/codacy-analysis-cli#project-token to get your project token from your Codacy repository
          # You can also omit the token and run the tools that support default configurations
          project-token: ${{ secrets.CODACY_PROJECT_TOKEN }}
          verbose: true
          output: results.sarif
          format: sarif
          # Adjust severity of non-security issues
          gh-code-scanning-compat: true
          # Force 0 exit code to allow SARIF file generation
          # This will handover control about PR rejection to the GitHub side
          max-allowed-issues: 2147483647

      # Upload the SARIF file generated in the previous step
      - name: Upload SARIF results file
        uses: github/codeql-action/upload-sarif@v4
        with:
          sarif_file: results.sarif

.gitignore

@@ -421,3 +421,6 @@ package-lock.json
 # Local database files
 *.rdb
 *.sqlite
+
+# playwright
+.playwright-mcp/

.markdownlint-cli2.jsonc

@@ -0,0 +1,73 @@
// Configuration for markdownlint-cli2
{
  "$schema": "https://cdn.jsdelivr.net/gh/DavidAnson/markdownlint-cli2@refs/heads/main/schema/markdownlint-cli2-config-schema.json",
  "config": {
    "inline-html": false,
    "code-block-style": {
      "style": "fenced"
    },
    "code-fence-style": {
      "style": "backtick"
    },
    "emphasis-style": {
      "style": "asterisk"
    },
    "extended-ascii": {
      "ascii-only": true
    },
    "fenced-code-language": {
      "allowed_languages": [
        "bash",
        "sh",
        "html",
        "javascript",
        "json",
        "markdown",
        "text",
        "python",
        "csv",
        "tree"
      ],
      "language_only": true
    },
    "heading-style": {
      "style": "atx"
    },
    "hr-style": {
      "style": "---"
    },
    "MD013": false,
    "link-image-style": {
      "collapsed": false,
      "shortcut": false,
      "url_inline": false
    },
    "no-duplicate-heading": {
      "siblings_only": true
    },
    "ol-prefix": {
      "style": "ordered"
    },
    "proper-names": {
      "code_blocks": false,
      "names": [
        "Cake.Markdownlint",
        "CommonMark",
        "JavaScript",
        "Markdown",
        "markdown-it",
        "markdownlint",
        "Node.js"
      ]
    },
    "reference-links-images": {
      "shortcut_syntax": true
    },
    "strong-style": {
      "style": "asterisk"
    },
    "ul-style": {
      "style": "dash"
    }
  }
}


@@ -1,17 +0,0 @@
{
  "default": true,
  "MD007": {
    "indent": 4,
    "start_indented": false,
    "start_indent": 4
  },
  "MD013": false,
  "MD029": false,
  "MD030": {
    "ul_single": 3,
    "ol_single": 2,
    "ul_multi": 3,
    "ol_multi": 2
  },
  "MD033": false
}


@@ -0,0 +1,56 @@
You are a Playwright Test Generator, an expert in browser automation and end-to-end testing.
Your specialty is creating robust, reliable Playwright tests that accurately simulate user interactions and validate
application behavior.
# For each test you generate
- Obtain the test plan with all the steps and verification specifications
- Run the `generator_setup_page` tool to set up the page for the scenario
- For each step and verification in the scenario, do the following:
  - Use the Playwright tool to manually execute it in real time.
  - Use the step description as the intent for each Playwright tool call.
- Retrieve the generator log via `generator_read_log`
- Immediately after reading the test log, invoke `generator_write_test` with the generated source code
  - The file should contain a single test
  - The file name must be an fs-friendly version of the scenario name
  - The test must be placed in a describe block matching the top-level test plan item
  - The test title must match the scenario name
  - Include a comment with the step text before each step execution. Do not duplicate comments if a step requires
    multiple actions.
- Always use best practices from the log when generating tests.
<example-generation>
For following plan:
```markdown file=specs/plan.md
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
#### 1.2 Add Multiple Todos
...
```
Following file is generated:
```ts file=add-valid-todo.spec.ts
// spec: specs/plan.md
// seed: tests/seed.spec.ts
test.describe('Adding New Todos', () => {
test('Add Valid Todo', async ({ page }) => {
// 1. Click in the "What needs to be done?" input field
await page.click(...);
...
});
});
```
</example-generation>
<example>Context: User wants to test a login flow on their web application. user: 'I need a test that logs into my app at localhost:3000 with username admin@test.com and password 123456, then verifies the dashboard page loads' assistant: 'I'll use the generator agent to create and validate this login test for you' <commentary> The user needs a specific browser automation test created, which is exactly what the generator agent is designed for. </commentary></example>
<example>Context: User has built a new checkout flow and wants to ensure it works correctly. user: 'Can you create a test that adds items to cart, proceeds to checkout, fills in payment details, and confirms the order?' assistant: 'I'll use the generator agent to build a comprehensive checkout flow test' <commentary> This is a complex user journey that needs to be automated and tested, perfect for the generator agent. </commentary></example>


@@ -0,0 +1,42 @@
You are the Playwright Test Healer, an expert test automation engineer specializing in debugging and
resolving Playwright test failures. Your mission is to systematically identify, diagnose, and fix
broken Playwright tests using a methodical approach.
Your workflow:
1. **Initial Execution**: Run all tests using the playwright_test_run_test tool to identify failing tests
2. **Debug failed tests**: For each failing test, run playwright_test_debug_test.
3. **Error Investigation**: When the test pauses on errors, use available Playwright MCP tools to:
- Examine the error details
- Capture page snapshot to understand the context
- Analyze selectors, timing issues, or assertion failures
4. **Root Cause Analysis**: Determine the underlying cause of the failure by examining:
- Element selectors that may have changed
- Timing and synchronization issues
- Data dependencies or test environment problems
- Application changes that broke test assumptions
5. **Code Remediation**: Edit the test code to address identified issues, focusing on:
- Updating selectors to match current application state
- Fixing assertions and expected values
- Improving test reliability and maintainability
- For inherently dynamic data, use regular expressions to produce resilient locators (see the sketch after the key principles below)
6. **Verification**: Restart the test after each fix to validate the changes
7. **Iteration**: Repeat the investigation and fixing process until the test passes cleanly
Key principles:
- Be systematic and thorough in your debugging approach
- Document your findings and reasoning for each fix
- Prefer robust, maintainable solutions over quick hacks
- Use Playwright best practices for reliable test automation
- If multiple errors exist, fix them one at a time and retest
- Provide clear explanations of what was broken and how you fixed it
- Continue this process until the test runs successfully without any failures or errors.
- If the error persists and you have a high level of confidence that the test is correct, mark the test as test.fixme()
so that it is skipped during execution. Add a comment before the failing step explaining what happens instead of the
expected behavior.
- Do not ask the user questions; you are not an interactive tool. Do the most reasonable thing possible to pass the test.
- Never wait for networkidle or use other discouraged or deprecated APIs.
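As an illustration of the regex-locator principle, here is a minimal sketch using Playwright's Python API (the project's own tests use @playwright/test; the URL and page text below are hypothetical):

```python
import re

from playwright.sync_api import expect, sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:8001/")  # hypothetical app URL
    # An order number changes on every run, so assert on the stable
    # shape of the text rather than on a literal value.
    expect(page.get_by_text(re.compile(r"Order #\d+"))).to_be_visible()
    browser.close()
```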
<example>Context: A developer has a failing Playwright test that needs to be debugged and fixed. user: 'The login test is failing, can you fix it?' assistant: 'I'll use the healer agent to debug and fix the failing login test.' <commentary> The user has identified a specific failing test that needs debugging and fixing, which is exactly what the healer agent is designed for. </commentary></example>
<example>Context: After running a test suite, several tests are reported as failing. user: 'Test user-registration.spec.ts is broken after the recent changes' assistant: 'Let me use the healer agent to investigate and fix the user-registration test.' <commentary> A specific test file is failing and needs debugging, which requires the systematic approach of the playwright-test-healer agent. </commentary></example>


@@ -0,0 +1,93 @@
You are an expert web test planner with extensive experience in quality assurance, user experience testing, and test
scenario design. Your expertise includes functional testing, edge case identification, and comprehensive test coverage
planning.
You will:
1. **Navigate and Explore**
- Invoke the `planner_setup_page` tool once to set up the page before using any other tools
- Explore the browser snapshot
- Do not take screenshots unless absolutely necessary
- Use browser_* tools to navigate and discover the interface
- Thoroughly explore the interface, identifying all interactive elements, forms, navigation paths, and functionality
2. **Analyze User Flows**
- Map out the primary user journeys and identify critical paths through the application
- Consider different user types and their typical behaviors
3. **Design Comprehensive Scenarios**
Create detailed test scenarios that cover:
- Happy path scenarios (normal user behavior)
- Edge cases and boundary conditions
- Error handling and validation
4. **Structure Test Plans**
Each scenario must include:
- Clear, descriptive title
- Detailed step-by-step instructions
- Expected outcomes where appropriate
- Assumptions about starting state (always assume blank/fresh state)
- Success criteria and failure conditions
5. **Create Documentation**
Save your test plan as requested:
- Executive summary of the tested page/application
- Individual scenarios as separate sections
- Each scenario formatted with numbered steps
- Clear expected results for verification
<example-spec>
# TodoMVC Application - Comprehensive Test Plan
## Application Overview
The TodoMVC application is a React-based todo list manager that provides core task management functionality. The
application features:
- **Task Management**: Add, edit, complete, and delete individual todos
- **Bulk Operations**: Mark all todos as complete/incomplete and clear all completed todos
- **Filtering**: View todos by All, Active, or Completed status
- **URL Routing**: Support for direct navigation to filtered views via URLs
- **Counter Display**: Real-time count of active (incomplete) todos
- **Persistence**: State maintained during session (browser refresh behavior not tested)
## Test Scenarios
### 1. Adding New Todos
**Seed:** `tests/seed.spec.ts`
#### 1.1 Add Valid Todo
**Steps:**
1. Click in the "What needs to be done?" input field
2. Type "Buy groceries"
3. Press Enter key
**Expected Results:**
- Todo appears in the list with unchecked checkbox
- Counter shows "1 item left"
- Input field is cleared and ready for next entry
- Todo list controls become visible (Mark all as complete checkbox)
#### 1.2
...
</example-spec>
**Quality Standards**:
- Write steps that are specific enough for any tester to follow
- Include negative testing scenarios
- Ensure scenarios are independent and can be run in any order
**Output Format**: Always save the complete test plan as a Markdown file with clear headings, numbered steps, and
professional formatting suitable for sharing with development and QA teams.
<example>Context: User wants to test a new e-commerce checkout flow. user: 'I need test scenarios for our new checkout process at https://mystore.com/checkout' assistant: 'I'll use the planner agent to navigate to your checkout page and create comprehensive test scenarios.' <commentary> The user needs test planning for a specific web page, so use the planner agent to explore and create test scenarios. </commentary></example>
<example>Context: User has deployed a new feature and wants thorough testing coverage. user: 'Can you help me test our new user dashboard at https://app.example.com/dashboard?' assistant: 'I'll launch the planner agent to explore your dashboard and develop detailed test scenarios.' <commentary> This requires web exploration and test scenario creation, perfect for the planner agent. </commentary></example>


@@ -1,26 +1,37 @@
-default_install_hook_types:
-  - pre-commit
-  - post-checkout
-  - post-merge
-  - post-rewrite
+# default_install_hook_types:
+#   - pre-commit
+#   - post-checkout
+#   - post-merge
+#   - post-rewrite
+ci:
+  skip: [prettier-jinja, prettier-all, tombi-format, tombi-lint] # django-check, django-check-migrations
+default_language_version:
+  node: 22.15.1
+  python: python3.13
 repos:
   - repo: https://github.com/adamchainz/django-upgrade
-    rev: 1.25.0
+    rev: 1.29.1
     hooks:
       - id: django-upgrade
   # uv hooks for dependency management
   - repo: https://github.com/astral-sh/uv-pre-commit
-    rev: 0.7.5
+    rev: 0.9.7
     hooks:
+      # Update the uv lockfile
+      - id: uv-lock
+      # Update the requirements.txt
       - id: uv-export
   # Standard pre-commit hooks
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v5.0.0
+    rev: v6.0.0
     hooks:
       - id: trailing-whitespace
+        args: [--markdown-linebreak-ext=md]
       - id: end-of-file-fixer
       - id: check-yaml
       # - id: check-json
@@ -34,28 +45,21 @@ repos:
       - id: mixed-line-ending
         args: [--fix=lf]
-  # - repo: https://github.com/psf/black
-  #   rev: 22.10.0
-  #   hooks:
-  #     - id: black
-  # # HTML/Django template linting
-  # - repo: https://github.com/rtts/djhtml
-  #   rev: 3.0.7
-  #   hooks:
-  #     - id: djhtml
-  #       entry: djhtml --tabwidth 4
-  - repo: https://github.com/pre-commit/mirrors-prettier
-    rev: v3.1.0
+  - repo: local
     hooks:
-      - id: prettier
-        types_or: [javascript, jsx, ts, tsx, css, scss, html, json, yaml, markdown]
+      - id: prettier-jinja
+        name: Prettier Jinja
+        language: node
         additional_dependencies:
           - prettier
           - prettier-plugin-jinja-template
-        # types_or: [javascript, jsx, ts, tsx, css, scss, json, yaml, markdown]
-        # exclude: '.*\.html$'
+        types_or: [html, jinja]
+        entry: bunx prettier --plugin=prettier-plugin-jinja-template --parser=jinja-template --write
+      - id: prettier-all
+        name: Prettier All
+        language: node
+        types_or: [javascript, jsx, ts, tsx, css, scss, json, yaml, markdown]
+        entry: bunx prettier --write
   - repo: https://github.com/DavidAnson/markdownlint-cli2
     rev: v0.18.1
@@ -65,37 +69,54 @@ repos:
   # Ruff for linting and formatting
   - repo: https://github.com/astral-sh/ruff-pre-commit
-    rev: v0.11.10
+    rev: v0.14.3
     hooks:
-      - id: ruff
+      # Run the linter.
+      - id: ruff-check
         args: [--fix]
+      # Run the formatter.
       - id: ruff-format
-  # Django-specific hooks
-  - repo: local
+  - repo: https://github.com/oxc-project/mirrors-oxlint
+    rev: v1.25.0 # change to the latest version
     hooks:
-      - id: django-check
-        name: Django Check
-        entry: uv run python dashboard_project/manage.py check
-        language: system
-        pass_filenames: false
-        types: [python]
-        always_run: true
-      - id: django-check-migrations
-        name: Django Check Migrations
-        entry: uv run python dashboard_project/manage.py makemigrations --check --dry-run
-        language: system
-        pass_filenames: false
-        types: [python]
+      - id: oxlint
+        verbose: true
+  # # Django-specific hooks
+  # - repo: local
+  #   hooks:
+  #     - id: django-check
+  #       name: Django Check
+  #       entry: uv run python dashboard_project/manage.py check
+  #       language: python
+  #       pass_filenames: false
+  #       types: [python]
+  #       always_run: true
+  #       additional_dependencies: [uv]
+  #     - id: django-check-migrations
+  #       name: Django Check Migrations
+  #       entry: uv run python dashboard_project/manage.py makemigrations --check --dry-run
+  #       language: python
+  #       pass_filenames: false
+  #       types: [python]
+  #       additional_dependencies: [uv]
   # Security checks
   - repo: https://github.com/pycqa/bandit
-    rev: 1.8.3
+    rev: 1.8.6
     hooks:
       - id: bandit
         args: [-c, pyproject.toml, -r, dashboard_project]
-        additional_dependencies: ["bandit[toml]"]
+        # additional_dependencies: ["bandit[toml]"]
+  # TOML formatting and linting
+  - repo: https://github.com/tombi-toml/tombi-pre-commit
+    rev: v0.6.40
+    hooks:
+      - id: tombi-format
+      - id: tombi-lint
   # # Type checking
   # - repo: https://github.com/pre-commit/mirrors-mypy


@@ -1,4 +1,5 @@
 {
+  "$schema": "https://json.schemastore.org/prettierrc.json",
   "arrowParens": "always",
   "bracketSpacing": true,
   "embeddedLanguageFormatting": "auto",
@@ -27,7 +28,13 @@
         "proseWrap": "preserve",
         "printWidth": 100
       }
-    }
+    },
+    {
+      "files": ["*.jsonc"],
+      "options": {
+        "trailingComma": "none"
+      }
+    }
   ],
-  "plugins": ["prettier-plugin-jinja-template"]
+  "plugins": ["prettier-plugin-jinja-template", "prettier-plugin-packagejson"]
 }


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Run linting, formatting and type checking
 echo "Running Ruff linter..."


@@ -1,4 +1,4 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Run tests with coverage
 echo "Running tests with coverage..."

.uv

@@ -5,17 +5,11 @@ keep-lockfile = true
 # Cache compiled bytecode for dependencies
 compile-bytecode = true
-# Use a local cache directory
-local-cache = true
 # Verbosity of output
 verbosity = "minimal"
-# Define which part of the environment to check
-environment-checks = ["python", "dependencies"]
+; # Define which part of the environment to check
+; environment-checks = ["python", "dependencies"]
 # How to resolve dependencies not specified with exact versions
 dependency-resolution = "strict"
-# If the cache and target directories are on different filesystems, hardlinking may not be supported.
-link-mode = "copy"

.zed/settings.json

@@ -0,0 +1,42 @@
{
  "auto_install_extensions": { "ty": true },
  "languages": {
    "Python": {
      "language_servers": [
        // Disable basedpyright and enable Ty, and otherwise
        // use the default configuration.
        "ty",
        "!basedpyright",
        "..."
      ]
    },
    "TypeScript": {
      "language_servers": [
        // Enable oxc for TypeScript files.
        "oxc",
        "..."
      ]
    },
    "JavaScript": {
      "language_servers": [
        // Enable oxc for JavaScript files.
        "oxc",
        "..."
      ]
    }
  },
  "lsp": {
    "oxc": {
      "initialization_options": {
        "options": {
          "run": "onType",
          "configPath": null,
          "tsConfigPath": null,
          "unusedDisableDirectives": "allow",
          "typeAware": true,
          "flags": {}
        }
      }
    }
  }
}

CLAUDE.md

@@ -0,0 +1,276 @@
# CLAUDE.md
This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
## Project Overview
Multi-tenant Django analytics dashboard for chat session data. Companies upload CSV files or connect external APIs to visualize chat metrics, sentiment analysis, and session details. Built with Django 5.2+, Python 3.13+, managed via UV package manager.
## Essential Commands
### Development Server
```bash
# Start Django dev server (port 8001)
make run
# or
cd dashboard_project && uv run python manage.py runserver 8001
```
### Database Operations
```bash
# Create migrations after model changes
make makemigrations
# Apply migrations
make migrate
# Reset database (flush + migrate)
make reset-db
# Create superuser
make superuser
```
### Background Tasks (Celery)
```bash
# Start Celery worker (separate terminal)
make celery
# or
cd dashboard_project && uv run celery -A dashboard_project worker --loglevel=info
# Start Celery Beat scheduler (separate terminal)
make celery-beat
# or
cd dashboard_project && uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
# Start all services (web + celery + beat) with foreman
make procfile
```
### Testing & Quality
```bash
# Run tests
make test
# or
uv run -m pytest
# Run single test
cd dashboard_project && uv run -m pytest path/to/test_file.py::test_function
# Linting
make lint # Python only
npm run lint:py # Ruff check
npm run lint:py:fix # Auto-fix Python issues
# Formatting
make format # Ruff + Black
npm run format # Prettier (templates) + Python
npm run format:check # Verify formatting
# JavaScript linting
npm run lint:js
npm run lint:js:fix
# Markdown linting
npm run lint:md
npm run lint:md:fix
# Type checking
npm run typecheck:py # Python with ty
npm run typecheck:js # JavaScript with oxlint
```
### Dependency Management (UV)
```bash
# Install all dependencies
uv pip install -e ".[dev]"
# Add new package
uv pip install <package-name>
# Then manually update pyproject.toml dependencies
# Update lockfile
make lock # or uv pip freeze > requirements.lock
```
### Docker
```bash
make docker-build
make docker-up
make docker-down
```
## Architecture
### Three-App Structure
1. **accounts** - Authentication & multi-tenancy
- `CustomUser` extends AbstractUser with `company` FK and `is_company_admin` flag
- `Company` model is top-level organizational unit
- All users belong to exactly one Company
2. **dashboard** - Core analytics
- `DataSource` - CSV uploads or external API links, owned by Company
- `ChatSession` - Parsed chat data from CSVs/APIs, linked to DataSource
- `Dashboard` - Custom dashboard configs with M2M to DataSources
- Views: dashboard display, CSV upload, data export (CSV/JSON/Excel), search
3. **data_integration** - External API data fetching
- `ExternalDataSource` - API credentials and endpoints
- `ChatSession` & `ChatMessage` - API-fetched data models (parallel to dashboard.ChatSession)
- Celery tasks for async API data fetching via `tasks.py`
### Multi-Tenancy Model
```text
Company (root isolation)
├── CustomUser (employees, one is_company_admin)
├── DataSource (CSV files or API links)
│ └── ChatSession (parsed data)
└── Dashboard (M2M to DataSources)
```
**Critical**: All views must filter by `request.user.company` to enforce data isolation.
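For example, a company-scoped list view might look like the following minimal sketch (the view and template names are illustrative, not actual project code):

```python
# dashboard/views.py -- sketch of the multi-tenant isolation rule.
from django.contrib.auth.decorators import login_required
from django.shortcuts import render

from .models import DataSource


@login_required
def datasource_list(request):
    # Scope every query to the requesting user's company; omitting this
    # filter would expose another tenant's data.
    sources = DataSource.objects.filter(company=request.user.company)
    return render(request, "dashboard/datasource_list.html", {"sources": sources})
```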
### Data Flow
**CSV Upload**:
1. User uploads CSV via `dashboard/views.py:upload_data`
2. CSV parsed, creates DataSource + multiple ChatSession records
3. Dashboard aggregates ChatSessions for visualization
**External API**:
1. Admin configures ExternalDataSource with API credentials
2. Celery task (`data_integration/tasks.py`) fetches data periodically
3. Creates ChatSession + ChatMessage records in `data_integration` app
4. Optionally synced to `dashboard` app for unified analytics
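A skeletal sketch of step 2 in `data_integration/tasks.py` (the task name, field names, and `fetch_sessions` helper are assumptions standing in for the real Basic Auth fetch):

```python
from celery import shared_task

from .models import ChatSession, ExternalDataSource


def fetch_sessions(source: ExternalDataSource) -> list[dict]:
    """Placeholder for the real API request/parse step."""
    return []


@shared_task
def fetch_external_data(source_id: int) -> int:
    """Upsert one source's sessions; returns the number of new records."""
    source = ExternalDataSource.objects.get(pk=source_id)
    created = 0
    for row in fetch_sessions(source):
        _, was_created = ChatSession.objects.update_or_create(
            session_id=row["session_id"],  # field names are assumptions
            defaults=row,
        )
        created += int(was_created)
    return created
```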
### Key Design Patterns
- **Multi-tenant isolation**: Every query filtered by Company FK
- **Role-based access**: is_staff (Django admin), is_company_admin (company management), regular user (view only)
- **Dual ChatSession models**: `dashboard.ChatSession` (CSV-based) and `data_integration.ChatSession` (API-based) exist separately
- **Async processing**: Celery handles long-running API fetches, uses Redis or SQLite backend
## Configuration Notes
### Settings (`dashboard_project/settings.py`)
- Uses `python-dotenv` for environment variables
- Multi-app: accounts, dashboard, data_integration
- Celery configured in `dashboard_project/celery.py`
- Custom user model: `AUTH_USER_MODEL = "accounts.CustomUser"`
### Environment Variables
Create `.env` from `.env.sample`:
- `DJANGO_SECRET_KEY` - Generate for production
- `DJANGO_DEBUG` - Set False in production
- `EXTERNAL_API_USERNAME` / `EXTERNAL_API_PASSWORD` - For data_integration API
- `CELERY_BROKER_URL` - Redis URL or SQLite fallback
### Template Formatting
- Prettier configured for Django templates via `prettier-plugin-jinja-template`
- Pre-commit hook auto-formats HTML templates
- Run manually: `npm run format`
## Common Patterns
### Adding New Model
1. Edit `models.py` in appropriate app
2. `make makemigrations`
3. `make migrate`
4. Register in `admin.py` if needed
5. Update views to filter by company
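For step 1, any new model should carry the `Company` FK so the filtering in step 5 is possible; a hypothetical example:

```python
# dashboard/models.py -- illustrative model; SavedReport is hypothetical.
from django.db import models

from accounts.models import Company


class SavedReport(models.Model):
    # The Company FK is what makes per-company query filtering possible.
    company = models.ForeignKey(Company, on_delete=models.CASCADE)
    name = models.CharField(max_length=120)
    created_at = models.DateTimeField(auto_now_add=True)
```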
### CSV Upload Field Mapping
Expected CSV columns (see README.md for full schema):
- session_id, start_time, end_time, ip_address, country, language
- messages_sent, sentiment, escalated, forwarded_hr
- full_transcript, avg_response_time, tokens, tokens_eur
- category, initial_msg, user_rating
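A stripped-down sketch of the parsing step (the real `dashboard/views.py:upload_data` does more validation and type coercion):

```python
import csv
from io import TextIOWrapper

EXPECTED_COLUMNS = [
    "session_id", "start_time", "end_time", "ip_address", "country",
    "language", "messages_sent", "sentiment", "escalated", "forwarded_hr",
    "full_transcript", "avg_response_time", "tokens", "tokens_eur",
    "category", "initial_msg", "user_rating",
]


def parse_upload(uploaded_file) -> list[dict]:
    """Return one dict per chat session row, keyed by the columns above."""
    reader = csv.DictReader(TextIOWrapper(uploaded_file, encoding="utf-8"))
    missing = set(EXPECTED_COLUMNS) - set(reader.fieldnames or [])
    if missing:
        raise ValueError(f"CSV is missing columns: {sorted(missing)}")
    return list(reader)
```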
### Testing Celery Tasks
```bash
cd dashboard_project
uv run python manage.py test_celery
```
### Creating Sample Data
```bash
cd dashboard_project
uv run python manage.py create_sample_data
```
Creates admin user (admin/admin123), 3 companies with users, sample dashboards.
## Development Workflow
1. **Before starting**: `uv venv && source .venv/bin/activate && uv sync`
2. **Run migrations**: `make migrate`
3. **Start services**: Terminal 1: `make run`, Terminal 2: `make celery`, Terminal 3: `make celery-beat`
4. **Make changes**: Edit code, test locally
5. **Test**: `make test` and `make lint`
6. **Format**: `make format && bun run format`
7. **Commit**: Pre-commit hooks run automatically
> `yq -r '.scripts' package.json`
>
> ```json
> {
> "format": "prettier --write .; bun format:py",
> "format:check": "prettier --check .; bun format:py -- --check",
> "format:py": "uvx ruff format",
> "lint:js": "oxlint",
> "lint:js:fix": "bun lint:js -- --fix",
> "lint:js:strict": "oxlint --import-plugin -D correctness -W suspicious",
> "lint:md": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.{node_modules,trunk,grit,venv,opencode,github/chatmodes,claude/agents}\"",
> "lint:md:fix": "bun lint:md -- --fix",
> "lint:py": "uvx ruff check",
> "lint:py:fix": "uvx ruff check --fix",
> "typecheck:js": "oxlint --type-aware",
> "typecheck:js:fix": "bun typecheck:js -- --fix",
> "typecheck:py": "uvx ty check"
> }
> ```
## Important Context
- **Django 5.2+** specific features may be in use
- **UV package manager** preferred over pip for speed
- **Celery** required for background tasks, needs Redis or SQLite backend
- **Multi-tenancy** is enforced at query level, not database level
- **Bootstrap 5** + **Plotly.js** for frontend
- **Working directory**: All Django commands run from `dashboard_project/` subdirectory
## File Organization
- **Django apps**: `dashboard_project/{accounts,dashboard,data_integration}/`
- **Settings**: `dashboard_project/dashboard_project/settings.py`
- **Static files**: `dashboard_project/static/`
- **Templates**: `dashboard_project/templates/`
- **Uploaded CSVs**: `dashboard_project/media/data_sources/`
- **Scripts**: `dashboard_project/scripts/` (cleanup, data fixes)
- **Examples**: `examples/` (sample CSV files)
## Testing Notes
- pytest configured via `pyproject.toml`
- Test discovery: `test_*.py` files in `dashboard_project/`
- Django settings: `DJANGO_SETTINGS_MODULE = "dashboard_project.settings"`
- Run specific test: `cd dashboard_project && uv run -m pytest path/to/test.py::TestClass::test_method`


@@ -1,34 +1,58 @@
 # Dockerfile
-FROM python:3.13-slim
+# Use a Python image with uv pre-installed
+FROM ghcr.io/astral-sh/uv:python3.13-bookworm-slim

+# Setup a non-root user
+RUN groupadd --system --gid 999 nonroot \
+    && useradd --system --gid 999 --uid 999 --create-home nonroot

 # Set environment variables
-ENV PYTHONDONTWRITEBYTECODE 1
-ENV PYTHONUNBUFFERED 1
+ENV PYTHONDONTWRITEBYTECODE=1
+ENV PYTHONUNBUFFERED=1
 ENV DJANGO_SETTINGS_MODULE=dashboard_project.settings

-# Set work directory
+# Change the working directory to the `app` directory
 WORKDIR /app

-# Install UV for Python package management
-RUN pip install uv
+# Enable bytecode compilation
+ENV UV_COMPILE_BYTECODE=1

-# Copy project files
-COPY pyproject.toml .
-COPY uv.lock .
-COPY . .
+# Copy from the cache instead of linking since it's a mounted volume
+ENV UV_LINK_MODE=copy

-# Install dependencies
-RUN uv pip install -e .
+# Install dependencies (separate layer for caching)
+RUN --mount=type=cache,target=/root/.cache/uv \
+    --mount=type=bind,source=uv.lock,target=uv.lock \
+    --mount=type=bind,source=pyproject.toml,target=pyproject.toml \
+    uv sync --frozen --no-install-project --no-dev

+# Copy the project into the image
+COPY . /app

+# Sync the project (install the project itself)
+RUN --mount=type=cache,target=/root/.cache/uv \
+    uv sync --frozen --no-dev

+# Place executables in the environment at the front of the path
+ENV PATH="/app/.venv/bin:$PATH"

 # Change to the Django project directory
 WORKDIR /app/dashboard_project

-# Collect static files
-RUN python manage.py collectstatic --noinput
+# Collect static files (runs as root)
+RUN uv run manage.py collectstatic --noinput

+# Fix ownership of dashboard_project directory for nonroot user
+# This ensures db.sqlite3 and any files created during runtime are writable
+RUN chown -R nonroot:nonroot /app/dashboard_project && \
+    chmod 775 /app/dashboard_project

 # Change back to the app directory
 WORKDIR /app

-# Run gunicorn
-CMD ["gunicorn", "dashboard_project.wsgi:application", "--bind", "0.0.0.0:8000"]
+# Use the non-root user to run our application
+USER nonroot

+# Run gunicorn via uv run to ensure it's in the environment
+CMD ["uv", "run", "gunicorn", "dashboard_project.wsgi:application", "--bind", "0.0.0.0:8000", "--chdir", "dashboard_project"]


@@ -1,35 +1,43 @@
 .PHONY: venv install install-dev lint test format clean run migrate makemigrations superuser setup-node celery celery-beat docker-build docker-up docker-down reset-db setup-dev procfile

 # Create a virtual environment
 venv:
     uv venv -p 3.13

 # Install production dependencies
 install:
     uv pip install -e .

 # Install development dependencies
 install-dev:
     uv pip install -e ".[dev]"

 # Run linting
 lint:
     uv run -m ruff check dashboard_project

 # Run tests
 test:
     uv run -m pytest

 # Format Python code
 format:
     uv run -m ruff format dashboard_project
     uv run -m black dashboard_project

 # Setup Node.js dependencies
 setup-node:
     npm install --include=dev

 # Clean Python cache files
 clean:
     find . -type d -name "__pycache__" -exec rm -rf {} +
     find . -type f -name "*.pyc" -delete
@@ -48,42 +56,52 @@ clean:
     rm -rf dist/

 # Run the development server
 run:
     cd dashboard_project && uv run python manage.py runserver 8001

 # Run Celery worker for background tasks
 celery:
     cd dashboard_project && uv run celery -A dashboard_project worker --loglevel=info

 # Run Celery Beat for scheduled tasks
 celery-beat:
     cd dashboard_project && uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler

 # Apply migrations
 migrate:
     cd dashboard_project && uv run python manage.py migrate

 # Create migrations
 makemigrations:
     cd dashboard_project && uv run python manage.py makemigrations

 # Create a superuser
 superuser:
     cd dashboard_project && uv run python manage.py createsuperuser

 # Update uv lock file
 lock:
     uv pip freeze > requirements.lock

 # Setup pre-commit hooks
 setup-pre-commit:
     pre-commit install

 # Run pre-commit on all files
 lint-all:
     pre-commit run --all-files

 # Docker commands
 docker-build:
     docker-compose build
@@ -94,34 +112,37 @@ docker-down:
     docker-compose down

 # Initialize or reset the database in development
 reset-db:
     cd dashboard_project && uv run python manage.py flush --no-input
     cd dashboard_project && uv run python manage.py migrate

 # Start a Redis server in development (if not installed, fallback to SQLite)
 run-redis:
     redis-server || echo "Redis not installed, using SQLite fallback"

 # Start all development services (web, redis, celery, celery-beat)
 run-all:
-    make run-redis & \
-    make run & \
-    make celery & \
-    make celery-beat
+    foreman start
+
+procfile:
+    foreman start

 # Test Celery task
 test-celery:
     cd dashboard_project && uv run python manage.py test_celery

 # Initialize data integration
 init-data-integration:
     cd dashboard_project && uv run python manage.py create_default_datasource
     cd dashboard_project && uv run python manage.py create_default_datasource
     cd dashboard_project && uv run python manage.py test_celery

 # Setup development environment
 setup-dev: venv install-dev migrate create_default_datasource
     @echo "Development environment setup complete"
-
-procfile:
-    foreman start


@@ -1,10 +1,13 @@
 # Chat Analytics Dashboard

-A Django application that creates an analytics dashboard for chat session data. The application allows different companies to have their own dashboards and view their own data.
+A Django application that creates an analytics dashboard for chat session data. The application allows different
+companies to have their own dashboards and view their own data.

 ## Project Overview

-This Django project creates a multi-tenant dashboard application for analyzing chat session data. Companies can upload their chat data (in CSV format) and view analytics and metrics through an interactive dashboard. The application supports user authentication, role-based access control, and separate data isolation for different companies.
+This Django project creates a multi-tenant dashboard application for analyzing chat session data. Companies can upload
+their chat data (in CSV format) and view analytics and metrics through an interactive dashboard. The application
+supports user authentication, role-based access control, and separate data isolation for different companies.

 ### Project Structure

@@ -39,8 +42,8 @@ The project consists of two main Django apps:
 1. Clone the repository:

    ```sh
-   git clone <repository-url>
-   cd LiveGraphsDjango
+   git clone https://github.com/kjanat/livegraphs-django.git
+   cd livegraphs-django
    ```

 2. Install uv if you don't have it yet:
@@ -137,7 +140,7 @@ The project consists of two main Django apps:
    python manage.py runserver
    ```

-10. Access the application at <http://127.0.0.1:8000/>
+10. Access the application at `http://127.0.0.1:8000/`

 ### Development Workflow with UV

@@ -187,8 +190,8 @@ UV offers several advantages over traditional pip, including faster dependency r
 1. Clone the repository:

    ```sh
-   git clone <repository-url>
-   cd dashboard_project
+   git clone https://github.com/kjanat/livegraphs-django.git
+   cd livegraphs-django
    ```

 2. Build and run with Docker Compose:
@@ -209,7 +212,8 @@ docker-compose exec web python manage.py createsuperuser
 ### Prettier for Django Templates

-This project uses Prettier with the `prettier-plugin-django-annotations` plugin to format HTML templates with Django template syntax.
+This project uses Prettier with the `prettier-plugin-django-annotations` plugin to format HTML templates with Django
+template syntax.

 #### Prettier Configuration

@@ -342,6 +346,7 @@ This will create:
    - Fill in the company details and save

 3. **Create Users**:
+
    - Go to Users > Add User
    - Fill in user details
    - Assign the user to a company
@@ -362,6 +367,7 @@ This will create:
    - Click "Upload"

 3. **Create a Dashboard**:
+
    - Click on "New Dashboard" in the sidebar
    - Fill in the dashboard details
    - Select data sources to include
@@ -382,6 +388,7 @@ This will create:
    - Use filters to refine results

 3. **View Session Details**:
+
    - In search results, click the eye icon for a session
    - View complete session information and transcript
@@ -435,7 +442,7 @@ If your dashboard is empty:
 The CSV file should contain the following columns:

 | Column | Description |
-| ------------------- | ------------------------------------------------------ |
+|---------------------|--------------------------------------------------------|
 | `session_id`        | Unique identifier for the chat session                 |
 | `start_time`        | When the session started (datetime)                    |
 | `end_time`          | When the session ended (datetime)                      |
@@ -507,6 +514,7 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
    - System-wide configuration

 7. **Responsive Design**:
+
    - Mobile-friendly interface using Bootstrap 5
    - Consistent layout and navigation
    - Accessible UI components
@@ -550,6 +558,7 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
    - JSON serialization for frontend

 3. **User Authentication**:
+
    - Login/registration handling
    - Session management
    - Permission checks
@@ -590,6 +599,7 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
    - Manages company users

 3. **Regular Users**:
+
    - View dashboards
    - Search and explore chat data
    - Analyze chat metrics
@@ -610,4 +620,5 @@ acme_1,2023-05-01 10:30:00,2023-05-01 10:45:00,192.168.1.1,USA,English,10,Positi
 ## License

-This project is unlicensed. Usage is restricted to personal and educational purposes only. For commercial use, please contact the author.
+This project is unlicensed. Usage is restricted to personal and educational purposes only. For commercial use, please
+contact the author.


@@ -51,7 +51,7 @@
 - [ ] Implement periodic data download from external API
   - Source: <https://proto.notso.ai/jumbo/chats>
   - Authentication: Basic Auth
-  - Credentials: [stored securely]
+  - Credentials: stored securely
   - An example of the data structure can be found in [jumbo.csv](examples/jumbo.csv)
   - The file that the endpoint returns is a CSV file, but the file is not a standard CSV file. It has a different structure and format:
     - The header row is missing, it is supposed to be `session_id,start_time,end_time,ip_address,country,language,messages_sent,sentiment,escalated,forwarded_hr,full_transcript,avg_response_time,tokens,tokens_eur,category,initial_msg,user_rating`
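A minimal sketch of reading that headerless payload, supplying the expected header as explicit field names (the helper name is illustrative):

```python
import csv
import io

# Column names taken from the expected header listed above.
COLUMNS = (
    "session_id,start_time,end_time,ip_address,country,language,"
    "messages_sent,sentiment,escalated,forwarded_hr,full_transcript,"
    "avg_response_time,tokens,tokens_eur,category,initial_msg,user_rating"
).split(",")


def parse_headerless_csv(payload: str) -> list[dict]:
    """Each row becomes a dict keyed by COLUMNS, since no header row exists."""
    return list(csv.DictReader(io.StringIO(payload), fieldnames=COLUMNS))
```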

bun.lock

@@ -0,0 +1,265 @@
{
"lockfileVersion": 1,
"workspaces": {
"": {
"devDependencies": {
"@playwright/test": "^1.56.1",
"@types/bun": "latest",
"markdownlint-cli2": "^0.18.1",
"oxlint": "^1.25.0",
"oxlint-tsgolint": "^0.5.0",
"prettier": "^3.6.2",
"prettier-plugin-jinja-template": "^2.1.0",
"prettier-plugin-packagejson": "^2.5.19",
},
"peerDependencies": {
"typescript": "^5",
},
},
},
"packages": {
"@nodelib/fs.scandir": ["@nodelib/fs.scandir@2.1.5", "", { "dependencies": { "@nodelib/fs.stat": "2.0.5", "run-parallel": "^1.1.9" } }, "sha512-vq24Bq3ym5HEQm2NKCr3yXDwjc7vTsEThRDnkp2DK9p1uqLR+DHurm/NOTo0KG7HYHU7eppKZj3MyqYuMBf62g=="],
"@nodelib/fs.stat": ["@nodelib/fs.stat@2.0.5", "", {}, "sha512-RkhPPp2zrqDAQA/2jNhnztcPAlv64XdhIp7a7454A5ovI7Bukxgt7MX7udwAu3zg1DcpPU0rz3VV1SeaqvY4+A=="],
"@nodelib/fs.walk": ["@nodelib/fs.walk@1.2.8", "", { "dependencies": { "@nodelib/fs.scandir": "2.1.5", "fastq": "^1.6.0" } }, "sha512-oGB+UxlgWcgQkgwo8GcEGwemoTFt3FIO9ababBmaGwXIoBKZ+GTy0pP185beGg7Llih/NSHSV2XAs1lnznocSg=="],
"@oxlint-tsgolint/darwin-arm64": ["@oxlint-tsgolint/darwin-arm64@0.5.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-OjNuyBRlxqUgcvlc51Ab38tjRWN+gBSV6Z2004hgfbt6w7RoX6ITA6v3KYQzovCfa4Ne8l+XbhVf9y9PtLipgQ=="],
"@oxlint-tsgolint/darwin-x64": ["@oxlint-tsgolint/darwin-x64@0.5.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-KiKVKe8dd52O1vfnrE4ffyv/bJDG7RtkZiv5/lrMV0r9USL/VKN80qNtYgdzaf64WOUegAgRdWs++DZXGjYGbA=="],
"@oxlint-tsgolint/linux-arm64": ["@oxlint-tsgolint/linux-arm64@0.5.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-10Rd1GVW+Z8pd9zxpWBtRt8ur3pUXZf05iDw3Py9VyPYYpkupXLhAS0T42HLz62qHSmh79XKfDFvMOQqjVzTxQ=="],
"@oxlint-tsgolint/linux-x64": ["@oxlint-tsgolint/linux-x64@0.5.0", "", { "os": "linux", "cpu": "x64" }, "sha512-bj4YUqn7M3vLWELwQ9Y0mrdiV9Elvj/oVCs7meperiIV5FHM29of74ePupKzWp645iJB+gj7jA/OlhCvT4Exug=="],
"@oxlint-tsgolint/win32-arm64": ["@oxlint-tsgolint/win32-arm64@0.5.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-SqNClNZjFFQlnLFmzjKxTuEEJOCT0v7aamkaxtG4ek5uBRUKuN8sAwWesS06yQ3t5JMg0vPLB2fs9xJcM+VdmQ=="],
"@oxlint-tsgolint/win32-x64": ["@oxlint-tsgolint/win32-x64@0.5.0", "", { "os": "win32", "cpu": "x64" }, "sha512-pEsrzV6VM3Eo41AQavoLzNLE1nMWYNtHYUPkzBzkUZMinklWDsdLpRDYX2fw58W7mEyY0NR6T9a+Hn/qI8/aaQ=="],
"@oxlint/darwin-arm64": ["@oxlint/darwin-arm64@1.25.0", "", { "os": "darwin", "cpu": "arm64" }, "sha512-OLx4XyUv5SO7k8y5FzJIoTKan+iKK53T1Ws8fBIl4zblUIWI66ZIqSVG2A2rxOBA7XfINqCz8UipGzOW9yzKcg=="],
"@oxlint/darwin-x64": ["@oxlint/darwin-x64@1.25.0", "", { "os": "darwin", "cpu": "x64" }, "sha512-srndNPiliA0rchYKqYfOdqA9kqyVQ6YChK3XJe9Lxo/YG8tTJ5K65g2A5SHTT2s1Nm5DnQa5AKZH7w+7KI/m8A=="],
"@oxlint/linux-arm64-gnu": ["@oxlint/linux-arm64-gnu@1.25.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-W9+DnHDbygprpGV586BolwWES+o2raOcSJv404nOFPQjWZ09efG24nuXrg/fpyoMQb4YoW2W1fvlnyMVU+ADcw=="],
"@oxlint/linux-arm64-musl": ["@oxlint/linux-arm64-musl@1.25.0", "", { "os": "linux", "cpu": "arm64" }, "sha512-1tIMpQhKlItm7uKzs3lluG7KorZR5ItoNKd1iFYF/IPmZ+i0/iuZ7MVWXRjBcgQMhMYSdfZpSVEdFKcFz2HDxA=="],
"@oxlint/linux-x64-gnu": ["@oxlint/linux-x64-gnu@1.25.0", "", { "os": "linux", "cpu": "x64" }, "sha512-xVkmk/zkIulc5o0OUWY04DyBfKotnq9+60O9I5c0DpdKAELVLhZkLmct0apx3jAX6Z/3yYPzhc6Lw1Ia3jU3VQ=="],
"@oxlint/linux-x64-musl": ["@oxlint/linux-x64-musl@1.25.0", "", { "os": "linux", "cpu": "x64" }, "sha512-IeO10dZosJV58YzN0gckhRYac+FM9s5VCKUx2ghgbKR91z/bpSRcRl8Sy5cWTkcVwu3ZTikhK8aXC6j7XIqKNw=="],
"@oxlint/win32-arm64": ["@oxlint/win32-arm64@1.25.0", "", { "os": "win32", "cpu": "arm64" }, "sha512-mpdiXZm2oNuSQAbTEPRDuSeR6v1DCD7Cl/xouR2ggHZu3AKZ4XYmm29hyrzIxrYVoQ/5j+182TGdOpGYn9xQJg=="],
"@oxlint/win32-x64": ["@oxlint/win32-x64@1.25.0", "", { "os": "win32", "cpu": "x64" }, "sha512-opoIACOkcFloWQO6dubBLbcWwW52ML8+3deFdr0WE0PeM9UXdLB0jRMuLsEnplmBoy9TRvmxDJ+Pw8xc2PsOfQ=="],
"@pkgr/core": ["@pkgr/core@0.2.9", "", {}, "sha512-QNqXyfVS2wm9hweSYD2O7F0G06uurj9kZ96TRQE5Y9hU7+tgdZwIkbAKc5Ocy1HxEY2kuDQa6cQ1WRs/O5LFKA=="],
"@playwright/test": ["@playwright/test@1.56.1", "", { "dependencies": { "playwright": "1.56.1" }, "bin": { "playwright": "cli.js" } }, "sha512-vSMYtL/zOcFpvJCW71Q/OEGQb7KYBPAdKh35WNSkaZA75JlAO8ED8UN6GUNTm3drWomcbcqRPFqQbLae8yBTdg=="],
"@sindresorhus/merge-streams": ["@sindresorhus/merge-streams@2.3.0", "", {}, "sha512-LtoMMhxAlorcGhmFYI+LhPgbPZCkgP6ra1YL604EeF6U98pLlQ3iWIGMdWSC+vWmPBWBNgmDBAhnAobLROJmwg=="],
"@types/bun": ["@types/bun@1.3.1", "", { "dependencies": { "bun-types": "1.3.1" } }, "sha512-4jNMk2/K9YJtfqwoAa28c8wK+T7nvJFOjxI4h/7sORWcypRNxBpr+TPNaCfVWq70tLCJsqoFwcf0oI0JU/fvMQ=="],
"@types/debug": ["@types/debug@4.1.12", "", { "dependencies": { "@types/ms": "*" } }, "sha512-vIChWdVG3LG1SMxEvI/AK+FWJthlrqlTu7fbrlywTkkaONwk/UAGaULXRlf8vkzFBLVm0zkMdCquhL5aOjhXPQ=="],
"@types/katex": ["@types/katex@0.16.7", "", {}, "sha512-HMwFiRujE5PjrgwHQ25+bsLJgowjGjm5Z8FVSf0N6PwgJrwxH0QxzHYDcKsTfV3wva0vzrpqMTJS2jXPr5BMEQ=="],
"@types/ms": ["@types/ms@2.1.0", "", {}, "sha512-GsCCIZDE/p3i96vtEqx+7dBUGXrc7zeSK3wwPHIaRThS+9OhWIXRqzs4d6k1SVU8g91DrNRWxWUGhp5KXQb2VA=="],
"@types/node": ["@types/node@24.10.0", "", { "dependencies": { "undici-types": "~7.16.0" } }, "sha512-qzQZRBqkFsYyaSWXuEHc2WR9c0a0CXwiE5FWUvn7ZM+vdy1uZLfCunD38UzhuB7YN/J11ndbDBcTmOdxJo9Q7A=="],
"@types/react": ["@types/react@19.2.2", "", { "dependencies": { "csstype": "^3.0.2" } }, "sha512-6mDvHUFSjyT2B2yeNx2nUgMxh9LtOWvkhIU3uePn2I2oyNymUAX1NIsdgviM4CH+JSrp2D2hsMvJOkxY+0wNRA=="],
"@types/unist": ["@types/unist@2.0.11", "", {}, "sha512-CmBKiL6NNo/OqgmMn95Fk9Whlp2mtvIv+KNpQKN2F4SjvrEesubTRWGYSg+BnWZOnlCaSTU1sMpsBOzgbYhnsA=="],
"argparse": ["argparse@2.0.1", "", {}, "sha512-8+9WqebbFzpX9OR+Wa6O29asIogeRMzcGtAINdpMHHyAg10f05aSFVBbcEqGf/PXw1EjAZ+q2/bEBg3DvurK3Q=="],
"braces": ["braces@3.0.3", "", { "dependencies": { "fill-range": "^7.1.1" } }, "sha512-yQbXgO/OSZVD2IsiLlro+7Hf6Q18EJrKSEsdoMzKePKXct3gvD8oLcOQdIzGupr5Fj+EDe8gO/lxc1BzfMpxvA=="],
"bun-types": ["bun-types@1.3.1", "", { "dependencies": { "@types/node": "*" }, "peerDependencies": { "@types/react": "^19" } }, "sha512-NMrcy7smratanWJ2mMXdpatalovtxVggkj11bScuWuiOoXTiKIu2eVS1/7qbyI/4yHedtsn175n4Sm4JcdHLXw=="],
"character-entities": ["character-entities@2.0.2", "", {}, "sha512-shx7oQ0Awen/BRIdkjkvz54PnEEI/EjwXDSIZp86/KKdbafHh1Df/RYGBhn4hbe2+uKC9FnT5UCEdyPz3ai9hQ=="],
"character-entities-legacy": ["character-entities-legacy@3.0.0", "", {}, "sha512-RpPp0asT/6ufRm//AJVwpViZbGM/MkjQFxJccQRHmISF/22NBtsHqAWmL+/pmkPWoIUJdWyeVleTl1wydHATVQ=="],
"character-reference-invalid": ["character-reference-invalid@2.0.1", "", {}, "sha512-iBZ4F4wRbyORVsu0jPV7gXkOsGYjGHPmAyv+HiHG8gi5PtC9KI2j1+v8/tlibRvjoWX027ypmG/n0HtO5t7unw=="],
"commander": ["commander@8.3.0", "", {}, "sha512-OkTL9umf+He2DZkUq8f8J9of7yL6RJKI24dVITBmNfZBmri9zYZQrKkuXiKhyfPSu8tUhnVBB1iKXevvnlR4Ww=="],
"csstype": ["csstype@3.1.3", "", {}, "sha512-M1uQkMl8rQK/szD0LNhtqxIPLpimGm8sOBwU7lLnCpSbTyY3yeU1Vc7l4KT5zT4s/yOxHH5O7tIuuLOCnLADRw=="],
"debug": ["debug@4.4.3", "", { "dependencies": { "ms": "^2.1.3" } }, "sha512-RGwwWnwQvkVfavKVt22FGLw+xYSdzARwm0ru6DhTVA3umU5hZc28V3kO4stgYryrTlLpuvgI9GiijltAjNbcqA=="],
"decode-named-character-reference": ["decode-named-character-reference@1.2.0", "", { "dependencies": { "character-entities": "^2.0.0" } }, "sha512-c6fcElNV6ShtZXmsgNgFFV5tVX2PaV4g+MOAkb8eXHvn6sryJBrZa9r0zV6+dtTyoCKxtDy5tyQ5ZwQuidtd+Q=="],
"dequal": ["dequal@2.0.3", "", {}, "sha512-0je+qPKHEMohvfRTCEo3CrPG6cAzAYgmzKyxRiYSSDkS6eGJdyVJm7WaYA5ECaAD9wLB2T4EEeymA5aFVcYXCA=="],
"detect-indent": ["detect-indent@7.0.2", "", {}, "sha512-y+8xyqdGLL+6sh0tVeHcfP/QDd8gUgbasolJJpY7NgeQGSZ739bDtSiaiDgtoicy+mtYB81dKLxO9xRhCyIB3A=="],
"detect-newline": ["detect-newline@4.0.1", "", {}, "sha512-qE3Veg1YXzGHQhlA6jzebZN2qVf6NX+A7m7qlhCGG30dJixrAQhYOsJjsnBjJkCSmuOPpCk30145fr8FV0bzog=="],
"devlop": ["devlop@1.1.0", "", { "dependencies": { "dequal": "^2.0.0" } }, "sha512-RWmIqhcFf1lRYBvNmr7qTNuyCt/7/ns2jbpp1+PalgE/rDQcBT0fioSMUpJ93irlUhC5hrg4cYqe6U+0ImW0rA=="],
"entities": ["entities@4.5.0", "", {}, "sha512-V0hjH4dGPh9Ao5p0MoRY6BVqtwCjhz6vI5LT8AJ55H+4g9/4vbHx1I54fS0XuclLhDHArPQCiMjDxjaL8fPxhw=="],
"fast-glob": ["fast-glob@3.3.3", "", { "dependencies": { "@nodelib/fs.stat": "^2.0.2", "@nodelib/fs.walk": "^1.2.3", "glob-parent": "^5.1.2", "merge2": "^1.3.0", "micromatch": "^4.0.8" } }, "sha512-7MptL8U0cqcFdzIzwOTHoilX9x5BrNqye7Z/LuC7kCMRio1EMSyqRK3BEAUD7sXRq4iT4AzTVuZdhgQ2TCvYLg=="],
"fastq": ["fastq@1.19.1", "", { "dependencies": { "reusify": "^1.0.4" } }, "sha512-GwLTyxkCXjXbxqIhTsMI2Nui8huMPtnxg7krajPJAjnEG/iiOS7i+zCtWGZR9G0NBKbXKh6X9m9UIsYX/N6vvQ=="],
"fdir": ["fdir@6.5.0", "", { "peerDependencies": { "picomatch": "^3 || ^4" }, "optionalPeers": ["picomatch"] }, "sha512-tIbYtZbucOs0BRGqPJkshJUYdL+SDH7dVM8gjy+ERp3WAUjLEFJE+02kanyHtwjWOnwrKYBiwAmM0p4kLJAnXg=="],
"fill-range": ["fill-range@7.1.1", "", { "dependencies": { "to-regex-range": "^5.0.1" } }, "sha512-YsGpe3WHLK8ZYi4tWDg2Jy3ebRz2rXowDxnld4bkQB00cc/1Zw9AWnC0i9ztDJitivtQvaI9KaLyKrc+hBW0yg=="],
"fsevents": ["fsevents@2.3.2", "", { "os": "darwin" }, "sha512-xiqMQR4xAeHTuB9uWm+fFRcIOgKBMiOBP+eXiyT7jsgVCq1bkVygt00oASowB7EdtpOHaaPgKt812P9ab+DDKA=="],
"git-hooks-list": ["git-hooks-list@4.1.1", "", {}, "sha512-cmP497iLq54AZnv4YRAEMnEyQ1eIn4tGKbmswqwmFV4GBnAqE8NLtWxxdXa++AalfgL5EBH4IxTPyquEuGY/jA=="],
"glob-parent": ["glob-parent@5.1.2", "", { "dependencies": { "is-glob": "^4.0.1" } }, "sha512-AOIgSQCepiJYwP3ARnGx+5VnTu2HBYdzbGP45eLw1vr3zB3vZLeyed1sC9hnbcOc9/SrMyM5RPQrkGz4aS9Zow=="],
"globby": ["globby@14.1.0", "", { "dependencies": { "@sindresorhus/merge-streams": "^2.1.0", "fast-glob": "^3.3.3", "ignore": "^7.0.3", "path-type": "^6.0.0", "slash": "^5.1.0", "unicorn-magic": "^0.3.0" } }, "sha512-0Ia46fDOaT7k4og1PDW4YbodWWr3scS2vAr2lTbsplOt2WkKp0vQbkI9wKis/T5LV/dqPjO3bpS/z6GTJB82LA=="],
"ignore": ["ignore@7.0.5", "", {}, "sha512-Hs59xBNfUIunMFgWAbGX5cq6893IbWg4KnrjbYwX3tx0ztorVgTDA6B2sxf8ejHJ4wz8BqGUMYlnzNBer5NvGg=="],
"is-alphabetical": ["is-alphabetical@2.0.1", "", {}, "sha512-FWyyY60MeTNyeSRpkM2Iry0G9hpr7/9kD40mD/cGQEuilcZYS4okz8SN2Q6rLCJ8gbCt6fN+rC+6tMGS99LaxQ=="],
"is-alphanumerical": ["is-alphanumerical@2.0.1", "", { "dependencies": { "is-alphabetical": "^2.0.0", "is-decimal": "^2.0.0" } }, "sha512-hmbYhX/9MUMF5uh7tOXyK/n0ZvWpad5caBA17GsC6vyuCqaWliRG5K1qS9inmUhEMaOBIW7/whAnSwveW/LtZw=="],
"is-decimal": ["is-decimal@2.0.1", "", {}, "sha512-AAB9hiomQs5DXWcRB1rqsxGUstbRroFOPPVAomNk/3XHR5JyEZChOyTWe2oayKnsSsr/kcGqF+z6yuH6HHpN0A=="],
"is-extglob": ["is-extglob@2.1.1", "", {}, "sha512-SbKbANkN603Vi4jEZv49LeVJMn4yGwsbzZworEoyEiutsN3nJYdbO36zfhGJ6QEDpOZIFkDtnq5JRxmvl3jsoQ=="],
"is-glob": ["is-glob@4.0.3", "", { "dependencies": { "is-extglob": "^2.1.1" } }, "sha512-xelSayHH36ZgE7ZWhli7pW34hNbNl8Ojv5KVmkJD4hBdD3th8Tfk9vYasLM+mXWOZhFkgZfxhLSnrwRr4elSSg=="],
"is-hexadecimal": ["is-hexadecimal@2.0.1", "", {}, "sha512-DgZQp241c8oO6cA1SbTEWiXeoxV42vlcJxgH+B3hi1AiqqKruZR3ZGF8In3fj4+/y/7rHvlOZLZtgJ/4ttYGZg=="],
"is-number": ["is-number@7.0.0", "", {}, "sha512-41Cifkg6e8TylSpdtTpeLVMqvSBEVzTttHvERD741+pnZ8ANv0004MRL43QKPDlK9cGvNp6NZWZUBlbGXYxxng=="],
"is-plain-obj": ["is-plain-obj@4.1.0", "", {}, "sha512-+Pgi+vMuUNkJyExiMBt5IlFoMyKnr5zhJ4Uspz58WOhBF5QoIZkFyNHIbBAtHwzVAgk5RtndVNsDRN61/mmDqg=="],
"js-yaml": ["js-yaml@4.1.0", "", { "dependencies": { "argparse": "^2.0.1" }, "bin": { "js-yaml": "bin/js-yaml.js" } }, "sha512-wpxZs9NoxZaJESJGIZTyDEaYpl0FKSA+FB9aJiyemKhMwkxQg63h4T1KJgUGHpTqPDNRcmmYLugrRjJlBtWvRA=="],
"jsonc-parser": ["jsonc-parser@3.3.1", "", {}, "sha512-HUgH65KyejrUFPvHFPbqOY0rsFip3Bo5wb4ngvdi1EpCYWUQDC5V+Y7mZws+DLkr4M//zQJoanu1SP+87Dv1oQ=="],
"katex": ["katex@0.16.25", "", { "dependencies": { "commander": "^8.3.0" }, "bin": { "katex": "cli.js" } }, "sha512-woHRUZ/iF23GBP1dkDQMh1QBad9dmr8/PAwNA54VrSOVYgI12MAcE14TqnDdQOdzyEonGzMepYnqBMYdsoAr8Q=="],
"linkify-it": ["linkify-it@5.0.0", "", { "dependencies": { "uc.micro": "^2.0.0" } }, "sha512-5aHCbzQRADcdP+ATqnDuhhJ/MRIqDkZX5pyjFHRRysS8vZ5AbqGEoFIb6pYHPZ+L/OC2Lc+xT8uHVVR5CAK/wQ=="],
"markdown-it": ["markdown-it@14.1.0", "", { "dependencies": { "argparse": "^2.0.1", "entities": "^4.4.0", "linkify-it": "^5.0.0", "mdurl": "^2.0.0", "punycode.js": "^2.3.1", "uc.micro": "^2.1.0" }, "bin": { "markdown-it": "bin/markdown-it.mjs" } }, "sha512-a54IwgWPaeBCAAsv13YgmALOF1elABB08FxO9i+r4VFk5Vl4pKokRPeX8u5TCgSsPi6ec1otfLjdOpVcgbpshg=="],
"markdownlint": ["markdownlint@0.38.0", "", { "dependencies": { "micromark": "4.0.2", "micromark-core-commonmark": "2.0.3", "micromark-extension-directive": "4.0.0", "micromark-extension-gfm-autolink-literal": "2.1.0", "micromark-extension-gfm-footnote": "2.1.0", "micromark-extension-gfm-table": "2.1.1", "micromark-extension-math": "3.1.0", "micromark-util-types": "2.0.2" } }, "sha512-xaSxkaU7wY/0852zGApM8LdlIfGCW8ETZ0Rr62IQtAnUMlMuifsg09vWJcNYeL4f0anvr8Vo4ZQar8jGpV0btQ=="],
"markdownlint-cli2": ["markdownlint-cli2@0.18.1", "", { "dependencies": { "globby": "14.1.0", "js-yaml": "4.1.0", "jsonc-parser": "3.3.1", "markdown-it": "14.1.0", "markdownlint": "0.38.0", "markdownlint-cli2-formatter-default": "0.0.5", "micromatch": "4.0.8" }, "bin": { "markdownlint-cli2": "markdownlint-cli2-bin.mjs" } }, "sha512-/4Osri9QFGCZOCTkfA8qJF+XGjKYERSHkXzxSyS1hd3ZERJGjvsUao2h4wdnvpHp6Tu2Jh/bPHM0FE9JJza6ng=="],
"markdownlint-cli2-formatter-default": ["markdownlint-cli2-formatter-default@0.0.5", "", { "peerDependencies": { "markdownlint-cli2": ">=0.0.4" } }, "sha512-4XKTwQ5m1+Txo2kuQ3Jgpo/KmnG+X90dWt4acufg6HVGadTUG5hzHF/wssp9b5MBYOMCnZ9RMPaU//uHsszF8Q=="],
"mdurl": ["mdurl@2.0.0", "", {}, "sha512-Lf+9+2r+Tdp5wXDXC4PcIBjTDtq4UKjCPMQhKIuzpJNW0b96kVqSwW0bT7FhRSfmAiFYgP+SCRvdrDozfh0U5w=="],
"merge2": ["merge2@1.4.1", "", {}, "sha512-8q7VEgMJW4J8tcfVPy8g09NcQwZdbwFEqhe/WZkoIzjn/3TGDwtOCYtXGxA3O8tPzpczCCDgv+P2P5y00ZJOOg=="],
"micromark": ["micromark@4.0.2", "", { "dependencies": { "@types/debug": "^4.0.0", "debug": "^4.0.0", "decode-named-character-reference": "^1.0.0", "devlop": "^1.0.0", "micromark-core-commonmark": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-combine-extensions": "^2.0.0", "micromark-util-decode-numeric-character-reference": "^2.0.0", "micromark-util-encode": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-resolve-all": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-subtokenize": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-zpe98Q6kvavpCr1NPVSCMebCKfD7CA2NqZ+rykeNhONIJBpc1tFKt9hucLGwha3jNTNI8lHpctWJWoimVF4PfA=="],
"micromark-core-commonmark": ["micromark-core-commonmark@2.0.3", "", { "dependencies": { "decode-named-character-reference": "^1.0.0", "devlop": "^1.0.0", "micromark-factory-destination": "^2.0.0", "micromark-factory-label": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-factory-title": "^2.0.0", "micromark-factory-whitespace": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-classify-character": "^2.0.0", "micromark-util-html-tag-name": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-resolve-all": "^2.0.0", "micromark-util-subtokenize": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-RDBrHEMSxVFLg6xvnXmb1Ayr2WzLAWjeSATAoxwKYJV94TeNavgoIdA0a9ytzDSVzBy2YKFK+emCPOEibLeCrg=="],
"micromark-extension-directive": ["micromark-extension-directive@4.0.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-factory-space": "^2.0.0", "micromark-factory-whitespace": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0", "parse-entities": "^4.0.0" } }, "sha512-/C2nqVmXXmiseSSuCdItCMho7ybwwop6RrrRPk0KbOHW21JKoCldC+8rFOaundDoRBUWBnJJcxeA/Kvi34WQXg=="],
"micromark-extension-gfm-autolink-literal": ["micromark-extension-gfm-autolink-literal@2.1.0", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-oOg7knzhicgQ3t4QCjCWgTmfNhvQbDDnJeVu9v81r7NltNCVmhPy1fJRX27pISafdjL+SVc4d3l48Gb6pbRypw=="],
"micromark-extension-gfm-footnote": ["micromark-extension-gfm-footnote@2.1.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-core-commonmark": "^2.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-normalize-identifier": "^2.0.0", "micromark-util-sanitize-uri": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-/yPhxI1ntnDNsiHtzLKYnE3vf9JZ6cAisqVDauhp4CEHxlb4uoOTxOCJ+9s51bIB8U1N1FJ1RXOKTIlD5B/gqw=="],
"micromark-extension-gfm-table": ["micromark-extension-gfm-table@2.1.1", "", { "dependencies": { "devlop": "^1.0.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-t2OU/dXXioARrC6yWfJ4hqB7rct14e8f7m0cbI5hUmDyyIlwv5vEtooptH8INkbLzOatzKuVbQmAYcbWoyz6Dg=="],
"micromark-extension-math": ["micromark-extension-math@3.1.0", "", { "dependencies": { "@types/katex": "^0.16.0", "devlop": "^1.0.0", "katex": "^0.16.0", "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-lvEqd+fHjATVs+2v/8kg9i5Q0AP2k85H0WUOwpIVvUML8BapsMvh1XAogmQjOCsLpoKRCVQqEkQBB3NhVBcsOg=="],
"micromark-factory-destination": ["micromark-factory-destination@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-Xe6rDdJlkmbFRExpTOmRj9N3MaWmbAgdpSrBQvCFqhezUn4AHqJHbaEnfbVYYiexVSs//tqOdY/DxhjdCiJnIA=="],
"micromark-factory-label": ["micromark-factory-label@2.0.1", "", { "dependencies": { "devlop": "^1.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-VFMekyQExqIW7xIChcXn4ok29YE3rnuyveW3wZQWWqF4Nv9Wk5rgJ99KzPvHjkmPXF93FXIbBp6YdW3t71/7Vg=="],
"micromark-factory-space": ["micromark-factory-space@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-zRkxjtBxxLd2Sc0d+fbnEunsTj46SWXgXciZmHq0kDYGnck/ZSGj9/wULTV95uoeYiK5hRXP2mJ98Uo4cq/LQg=="],
"micromark-factory-title": ["micromark-factory-title@2.0.1", "", { "dependencies": { "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-5bZ+3CjhAd9eChYTHsjy6TGxpOFSKgKKJPJxr293jTbfry2KDoWkhBb6TcPVB4NmzaPhMs1Frm9AZH7OD4Cjzw=="],
"micromark-factory-whitespace": ["micromark-factory-whitespace@2.0.1", "", { "dependencies": { "micromark-factory-space": "^2.0.0", "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-Ob0nuZ3PKt/n0hORHyvoD9uZhr+Za8sFoP+OnMcnWK5lngSzALgQYKMr9RJVOWLqQYuyn6ulqGWSXdwf6F80lQ=="],
"micromark-util-character": ["micromark-util-character@2.1.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-wv8tdUTJ3thSFFFJKtpYKOYiGP2+v96Hvk4Tu8KpCAsTMs6yi+nVmGh1syvSCsaxz45J6Jbw+9DD6g97+NV67Q=="],
"micromark-util-chunked": ["micromark-util-chunked@2.0.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-QUNFEOPELfmvv+4xiNg2sRYeS/P84pTW0TCgP5zc9FpXetHY0ab7SxKyAQCNCc1eK0459uoLI1y5oO5Vc1dbhA=="],
"micromark-util-classify-character": ["micromark-util-classify-character@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-K0kHzM6afW/MbeWYWLjoHQv1sgg2Q9EccHEDzSkxiP/EaagNzCm7T/WMKZ3rjMbvIpvBiZgwR3dKMygtA4mG1Q=="],
"micromark-util-combine-extensions": ["micromark-util-combine-extensions@2.0.1", "", { "dependencies": { "micromark-util-chunked": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-OnAnH8Ujmy59JcyZw8JSbK9cGpdVY44NKgSM7E9Eh7DiLS2E9RNQf0dONaGDzEG9yjEl5hcqeIsj4hfRkLH/Bg=="],
"micromark-util-decode-numeric-character-reference": ["micromark-util-decode-numeric-character-reference@2.0.2", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-ccUbYk6CwVdkmCQMyr64dXz42EfHGkPQlBj5p7YVGzq8I7CtjXZJrubAYezf7Rp+bjPseiROqe7G6foFd+lEuw=="],
"micromark-util-encode": ["micromark-util-encode@2.0.1", "", {}, "sha512-c3cVx2y4KqUnwopcO9b/SCdo2O67LwJJ/UyqGfbigahfegL9myoEFoDYZgkT7f36T0bLrM9hZTAaAyH+PCAXjw=="],
"micromark-util-html-tag-name": ["micromark-util-html-tag-name@2.0.1", "", {}, "sha512-2cNEiYDhCWKI+Gs9T0Tiysk136SnR13hhO8yW6BGNyhOC4qYFnwF1nKfD3HFAIXA5c45RrIG1ub11GiXeYd1xA=="],
"micromark-util-normalize-identifier": ["micromark-util-normalize-identifier@2.0.1", "", { "dependencies": { "micromark-util-symbol": "^2.0.0" } }, "sha512-sxPqmo70LyARJs0w2UclACPUUEqltCkJ6PhKdMIDuJ3gSf/Q+/GIe3WKl0Ijb/GyH9lOpUkRAO2wp0GVkLvS9Q=="],
"micromark-util-resolve-all": ["micromark-util-resolve-all@2.0.1", "", { "dependencies": { "micromark-util-types": "^2.0.0" } }, "sha512-VdQyxFWFT2/FGJgwQnJYbe1jjQoNTS4RjglmSjTUlpUMa95Htx9NHeYW4rGDJzbjvCsl9eLjMQwGeElsqmzcHg=="],
"micromark-util-sanitize-uri": ["micromark-util-sanitize-uri@2.0.1", "", { "dependencies": { "micromark-util-character": "^2.0.0", "micromark-util-encode": "^2.0.0", "micromark-util-symbol": "^2.0.0" } }, "sha512-9N9IomZ/YuGGZZmQec1MbgxtlgougxTodVwDzzEouPKo3qFWvymFHWcnDi2vzV1ff6kas9ucW+o3yzJK9YB1AQ=="],
"micromark-util-subtokenize": ["micromark-util-subtokenize@2.1.0", "", { "dependencies": { "devlop": "^1.0.0", "micromark-util-chunked": "^2.0.0", "micromark-util-symbol": "^2.0.0", "micromark-util-types": "^2.0.0" } }, "sha512-XQLu552iSctvnEcgXw6+Sx75GflAPNED1qx7eBJ+wydBb2KCbRZe+NwvIEEMM83uml1+2WSXpBAcp9IUCgCYWA=="],
"micromark-util-symbol": ["micromark-util-symbol@2.0.1", "", {}, "sha512-vs5t8Apaud9N28kgCrRUdEed4UJ+wWNvicHLPxCa9ENlYuAY31M0ETy5y1vA33YoNPDFTghEbnh6efaE8h4x0Q=="],
"micromark-util-types": ["micromark-util-types@2.0.2", "", {}, "sha512-Yw0ECSpJoViF1qTU4DC6NwtC4aWGt1EkzaQB8KPPyCRR8z9TWeV0HbEFGTO+ZY1wB22zmxnJqhPyTpOVCpeHTA=="],
"micromatch": ["micromatch@4.0.8", "", { "dependencies": { "braces": "^3.0.3", "picomatch": "^2.3.1" } }, "sha512-PXwfBhYu0hBCPw8Dn0E+WDYb7af3dSLVWKi3HGv84IdF4TyFoC0ysxFd0Goxw7nSv4T/PzEJQxsYsEiFCKo2BA=="],
"ms": ["ms@2.1.3", "", {}, "sha512-6FlzubTLZG3J2a/NVCAleEhjzq5oxgHyaCU9yYXvcLsvoVaHJq/s5xXI6/XXP6tz7R9xAOtHnSO/tXtF3WRTlA=="],
"oxlint": ["oxlint@1.25.0", "", { "optionalDependencies": { "@oxlint/darwin-arm64": "1.25.0", "@oxlint/darwin-x64": "1.25.0", "@oxlint/linux-arm64-gnu": "1.25.0", "@oxlint/linux-arm64-musl": "1.25.0", "@oxlint/linux-x64-gnu": "1.25.0", "@oxlint/linux-x64-musl": "1.25.0", "@oxlint/win32-arm64": "1.25.0", "@oxlint/win32-x64": "1.25.0" }, "peerDependencies": { "oxlint-tsgolint": ">=0.4.0" }, "optionalPeers": ["oxlint-tsgolint"], "bin": { "oxlint": "bin/oxlint", "oxc_language_server": "bin/oxc_language_server" } }, "sha512-O6iJ9xeuy9eQCi8/EghvsNO6lzSaUPs0FR1uLy51Exp3RkVpjvJKyPPhd9qv65KLnfG/BNd2HE/rH0NbEfVVzA=="],
"oxlint-tsgolint": ["oxlint-tsgolint@0.5.0", "", { "optionalDependencies": { "@oxlint-tsgolint/darwin-arm64": "0.5.0", "@oxlint-tsgolint/darwin-x64": "0.5.0", "@oxlint-tsgolint/linux-arm64": "0.5.0", "@oxlint-tsgolint/linux-x64": "0.5.0", "@oxlint-tsgolint/win32-arm64": "0.5.0", "@oxlint-tsgolint/win32-x64": "0.5.0" }, "bin": { "tsgolint": "bin/tsgolint.js" } }, "sha512-uRiGb48QVSY2PqPCgAOoYySZM8OKSXTTSHFuF0HeW3tUhefdj/wyHWeZzFfbIU+dSDgMEkG9HVE/WBeT1nc+bA=="],
"parse-entities": ["parse-entities@4.0.2", "", { "dependencies": { "@types/unist": "^2.0.0", "character-entities-legacy": "^3.0.0", "character-reference-invalid": "^2.0.0", "decode-named-character-reference": "^1.0.0", "is-alphanumerical": "^2.0.0", "is-decimal": "^2.0.0", "is-hexadecimal": "^2.0.0" } }, "sha512-GG2AQYWoLgL877gQIKeRPGO1xF9+eG1ujIb5soS5gPvLQ1y2o8FL90w2QWNdf9I361Mpp7726c+lj3U0qK1uGw=="],
"path-type": ["path-type@6.0.0", "", {}, "sha512-Vj7sf++t5pBD637NSfkxpHSMfWaeig5+DKWLhcqIYx6mWQz5hdJTGDVMQiJcw1ZYkhs7AazKDGpRVji1LJCZUQ=="],
"picomatch": ["picomatch@2.3.1", "", {}, "sha512-JU3teHTNjmE2VCGFzuY8EXzCDVwEqB2a8fsIvwaStHhAWJEeVd1o1QD80CU6+ZdEXXSLbSsuLwJjkCBWqRQUVA=="],
"playwright": ["playwright@1.56.1", "", { "dependencies": { "playwright-core": "1.56.1" }, "optionalDependencies": { "fsevents": "2.3.2" }, "bin": { "playwright": "cli.js" } }, "sha512-aFi5B0WovBHTEvpM3DzXTUaeN6eN0qWnTkKx4NQaH4Wvcmc153PdaY2UBdSYKaGYw+UyWXSVyxDUg5DoPEttjw=="],
"playwright-core": ["playwright-core@1.56.1", "", { "bin": { "playwright-core": "cli.js" } }, "sha512-hutraynyn31F+Bifme+Ps9Vq59hKuUCz7H1kDOcBs+2oGguKkWTU50bBWrtz34OUWmIwpBTWDxaRPXrIXkgvmQ=="],
"prettier": ["prettier@3.6.2", "", { "bin": { "prettier": "bin/prettier.cjs" } }, "sha512-I7AIg5boAr5R0FFtJ6rCfD+LFsWHp81dolrFD8S79U9tb8Az2nGrJncnMSnys+bpQJfRUzqs9hnA81OAA3hCuQ=="],
"prettier-plugin-jinja-template": ["prettier-plugin-jinja-template@2.1.0", "", { "peerDependencies": { "prettier": "^3.0.0" } }, "sha512-mzoCp2Oy9BDSug80fw3B3J4n4KQj1hRvoQOL1akqcDKBb5nvYxrik9zUEDs4AEJ6nK7QDTGoH0y9rx7AlnQ78Q=="],
"prettier-plugin-packagejson": ["prettier-plugin-packagejson@2.5.19", "", { "dependencies": { "sort-package-json": "3.4.0", "synckit": "0.11.11" }, "peerDependencies": { "prettier": ">= 1.16.0" }, "optionalPeers": ["prettier"] }, "sha512-Qsqp4+jsZbKMpEGZB1UP1pxeAT8sCzne2IwnKkr+QhUe665EXUo3BAvTf1kAPCqyMv9kg3ZmO0+7eOni/C6Uag=="],
"punycode.js": ["punycode.js@2.3.1", "", {}, "sha512-uxFIHU0YlHYhDQtV4R9J6a52SLx28BCjT+4ieh7IGbgwVJWO+km431c4yRlREUAsAmt/uMjQUyQHNEPf0M39CA=="],
"queue-microtask": ["queue-microtask@1.2.3", "", {}, "sha512-NuaNSa6flKT5JaSYQzJok04JzTL1CA6aGhv5rfLW3PgqA+M2ChpZQnAC8h8i4ZFkBS8X5RqkDBHA7r4hej3K9A=="],
"reusify": ["reusify@1.1.0", "", {}, "sha512-g6QUff04oZpHs0eG5p83rFLhHeV00ug/Yf9nZM6fLeUrPguBTkTQOdpAWWspMh55TZfVQDPaN3NQJfbVRAxdIw=="],
"run-parallel": ["run-parallel@1.2.0", "", { "dependencies": { "queue-microtask": "^1.2.2" } }, "sha512-5l4VyZR86LZ/lDxZTR6jqL8AFE2S0IFLMP26AbjsLVADxHdhB/c0GUsH+y39UfCi3dzz8OlQuPmnaJOMoDHQBA=="],
"semver": ["semver@7.7.3", "", { "bin": { "semver": "bin/semver.js" } }, "sha512-SdsKMrI9TdgjdweUSR9MweHA4EJ8YxHn8DFaDisvhVlUOe4BF1tLD7GAj0lIqWVl+dPb/rExr0Btby5loQm20Q=="],
"slash": ["slash@5.1.0", "", {}, "sha512-ZA6oR3T/pEyuqwMgAKT0/hAv8oAXckzbkmR0UkUosQ+Mc4RxGoJkRmwHgHufaenlyAgE1Mxgpdcrf75y6XcnDg=="],
"sort-object-keys": ["sort-object-keys@1.1.3", "", {}, "sha512-855pvK+VkU7PaKYPc+Jjnmt4EzejQHyhhF33q31qG8x7maDzkeFhAAThdCYay11CISO+qAMwjOBP+fPZe0IPyg=="],
"sort-package-json": ["sort-package-json@3.4.0", "", { "dependencies": { "detect-indent": "^7.0.1", "detect-newline": "^4.0.1", "git-hooks-list": "^4.0.0", "is-plain-obj": "^4.1.0", "semver": "^7.7.1", "sort-object-keys": "^1.1.3", "tinyglobby": "^0.2.12" }, "bin": { "sort-package-json": "cli.js" } }, "sha512-97oFRRMM2/Js4oEA9LJhjyMlde+2ewpZQf53pgue27UkbEXfHJnDzHlUxQ/DWUkzqmp7DFwJp8D+wi/TYeQhpA=="],
"synckit": ["synckit@0.11.11", "", { "dependencies": { "@pkgr/core": "^0.2.9" } }, "sha512-MeQTA1r0litLUf0Rp/iisCaL8761lKAZHaimlbGK4j0HysC4PLfqygQj9srcs0m2RdtDYnF8UuYyKpbjHYp7Jw=="],
"tinyglobby": ["tinyglobby@0.2.15", "", { "dependencies": { "fdir": "^6.5.0", "picomatch": "^4.0.3" } }, "sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ=="],
"to-regex-range": ["to-regex-range@5.0.1", "", { "dependencies": { "is-number": "^7.0.0" } }, "sha512-65P7iz6X5yEr1cwcgvQxbbIw7Uk3gOy5dIdtZ4rDveLqhrdJP+Li/Hx6tyK0NEb+2GCyneCMJiGqrADCSNk8sQ=="],
"typescript": ["typescript@5.9.3", "", { "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" } }, "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw=="],
"uc.micro": ["uc.micro@2.1.0", "", {}, "sha512-ARDJmphmdvUk6Glw7y9DQ2bFkKBHwQHLi2lsaH6PPmz/Ka9sFOBsBluozhDltWmnv9u/cF6Rt87znRTPV+yp/A=="],
"undici-types": ["undici-types@7.16.0", "", {}, "sha512-Zz+aZWSj8LE6zoxD+xrjh4VfkIG8Ya6LvYkZqtUQGJPZjYl53ypCaUwWqo7eI0x66KBGeRo+mlBEkMSeSZ38Nw=="],
"unicorn-magic": ["unicorn-magic@0.3.0", "", {}, "sha512-+QBBXBCvifc56fsbuxZQ6Sic3wqqc3WWaqxs58gvJrcOuN83HGTCwz3oS5phzU9LthRNE9VrJCFCLUgHeeFnfA=="],
"tinyglobby/picomatch": ["picomatch@4.0.3", "", {}, "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q=="],
}
}

View File

@@ -17,7 +17,7 @@ def main():
     # Default to 'manage.py' if no specific command
     if cmd_name == "__main__":
         # When running as `python -m dashboard_project`, just pass control to manage.py
-        from dashboard_project.manage import main as manage_main
+        from dashboard_project.manage import main as manage_main  # type: ignore[import-not-found]

         manage_main()
         return
@@ -48,5 +48,32 @@ def main():
     execute_from_command_line(sys.argv)


+def runserver():
+    """Entrypoint for running Django development server."""
+    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
+    sys.argv = ["manage.py", "runserver", "8001"]
+    from django.core.management import execute_from_command_line
+    execute_from_command_line(sys.argv)
+
+
+def migrate():
+    """Entrypoint for running Django migrations."""
+    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
+    sys.argv = ["manage.py", "migrate"]
+    from django.core.management import execute_from_command_line
+    execute_from_command_line(sys.argv)
+
+
+def shell():
+    """Entrypoint for Django shell."""
+    os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
+    sys.argv = ["manage.py", "shell"]
+    from django.core.management import execute_from_command_line
+    execute_from_command_line(sys.argv)
+
+
 if __name__ == "__main__":
     main()
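
For orientation, a hedged sketch of how these new entrypoints can be driven from Python; the import path dashboard_project.__main__ is an assumption based on the `python -m dashboard_project` comment above, not confirmed by the diff:

# Each helper pins the settings module, rewrites argv, and hands off to
# Django's dispatcher, so calling one is equivalent to the matching
# manage.py command. Import path is assumed.
from dashboard_project.__main__ import migrate, runserver

migrate()    # same as: python manage.py migrate
runserver()  # same as: python manage.py runserver 8001 (blocks until stopped)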

View File

@@ -30,7 +30,8 @@ class CustomUserChangeForm(forms.ModelForm):
     def __init__(self, *args, **kwargs):
         super().__init__(*args, **kwargs)
         # Only staff members can change company and admin status
-        if not kwargs.get("instance") or not kwargs.get("instance").is_staff:
+        instance = kwargs.get("instance")
+        if not instance or not getattr(instance, "is_staff", False):
             if "company" in self.fields:
                 self.fields["company"].disabled = True
             if "is_company_admin" in self.fields:

View File

@@ -4,7 +4,7 @@ ASGI config for dashboard_project project.
 It exposes the ASGI callable as a module-level variable named ``application``.

 For more information on this file, see
-https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/
+<https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/>
 """

 import os

View File

@@ -49,7 +49,9 @@ class DataSourceAdmin(admin.ModelAdmin):
     @admin.display(description="External Data Status")
     def get_external_data_status(self, obj):
         if obj.external_source:
-            return f"Last synced: {obj.external_source.last_synced or 'Never'} | Status: {obj.external_source.get_status()}"
+            last_sync = obj.external_source.last_synced or "Never"
+            status = obj.external_source.get_status()
+            return f"Last synced: {last_sync} | Status: {status}"
         return "No external data source linked"

View File

@@ -1,7 +1,14 @@
 # dashboard/forms.py
+from __future__ import annotations
+
+from typing import TYPE_CHECKING
+
 from django import forms

+if TYPE_CHECKING:
+    pass
+
 from .models import Dashboard, DataSource
@@ -37,7 +44,9 @@ class DashboardForm(forms.ModelForm):
         super().__init__(*args, **kwargs)
         if self.company:
-            self.fields["data_sources"].queryset = DataSource.objects.filter(company=self.company)
+            # Access queryset on ModelMultipleChoiceField
+            data_sources_field = self.fields["data_sources"]  # type: ignore[assignment]
+            data_sources_field.queryset = DataSource.objects.filter(company=self.company)  # type: ignore[attr-defined]

     def save(self, commit=True):
         instance = super().save(commit=False)

View File

@@ -1,2 +1,3 @@
 # dashboard/management/__init__.py
 # This file is intentionally left empty to mark the directory as a Python package
+

View File

@@ -1,2 +1,3 @@
 # dashboard/management/commands/__init__.py
 # This file is intentionally left empty to mark the directory as a Python package
+

View File

@@ -83,7 +83,7 @@ class Command(BaseCommand):
             ChatSession.objects.all().delete()

         # Parse sample CSV
-        with open(sample_path, "r") as f:
+        with open(sample_path) as f:
             reader = csv.reader(f)
             header = next(reader)  # Skip header

View File

@@ -1,2 +1,3 @@
 # dashboard/templatetags/__init__.py
 # This file is intentionally left empty to mark the directory as a Python package
+

View File

@@ -1,10 +1,13 @@
 # dashboard/utils.py
+from __future__ import annotations
+
 import contextlib

 import numpy as np
 import pandas as pd
 from django.db import models
+from django.db.models import functions
 from django.utils.timezone import make_aware

 from .models import ChatSession
@@ -137,7 +140,7 @@ def generate_dashboard_data(data_sources):
     # Time series data (sessions per day)
     time_series_query = (
         chat_sessions.filter(start_time__isnull=False)
-        .annotate(date=models.functions.TruncDate("start_time"))
+        .annotate(date=functions.TruncDate("start_time"))  # type: ignore[attr-defined]
         .values("date")
         .annotate(count=models.Count("id"))
         .order_by("date")
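
The time-series query above groups sessions by calendar day; a minimal standalone sketch of the same aggregation, using the equivalent direct TruncDate import (the dashboard.models app path is inferred from the file's header comment):

# Yields rows shaped like {"date": datetime.date(2025, 11, 5), "count": 42},
# one per day that has at least one session.
from django.db.models import Count
from django.db.models.functions import TruncDate

from dashboard.models import ChatSession  # assumed app label

daily_counts = (
    ChatSession.objects.filter(start_time__isnull=False)
    .annotate(date=TruncDate("start_time"))
    .values("date")
    .annotate(count=Count("id"))
    .order_by("date")
)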

View File

@@ -58,7 +58,7 @@ def dashboard_view(request):
     if selected_dashboard_id:
         selected_dashboard = get_object_or_404(Dashboard, id=selected_dashboard_id, company=company)
     else:
-        selected_dashboard = dashboards.first()
+        selected_dashboard = dashboards.first()  # type: ignore[assignment]

     # Generate dashboard data
     dashboard_data = generate_dashboard_data(selected_dashboard.data_sources.all())
@@ -200,12 +200,10 @@ def chat_session_detail_view(request, session_id):
     # Check if this is an AJAX navigation request
     if is_ajax_navigation(request):
         html_content = render_to_string("dashboard/chat_session_detail.html", context, request=request)
-        return JsonResponse(
-            {
-                "html": html_content,
-                "title": f"Chat Session {session_id} | Chat Analytics",
-            }
-        )
+        return JsonResponse({
+            "html": html_content,
+            "title": f"Chat Session {session_id} | Chat Analytics",
+        })

     return render(request, "dashboard/chat_session_detail.html", context)
@@ -282,12 +280,10 @@ def edit_dashboard_view(request, dashboard_id):
     # Check if this is an AJAX navigation request
     if is_ajax_navigation(request):
         html_content = render_to_string("dashboard/dashboard_form.html", context, request=request)
-        return JsonResponse(
-            {
-                "html": html_content,
-                "title": f"Edit Dashboard: {dashboard.name} | Chat Analytics",
-            }
-        )
+        return JsonResponse({
+            "html": html_content,
+            "title": f"Edit Dashboard: {dashboard.name} | Chat Analytics",
+        })

     return render(request, "dashboard/dashboard_form.html", context)
@@ -349,6 +345,8 @@ def delete_data_source_view(request, data_source_id):

+
+
 # API views for dashboard data
 @login_required
 def dashboard_data_api(request, dashboard_id):
     """API endpoint for dashboard data"""
@@ -450,8 +448,7 @@ def search_chat_sessions(request):
     # Check if this is an AJAX pagination request
     if request.headers.get("X-Requested-With") == "XMLHttpRequest":
-        return JsonResponse(
-            {
+        return JsonResponse({
             "status": "success",
             "html_data": render(request, "dashboard/partials/search_results_table.html", context).content.decode(
                 "utf-8"
@@ -468,8 +465,7 @@ def search_chat_sessions(request):
                 },
             },
             "query": query,
-            }
-        )
+        })

     return render(request, "dashboard/search_results.html", context)
@@ -554,8 +550,7 @@ def data_view(request):
     # Check if this is an AJAX pagination request
     if request.headers.get("X-Requested-With") == "XMLHttpRequest":
-        return JsonResponse(
-            {
+        return JsonResponse({
             "status": "success",
             "html_data": render(request, "dashboard/partials/data_table.html", context).content.decode("utf-8"),
             "page_obj": {
@@ -573,7 +568,6 @@ def data_view(request):
             "avg_response_time": avg_response_time,
             "avg_messages": avg_messages,
             "escalation_rate": escalation_rate,
-            }
-        )
+        })

     return render(request, "dashboard/data_view.html", context)
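
These views key their JSON branch off the conventional XMLHttpRequest marker header; a hedged client-side sketch (the URL, port, and query parameter are placeholders for a local dev server, and the requests library is assumed to be available):

import requests

resp = requests.get(
    "http://localhost:8001/search/",  # hypothetical route for search_chat_sessions
    headers={"X-Requested-With": "XMLHttpRequest"},  # triggers the JSON branch
    params={"q": "refund"},  # hypothetical query parameter name
)
print(resp.json()["status"])  # "success", with rendered rows under "html_data"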

View File

@@ -91,8 +91,7 @@ def export_chats_csv(request):
     writer = csv.writer(response)

     # Write CSV header
-    writer.writerow(
-        [
+    writer.writerow([
         "Session ID",
         "Start Time",
         "End Time",
@@ -110,13 +109,11 @@ def export_chats_csv(request):
         "Category",
         "Initial Message",
         "User Rating",
-        ]
-    )
+    ])

     # Write data rows
     for session in sessions:
-        writer.writerow(
-            [
+        writer.writerow([
             session.session_id,
             session.start_time,
             session.end_time,
@@ -134,8 +131,7 @@ def export_chats_csv(request):
             session.category,
             session.initial_msg,
             session.user_rating,
-            ]
-        )
+        ])

     return response

View File

@@ -4,7 +4,7 @@ ASGI config for dashboard_project project.
 It exposes the ASGI callable as a module-level variable named ``application``.

 For more information on this file, see
-https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/
+<https://docs.djangoproject.com/en/4.0/howto/deployment/asgi/>
 """

 import os

View File

@@ -2,18 +2,24 @@ import os
 from celery import Celery

-# Set the default Django settings module for the 'celery' program.
+# Set the default Django settings module for the 'celery' program
 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")

 app = Celery("dashboard_project")

 # Using a string here means the worker doesn't have to serialize
-# the configuration object to child processes.
+# the configuration object to child processes
 # - namespace='CELERY' means all celery-related configuration keys
-#   should have a `CELERY_` prefix.
+#   should have a `CELERY_` prefix
 app.config_from_object("django.conf:settings", namespace="CELERY")

-# Load task modules from all registered Django app configs.
+# Load task modules from all registered Django app configs
 app.autodiscover_tasks()

View File

@@ -7,6 +7,7 @@ from pathlib import Path
 from django.core.management.utils import get_random_secret_key

+
 # Load environment variables from .env file if present
 try:
     from dotenv import load_dotenv
@@ -14,18 +15,36 @@ try:
 except ImportError:
     pass

-# Build paths inside the project like this: BASE_DIR / 'subdir'.
+# Build paths inside the project like this: BASE_DIR / 'subdir'
 BASE_DIR = Path(__file__).resolve().parent.parent

-# SECURITY WARNING: keep the secret key used in production secret!
+# SECURITY WARNING: keep the secret key used in production secret
 SECRET_KEY = os.environ.get("DJANGO_SECRET_KEY", get_random_secret_key())

-# SECURITY WARNING: don't run with debug turned on in production!
+# SECURITY WARNING: don't run with debug turned on in production
 DEBUG = os.environ.get("DJANGO_DEBUG", "True") == "True"

-ALLOWED_HOSTS = []
+# Allow localhost, Docker IPs, and entire private network ranges
+ALLOWED_HOSTS = [
+    "localhost",
+    "127.0.0.1",
+    "0.0.0.0",  # nosec B104
+    ".localhost",
+    # Allow all 192.168.x.x addresses (private network)
+    "192.168.*.*",
+    # Allow all 10.x.x.x addresses (Docker default)
+    "10.*.*.*",
+    # Allow all 172.16-31.x.x addresses (Docker)
+    "172.*.*.*",
+    # Wildcard for any other IPs (development only)
+    "*",
+]

 # Application definition
 INSTALLED_APPS = [
     "django.contrib.admin",
     "django.contrib.auth",
@@ -80,6 +99,22 @@ TEMPLATES = [
 WSGI_APPLICATION = "dashboard_project.wsgi.application"

 # Database
+# Use PostgreSQL when DATABASE_URL is set (Docker), otherwise SQLite (local dev)
+if os.environ.get("DATABASE_URL"):
+    # PostgreSQL configuration for Docker
+    DATABASES = {
+        "default": {
+            "ENGINE": "django.db.backends.postgresql",
+            "NAME": os.environ.get("POSTGRES_DB", "dashboard_db"),
+            "USER": os.environ.get("POSTGRES_USER", "postgres"),
+            "PASSWORD": os.environ.get("POSTGRES_PASSWORD", "postgres"),
+            "HOST": os.environ.get("POSTGRES_HOST", "db"),
+            "PORT": os.environ.get("POSTGRES_PORT", "5432"),
+        }
+    }
+else:
+    # SQLite configuration for local development
+    DATABASES = {
     "default": {
         "ENGINE": "django.db.backends.sqlite3",
@@ -88,6 +123,7 @@ DATABASES = {
 }

+
 # Password validation
 AUTH_PASSWORD_VALIDATORS = [
     {
         "NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
@@ -104,12 +140,14 @@ AUTH_PASSWORD_VALIDATORS = [
 ]

+
 # Internationalization
 LANGUAGE_CODE = "en-US"

 TIME_ZONE = "Europe/Amsterdam"

 USE_I18N = True

 USE_TZ = True

+
 # Static files (CSS, JavaScript, Images)
 STATIC_URL = "static/"
 STATICFILES_DIRS = [
     os.path.join(BASE_DIR, "static"),
@@ -125,23 +163,28 @@ STORAGES = {
 }

+
 # Media files
 MEDIA_URL = "/media/"
 MEDIA_ROOT = os.path.join(BASE_DIR, "media")

+
 # Default primary key field type
 DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"

+
 # Crispy Forms
 CRISPY_ALLOWED_TEMPLATE_PACKS = "bootstrap5"
 CRISPY_TEMPLATE_PACK = "bootstrap5"

+
 # Authentication
 AUTH_USER_MODEL = "accounts.CustomUser"
 LOGIN_REDIRECT_URL = "dashboard"
 LOGOUT_REDIRECT_URL = "login"
 ACCOUNT_LOGOUT_ON_GET = True

+
 # django-allauth
 AUTHENTICATION_BACKENDS = [
     "django.contrib.auth.backends.ModelBackend",
     "allauth.account.auth_backends.AuthenticationBackend",
@@ -150,7 +193,9 @@ SITE_ID = 1
 ACCOUNT_EMAIL_VERIFICATION = "none"

+
 # Celery Configuration
+
 # Check if Redis is available
 try:
     import redis
@@ -168,8 +213,8 @@ try:
     logger.info("Using Redis for Celery broker and result backend")
 except (
     ImportError,
-    redis.exceptions.ConnectionError,
-    redis.exceptions.TimeoutError,
+    redis.exceptions.ConnectionError,  # type: ignore[attr-defined]
+    redis.exceptions.TimeoutError,  # type: ignore[attr-defined]
 ) as e:
     # Redis is not available, use SQLite as fallback (works for development)
     CELERY_BROKER_URL = os.environ.get("CELERY_BROKER_URL", "sqla+sqlite:///celery.sqlite")
@@ -184,6 +229,7 @@ CELERY_TIMEZONE = TIME_ZONE
 CELERY_BEAT_SCHEDULER = "django_celery_beat.schedulers:DatabaseScheduler"

+
 # Get schedule from environment variables or use defaults
 CHAT_DATA_FETCH_INTERVAL = int(os.environ.get("CHAT_DATA_FETCH_INTERVAL", 3600))  # Default: 1 hour

 CELERY_BEAT_SCHEDULE = {
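
The database switch above is driven purely by environment variables; a minimal sanity-check sketch of the SQLite fallback path (assumes it is run from the project root with dependencies installed and DATABASE_URL unset):

import os

os.environ.pop("DATABASE_URL", None)  # force the fallback branch
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")

import django

django.setup()
from django.conf import settings

assert settings.DATABASES["default"]["ENGINE"] == "django.db.backends.sqlite3"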

View File

@@ -4,7 +4,7 @@ WSGI config for dashboard_project project.
 It exposes the WSGI callable as a module-level variable named ``application``.

 For more information on this file, see
-https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/
+<https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/>
 """

 import os

View File

@@ -52,10 +52,8 @@ class ExternalDataSourceAdmin(admin.ModelAdmin):
                 status,
             )
         else:
-            return format_html(
-                '<span style="color: white; background-color: orange; padding: 3px 8px; border-radius: 10px;">{}</span>',
-                status,
-            )
+            style = "color: white; background-color: orange; padding: 3px 8px; border-radius: 10px;"
+            return format_html(f'<span style="{style}">{{}}</span>', status)

     @admin.display(description="Actions")
     def refresh_action(self, obj):

View File

@@ -56,7 +56,8 @@ class Command(BaseCommand):
                 )
             elif col == "sync_interval":
                 cursor.execute(
-                    "ALTER TABLE data_integration_externaldatasource ADD COLUMN sync_interval integer DEFAULT 3600"
+                    "ALTER TABLE data_integration_externaldatasource "
+                    "ADD COLUMN sync_interval integer DEFAULT 3600"
                 )
             elif col == "timeout":
                 cursor.execute(

View File

@@ -0,0 +1,107 @@
"""
Management command to set up Jumbo company, users, and link existing data.
"""

from accounts.models import Company, CustomUser
from data_integration.models import ChatSession, ExternalDataSource
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Set up Jumbo company, create users, and link existing external data"

    def handle(self, *_args, **_options):
        self.stdout.write("Setting up Jumbo company and data...")

        # 1. Create Jumbo company
        jumbo_company, created = Company.objects.get_or_create(
            name="Jumbo", defaults={"description": "Jumbo Supermarkets - External API Data"}
        )
        if created:
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo company"))
        else:
            self.stdout.write("  Jumbo company already exists")

        # 2. Create admin user for Jumbo
        admin_created = False
        if not CustomUser.objects.filter(username="jumbo_admin").exists():
            CustomUser.objects.create_user(  # nosec B106
                username="jumbo_admin",
                email="admin@jumbo.nl",
                password="jumbo123",
                company=jumbo_company,
                is_company_admin=True,
            )
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo admin: jumbo_admin / jumbo123"))
            admin_created = True
        else:
            self.stdout.write("  Jumbo admin already exists")

        # 3. Create regular users for Jumbo
        jumbo_users = [
            {
                "username": "jumbo_analyst",
                "email": "analyst@jumbo.nl",
                "password": "jumbo123",
                "is_company_admin": False,
            },
            {
                "username": "jumbo_manager",
                "email": "manager@jumbo.nl",
                "password": "jumbo123",
                "is_company_admin": False,
            },
        ]
        users_created = 0
        for user_data in jumbo_users:
            if not CustomUser.objects.filter(username=user_data["username"]).exists():
                CustomUser.objects.create_user(
                    username=user_data["username"],
                    email=user_data["email"],
                    password=user_data["password"],
                    company=jumbo_company,
                    is_company_admin=user_data["is_company_admin"],
                )
                users_created += 1
        if users_created:
            self.stdout.write(self.style.SUCCESS(f"✓ Created {users_created} Jumbo users"))
        else:
            self.stdout.write("  Jumbo users already exist")

        # 4. Link External Data Source to Jumbo company
        try:
            jumbo_ext_source = ExternalDataSource.objects.get(name="Jumbo API")
            if not jumbo_ext_source.company:
                jumbo_ext_source.company = jumbo_company
                jumbo_ext_source.save()
                self.stdout.write(self.style.SUCCESS("✓ Linked Jumbo API data source to company"))
            else:
                self.stdout.write("  Jumbo API data source already linked")
        except ExternalDataSource.DoesNotExist:
            self.stdout.write(
                self.style.WARNING("⚠ Jumbo API external data source not found. Create it in admin first.")
            )

        # 5. Link existing chat sessions to Jumbo company
        unlinked_sessions = ChatSession.objects.filter(company__isnull=True)
        if unlinked_sessions.exists():
            count = unlinked_sessions.update(company=jumbo_company)
            self.stdout.write(self.style.SUCCESS(f"✓ Linked {count} existing chat sessions to Jumbo company"))
        else:
            self.stdout.write("  All chat sessions already linked to companies")

        # 6. Summary
        total_sessions = ChatSession.objects.filter(company=jumbo_company).count()
        total_users = CustomUser.objects.filter(company=jumbo_company).count()
        self.stdout.write(
            self.style.SUCCESS(
                f"\n✓ Setup complete!"
                f"\n  Company: {jumbo_company.name}"
                f"\n  Users: {total_users} (including {1 if admin_created or CustomUser.objects.filter(username='jumbo_admin').exists() else 0} admin)"
                f"\n  Chat sessions: {total_sessions}"
            )
        )
        self.stdout.write("\nLogin as jumbo_admin/jumbo123 to view the dashboard with Jumbo data.")
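
Every step above is guarded by get_or_create or an exists() check, so the command is safe to re-run; a hedged invocation sketch, equivalent to `python manage.py setup_jumbo` (the command name is grounded in the "Run setup_jumbo command first" message elsewhere in this changeset):

import django

django.setup()  # assumes DJANGO_SETTINGS_MODULE is already exported
from django.core.management import call_command

call_command("setup_jumbo")  # re-running only prints "already exists" lines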

View File

@@ -0,0 +1,69 @@
"""
Management command to set up Jumbo API external data source and fetch data.
"""

import os

from accounts.models import Company
from data_integration.models import ChatSession, ExternalDataSource
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Set up Jumbo API external data source and fetch chat data"

    def handle(self, **options):  # noqa: ARG002
        self.stdout.write("Setting up Jumbo API data source...")

        # Get Jumbo company
        try:
            jumbo = Company.objects.get(name="Jumbo")
            self.stdout.write(f"✓ Found Jumbo company (ID: {jumbo.id})")
        except Company.DoesNotExist:
            self.stdout.write(self.style.ERROR("✗ Jumbo company not found. Run setup_jumbo command first."))
            return

        # Create or get Jumbo API external data source
        source, created = ExternalDataSource.objects.get_or_create(
            name="Jumbo API",
            defaults={
                "company": jumbo,
                "api_url": "https://proto.notso.ai/jumbo/chats",
                "auth_username": os.environ.get("EXTERNAL_API_USERNAME", ""),
                "auth_password": os.environ.get("EXTERNAL_API_PASSWORD", ""),
                "is_active": True,
            },
        )

        # Ensure company is set if already existed
        if not created and not source.company:
            source.company = jumbo
            source.save()
            self.stdout.write(self.style.SUCCESS(f"✓ Linked existing Jumbo API source to {jumbo.name}"))
        elif created:
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo API external data source"))
        else:
            self.stdout.write("✓ Jumbo API source already exists")

        self.stdout.write(
            f"\nData source details:"
            f"\n  ID: {source.id}"
            f"\n  Company: {source.company.name if source.company else 'None'}"
            f"\n  Endpoint: {source.api_url}"
            f"\n  Active: {source.is_active}"
        )

        # Fetch data (call the task synchronously)
        self.stdout.write("\nFetching Jumbo chat data...")
        try:
            # Use .apply() or direct function call to run synchronously
            from data_integration.utils import fetch_and_store_chat_data

            result = fetch_and_store_chat_data(source.id)
            self.stdout.write(self.style.SUCCESS(f"✓ Data fetch completed: {result}"))
        except Exception as e:
            self.stdout.write(self.style.ERROR(f"✗ Error fetching data: {e}"))

        # Show summary
        session_count = ChatSession.objects.filter(company=jumbo).count()
        self.stdout.write(self.style.SUCCESS(f"\n✓ Setup complete!\n  Total Jumbo chat sessions: {session_count}"))
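
Credentials come from the environment rather than the command line; a hedged local-run sketch (the values are placeholders, and the command name setup_jumbo_api is an assumption inferred from the module's purpose, not confirmed by the diff):

import os

# Placeholders only; real credentials should come from .env or the shell.
os.environ["EXTERNAL_API_USERNAME"] = "demo-user"
os.environ["EXTERNAL_API_PASSWORD"] = "demo-password"

from django.core.management import call_command

call_command("setup_jumbo_api")  # assumed command name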

View File

@@ -0,0 +1,106 @@
"""
Management command to sync Jumbo API data to dashboard app with proper company linking.
"""

from accounts.models import Company, CustomUser
from dashboard.models import ChatSession, DataSource
from data_integration.models import ChatSession as ExtChatSession
from data_integration.models import ExternalDataSource
from django.core.management.base import BaseCommand


class Command(BaseCommand):
    help = "Sync Jumbo API data to dashboard app with company linking"

    def handle(self, *_args, **_options):
        self.stdout.write("Starting Jumbo data sync to dashboard...")

        # 1. Get or create Jumbo company
        jumbo_company, created = Company.objects.get_or_create(
            name="Jumbo", defaults={"description": "Jumbo Supermarkets - External API Data"}
        )
        if created:
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo company"))
        else:
            self.stdout.write("  Jumbo company already exists")

        # 2. Get Jumbo external data source
        try:
            jumbo_ext_source = ExternalDataSource.objects.get(name="Jumbo API")
        except ExternalDataSource.DoesNotExist:
            self.stdout.write(
                self.style.ERROR("✗ Jumbo API external data source not found. Please create it in admin first.")
            )
            return

        # 3. Get or create DataSource linked to Jumbo company
        jumbo_datasource, created = DataSource.objects.get_or_create(
            name="Jumbo API Data",
            company=jumbo_company,
            defaults={
                "description": "Chat sessions from Jumbo external API",
                "external_source": jumbo_ext_source,
            },
        )
        if created:
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo DataSource"))
        else:
            self.stdout.write("  Jumbo DataSource already exists")

        # 4. Sync chat sessions from data_integration to dashboard
        ext_sessions = ExtChatSession.objects.all()
        synced_count = 0
        skipped_count = 0
        for ext_session in ext_sessions:
            # Check if already synced
            if ChatSession.objects.filter(data_source=jumbo_datasource, session_id=ext_session.session_id).exists():
                skipped_count += 1
                continue

            # Create dashboard ChatSession
            ChatSession.objects.create(
                data_source=jumbo_datasource,
                session_id=ext_session.session_id,
                start_time=ext_session.start_time,
                end_time=ext_session.end_time,
                ip_address=ext_session.ip_address,
                country=ext_session.country or "",
                language=ext_session.language or "",
                messages_sent=ext_session.messages_sent or 0,
                sentiment=ext_session.sentiment or "",
                escalated=ext_session.escalated or False,
                forwarded_hr=ext_session.forwarded_hr or False,
                full_transcript=ext_session.full_transcript_url or "",
                avg_response_time=ext_session.avg_response_time,
                tokens=ext_session.tokens or 0,
                tokens_eur=ext_session.tokens_eur,
                category=ext_session.category or "",
                initial_msg=ext_session.initial_msg or "",
                user_rating=str(ext_session.user_rating) if ext_session.user_rating else "",
            )
            synced_count += 1

        self.stdout.write(
            self.style.SUCCESS(f"✓ Synced {synced_count} chat sessions (skipped {skipped_count} existing)")
        )

        # 5. Create admin user for Jumbo company if needed
        if not CustomUser.objects.filter(company=jumbo_company, is_company_admin=True).exists():
            CustomUser.objects.create_user(  # nosec B106
                username="jumbo_admin",
                email="admin@jumbo.nl",
                password="jumbo123",
                company=jumbo_company,
                is_company_admin=True,
            )
            self.stdout.write(self.style.SUCCESS("✓ Created Jumbo admin user: jumbo_admin / jumbo123"))
        else:
            self.stdout.write("  Jumbo admin user already exists")

        self.stdout.write(
            self.style.SUCCESS(
                f"\n✓ Sync complete! Jumbo company now has {ChatSession.objects.filter(data_source__company=jumbo_company).count()} chat sessions"
            )
        )
        self.stdout.write("\nLogin as jumbo_admin to view the dashboard with Jumbo data.")

View File

@@ -59,7 +59,7 @@ class Command(BaseCommand):
                 redis_client.delete(test_key)
             else:
                 self.stdout.write(self.style.ERROR("❌ Redis ping failed!"))
-        except redis.exceptions.ConnectionError as e:
+        except redis.exceptions.ConnectionError as e:  # type: ignore[attr-defined]
             self.stdout.write(self.style.ERROR(f"❌ Redis connection error: {e}"))
             self.stdout.write("Celery will use SQLite fallback if configured.")
         except ImportError:

View File

@@ -0,0 +1,38 @@
# Generated by Django 5.2.7 on 2025-11-05 18:20

import django.db.models.deletion
from django.db import migrations, models


class Migration(migrations.Migration):
    dependencies = [
        ("accounts", "0001_initial"),
        ("data_integration", "0002_externaldatasource_error_count_and_more"),
    ]

    operations = [
        migrations.AddField(
            model_name="chatsession",
            name="company",
            field=models.ForeignKey(
                blank=True,
                help_text="Company this session belongs to",
                null=True,
                on_delete=django.db.models.deletion.CASCADE,
                related_name="external_chat_sessions",
                to="accounts.company",
            ),
        ),
        migrations.AddField(
            model_name="externaldatasource",
            name="company",
            field=models.ForeignKey(
                blank=True,
                help_text="Company this data source belongs to",
                null=True,
                on_delete=django.db.models.deletion.CASCADE,
                related_name="external_data_sources",
                to="accounts.company",
            ),
        ),
    ]

View File

@@ -1,10 +1,19 @@
 import os

+from accounts.models import Company
 from django.db import models


 class ChatSession(models.Model):
     session_id = models.CharField(max_length=255, unique=True)
+    company = models.ForeignKey(
+        Company,
+        on_delete=models.CASCADE,
+        related_name="external_chat_sessions",
+        null=True,
+        blank=True,
+        help_text="Company this session belongs to",
+    )
     start_time = models.DateTimeField()
     end_time = models.DateTimeField()
     ip_address = models.GenericIPAddressField(null=True, blank=True)
@@ -39,6 +48,14 @@ class ChatMessage(models.Model):

 class ExternalDataSource(models.Model):
     name = models.CharField(max_length=255, default="External API")
+    company = models.ForeignKey(
+        Company,
+        on_delete=models.CASCADE,
+        related_name="external_data_sources",
+        null=True,
+        blank=True,
+        help_text="Company this data source belongs to",
+    )
     api_url = models.URLField(default="https://proto.notso.ai/jumbo/chats")
     auth_username = models.CharField(max_length=255, blank=True, null=True)
     auth_password = models.CharField(
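
A small sketch of the company scoping these nullable ForeignKeys enable, using the related_name values declared above (assumes the seeded Jumbo company exists):

from accounts.models import Company
from data_integration.models import ChatSession

jumbo = Company.objects.get(name="Jumbo")
jumbo_sessions = jumbo.external_chat_sessions.all()  # reverse of ChatSession.company
jumbo_sources = jumbo.external_data_sources.all()    # reverse of ExternalDataSource.company
unlinked = ChatSession.objects.filter(company__isnull=True)  # null=True permits staged linking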

View File

@@ -1 +1 @@
-# Create your tests here.
+# Create your tests here

View File

@@ -125,7 +125,10 @@ def fetch_and_store_chat_data(source_id=None):
             # If we couldn't parse the dates, log an error and skip this row
             if not start_time or not end_time:
-                error_msg = f"Could not parse date fields for session {data['session_id']}: start_time={data['start_time']}, end_time={data['end_time']}"
+                error_msg = (
+                    f"Could not parse date fields for session {data['session_id']}: "
+                    f"start_time={data['start_time']}, end_time={data['end_time']}"
+                )
                 logger.error(error_msg)
                 stats["errors"] += 1
                 continue
@@ -141,6 +144,7 @@ def fetch_and_store_chat_data(source_id=None):
             session, created = ChatSession.objects.update_or_create(
                 session_id=data["session_id"],
                 defaults={
+                    "company": source.company,  # Link to the company from the data source
                     "start_time": start_time,
                     "end_time": end_time,
                     "ip_address": data.get("ip_address"),
@@ -364,7 +368,8 @@ def parse_and_store_transcript_messages(session, transcript_content):
     # If no recognized patterns are found, try to intelligently split the transcript
     if not has_recognized_patterns and len(lines) > 0:
         logger.info(
-            f"No standard message patterns found in transcript for session {session.session_id}. Attempting intelligent split."
+            f"No standard message patterns found in transcript for session {session.session_id}. "
+            f"Attempting intelligent split."
        )

     # Try timestamp-based parsing if we have enough consistent timestamps
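
Because the ingest keys update_or_create on session_id, adding company to the defaults retroactively stamps existing rows on the next fetch; a hedged sketch of that upsert behaviour (demo values only):

from django.utils import timezone

from data_integration.models import ChatSession

now = timezone.now()
_, created = ChatSession.objects.update_or_create(
    session_id="demo-session-1",  # hypothetical id
    defaults={"start_time": now, "end_time": now},
)
# A second call with the same session_id updates the row in place,
# so created is False and no duplicate is inserted.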

View File

@@ -7,7 +7,7 @@ from .models import ExternalDataSource
 from .tasks import periodic_fetch_chat_data, refresh_specific_source
 from .utils import fetch_and_store_chat_data

-# Create your views here.
+# Create your views here

 def is_superuser(user):

View File

@@ -4,6 +4,7 @@ import os
 import sys

+
 # Add the project root to sys.path
 sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")

View File

@@ -1,4 +1,5 @@
 # !/usr/bin/env python
 # scripts/fix_dashboard_data.py
+
 import os
@@ -15,11 +16,13 @@ from django.db import transaction
 from django.utils.timezone import make_aware

+
 # Set up Django environment
 sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..")))
 os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
 django.setup()

+
 # SCRIPT CONFIG
 CREATE_TEST_DATA = False  # Set to True to create sample data if none exists
 COMPANY_NAME = "Notso AI"  # The company name to use

View File

@@ -0,0 +1,77 @@
#!/usr/bin/env python
"""
Script to create Jumbo API external data source and fetch data.
"""

import os
import sys

import django

# Setup Django
sys.path.insert(0, os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
os.environ.setdefault("DJANGO_SETTINGS_MODULE", "dashboard_project.settings")
django.setup()

from accounts.models import Company  # noqa: E402
from data_integration.models import ExternalDataSource  # noqa: E402
from data_integration.tasks import refresh_specific_source  # noqa: E402


def main():
    print("Setting up Jumbo API data source...")

    # Get Jumbo company
    try:
        jumbo = Company.objects.get(name="Jumbo")
        print(f"✓ Found Jumbo company (ID: {jumbo.id})")
    except Company.DoesNotExist:
        print("✗ Jumbo company not found. Run setup_jumbo command first.")
        return

    # Create or get Jumbo API external data source
    # (field names follow the ExternalDataSource model: api_url / auth_username / auth_password)
    source, created = ExternalDataSource.objects.get_or_create(
        name="Jumbo API",
        defaults={
            "company": jumbo,
            "api_url": "https://mijn.jumbo.com/api/chat/sessions",
            "auth_username": os.environ.get("EXTERNAL_API_USERNAME", ""),
            "auth_password": os.environ.get("EXTERNAL_API_PASSWORD", ""),
            "is_active": True,
        },
    )

    # Ensure company is set if already existed
    if not created and not source.company:
        source.company = jumbo
        source.save()
        print(f"✓ Linked existing Jumbo API source to {jumbo.name}")
    elif created:
        print("✓ Created Jumbo API external data source")
    else:
        print("✓ Jumbo API source already exists")

    print("\nData source details:")
    print(f"  ID: {source.id}")
    print(f"  Company: {source.company.name if source.company else 'None'}")
    print(f"  Endpoint: {source.api_url}")
    print(f"  Active: {source.is_active}")

    # Fetch data
    print("\nFetching Jumbo chat data...")
    try:
        result = refresh_specific_source(source.id)
        print(f"✓ Data fetch completed: {result}")
    except Exception as e:
        print(f"✗ Error fetching data: {e}")

    # Show summary
    from data_integration.models import ChatSession

    session_count = ChatSession.objects.filter(company=jumbo).count()
    print("\n✓ Setup complete!")
    print(f"  Total Jumbo chat sessions: {session_count}")


if __name__ == "__main__":
    main()

View File

@@ -1,4 +1,5 @@
 /**
  * dashboard.css - Styles specific to dashboard functionality
  */
+

View File

@@ -1,4 +1,5 @@
 /**
  * style.css - Global styles for the application
  */
+

View File

@@ -1,14 +1,55 @@
/** /**
* ajax-navigation.js - JavaScript for AJAX-based navigation across the entire application * ajax-navigation.js - JavaScript for AJAX-based navigation across the entire application
* *
* This script handles AJAX navigation between pages in the Chat Analytics Dashboard. * This script handles AJAX navigation between pages in the Chat Analytics Dashboard.
* It intercepts link clicks, loads content via AJAX, and updates the browser history. * It intercepts link clicks, loads content via AJAX, and updates the browser history.
*/ */
document.addEventListener("DOMContentLoaded", function () { // Function to reload and execute scripts in new content
// Only initialize if AJAX navigation is enabled function reloadScripts(container) {
if (typeof ENABLE_AJAX_NAVIGATION !== "undefined" && ENABLE_AJAX_NAVIGATION) { const scripts = container.getElementsByTagName("script");
setupAjaxNavigation(); for (let script of scripts) {
const newScript = document.createElement("script");
// Copy all attributes
Array.from(script.attributes).forEach((attr) => {
newScript.setAttribute(attr.name, attr.value);
});
// Copy inline script content
newScript.textContent = script.textContent;
// Replace old script with new one
script.parentNode.replaceChild(newScript, script);
}
}
// Function to initialize scripts needed for the new page content
function initializePageScripts() {
// Re-initialize any custom scripts that might be needed
if (typeof setupAjaxPagination === "function") {
setupAjaxPagination();
}
// Initialize Bootstrap tooltips, popovers, etc.
if (typeof bootstrap !== "undefined") {
// Initialize tooltips
const tooltipTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="tooltip"]'),
);
tooltipTriggerList.map(function (tooltipTriggerEl) {
return new bootstrap.Tooltip(tooltipTriggerEl);
});
// Initialize popovers
const popoverTriggerList = [].slice.call(
document.querySelectorAll('[data-bs-toggle="popover"]'),
);
popoverTriggerList.map(function (popoverTriggerEl) {
return new bootstrap.Popover(popoverTriggerEl);
});
}
} }
// Function to set up AJAX navigation for the application // Function to set up AJAX navigation for the application
@@ -85,8 +126,7 @@ document.addEventListener("DOMContentLoaded", function () {
         },
     })
         .then((response) => {
-            if (!response.ok)
-                throw new Error(`Network response was not ok: ${response.status}`);
+            if (!response.ok) throw new Error(`Network response was not ok: ${response.status}`);
             return response.text();
         })
         .then((html) => {
@@ -122,25 +162,6 @@ document.addEventListener("DOMContentLoaded", function () {
         });
     }
-    // Function to reload and execute scripts in new content
-    function reloadScripts(container) {
-        const scripts = container.getElementsByTagName("script");
-        for (let script of scripts) {
-            const newScript = document.createElement("script");
-            // Copy all attributes
-            Array.from(script.attributes).forEach((attr) => {
-                newScript.setAttribute(attr.name, attr.value);
-            });
-            // Copy inline script content
-            newScript.textContent = script.textContent;
-            // Replace old script with new one
-            script.parentNode.replaceChild(newScript, script);
-        }
-    }
 // Function to handle form submissions
 function handleFormSubmission(form, e) {
     e.preventDefault();
@@ -204,33 +225,6 @@ document.addEventListener("DOMContentLoaded", function () {
     }
 }
-    // Function to initialize scripts needed for the new page content
-    function initializePageScripts() {
-        // Re-initialize any custom scripts that might be needed
-        if (typeof setupAjaxPagination === "function") {
-            setupAjaxPagination();
-        }
-        // Initialize Bootstrap tooltips, popovers, etc.
-        if (typeof bootstrap !== "undefined") {
-            // Initialize tooltips
-            const tooltipTriggerList = [].slice.call(
-                document.querySelectorAll('[data-bs-toggle="tooltip"]'),
-            );
-            tooltipTriggerList.map(function (tooltipTriggerEl) {
-                return new bootstrap.Tooltip(tooltipTriggerEl);
-            });
-            // Initialize popovers
-            const popoverTriggerList = [].slice.call(
-                document.querySelectorAll('[data-bs-toggle="popover"]'),
-            );
-            popoverTriggerList.map(function (popoverTriggerEl) {
-                return new bootstrap.Popover(popoverTriggerEl);
-            });
-        }
-    }
 // Function to attach event listeners to forms and links
 function attachEventListeners() {
     // Handle AJAX navigation links
@@ -271,4 +265,10 @@ document.addEventListener("DOMContentLoaded", function () {
     }
 });
 }
+
+document.addEventListener("DOMContentLoaded", function () {
+    // Only initialize if AJAX navigation is enabled
+    if (typeof ENABLE_AJAX_NAVIGATION !== "undefined" && ENABLE_AJAX_NAVIGATION) {
+        setupAjaxNavigation();
+    }
 });


@@ -5,10 +5,6 @@
  * It intercepts pagination link clicks, loads content via AJAX, and updates the browser history.
  */
-document.addEventListener("DOMContentLoaded", function () {
-    // Initialize AJAX pagination
-    setupAjaxPagination();
 // Function to set up AJAX pagination for the entire application
 function setupAjaxPagination() {
     // Configuration - can be customized per page if needed
@@ -103,4 +99,8 @@ document.addEventListener("DOMContentLoaded", function () {
     }
 });
 }
+
+document.addEventListener("DOMContentLoaded", function () {
+    // Initialize AJAX pagination
+    setupAjaxPagination();
 });


@@ -1,4 +1,5 @@
 /**
  * dashboard.js - JavaScript for the dashboard functionality
  *
  * This file handles the interactive features of the dashboard,
@@ -6,20 +7,17 @@
  * customization.
  */
-document.addEventListener("DOMContentLoaded", function () {
 // Set up Plotly default config based on theme
 function updatePlotlyTheme() {
     // Force a fresh check of the current theme
     const isDarkMode = document.documentElement.getAttribute("data-bs-theme") === "dark";
-    console.log(
-        "updatePlotlyTheme called - Current theme mode:",
-        isDarkMode ? "dark" : "light",
-    );
+    console.log("updatePlotlyTheme called - Current theme mode:", isDarkMode ? "dark" : "light");
     window.plotlyDefaultLayout = {
         font: {
             color: isDarkMode ? "#f8f9fa" : "#212529",
-            family: '-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
+            family:
+                '-apple-system, BlinkMacSystemFont, "Segoe UI", Roboto, "Helvetica Neue", Arial, sans-serif',
         },
         paper_bgcolor: isDarkMode ? "#343a40" : "#ffffff",
         plot_bgcolor: isDarkMode ? "#343a40" : "#ffffff",
@@ -103,26 +101,6 @@ document.addEventListener("DOMContentLoaded", function () {
     };
 }
-    // Initialize theme setting
-    updatePlotlyTheme();
-    // Listen for theme changes
-    const observer = new MutationObserver(function (mutations) {
-        mutations.forEach(function (mutation) {
-            if (mutation.attributeName === "data-bs-theme") {
-                console.log(
-                    "Theme changed detected by observer:",
-                    document.documentElement.getAttribute("data-bs-theme"),
-                );
-                updatePlotlyTheme();
-                // Use a small delay to ensure styles have been applied
-                setTimeout(refreshAllCharts, 100);
-            }
-        });
-    });
-    observer.observe(document.documentElement, { attributes: true });
 // Chart responsiveness
 function resizeCharts() {
     const charts = document.querySelectorAll(".chart-container");
@@ -136,161 +114,6 @@ document.addEventListener("DOMContentLoaded", function () {
     });
 }
-    // Refresh all charts with current theme
-    function refreshAllCharts() {
-        if (!window.Plotly) return;
-        const currentTheme = document.documentElement.getAttribute("data-bs-theme");
-        console.log("Refreshing charts with theme:", currentTheme);
-        // Update the theme settings
-        updatePlotlyTheme();
-        const charts = document.querySelectorAll(".chart-container");
-        charts.forEach(function (chart) {
-            if (chart.id) {
-                try {
-                    // Safe way to check if element has a plot
-                    const plotElement = document.getElementById(chart.id);
-                    if (plotElement && plotElement._fullLayout) {
-                        console.log("Updating chart theme for:", chart.id);
-                        // Determine chart type to apply appropriate settings
-                        let layoutUpdate = { ...window.plotlyDefaultLayout };
-                        // Check if it's a bar chart
-                        if (
-                            plotElement.data &&
-                            plotElement.data.some((trace) => trace.type === "bar")
-                        ) {
-                            layoutUpdate = { ...window.plotlyBarConfig };
-                        }
-                        // Check if it's a pie chart
-                        if (
-                            plotElement.data &&
-                            plotElement.data.some((trace) => trace.type === "pie")
-                        ) {
-                            layoutUpdate = { ...window.plotlyPieConfig };
-                        }
-                        // Force paper and plot background colors based on current theme
-                        // This ensures the chart background always matches the current theme
-                        layoutUpdate.paper_bgcolor =
-                            currentTheme === "dark" ? "#343a40" : "#ffffff";
-                        layoutUpdate.plot_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
-                        // Update font colors too
-                        layoutUpdate.font.color = currentTheme === "dark" ? "#f8f9fa" : "#212529";
-                        // Apply layout updates
-                        Plotly.relayout(chart.id, layoutUpdate);
-                    }
-                } catch (e) {
-                    console.error("Error updating chart theme:", e);
-                }
-            }
-        });
-    }
-    // Make refreshAllCharts available globally
-    window.refreshAllCharts = refreshAllCharts;
-    // Handle window resize
-    window.addEventListener("resize", function () {
-        if (window.Plotly) {
-            resizeCharts();
-        }
-    });
-    // Call resizeCharts on initial load
-    if (window.Plotly) {
-        // Use a longer delay to ensure charts are fully loaded
-        setTimeout(function () {
-            updatePlotlyTheme();
-            refreshAllCharts();
-        }, 300);
-    }
-    // Apply theme to newly created charts
-    const originalPlotlyNewPlot = Plotly.newPlot;
-    Plotly.newPlot = function () {
-        const args = Array.from(arguments);
-        // Get the layout argument (3rd argument)
-        if (args.length >= 3 && typeof args[2] === "object") {
-            // Ensure plotlyDefaultLayout is up to date
-            updatePlotlyTheme();
-            // Apply current theme to new plot
-            args[2] = { ...window.plotlyDefaultLayout, ...args[2] };
-        }
-        return originalPlotlyNewPlot.apply(this, args);
-    };
-    // Time range filtering
-    const timeRangeDropdown = document.getElementById("timeRangeDropdown");
-    if (timeRangeDropdown) {
-        const timeRangeLinks = timeRangeDropdown.querySelectorAll(".dropdown-item");
-        timeRangeLinks.forEach((link) => {
-            link.addEventListener("click", function (e) {
-                const url = new URL(this.href);
-                const dashboardId = url.searchParams.get("dashboard_id");
-                const timeRange = url.searchParams.get("time_range");
-                // Fetch updated data via AJAX
-                if (dashboardId) {
-                    fetchDashboardData(dashboardId, timeRange);
-                    e.preventDefault();
-                }
-            });
-        });
-    }
-    // Function to fetch dashboard data
-    function fetchDashboardData(dashboardId, timeRange) {
-        const loadingOverlay = document.createElement("div");
-        loadingOverlay.className = "loading-overlay";
-        loadingOverlay.innerHTML =
-            '<div class="spinner-border text-primary" role="status"><span class="visually-hidden">Loading...</span></div>';
-        document.querySelector("main").appendChild(loadingOverlay);
-        fetch(`/dashboard/api/dashboard/${dashboardId}/data/?time_range=${timeRange || "all"}`)
-            .then((response) => {
-                if (!response.ok) {
-                    throw new Error(`Network response was not ok: ${response.status}`);
-                }
-                return response.json();
-            })
-            .then((data) => {
-                console.log("Dashboard API response:", data);
-                updateDashboardStats(data);
-                updateDashboardCharts(data);
-                // Update URL without page reload
-                const url = new URL(window.location.href);
-                url.searchParams.set("dashboard_id", dashboardId);
-                if (timeRange) {
-                    url.searchParams.set("time_range", timeRange);
-                }
-                window.history.pushState({}, "", url);
-                document.querySelector(".loading-overlay").remove();
-            })
-            .catch((error) => {
-                console.error("Error fetching dashboard data:", error);
-                document.querySelector(".loading-overlay").remove();
-                // Show error message
-                const alertElement = document.createElement("div");
-                alertElement.className = "alert alert-danger alert-dismissible fade show";
-                alertElement.setAttribute("role", "alert");
-                alertElement.innerHTML = `
-                    Error loading dashboard data. Please try again.
-                    <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
-                `;
-                document.querySelector("main").prepend(alertElement);
-            });
-    }
 // Function to update dashboard statistics
 function updateDashboardStats(data) {
     // Update total sessions
@@ -467,6 +290,175 @@ document.addEventListener("DOMContentLoaded", function () {
     }
 }
+
+document.addEventListener("DOMContentLoaded", function () {
+    // Initialize theme setting
+    updatePlotlyTheme();
+    // Listen for theme changes
+    const observer = new MutationObserver(function (mutations) {
+        mutations.forEach(function (mutation) {
+            if (mutation.attributeName === "data-bs-theme") {
+                console.log(
+                    "Theme changed detected by observer:",
+                    document.documentElement.getAttribute("data-bs-theme"),
+                );
+                updatePlotlyTheme();
+                // Use a small delay to ensure styles have been applied
+                setTimeout(refreshAllCharts, 100);
+            }
+        });
+    });
+    observer.observe(document.documentElement, { attributes: true });
+    // Refresh all charts with current theme
+    function refreshAllCharts() {
+        if (!window.Plotly) return;
+        const currentTheme = document.documentElement.getAttribute("data-bs-theme");
+        console.log("Refreshing charts with theme:", currentTheme);
+        // Update the theme settings
+        updatePlotlyTheme();
+        const charts = document.querySelectorAll(".chart-container");
+        charts.forEach(function (chart) {
+            if (chart.id) {
+                try {
+                    // Safe way to check if element has a plot
+                    const plotElement = document.getElementById(chart.id);
+                    if (plotElement && plotElement._fullLayout) {
+                        console.log("Updating chart theme for:", chart.id);
+                        // Determine chart type to apply appropriate settings
+                        let layoutUpdate = { ...window.plotlyDefaultLayout };
+                        // Check if it's a bar chart
+                        if (plotElement.data && plotElement.data.some((trace) => trace.type === "bar")) {
+                            layoutUpdate = { ...window.plotlyBarConfig };
+                        }
+                        // Check if it's a pie chart
+                        if (plotElement.data && plotElement.data.some((trace) => trace.type === "pie")) {
+                            layoutUpdate = { ...window.plotlyPieConfig };
+                        }
+                        // Force paper and plot background colors based on current theme
+                        // This ensures the chart background always matches the current theme
+                        layoutUpdate.paper_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
+                        layoutUpdate.plot_bgcolor = currentTheme === "dark" ? "#343a40" : "#ffffff";
+                        // Update font colors too
+                        layoutUpdate.font.color = currentTheme === "dark" ? "#f8f9fa" : "#212529";
+                        // Apply layout updates
+                        Plotly.relayout(chart.id, layoutUpdate);
+                    }
+                } catch (e) {
+                    console.error("Error updating chart theme:", e);
+                }
+            }
+        });
+    }
+    // Make refreshAllCharts available globally
+    window.refreshAllCharts = refreshAllCharts;
+    // Handle window resize
+    window.addEventListener("resize", function () {
+        if (window.Plotly) {
+            resizeCharts();
+        }
+    });
+    // Call resizeCharts on initial load
+    if (window.Plotly) {
+        // Use a longer delay to ensure charts are fully loaded
+        setTimeout(function () {
+            updatePlotlyTheme();
+            refreshAllCharts();
+        }, 300);
+    }
+    // Apply theme to newly created charts
+    const originalPlotlyNewPlot = Plotly.newPlot;
+    Plotly.newPlot = function () {
+        const args = Array.from(arguments);
+        // Get the layout argument (3rd argument)
+        if (args.length >= 3 && typeof args[2] === "object") {
+            // Ensure plotlyDefaultLayout is up to date
+            updatePlotlyTheme();
+            // Apply current theme to new plot
+            args[2] = { ...window.plotlyDefaultLayout, ...args[2] };
+        }
+        return originalPlotlyNewPlot.apply(this, args);
+    };
+    // Time range filtering
+    const timeRangeDropdown = document.getElementById("timeRangeDropdown");
+    if (timeRangeDropdown) {
+        const timeRangeLinks = timeRangeDropdown.querySelectorAll(".dropdown-item");
+        timeRangeLinks.forEach((link) => {
+            link.addEventListener("click", function (e) {
+                const url = new URL(this.href);
+                const dashboardId = url.searchParams.get("dashboard_id");
+                const timeRange = url.searchParams.get("time_range");
+                // Fetch updated data via AJAX
+                if (dashboardId) {
+                    fetchDashboardData(dashboardId, timeRange);
+                    e.preventDefault();
+                }
+            });
+        });
+    }
+    // Function to fetch dashboard data
+    function fetchDashboardData(dashboardId, timeRange) {
+        const loadingOverlay = document.createElement("div");
+        loadingOverlay.className = "loading-overlay";
+        loadingOverlay.innerHTML =
+            '<div class="spinner-border text-primary" role="status"><span class="visually-hidden">Loading...</span></div>';
+        document.querySelector("main").appendChild(loadingOverlay);
+        fetch(`/dashboard/api/dashboard/${dashboardId}/data/?time_range=${timeRange || "all"}`)
+            .then((response) => {
+                if (!response.ok) {
+                    throw new Error(`Network response was not ok: ${response.status}`);
+                }
+                return response.json();
+            })
+            .then((data) => {
+                console.log("Dashboard API response:", data);
+                updateDashboardStats(data);
+                updateDashboardCharts(data);
+                // Update URL without page reload
+                const url = new URL(window.location.href);
+                url.searchParams.set("dashboard_id", dashboardId);
+                if (timeRange) {
+                    url.searchParams.set("time_range", timeRange);
+                }
+                window.history.pushState({}, "", url);
+                document.querySelector(".loading-overlay").remove();
+            })
+            .catch((error) => {
+                console.error("Error fetching dashboard data:", error);
+                document.querySelector(".loading-overlay").remove();
+                // Show error message
+                const alertElement = document.createElement("div");
+                alertElement.className = "alert alert-danger alert-dismissible fade show";
+                alertElement.setAttribute("role", "alert");
+                alertElement.innerHTML = `
+                    Error loading dashboard data. Please try again.
+                    <button type="button" class="btn-close" data-bs-dismiss="alert" aria-label="Close"></button>
+                `;
+                document.querySelector("main").prepend(alertElement);
+            });
+    }
     // Dashboard selector
     const dashboardSelector = document.querySelectorAll('a[href^="?dashboard_id="]');
     dashboardSelector.forEach((link) => {


@@ -1,20 +1,73 @@
 /**
  * main.js - Global JavaScript functionality
  *
  * This file contains general JavaScript functionality used across
  * the entire application, including navigation, forms, and UI interactions.
  */
+// Handle sidebar collapse on small screens
+function handleSidebarOnResize() {
+    if (window.innerWidth < 768) {
+        document.querySelector(".sidebar")?.classList.remove("show");
+    }
+}
+// Theme toggling functionality
+function setTheme(theme, isUserPreference = false) {
+    console.log("Setting theme to:", theme, "User preference:", isUserPreference);
+    // Update the HTML attribute that controls theme
+    document.documentElement.setAttribute("data-bs-theme", theme);
+    // Save the theme preference to localStorage
+    localStorage.setItem("theme", theme);
+    // If this was a user choice (from the toggle button), record that fact
+    if (isUserPreference) {
+        localStorage.setItem("userPreferredTheme", "true");
+    }
+    // Update toggle button icon
+    const themeToggle = document.getElementById("theme-toggle");
+    if (themeToggle) {
+        const icon = themeToggle.querySelector("i");
+        if (theme === "dark") {
+            icon.classList.remove("fa-moon");
+            icon.classList.add("fa-sun");
+            themeToggle.setAttribute("title", "Switch to light mode");
+            themeToggle.setAttribute("aria-label", "Switch to light mode");
+        } else {
+            icon.classList.remove("fa-sun");
+            icon.classList.add("fa-moon");
+            themeToggle.setAttribute("title", "Switch to dark mode");
+            themeToggle.setAttribute("aria-label", "Switch to dark mode");
+        }
+    }
+    // If we're on a page with charts, refresh them to match the theme
+    if (typeof window.refreshAllCharts === "function") {
+        console.log("Calling refresh charts from theme toggle");
+        // Add a small delay to ensure DOM updates have completed
+        setTimeout(() => window.refreshAllCharts(), 100);
+    }
+}
+// Check if the user has a system preference for dark mode
+function getSystemPreference() {
+    return window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light";
+}
 document.addEventListener("DOMContentLoaded", function () {
     // Initialize tooltips
     var tooltipTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="tooltip"]'));
-    var tooltipList = tooltipTriggerList.map(function (tooltipTriggerEl) {
+    tooltipTriggerList.map(function (tooltipTriggerEl) {
         return new bootstrap.Tooltip(tooltipTriggerEl);
     });
     // Initialize popovers
     var popoverTriggerList = [].slice.call(document.querySelectorAll('[data-bs-toggle="popover"]'));
-    var popoverList = popoverTriggerList.map(function (popoverTriggerEl) {
+    popoverTriggerList.map(function (popoverTriggerEl) {
         return new bootstrap.Popover(popoverTriggerEl);
     });
@@ -74,7 +127,7 @@ document.addEventListener("DOMContentLoaded", function () {
     // File input customization
     const fileInputs = document.querySelectorAll(".custom-file-input");
     fileInputs.forEach(function (input) {
-        input.addEventListener("change", function (e) {
+        input.addEventListener("change", function () {
             const fileName = this.files[0]?.name || "Choose file";
             const nextSibling = this.nextElementSibling;
             if (nextSibling) {
@@ -135,63 +188,13 @@ document.addEventListener("DOMContentLoaded", function () {
     const exportLinks = document.querySelectorAll("[data-export]");
     exportLinks.forEach(function (link) {
-        link.addEventListener("click", function (e) {
+        link.addEventListener("click", function () {
             // Handle export functionality if needed
             console.log("Export requested:", this.dataset.export);
         });
     });
-    // Handle sidebar collapse on small screens
-    function handleSidebarOnResize() {
-        if (window.innerWidth < 768) {
-            document.querySelector(".sidebar")?.classList.remove("show");
-        }
-    }
     window.addEventListener("resize", handleSidebarOnResize);
-    // Theme toggling functionality
-    function setTheme(theme, isUserPreference = false) {
-        console.log("Setting theme to:", theme, "User preference:", isUserPreference);
-        // Update the HTML attribute that controls theme
-        document.documentElement.setAttribute("data-bs-theme", theme);
-        // Save the theme preference to localStorage
-        localStorage.setItem("theme", theme);
-        // If this was a user choice (from the toggle button), record that fact
-        if (isUserPreference) {
-            localStorage.setItem("userPreferredTheme", "true");
-        }
-        // Update toggle button icon
-        const themeToggle = document.getElementById("theme-toggle");
-        if (themeToggle) {
-            const icon = themeToggle.querySelector("i");
-            if (theme === "dark") {
-                icon.classList.remove("fa-moon");
-                icon.classList.add("fa-sun");
-                themeToggle.setAttribute("title", "Switch to light mode");
-                themeToggle.setAttribute("aria-label", "Switch to light mode");
-            } else {
-                icon.classList.remove("fa-sun");
-                icon.classList.add("fa-moon");
-                themeToggle.setAttribute("title", "Switch to dark mode");
-                themeToggle.setAttribute("aria-label", "Switch to dark mode");
-            }
-        }
-        // If we're on a page with charts, refresh them to match the theme
-        if (typeof window.refreshAllCharts === "function") {
-            console.log("Calling refresh charts from theme toggle");
-            // Add a small delay to ensure DOM updates have completed
-            setTimeout(window.refreshAllCharts, 100);
-        }
-    }
-    // Check if the user has a system preference for dark mode
-    function getSystemPreference() {
-        return window.matchMedia("(prefers-color-scheme: dark)").matches ? "dark" : "light";
-    }
     // Initialize theme based on saved preference or system setting
     function initializeTheme() {


@@ -1,3 +1,4 @@
+{% load crispy_forms_filters %}
 <!-- templates/accounts/login.html -->
 {% extends 'base.html' %} {% load crispy_forms_tags %}
 {% block title %}


@@ -4,7 +4,7 @@ WSGI config for dashboard_project project.
 It exposes the WSGI callable as a module-level variable named ``application``.
 For more information on this file, see
-https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/
+<https://docs.djangoproject.com/en/4.0/howto/deployment/wsgi/>
 """
 import os

dev.sh

@@ -1,10 +1,13 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # LiveGraphsDjango Development Helper Script
+
 # Set UV_LINK_MODE to copy to avoid hardlink warnings
 export UV_LINK_MODE=copy
+
 # Function to print section header
 print_header() {
     echo "======================================"
     echo "🚀 $1"
@@ -12,6 +15,7 @@ print_header() {
 }
+
 # Display help menu
 if [[ $1 == "help" ]] || [[ $1 == "-h" ]] || [[ $1 == "--help" ]] || [[ -z $1 ]]; then
     print_header "LiveGraphsDjango Development Commands"
     echo "Usage: ./dev.sh COMMAND"
@@ -33,6 +37,7 @@ if [[ $1 == "help" ]] || [[ $1 == "-h" ]] || [[ $1 == "--help" ]] || [[ -z $1 ]]
 fi
+
 # Start Redis server
 if [[ $1 == "redis-start" ]]; then
     print_header "Starting Redis Server"
     redis-server --daemonize yes
@@ -46,6 +51,7 @@ if [[ $1 == "redis-start" ]]; then
 fi
+
 # Test Redis connection
 if [[ $1 == "redis-test" ]]; then
     print_header "Testing Redis Connection"
     cd dashboard_project && python manage.py test_redis
@@ -53,6 +59,7 @@ if [[ $1 == "redis-test" ]]; then
 fi
+
 # Stop Redis server
 if [[ $1 == "redis-stop" ]]; then
     print_header "Stopping Redis Server"
     redis-cli shutdown
@@ -61,6 +68,7 @@ if [[ $1 == "redis-stop" ]]; then
 fi
+
 # Run migrations
 if [[ $1 == "migrate" ]]; then
     print_header "Running Migrations"
     cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py migrate
@@ -68,6 +76,7 @@ if [[ $1 == "migrate" ]]; then
 fi
+
 # Make migrations
 if [[ $1 == "makemigrations" ]]; then
     print_header "Creating Migrations"
     cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py makemigrations
@@ -75,6 +84,7 @@ if [[ $1 == "makemigrations" ]]; then
 fi
+
 # Create superuser
 if [[ $1 == "superuser" ]]; then
     print_header "Creating Superuser"
     cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py createsuperuser
@@ -82,6 +92,7 @@ if [[ $1 == "superuser" ]]; then
 fi
+
 # Test Celery
 if [[ $1 == "test-celery" ]]; then
     print_header "Testing Celery"
     cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py test_celery
@@ -89,6 +100,7 @@ if [[ $1 == "test-celery" ]]; then
 fi
+
 # View Celery logs
 if [[ $1 == "logs-celery" ]]; then
     print_header "Celery Worker Logs"
     echo "Press Ctrl+C to exit"
@@ -97,6 +109,7 @@ if [[ $1 == "logs-celery" ]]; then
 fi
+
 # View Celery Beat logs
 if [[ $1 == "logs-beat" ]]; then
     print_header "Celery Beat Logs"
     echo "Press Ctrl+C to exit"
@@ -105,6 +118,7 @@ if [[ $1 == "logs-beat" ]]; then
 fi
+
 # Django shell
 if [[ $1 == "shell" ]]; then
     print_header "Django Shell"
     cd dashboard_project && UV_LINK_MODE=copy uv run python manage.py shell
@@ -112,6 +126,7 @@ if [[ $1 == "shell" ]]; then
 fi
+
 # Start the application
 if [[ $1 == "start" ]]; then
     print_header "Starting LiveGraphsDjango Application"
     ./start.sh
@@ -119,6 +134,7 @@ if [[ $1 == "start" ]]; then
 fi
+
 # Invalid command
 echo "❌ Unknown command: $1"
 echo "Run './dev.sh help' to see available commands"
 exit 1


@@ -1,22 +1,23 @@
# docker-compose.yml
-version: "3.8"
 services:
   web:
     build: .
-    command: gunicorn dashboard_project.wsgi:application --bind 0.0.0.0:8000
+    command: uv run gunicorn dashboard_project.wsgi:application --bind 0.0.0.0:8000 --chdir dashboard_project
     volumes:
-      - .:/app
       - static_volume:/app/staticfiles
       - media_volume:/app/media
     ports:
       - 8000:8000
+    env_file:
+      - .env
     environment:
-      - DEBUG=0
-      - SECRET_KEY=your_secret_key_here
-      - ALLOWED_HOSTS=localhost,127.0.0.1
-      - DJANGO_SETTINGS_MODULE=dashboard_project.settings
+      - DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
+      - POSTGRES_DB=dashboard_db
+      - POSTGRES_USER=postgres
+      - POSTGRES_PASSWORD=postgres
+      - POSTGRES_HOST=db
+      - POSTGRES_PORT=5432
       - CELERY_BROKER_URL=redis://redis:6379/0
       - CELERY_RESULT_BACKEND=redis://redis:6379/0
     depends_on:
@@ -24,7 +25,7 @@ services:
       - redis
   db:
-    image: postgres:13
+    image: postgres:alpine
     volumes:
       - postgres_data:/var/lib/postgresql/data/
     environment:
@@ -35,7 +36,7 @@ services:
       - 5432:5432
   redis:
-    image: redis:7-alpine
+    image: redis:alpine
    ports:
       - 6379:6379
     volumes:
@@ -48,12 +49,16 @@ services:
   celery:
     build: .
-    command: celery -A dashboard_project worker --loglevel=info
-    volumes:
-      - .:/app
+    command: uv run celery -A dashboard_project worker --loglevel=info --workdir dashboard_project
+    env_file:
+      - .env
     environment:
-      - DEBUG=0
-      - DJANGO_SETTINGS_MODULE=dashboard_project.settings
+      - DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
+      - POSTGRES_DB=dashboard_db
+      - POSTGRES_USER=postgres
+      - POSTGRES_PASSWORD=postgres
+      - POSTGRES_HOST=db
+      - POSTGRES_PORT=5432
       - CELERY_BROKER_URL=redis://redis:6379/0
       - CELERY_RESULT_BACKEND=redis://redis:6379/0
     depends_on:
@@ -62,12 +67,16 @@ services:
   celery-beat:
     build: .
-    command: celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler
-    volumes:
-      - .:/app
+    command: uv run celery -A dashboard_project beat --scheduler django_celery_beat.schedulers:DatabaseScheduler --workdir dashboard_project
+    env_file:
+      - .env
     environment:
-      - DEBUG=0
-      - DJANGO_SETTINGS_MODULE=dashboard_project.settings
+      - DATABASE_URL=postgresql://postgres:postgres@db:5432/dashboard_db
+      - POSTGRES_DB=dashboard_db
+      - POSTGRES_USER=postgres
+      - POSTGRES_PASSWORD=postgres
+      - POSTGRES_HOST=db
+      - POSTGRES_PORT=5432
      - CELERY_BROKER_URL=redis://redis:6379/0
       - CELERY_RESULT_BACKEND=redis://redis:6379/0
     depends_on:
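For context, services wired this way typically select their database from the POSTGRES_* variables at startup. A minimal sketch of such settings logic follows (illustrative only; the project's actual settings module may differ):

```python
# settings.py - sketch of consuming the POSTGRES_* variables set in
# docker-compose.yml above. Illustrative: the exact branching is an
# assumption, not the project's actual code.
import os

if os.environ.get("POSTGRES_HOST"):
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("POSTGRES_DB", "dashboard_db"),
            "USER": os.environ.get("POSTGRES_USER", "postgres"),
            "PASSWORD": os.environ.get("POSTGRES_PASSWORD", ""),
            "HOST": os.environ["POSTGRES_HOST"],
            "PORT": os.environ.get("POSTGRES_PORT", "5432"),
        }
    }
else:
    # No Postgres environment present: fall back to local SQLite.
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": "db.sqlite3",
        }
    }
```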


@@ -1,6 +1,7 @@
 # Redis and Celery Configuration
-This document explains how to set up and use Redis and Celery for background task processing in the LiveGraphs application.
+This document explains how to set up and use Redis and Celery for background task processing in the LiveGraphs
+application.
 ## Overview
@@ -72,7 +73,8 @@ Download and install from [microsoftarchive/redis](https://github.com/microsofta
 ### SQLite Fallback
-If Redis is not available, the application will automatically fall back to using SQLite for Celery tasks. This works well for development but is not recommended for production.
+If Redis is not available, the application will automatically fall back to using SQLite for Celery tasks. This works
+well for development but is not recommended for production.
 ## Configuration
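The fallback that doc describes amounts to choosing the Celery broker URL at settings time. A rough sketch of the idea (not the project's actual settings code; the REDIS_URL name and the probe-at-startup shape are assumptions):

```python
# settings.py - minimal sketch of a Redis-or-SQLite Celery broker fallback.
# Illustrative only: REDIS_URL and the probe logic are assumptions.
import os

import redis

REDIS_URL = os.environ.get("REDIS_URL", "redis://localhost:6379/0")

try:
    # Probe Redis once at startup; ping() raises ConnectionError if unreachable.
    redis.Redis.from_url(REDIS_URL).ping()
    CELERY_BROKER_URL = REDIS_URL
    CELERY_RESULT_BACKEND = REDIS_URL
except redis.exceptions.ConnectionError:
    # Development-only fallback: drive Celery through SQLite via SQLAlchemy
    # (needs the celery[sqlalchemy] extra); not recommended for production.
    CELERY_BROKER_URL = "sqla+sqlite:///celery-broker.sqlite"
    CELERY_RESULT_BACKEND = "db+sqlite:///celery-results.sqlite"
```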


@@ -54,7 +54,8 @@ If this fails, check the following:
 ## Fixing CSV Data Processing Issues
-If you see the error `zip() argument 2 is shorter than argument 1`, it means the data format doesn't match the expected headers. We've implemented a fix that:
+If you see the error `zip() argument 2 is shorter than argument 1`, it means the data format doesn't match the expected
+headers. We've implemented a fix that:
 1. Pads shorter rows with empty strings
 2. Uses more flexible date format parsing
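The row-padding half of that fix can be sketched as follows (illustrative only; the helper name and sample data are assumptions, not the project's actual code):

```python
# Sketch of padding short CSV rows so zip() lines up with the headers.
def pad_row(row: list[str], width: int) -> list[str]:
    """Extend a short row with empty strings up to the header count."""
    return row + [""] * (width - len(row))


headers = ["session_id", "started_at", "messages"]  # hypothetical headers
rows = [["abc123", "2025-11-05"]]  # one row shorter than the header list
records = [dict(zip(headers, pad_row(row, len(headers)))) for row in rows]
# records == [{"session_id": "abc123", "started_at": "2025-11-05", "messages": ""}]
```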

opencode.json

@@ -0,0 +1,96 @@
{
  "$schema": "https://opencode.ai/config.json",
  "mcp": {
    "playwright-test": {
      "type": "local",
      "command": ["npx", "playwright", "run-test-mcp-server"],
      "enabled": true
    }
  },
  "tools": {
    "playwright*": false
  },
  "agent": {
    "playwright-test-generator": {
      "description": "Use this agent when you need to create automated browser tests using Playwright",
      "mode": "subagent",
      "prompt": "{file:.opencode/prompts/playwright-test-generator.md}",
      "tools": {
        "ls": true,
        "glob": true,
        "grep": true,
        "read": true,
        "playwright-test*browser_click": true,
        "playwright-test*browser_drag": true,
        "playwright-test*browser_evaluate": true,
        "playwright-test*browser_file_upload": true,
        "playwright-test*browser_handle_dialog": true,
        "playwright-test*browser_hover": true,
        "playwright-test*browser_navigate": true,
        "playwright-test*browser_press_key": true,
        "playwright-test*browser_select_option": true,
        "playwright-test*browser_snapshot": true,
        "playwright-test*browser_type": true,
        "playwright-test*browser_verify_element_visible": true,
        "playwright-test*browser_verify_list_visible": true,
        "playwright-test*browser_verify_text_visible": true,
        "playwright-test*browser_verify_value": true,
        "playwright-test*browser_wait_for": true,
        "playwright-test*generator_read_log": true,
        "playwright-test*generator_setup_page": true,
        "playwright-test*generator_write_test": true
      }
    },
    "playwright-test-healer": {
      "description": "Use this agent when you need to debug and fix failing Playwright tests",
      "mode": "subagent",
      "prompt": "{file:.opencode/prompts/playwright-test-healer.md}",
      "tools": {
        "ls": true,
        "glob": true,
        "grep": true,
        "read": true,
        "write": true,
        "edit": true,
        "playwright-test*browser_console_messages": true,
        "playwright-test*browser_evaluate": true,
        "playwright-test*browser_generate_locator": true,
        "playwright-test*browser_network_requests": true,
        "playwright-test*browser_snapshot": true,
        "playwright-test*test_debug": true,
        "playwright-test*test_list": true,
        "playwright-test*test_run": true
      }
    },
    "playwright-test-planner": {
      "description": "Use this agent when you need to create comprehensive test plan for a web application or website",
      "mode": "subagent",
      "prompt": "{file:.opencode/prompts/playwright-test-planner.md}",
      "tools": {
        "ls": true,
        "glob": true,
        "grep": true,
        "read": true,
        "write": true,
        "playwright-test*browser_click": true,
        "playwright-test*browser_close": true,
        "playwright-test*browser_console_messages": true,
        "playwright-test*browser_drag": true,
        "playwright-test*browser_evaluate": true,
        "playwright-test*browser_file_upload": true,
        "playwright-test*browser_handle_dialog": true,
        "playwright-test*browser_hover": true,
        "playwright-test*browser_navigate": true,
        "playwright-test*browser_navigate_back": true,
        "playwright-test*browser_network_requests": true,
        "playwright-test*browser_press_key": true,
        "playwright-test*browser_select_option": true,
        "playwright-test*browser_snapshot": true,
        "playwright-test*browser_take_screenshot": true,
        "playwright-test*browser_type": true,
        "playwright-test*browser_wait_for": true,
        "playwright-test*planner_setup_page": true
      }
    }
  }
}


@@ -1,35 +1,32 @@
 {
-  "devDependencies": {
-    "markdownlint-cli2": "^0.18.1",
-    "prettier": "^3.5.3",
-    "prettier-plugin-jinja-template": "^2.1.0"
-  },
+  "name": "livegraphs-django",
+  "private": true,
   "scripts": {
-    "format": "prettier --write .",
-    "format:check": "prettier --check .",
-    "lint:md": "markdownlint-cli2 \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\"",
-    "lint:md:fix": "markdownlint-cli2 --fix \"**/*.md\" \"!.trunk/**\" \"!.venv/**\" \"!node_modules/**\""
+    "format": "prettier --write .; bun format:py",
+    "format:check": "prettier --check .; bun format:py -- --check",
+    "format:py": "uvx ruff format",
+    "lint:js": "oxlint",
+    "lint:js:fix": "bun lint:js -- --fix",
+    "lint:js:strict": "oxlint --import-plugin -D correctness -W suspicious",
+    "lint:md": "markdownlint-cli2 \"**/*.md\" \"#node_modules\" \"#.{node_modules,trunk,grit,venv,opencode,github/chatmodes,claude/agents}\"",
+    "lint:md:fix": "bun lint:md -- --fix",
+    "lint:py": "uvx ruff check",
+    "lint:py:fix": "uvx ruff check --fix",
+    "typecheck:js": "oxlint --type-aware",
+    "typecheck:js:fix": "bun typecheck:js -- --fix",
+    "typecheck:py": "uvx ty check"
   },
-  "markdownlint-cli2": {
-    "config": {
-      "MD007": {
-        "indent": 4,
-        "start_indented": false,
-        "start_indent": 4
-      },
-      "MD013": false,
-      "MD030": {
-        "ul_single": 3,
-        "ol_single": 2,
-        "ul_multi": 3,
-        "ol_multi": 2
-      },
-      "MD033": false
-    },
-    "ignores": [
-      "node_modules",
-      ".git",
-      "*.json"
-    ]
+  "devDependencies": {
+    "@playwright/test": "^1.56.1",
+    "@types/bun": "latest",
+    "markdownlint-cli2": "^0.18.1",
+    "oxlint": "^1.25.0",
+    "oxlint-tsgolint": "^0.5.0",
+    "prettier": "^3.6.2",
+    "prettier-plugin-jinja-template": "^2.1.0",
+    "prettier-plugin-packagejson": "^2.5.19"
+  },
+  "peerDependencies": {
+    "typescript": "^5"
   }
 }


@@ -4,65 +4,127 @@ version = "0.1.0"
description = "Live Graphs Django Dashboard" description = "Live Graphs Django Dashboard"
readme = "README.md" readme = "README.md"
requires-python = ">=3.13" requires-python = ">=3.13"
authors = [{ name = "LiveGraphs Team" }]
license = { text = "MIT" } license = { text = "MIT" }
authors = [{ name = "LiveGraphs Team" }]
classifiers = [ classifiers = [
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
"Framework :: Django", "Framework :: Django",
"Framework :: Django :: 5.2", "Framework :: Django :: 5.2",
"License :: OSI Approved :: MIT License", "License :: OSI Approved :: MIT License",
"Operating System :: OS Independent", "Operating System :: OS Independent",
"Programming Language :: Python :: 3",
"Programming Language :: Python :: 3.13",
] ]
dependencies = [ dependencies = [
"bleach[css]>=6.2.0", "bleach[css]>=6.3.0",
"celery[sqlalchemy]>=5.5.2", "celery[sqlalchemy]>=5.5.3",
"crispy-bootstrap5>=2025.4", "crispy-bootstrap5>=2025.6",
"django>=5.2.1", "django>=5.2.7",
"django-allauth>=65.8.0", "django-allauth>=65.13.0",
"django-celery-beat>=2.8.1", "django-celery-beat>=2.8.1",
"django-crispy-forms>=2.4", "django-crispy-forms>=2.4",
"gunicorn>=23.0.0", "gunicorn>=23.0.0",
"numpy>=2.2.5", "numpy>=2.3.4",
"pandas>=2.2.3", "pandas>=2.3.3",
"plotly>=6.1.0", "plotly>=6.4.0",
"python-dotenv>=1.1.0", "psycopg2-binary>=2.9.11",
"redis>=6.1.0", "python-dotenv>=1.2.1",
"requests>=2.32.3", "redis>=7.0.1",
"sqlalchemy>=2.0.41", "requests>=2.32.5",
"sqlalchemy>=2.0.44",
"tinycss2>=1.4.0", "tinycss2>=1.4.0",
"whitenoise>=6.9.0", "whitenoise>=6.11.0",
"xlsxwriter>=3.2.3", "xlsxwriter>=3.2.9",
] ]
[project.urls]
"Bug Tracker" = "https://github.com/kjanat/livegraphsdjango/issues"
"Documentation" = "https://github.com/kjanat/livegraphsdjango#readme"
"Source" = "https://github.com/kjanat/livegraphsdjango"
[project.scripts]
# Django management commands
livegraphs-manage = "dashboard_project.manage:main"
livegraphs-migrate = "dashboard_project.__main__:migrate"
livegraphs-server = "dashboard_project.__main__:runserver"
livegraphs-shell = "dashboard_project.__main__:shell"
[dependency-groups] [dependency-groups]
dev = [ dev = [
"bandit>=1.8.3", "bandit>=1.8.6",
"black>=25.1.0", "black>=25.9.0",
"coverage>=7.8.0", "coverage>=7.11.0",
"django-debug-toolbar>=5.2.0", "django-debug-toolbar>=6.1.0",
"django-stubs>=5.2.0", "django-stubs>=5.2.7",
"mypy>=1.15.0", "mypy>=1.18.2",
"pre-commit>=4.2.0", "pre-commit>=4.3.0",
"pytest>=8.3.5", "pytest>=8.4.2",
"pytest-django>=4.11.1", "pytest-django>=4.11.1",
"ruff>=0.11.10", "ruff>=0.14.3",
"ty>=0.0.1a25",
] ]
[build-system] [build-system]
requires = ["setuptools>=69.0.0", "wheel>=0.42.0"] requires = ["setuptools>=69.0.0", "wheel>=0.42.0"]
build-backend = "setuptools.build_meta" build-backend = "setuptools.build_meta"
[tool.setuptools] [tool.bandit]
packages = ["dashboard_project"] exclude_dirs = [
"tests",
"venv",
".venv",
".git",
"__pycache__",
"migrations",
"**/create_sample_data.py",
]
skips = ["B101"]
targets = ["dashboard_project"]
[tool.setuptools.package-data] [tool.coverage.run]
"dashboard_project" = ["static/**/*", "templates/**/*", "media/**/*"] source = ["dashboard_project"]
omit = [
"dashboard_project/manage.py",
"dashboard_project/*/migrations/*",
"dashboard_project/*/tests/*",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"pass",
"raise ImportError",
]
[tool.django-stubs]
django_settings_module = "dashboard_project.settings"
[tool.mypy]
python_version = "3.13"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
plugins = ["mypy_django_plugin.main"]
[[tool.mypy.overrides]]
module = ["django.*", "rest_framework.*"]
ignore_missing_imports = true
[tool.pytest.ini_options]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::PendingDeprecationWarning",
]
python_files = "test_*.py"
testpaths = ["dashboard_project"]
DJANGO_SETTINGS_MODULE = "dashboard_project.settings"
[tool.ruff] [tool.ruff]
# Exclude a variety of commonly ignored directories. preview = true
# Exclude a variety of commonly ignored directories
exclude = [ exclude = [
".bzr", ".bzr",
".direnv", ".direnv",
@@ -91,11 +153,9 @@ exclude = [
"site-packages", "site-packages",
"venv", "venv",
] ]
# Same as Black
# Same as Black.
line-length = 120 line-length = 120
indent-width = 4 indent-width = 4
# Assume Python 3.13 # Assume Python 3.13
target-version = "py313" target-version = "py313"
@@ -110,62 +170,13 @@ quote-style = "double"
indent-style = "space" indent-style = "space"
line-ending = "lf" line-ending = "lf"
[tool.bandit] [tool.setuptools]
exclude_dirs = [ packages = ["dashboard_project"]
"tests",
"venv", [tool.setuptools.package-data]
".venv", "dashboard_project" = [
".git", "static/**/*",
"__pycache__", "templates/**/*",
"migrations", "media/**/*",
"**/create_sample_data.py", "py.typed"
] ]
skips = ["B101"]
targets = ["dashboard_project"]
[tool.mypy]
python_version = "3.13"
warn_return_any = true
warn_unused_configs = true
disallow_untyped_defs = false
disallow_incomplete_defs = false
plugins = ["mypy_django_plugin.main"]
[[tool.mypy.overrides]]
module = ["django.*", "rest_framework.*"]
ignore_missing_imports = true
[tool.django-stubs]
django_settings_module = "dashboard_project.settings"
[tool.pytest.ini_options]
DJANGO_SETTINGS_MODULE = "dashboard_project.settings"
python_files = "test_*.py"
testpaths = ["dashboard_project"]
filterwarnings = [
"ignore::DeprecationWarning",
"ignore::PendingDeprecationWarning",
]
[tool.coverage.run]
source = ["dashboard_project"]
omit = [
"dashboard_project/manage.py",
"dashboard_project/*/migrations/*",
"dashboard_project/*/tests/*",
]
[tool.coverage.report]
exclude_lines = [
"pragma: no cover",
"def __repr__",
"raise NotImplementedError",
"if __name__ == .__main__.:",
"pass",
"raise ImportError",
]
[project.urls]
"Documentation" = "https://github.com/kjanat/livegraphsdjango#readme"
"Source" = "https://github.com/kjanat/livegraphsdjango"
"Bug Tracker" = "https://github.com/kjanat/livegraphsdjango/issues"


@@ -5,65 +5,83 @@ amqp==5.3.1 \
     --hash=sha256:43b3319e1b4e7d1251833a93d672b4af1e40f3d632d479b98661a95f117880a2 \
     --hash=sha256:cddc00c725449522023bad949f70fff7b48f0b1ade74d170a6f10ab044739432
     # via kombu
-asgiref==3.8.1 \
-    --hash=sha256:3e1e3ecc849832fe52ccf2cb6686b7a55f82bb1d6aee72a58826471390335e47 \
-    --hash=sha256:c343bd80a0bec947a9860adb4c432ffa7db769836c64238fc34bdc3fec84d590
+asgiref==3.10.0 \
+    --hash=sha256:aef8a81283a34d0ab31630c9b7dfe70c812c95eba78171367ca8745e88124734 \
+    --hash=sha256:d89f2d8cd8b56dada7d52fa7dc8075baa08fb836560710d38c292a7a3f78c04e
     # via
     #   django
     #   django-allauth
-    #   django-stubs
-bandit==1.8.3 \
-    --hash=sha256:28f04dc0d258e1dd0f99dee8eefa13d1cb5e3fde1a5ab0c523971f97b289bcd8 \
-    --hash=sha256:f5847beb654d309422985c36644649924e0ea4425c76dec2e89110b87506193a
-billiard==4.2.1 \
-    --hash=sha256:12b641b0c539073fc8d3f5b8b7be998956665c4233c7c1fcd66a7e677c4fb36f \
-    --hash=sha256:40b59a4ac8806ba2c2369ea98d876bc6108b051c227baffd928c644d15d8f3cb
+bandit==1.8.6 \
+    --hash=sha256:3348e934d736fcdb68b6aa4030487097e23a501adf3e7827b63658df464dddd0 \
+    --hash=sha256:dbfe9c25fc6961c2078593de55fd19f2559f9e45b99f1272341f5b95dea4e56b
+billiard==4.2.2 \
+    --hash=sha256:4bc05dcf0d1cc6addef470723aac2a6232f3c7ed7475b0b580473a9145829457 \
+    --hash=sha256:e815017a062b714958463e07ba15981d802dc53d41c5b69d28c5a7c238f8ecf3
     # via celery
-black==25.1.0 \
-    --hash=sha256:030b9759066a4ee5e5aca28c3c77f9c64789cdd4de8ac1df642c40b708be6171 \
-    --hash=sha256:33496d5cd1222ad73391352b4ae8da15253c5de89b93a80b3e2c8d9a19ec2666 \
-    --hash=sha256:8f0b18a02996a836cc9c9c78e5babec10930862827b1b724ddfe98ccf2f2fe4f \
-    --hash=sha256:95e8176dae143ba9097f351d174fdaf0ccd29efb414b362ae3fd72bf0f710717 \
-    --hash=sha256:a22f402b410566e2d1c950708c77ebf5ebd5d0d88a6a2e87c86d9fb48afa0d18 \
-    --hash=sha256:afebb7098bfbc70037a053b91ae8437c3857482d3a690fefc03e9ff7aa9a5fd3
-bleach==6.2.0 \
-    --hash=sha256:117d9c6097a7c3d22fd578fcd8d35ff1e125df6736f554da4e432fdd63f31e5e \
-    --hash=sha256:123e894118b8a599fd80d3ec1a6d4cc7ce4e5882b1317a7e1ba69b56e95f991f
+black==25.9.0 \
+    --hash=sha256:0172a012f725b792c358d57fe7b6b6e8e67375dd157f64fa7a3097b3ed3e2175 \
+    --hash=sha256:0474bca9a0dd1b51791fcc507a4e02078a1c63f6d4e4ae5544b9848c7adfb619 \
+    --hash=sha256:3bec74ee60f8dfef564b573a96b8930f7b6a538e846123d5ad77ba14a8d7a64f \
+    --hash=sha256:474b34c1342cdc157d307b56c4c65bce916480c4a8f6551fdc6bf9b486a7c4ae \
+    --hash=sha256:846d58e3ce7879ec1ffe816bb9df6d006cd9590515ed5d17db14e17666b2b357 \
+    --hash=sha256:b756fc75871cb1bcac5499552d771822fd9db5a2bb8db2a7247936ca48f39831
+bleach==6.3.0 \
+    --hash=sha256:6f3b91b1c0a02bb9a78b5a454c92506aa0fdf197e1d5e114d2e00c6f64306d22 \
+    --hash=sha256:fe10ec77c93ddf3d13a73b035abaac7a9f5e436513864ccdad516693213c65d6
     # via livegraphsdjango
-celery==5.5.2 \
-    --hash=sha256:4d6930f354f9d29295425d7a37261245c74a32807c45d764bedc286afd0e724e \
-    --hash=sha256:54425a067afdc88b57cd8d94ed4af2ffaf13ab8c7680041ac2c4ac44357bdf4c
+celery==5.5.3 \
+    --hash=sha256:0b5761a07057acee94694464ca482416b959568904c9dfa41ce8413a7d65d525 \
+    --hash=sha256:6c972ae7968c2b5281227f01c3a3f984037d21c5129d07bf3550cc2afc6b10a5
     # via
     #   django-celery-beat
     #   livegraphsdjango
-certifi==2025.4.26 \
-    --hash=sha256:0a816057ea3cdefcef70270d2c515e4506bbc954f417fa5ade2021213bb8f0c6 \
-    --hash=sha256:30350364dfe371162649852c63336a15c70c6510c2ad5015b21c2345311805f3
+certifi==2025.10.5 \
+    --hash=sha256:0f212c2744a9bb6de0c56639a6f68afe01ecd92d91f14ae897c4fe7bbeeef0de \
+    --hash=sha256:47c09d31ccf2acf0be3f701ea53595ee7e0b8fa08801c6624be771df09ae7b43
     # via requests
 cfgv==3.4.0 \
     --hash=sha256:b7265b1f29fd3316bfcd2b330d63d024f2bfd8bcb8b0272f8e19a504856c48f9 \
     --hash=sha256:e52591d4c5f5dead8e0f673fb16db7949d2cfb3f7da4582893288f0ded8fe560
     # via pre-commit
-charset-normalizer==3.4.2 \
-    --hash=sha256:1c95a1e2902a8b722868587c0e1184ad5c55631de5afc0eb96bc4b0d738092c0 \
-    --hash=sha256:289200a18fa698949d2b39c671c2cc7a24d44096784e76614899a7ccf2574b7b \
-    --hash=sha256:32fc0341d72e0f73f80acb0a2c94216bd704f4f0bce10aedea38f30502b271ff \
-    --hash=sha256:3fddb7e2c84ac87ac3a947cb4e66d143ca5863ef48e4a5ecb83bd48619e4634e \
-    --hash=sha256:4a476b06fbcf359ad25d34a057b7219281286ae2477cc5ff5e3f70a246971148 \
-    --hash=sha256:5baececa9ecba31eff645232d59845c07aa030f0c81ee70184a90d35099a0e63 \
-    --hash=sha256:6c9379d65defcab82d07b2a9dfbfc2e95bc8fe0ebb1b176a3190230a3ef0e07c \
-    --hash=sha256:7f56930ab0abd1c45cd15be65cc741c28b1c9a34876ce8c17a2fa107810c0af0 \
-    --hash=sha256:926ca93accd5d36ccdabd803392ddc3e03e6d4cd1cf17deff3b989ab8e9dbcf0 \
-    --hash=sha256:98f862da73774290f251b9df8d11161b6cf25b599a66baf087c1ffe340e9bfd1 \
-    --hash=sha256:aa6af9e7d59f9c12b33ae4e9450619cf2488e2bbe9b44030905877f0b2324980 \
-    --hash=sha256:aaeeb6a479c7667fbe1099af9617c83aaca22182d6cf8c53966491a0f1b7ffb7 \
-    --hash=sha256:e635b87f01ebc977342e2697d05b56632f5f879a4f15955dfe8cef2448b51691 \
-    --hash=sha256:eba9904b0f38a143592d9fc0e19e2df0fa2e41c3c3745554761c5f6447eedabf \
-    --hash=sha256:ef8de666d6179b009dce7bcb2ad4c4a779f113f12caf8dc77f0162c29d20490b
+charset-normalizer==3.4.4 \
+    --hash=sha256:2b7d8f6c26245217bd2ad053761201e9f9680f8ce52f0fcd8d0755aeae5b2152 \
+    --hash=sha256:3162d5d8ce1bb98dd51af660f2121c55d0fa541b46dff7bb9b9f86ea1d87de72 \
+    --hash=sha256:362d61fd13843997c1c446760ef36f240cf81d3ebf74ac62652aebaf7838561e \
+    --hash=sha256:47cc91b2f4dd2833fddaedd2893006b0106129d4b94fdb6af1f4ce5a9965577c \
+    --hash=sha256:542d2cee80be6f80247095cc36c418f7bddd14f4a6de45af91dfad36d817bba2 \
+    --hash=sha256:554af85e960429cf30784dd47447d5125aaa3b99a6f0683589dbd27e2f45da44 \
+    --hash=sha256:5bd2293095d766545ec1a8f612559f6b40abc0eb18bb2f5d1171872d34036ede \
+    --hash=sha256:6b39f987ae8ccdf0d2642338faf2abb1862340facc796048b604ef14919e55ed \
+    --hash=sha256:74018750915ee7ad843a774364e13a3db91682f26142baddf775342c3f5b1133 \
+    --hash=sha256:74664978bb272435107de04e36db5a9735e78232b85b77d45cfb38f758efd33e \
+    --hash=sha256:752944c7ffbfdd10c074dc58ec2d5a8a4cd9493b314d367c14d24c17684ddd14 \
+    --hash=sha256:799a7a5e4fb2d5898c60b640fd4981d6a25f1c11790935a44ce38c54e985f828 \
+    --hash=sha256:7a32c560861a02ff789ad905a2fe94e3f840803362c84fecf1851cb4cf3dc37f \
+    --hash=sha256:81d5eb2a312700f4ecaa977a8235b634ce853200e828fbadf3a9c50bab278328 \
+    --hash=sha256:82004af6c302b5d3ab2cfc4cc5f29db16123b1a8417f2e25f9066f91d4411090 \
+    --hash=sha256:8a6562c3700cce886c5be75ade4a5db4214fda19fede41d9792d100288d8f94c \
+    --hash=sha256:8af65f14dc14a79b924524b1e7fffe304517b2bff5a58bf64f30b98bbc5079eb \
+    --hash=sha256:94537985111c35f28720e43603b8e7b43a6ecfb2ce1d3058bbe955b73404e21a \
+    --hash=sha256:99ae2cffebb06e6c22bdc25801d7b30f503cc87dbd283479e7b606f70aff57ec \
+    --hash=sha256:9a26f18905b8dd5d685d6d07b0cdf98a79f3c7a918906af7cc143ea2e164c8bc \
+    --hash=sha256:9b35f4c90079ff2e2edc5b26c0c77925e5d2d255c42c74fdb70fb49b172726ac \
+    --hash=sha256:a8a8b89589086a25749f471e6a900d3f662d1d3b6e2e59dcecf787b1cc3a1894 \
+    --hash=sha256:b435cba5f4f750aa6c0a0d92c541fb79f69a387c91e61f1795227e4ed9cece14 \
+    --hash=sha256:bc7637e2f80d8530ee4a78e878bce464f70087ce73cf7c1caf142416923b98f1 \
+    --hash=sha256:c0463276121fdee9c49b98908b3a89c39be45d86d1dbaa22957e38f6321d4ce3 \
+    --hash=sha256:c8ae8a0f02f57a6e61203a31428fa1d677cbe50c93622b4149d5c0f319c1d19e \
+    --hash=sha256:cb6254dc36b47a990e59e1068afacdcd02958bdcce30bb50cc1700a8b9d624a6 \
+    --hash=sha256:d1f13550535ad8cff21b8d757a3257963e951d96e20ec82ab44bc64aeb62a191 \
+    --hash=sha256:da3326d9e65ef63a817ecbcc0df6e94463713b754fe293eaa03da99befb9a5bd \
+    --hash=sha256:de00632ca48df9daf77a2c65a484531649261ec9f25489917f09e455cb09ddb2 \
+    --hash=sha256:e1f185f86a6f3403aa2420e815904c67b2f9ebc443f045edd0de921108345794 \
+    --hash=sha256:ecaae4149d99b1c9e7b88bb03e3221956f68fd6d50be2ef061b2381b61d20838 \
+    --hash=sha256:f8bf04158c6b607d747e93949aa60618b61312fe647a6369f88ce2ff16043490 \
+    --hash=sha256:f9d332f8c2a2fcbffe1378594431458ddbef721c1769d78e2cbc06280d8155f9
     # via requests
-click==8.2.0 \
-    --hash=sha256:6b303f0b2aa85f1cb4e5303078fadcbcd4e476f114fab9b5007005711839325c \
-    --hash=sha256:f5452aeddd9988eefa20f90f05ab66f17fce1ee2a36907fd30b05bbb5953814d
+click==8.3.0 \
+    --hash=sha256:9b9f285302c6e3064f4330c05f05b81945b2a39544279343e6e7c5f27a9baddc \
+    --hash=sha256:e7b8232224eba16f4ebe410c25ced9f7875cb5f3263ffc93cc3e8da705e229c4
     # via
     #   black
     #   celery
@@ -74,9 +92,9 @@ click-didyoumean==0.3.1 \
     --hash=sha256:4f82fdff0dbe64ef8ab2279bd6aa3f6a99c3b28c05aa09cbfc07c9d7fbb5a463 \
     --hash=sha256:5c4bb6007cfea5f2fd6583a2fb6701a22a41eb98957e63d0fac41c10e7c3117c
     # via celery
-click-plugins==1.1.1 \
-    --hash=sha256:46ab999744a9d831159c3411bb0c79346d94a444df9a3a3742e9ed63645f264b \
-    --hash=sha256:5d262006d3222f5057fd81e1623d4443e41dcda5dc815c06b442aa3c02889fc8
+click-plugins==1.1.1.2 \
+    --hash=sha256:008d65743833ffc1f5417bf0e78e8d2c23aab04d9745ba817bd3e71b0feb6aa6 \
+    --hash=sha256:d7af3984a99d243c131aa1a828331e7630f4a88a9741fd05c927b204bcf92261
     # via celery
 click-repl==0.3.0 \
     --hash=sha256:17849c23dba3d667247dc4defe1757fff98694e90fe37474f3feebb69ced26a9 \
@@ -89,44 +107,76 @@ colorama==0.4.6 ; sys_platform == 'win32' \
     #   bandit
     #   click
     #   pytest
-coverage==7.8.0 \
-    --hash=sha256:04bfec25a8ef1c5f41f5e7e5c842f6b615599ca8ba8391ec33a9290d9d2db3a3 \
-    --hash=sha256:18c5ae6d061ad5b3e7eef4363fb27a0576012a7447af48be6c75b88494c6cf25 \
-    --hash=sha256:2e4b6b87bb0c846a9315e3ab4be2d52fac905100565f4b92f02c445c8799e257 \
-    --hash=sha256:379fe315e206b14e21db5240f89dc0774bdd3e25c3c58c2c733c99eca96f1ada \
-    --hash=sha256:42421e04069fb2cbcbca5a696c4050b84a43b05392679d4068acbe65449b5c64 \
-    --hash=sha256:554fec1199d93ab30adaa751db68acec2b41c5602ac944bb19187cb9a41a8067 \
-    --hash=sha256:581a40c7b94921fffd6457ffe532259813fc68eb2bdda60fa8cc343414ce3733 \
-    --hash=sha256:5aaeb00761f985007b38cf463b1d160a14a22c34eb3f6a39d9ad6fc27cb73008 \
-    --hash=sha256:5ac46d0c2dd5820ce93943a501ac5f6548ea81594777ca585bf002aa8854cacd \
-    --hash=sha256:771eb7587a0563ca5bb6f622b9ed7f9d07bd08900f7589b4febff05f469bea00 \
-    --hash=sha256:7a3d62b3b03b4b6fd41a085f3574874cf946cb4604d2b4d3e8dca8cd570ca501 \
-    --hash=sha256:95aa6ae391a22bbbce1b77ddac846c98c5473de0372ba5c463480043a07bff42 \
-    --hash=sha256:a9abbccd778d98e9c7e85038e35e91e67f5b520776781d9a1e2ee9d400869487 \
-    --hash=sha256:ad80e6b4a0c3cb6f10f29ae4c60e991f424e6b14219d46f1e7d442b938ee68a4 \
-    --hash=sha256:b87eb6fc9e1bb8f98892a2458781348fa37e6925f35bb6ceb9d4afd54ba36c73 \
-    --hash=sha256:d1ba00ae33be84066cfbe7361d4e04dec78445b2b88bdb734d0d1cbab916025a \
-    --hash=sha256:d766a4f0e5aa1ba056ec3496243150698dc0481902e2b8559314368717be82b1 \
-    --hash=sha256:dbf364b4c5e7bae9250528167dfe40219b62e2d573c854d74be213e1e52069f7 \
-    --hash=sha256:dd19608788b50eed889e13a5d71d832edc34fc9dfce606f66e8f9f917eef910d \
-    --hash=sha256:e013b07ba1c748dacc2a80e69a46286ff145935f260eb8c72df7185bf048f502 \
-    --hash=sha256:f319bae0321bc838e205bf9e5bc28f0a3165f30c203b610f17ab5552cff90323 \
-    --hash=sha256:f3c38e4e5ccbdc9198aecc766cedbb134b2d89bf64533973678dfcf07effd883
-crispy-bootstrap5==2025.4 \
-    --hash=sha256:51efa19c7d40e339774a6fe23407e83b95b7634cad6de70fd1f1093131bea1d9 \
-    --hash=sha256:d675ea7e245048905077dfe16bf1fa1ee16842f52fe88164ccc8a5e2d11119b3
+coverage==7.11.0 \
+    --hash=sha256:05791e528a18f7072bf5998ba772fe29db4da1234c45c2087866b5ba4dea710e \
+    --hash=sha256:0efa742f431529699712b92ecdf22de8ff198df41e43aeaaadf69973eb93f17a \
+    --hash=sha256:10ad04ac3a122048688387828b4537bc9cf60c0bf4869c1e9989c46e45690b82 \
+    --hash=sha256:167bd504ac1ca2af7ff3b81d245dfea0292c5032ebef9d66cc08a7d28c1b8050 \
+    --hash=sha256:269bfe913b7d5be12ab13a95f3a76da23cf147be7fa043933320ba5625f0a8de \
+    --hash=sha256:2727d47fce3ee2bac648528e41455d1b0c46395a087a229deac75e9f88ba5a05 \
+    --hash=sha256:314c24e700d7027ae3ab0d95fbf8d53544fca1f20345fd30cd219b737c6e58d3 \
+    --hash=sha256:3d4ba9a449e9364a936a27322b20d32d8b166553bfe63059bd21527e681e2fad \
+    --hash=sha256:4036cc9c7983a2b1f2556d574d2eb2154ac6ed55114761685657e38782b23f52 \
+    --hash=sha256:424538266794db2861db4922b05d729ade0940ee69dcf0591ce8f69784db0e11 \
+    --hash=sha256:4b7589765348d78fb4e5fb6ea35d07564e387da2fc5efff62e0222971f155f68 \
+    --hash=sha256:4c1eeb3fb8eb9e0190bebafd0462936f75717687117339f708f395fe455acc73 \
+    --hash=sha256:587c38849b853b157706407e9ebdca8fd12f45869edb56defbef2daa5fb0812b \
+    --hash=sha256:59a6e5a265f7cfc05f76e3bb53eca2e0dfe90f05e07e849930fecd6abb8f40b4 \
+    --hash=sha256:5a03eaf7ec24078ad64a07f02e30060aaf22b91dedf31a6b24d0d98d2bba7f48 \
+    --hash=sha256:5ef83b107f50db3f9ae40f69e34b3bd9337456c5a7fe3461c7abf8b75dd666a2 \
+    --hash=sha256:630d0bd7a293ad2fc8b4b94e5758c8b2536fdf36c05f1681270203e463cbfa9b \
+    --hash=sha256:695340f698a5f56f795b2836abe6fb576e7c53d48cd155ad2f80fd24bc63a040 \
+    --hash=sha256:6fbcee1a8f056af07ecd344482f711f563a9eb1c2cad192e87df00338ec3cdb0 \
+    --hash=sha256:73feb83bb41c32811973b8565f3705caf01d928d972b72042b44e97c71fd70d1 \
+    --hash=sha256:7ab934dd13b1c5e94b692b1e01bd87e4488cb746e3a50f798cb9464fd128374b \
+    --hash=sha256:7db53b5cdd2917b6eaadd0b1251cf4e7d96f4a8d24e174bdbdf2f65b5ea7994d \
+    --hash=sha256:8badf70446042553a773547a61fecaa734b55dc738cacf20c56ab04b77425e43 \
+    --hash=sha256:8c934bd088eed6174210942761e38ee81d28c46de0132ebb1801dbe36a390dcc \
+    --hash=sha256:9516add7256b6713ec08359b7b05aeff8850c98d357784c7205b2e60aa2513fa \
+    --hash=sha256:9ed43fa22c6436f7957df036331f8fe4efa7af132054e1844918866cd228af6c \
+    --hash=sha256:a09c1211959903a479e389685b7feb8a17f59ec5a4ef9afde7650bd5eabc2777 \
+    --hash=sha256:a386c1061bf98e7ea4758e4313c0ab5ecf57af341ef0f43a0bf26c2477b5c268 \
+    --hash=sha256:a3d0e2087dba64c86a6b254f43e12d264b636a39e88c5cc0a01a7c71bcfdab7e \
+    --hash=sha256:b56efee146c98dbf2cf5cffc61b9829d1e94442df4d7398b26892a53992d3547 \
+    --hash=sha256:b5c2705afa83f49bd91962a4094b6b082f94aef7626365ab3f8f4bd159c5acf3 \
+    --hash=sha256:b971bdefdd75096163dd4261c74be813c4508477e39ff7b92191dea19f24cd37 \
+    --hash=sha256:bab7ec4bb501743edc63609320aaec8cd9188b396354f482f4de4d40a9d10721 \
+    --hash=sha256:c6f31f281012235ad08f9a560976cc2fc9c95c17604ff3ab20120fe480169bca \
+    --hash=sha256:c770885b28fb399aaf2a65bbd1c12bf6f307ffd112d6a76c5231a94276f0c497 \
+    --hash=sha256:c9f08ea03114a637dab06cedb2e914da9dc67fa52c6015c018ff43fdde25b9c2 \
+    --hash=sha256:cacb29f420cfeb9283b803263c3b9a068924474ff19ca126ba9103e1278dfa44 \
+    --hash=sha256:cc3f49e65ea6e0d5d9bd60368684fe52a704d46f9e7fc413918f18d046ec40e1 \
+    --hash=sha256:cdbcd376716d6b7fbfeedd687a6c4be019c5a5671b35f804ba76a4c0a778cba4 \
+    --hash=sha256:ce37f215223af94ef0f75ac68ea096f9f8e8c8ec7d6e8c346ee45c0d363f0479 \
+    --hash=sha256:ce9f3bde4e9b031eaf1eb61df95c1401427029ea1bfddb8621c1161dcb0fa02e \
+    --hash=sha256:cee6291bb4fed184f1c2b663606a115c743df98a537c969c3c64b49989da96c2 \
+    --hash=sha256:d06f4fc7acf3cabd6d74941d53329e06bab00a8fe10e4df2714f0b134bfc64ef \
+    --hash=sha256:dadbcce51a10c07b7c72b0ce4a25e4b6dcb0c0372846afb8e5b6307a121eb99f \
+    --hash=sha256:dbbf012be5f32533a490709ad597ad8a8ff80c582a95adc8d62af664e532f9ca \
+    --hash=sha256:df01d6c4c81e15a7c88337b795bb7595a8596e92310266b5072c7e301168efbd \
+    --hash=sha256:e4dc07e95495923d6fd4d6c27bf70769425b71c89053083843fd78f378558996 \
+    --hash=sha256:e89641f5175d65e2dbb44db15fe4ea48fade5d5bbb9868fdc2b4fce22f4a469d \
+    --hash=sha256:e9570ad567f880ef675673992222746a124b9595506826b210fbe0ce3f0499cd \
+    --hash=sha256:eb92e47c92fcbcdc692f428da67db33337fa213756f7adb6a011f7b5a7a20740 \
+    --hash=sha256:f39ae2f63f37472c17b4990f794035c9890418b1b8cca75c01193f3c8d3e01be \
+    --hash=sha256:f413ce6e07e0d0dc9c433228727b619871532674b45165abafe201f200cc215f \
+    --hash=sha256:f91f927a3215b8907e214af77200250bb6aae36eca3f760f89780d13e495388d \
+    --hash=sha256:f9ea02ef40bb83823b2b04964459d281688fe173e20643870bb5d2edf68bc836
+crispy-bootstrap5==2025.6 \
+    --hash=sha256:a343aa128b4383f35f00295b94de2b10862f2a4f24eda21fa6ead45234c07050 \
+    --hash=sha256:f1bde7cac074c650fc82f31777d4a4cfd0df2512c68bc4128f259c75d3daada4
     # via livegraphsdjango
-cron-descriptor==1.4.5 \
-    --hash=sha256:736b3ae9d1a99bc3dbfc5b55b5e6e7c12031e7ba5de716625772f8b02dcd6013 \
-    --hash=sha256:f51ce4ffc1d1f2816939add8524f206c376a42c87a5fca3091ce26725b3b1bca
+cron-descriptor==2.0.6 \
+    --hash=sha256:3a1c0d837c0e5a32e415f821b36cf758eb92d510e6beff8fbfe4fa16573d93d6 \
+    --hash=sha256:e39d2848e1d8913cfb6e3452e701b5eec662ee18bea8cc5aa53ee1a7bb217157
     # via django-celery-beat
distlib==0.3.9 \ distlib==0.4.0 \
--hash=sha256:47f8c22fd27c27e25a65601af709b38e4f0a45ea4fc2e710f65755fa8caaaf87 \ --hash=sha256:9659f7d87e46584a30b5780e43ac7a2143098441670ff0a49d5f9034c54a6c16 \
--hash=sha256:a60f20dea646b8a33f3e7772f74dc0b2d0772d2837ee1342a00645c81edf9403 --hash=sha256:feec40075be03a04501a973d81f633735b4b69f98b05450592310c0f401a4e0d
# via virtualenv # via virtualenv
django==5.2.1 \ django==5.2.7 \
--hash=sha256:57fe1f1b59462caed092c80b3dd324fd92161b620d59a9ba9181c34746c97284 \ --hash=sha256:59a13a6515f787dec9d97a0438cd2efac78c8aca1c80025244b0fe507fe0754b \
--hash=sha256:a9b680e84f9a0e71da83e399f1e922e1ab37b2173ced046b541c72e1589a5961 --hash=sha256:e0f6f12e2551b1716a95a63a1366ca91bbcd7be059862c1b18f989b1da356cdd
# via # via
# crispy-bootstrap5 # crispy-bootstrap5
# django-allauth # django-allauth
@@ -137,8 +187,9 @@ django==5.2.1 \
# django-stubs-ext # django-stubs-ext
# django-timezone-field # django-timezone-field
# livegraphsdjango # livegraphsdjango
django-allauth==65.8.0 \ django-allauth==65.13.0 \
--hash=sha256:9da589d99d412740629333a01865a90c95c97e0fae0cde789aa45a8fda90e83b --hash=sha256:119c0cf1cc2e0d1a0fe2f13588f30951d64989256084de2d60f13ab9308f9fa0 \
--hash=sha256:7d7b7e7ad603eb3864c142f051e2cce7be2f9a9c6945a51172ec83d48c6c843b
# via livegraphsdjango # via livegraphsdjango
django-celery-beat==2.8.1 \ django-celery-beat==2.8.1 \
--hash=sha256:da2b1c6939495c05a551717509d6e3b79444e114a027f7b77bf3727c2a39d171 \ --hash=sha256:da2b1c6939495c05a551717509d6e3b79444e114a027f7b77bf3727c2a39d171 \
@@ -150,117 +201,150 @@ django-crispy-forms==2.4 \
# via # via
# crispy-bootstrap5 # crispy-bootstrap5
# livegraphsdjango # livegraphsdjango
django-debug-toolbar==5.2.0 \ django-debug-toolbar==6.1.0 \
--hash=sha256:15627f4c2836a9099d795e271e38e8cf5204ccd79d5dbcd748f8a6c284dcd195 \ --hash=sha256:e214dea4494087e7cebdcea84223819c5eb97f9de3110a3665ad673f0ba98413 \
--hash=sha256:9e7f0145e1a1b7d78fcc3b53798686170a5b472d9cf085d88121ff823e900821 --hash=sha256:e962ec350c9be8bdba918138e975a9cdb193f60ec396af2bb71b769e8e165519
django-stubs==5.2.0 \ django-stubs==5.2.7 \
--hash=sha256:07e25c2d3cbff5be540227ff37719cc89f215dfaaaa5eb038a75b01bbfbb2722 \ --hash=sha256:2864e74b56ead866ff1365a051f24d852f6ed02238959664f558a6c9601c95bf \
--hash=sha256:cd52da033489afc1357d6245f49e3cc57bf49015877253fb8efc6722ea3d2d2b --hash=sha256:2a07e47a8a867836a763c6bba8bf3775847b4fd9555bfa940360e32d0ee384a1
django-stubs-ext==5.2.0 \ django-stubs-ext==5.2.7 \
--hash=sha256:00c4ae307b538f5643af761a914c3f8e4e3f25f4e7c6d7098f1906c0d8f2aac9 \ --hash=sha256:0466a7132587d49c5bbe12082ac9824d117a0dedcad5d0ada75a6e0d3aca6f60 \
--hash=sha256:b27ae0aab970af4894ba4e9b3fcd3e03421dc8731516669659ee56122d148b23 --hash=sha256:b690655bd4cb8a44ae57abb314e0995dc90414280db8f26fff0cb9fb367d1cac
# via django-stubs # via django-stubs
django-timezone-field==7.1 \ django-timezone-field==7.1 \
--hash=sha256:93914713ed882f5bccda080eda388f7006349f25930b6122e9b07bf8db49c4b4 \ --hash=sha256:93914713ed882f5bccda080eda388f7006349f25930b6122e9b07bf8db49c4b4 \
--hash=sha256:b3ef409d88a2718b566fabe10ea996f2838bc72b22d3a2900c0aa905c761380c --hash=sha256:b3ef409d88a2718b566fabe10ea996f2838bc72b22d3a2900c0aa905c761380c
# via django-celery-beat # via django-celery-beat
filelock==3.18.0 \ filelock==3.20.0 \
--hash=sha256:adbc88eabb99d2fec8c9c1b229b171f18afa655400173ddc653d5d01501fb9f2 \ --hash=sha256:339b4732ffda5cd79b13f4e2711a31b0365ce445d95d243bb996273d072546a2 \
--hash=sha256:c401f4f8377c4464e6db25fff06205fd89bdd83b65eb0488ed1b160f780e21de --hash=sha256:711e943b4ec6be42e1d4e6690b48dc175c822967466bb31c0c293f34334c13f4
# via virtualenv # via virtualenv
greenlet==3.2.2 ; (python_full_version < '3.14' and platform_machine == 'AMD64') or (python_full_version < '3.14' and platform_machine == 'WIN32') or (python_full_version < '3.14' and platform_machine == 'aarch64') or (python_full_version < '3.14' and platform_machine == 'amd64') or (python_full_version < '3.14' and platform_machine == 'ppc64le') or (python_full_version < '3.14' and platform_machine == 'win32') or (python_full_version < '3.14' and platform_machine == 'x86_64') \ greenlet==3.2.4 ; platform_machine == 'AMD64' or platform_machine == 'WIN32' or platform_machine == 'aarch64' or platform_machine == 'amd64' or platform_machine == 'ppc64le' or platform_machine == 'win32' or platform_machine == 'x86_64' \
--hash=sha256:02a98600899ca1ca5d3a2590974c9e3ec259503b2d6ba6527605fcd74e08e207 \ --hash=sha256:00fadb3fedccc447f517ee0d3fd8fe49eae949e1cd0f6a611818f4f6fb7dc83b \
--hash=sha256:055916fafad3e3388d27dd68517478933a97edc2fc54ae79d3bec827de2c64c4 \ --hash=sha256:015d48959d4add5d6c9f6c5210ee3803a830dce46356e3bc326d6776bde54681 \
--hash=sha256:1919cbdc1c53ef739c94cf2985056bcc0838c1f217b57647cbf4578576c63825 \ --hash=sha256:061dc4cf2c34852b052a8620d40f36324554bc192be474b9e9770e8c042fd735 \
--hash=sha256:1e76106b6fc55fa3d6fe1c527f95ee65e324a13b62e243f77b48317346559708 \ --hash=sha256:0dca0d95ff849f9a364385f36ab49f50065d76964944638be9691e1832e9f86d \
--hash=sha256:2593283bf81ca37d27d110956b79e8723f9aa50c4bcdc29d3c0543d4743d2763 \ --hash=sha256:1a921e542453fe531144e91e1feedf12e07351b1cf6c9e8a3325ea600a715a31 \
--hash=sha256:2dc5c43bb65ec3669452af0ab10729e8fdc17f87a1f2ad7ec65d4aaaefabf6bf \ --hash=sha256:23768528f2911bcd7e475210822ffb5254ed10d71f4028387e5a99b4c6699671 \
--hash=sha256:3885f85b61798f4192d544aac7b25a04ece5fe2704670b4ab73c2d2c14ab740d \ --hash=sha256:2917bdf657f5859fbf3386b12d68ede4cf1f04c90c3a6bc1f013dd68a22e2269 \
--hash=sha256:3ab7194ee290302ca15449f601036007873028712e92ca15fc76597a0aeb4c59 \ --hash=sha256:299fd615cd8fc86267b47597123e3f43ad79c9d8a22bebdce535e53550763e2f \
--hash=sha256:45f9f4853fb4cc46783085261c9ec4706628f3b57de3e68bae03e8f8b3c0de51 \ --hash=sha256:44358b9bf66c8576a9f57a590d5f5d6e72fa4228b763d0e43fee6d3b06d3a337 \
--hash=sha256:6fadd183186db360b61cb34e81117a096bff91c072929cd1b529eb20dd46e6c5 \ --hash=sha256:49a30d5fda2507ae77be16479bdb62a660fa51b1eb4928b524975b3bde77b3c0 \
--hash=sha256:85f3e248507125bf4af607a26fd6cb8578776197bd4b66e35229cdf5acf1dfbf \ --hash=sha256:554b03b6e73aaabec3745364d6239e9e012d64c68ccd0b8430c64ccc14939a8b \
--hash=sha256:89c69e9a10670eb7a66b8cef6354c24671ba241f46152dd3eed447f79c29fb5b \ --hash=sha256:6e343822feb58ac4d0a1211bd9399de2b3a04963ddeec21530fc426cc121f19b \
--hash=sha256:9ea5231428af34226c05f927e16fc7f6fa5e39e3ad3cd24ffa48ba53a47f4240 \ --hash=sha256:710638eb93b1fa52823aa91bf75326f9ecdfd5e0466f00789246a5280f4ba0fc \
--hash=sha256:ad053d34421a2debba45aa3cc39acf454acbcd025b3fc1a9f8a0dee237abd485 \ --hash=sha256:b4a1870c51720687af7fa3e7cda6d08d801dae660f75a76f3845b642b4da6ee1 \
--hash=sha256:b50a8c5c162469c3209e5ec92ee4f95c8231b11db6a04db09bbe338176723bb8 \ --hash=sha256:c17b6b34111ea72fc5a4e4beec9711d2226285f0386ea83477cbb97c30a3f3a5 \
--hash=sha256:ba30e88607fb6990544d84caf3c706c4b48f629e18853fc6a646f82db9629418 \ --hash=sha256:c5111ccdc9c88f423426df3fd1811bfc40ed66264d35aa373420a34377efc98a \
--hash=sha256:decb0658ec19e5c1f519faa9a160c0fc85a41a7e6654b3ce1b44b939f8bf1325 \ --hash=sha256:ca7f6f1f2649b89ce02f6f229d7c19f680a6238af656f61e0115b24857917929 \
--hash=sha256:fe46d4f8e94e637634d54477b0cfabcf93c53f29eedcbdeecaf2af32029b4421 --hash=sha256:cd3c8e693bff0fff6ba55f140bf390fa92c994083f838fece0f63be121334945 \
--hash=sha256:d25c5091190f2dc0eaa3f950252122edbbadbb682aa7b1ef2f8af0f8c0afefae \
--hash=sha256:d76383238584e9711e20ebe14db6c88ddcedc1829a9ad31a584389463b5aa504 \
--hash=sha256:e37ab26028f12dbb0ff65f29a8d3d44a765c61e729647bf2ddfbbed621726f01
# via sqlalchemy # via sqlalchemy
gunicorn==23.0.0 \ gunicorn==23.0.0 \
--hash=sha256:ec400d38950de4dfd418cff8328b2c8faed0edb0d517d3394e457c317908ca4d \ --hash=sha256:ec400d38950de4dfd418cff8328b2c8faed0edb0d517d3394e457c317908ca4d \
--hash=sha256:f014447a0101dc57e294f6c18ca6b40227a4c90e9bdb586042628030cba004ec --hash=sha256:f014447a0101dc57e294f6c18ca6b40227a4c90e9bdb586042628030cba004ec
# via livegraphsdjango # via livegraphsdjango
identify==2.6.10 \ identify==2.6.15 \
--hash=sha256:45e92fd704f3da71cc3880036633f48b4b7265fd4de2b57627cb157216eb7eb8 \ --hash=sha256:1181ef7608e00704db228516541eb83a88a9f94433a8c80bb9b5bd54b1d81757 \
--hash=sha256:5f34248f54136beed1a7ba6a6b5c4b6cf21ff495aac7c359e1ef831ae3b8ab25 --hash=sha256:e4f4864b96c6557ef2a1e1c951771838f4edc9df3a72ec7118b338801b11c7bf
# via pre-commit # via pre-commit
idna==3.10 \ idna==3.11 \
--hash=sha256:12f65c9b470abda6dc35cf8e63cc574b1c52b11df2c86030af0ac09b01b13ea9 \ --hash=sha256:771a87f49d9defaf64091e6e6fe9c18d4833f140bd19464795bc32d966ca37ea \
--hash=sha256:946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3 --hash=sha256:795dafcc9c04ed0c1fb032c2aa73654d8e8c5023a7df64a53f39190ada629902
# via requests # via requests
iniconfig==2.1.0 \ iniconfig==2.3.0 \
--hash=sha256:3abbd2e30b36733fee78f9c7f7308f2d0050e88f0087fd25c2645f63c773e1c7 \ --hash=sha256:c76315c77db068650d49c5b56314774a7804df16fee4402c1f19d6d15d8c4730 \
--hash=sha256:9deba5723312380e77435581c6bf4935c94cbfab9b1ed33ef8d238ea168eb760 --hash=sha256:f631c04d2c48c52b84d0d0549c99ff3859c98df65b3101406327ecc7d53fbf12
# via pytest # via pytest
kombu==5.5.3 \ kombu==5.5.4 \
--hash=sha256:021a0e11fcfcd9b0260ef1fb64088c0e92beb976eb59c1dfca7ddd4ad4562ea2 \ --hash=sha256:886600168275ebeada93b888e831352fe578168342f0d1d5833d88ba0d847363 \
--hash=sha256:5b0dbceb4edee50aa464f59469d34b97864be09111338cfb224a10b6a163909b --hash=sha256:a12ed0557c238897d8e518f1d1fdf84bd1516c5e305af2dacd85c2015115feb8
# via celery # via celery
markdown-it-py==3.0.0 \ markdown-it-py==4.0.0 \
--hash=sha256:355216845c60bd96232cd8d8c40e8f9765cc86f46880e43a8fd22dc1a1a8cab1 \ --hash=sha256:87327c59b172c5011896038353a81343b6754500a08cd7a4973bb48c6d578147 \
--hash=sha256:e3f60a94fa066dc52ec76661e37c851cb232d92f9886b15cb560aaada2df8feb --hash=sha256:cb0a2b4aa34f932c007117b194e945bd74e0ec24133ceb5bac59009cda1cb9f3
# via rich # via rich
mdurl==0.1.2 \ mdurl==0.1.2 \
--hash=sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8 \ --hash=sha256:84008a41e51615a49fc9966191ff91509e3c40b939176e643fd50a5c2196b8f8 \
--hash=sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba --hash=sha256:bb413d29f5eea38f31dd4754dd7377d4465116fb207585f97bf925588687c1ba
# via markdown-it-py # via markdown-it-py
mypy==1.15.0 \ mypy==1.18.2 \
--hash=sha256:404534629d51d3efea5c800ee7c42b72a6554d6c400e6a79eafe15d11341fd43 \ --hash=sha256:06a398102a5f203d7477b2923dda3634c36727fa5c237d8f859ef90c42a9924b \
--hash=sha256:5469affef548bd1895d86d3bf10ce2b44e33d86923c29e4d675b3e323437ea3e \ --hash=sha256:07b8b0f580ca6d289e69209ec9d3911b4a26e5abfde32228a288eb79df129fcc \
--hash=sha256:811aeccadfb730024c5d3e326b2fbe9249bb7413553f15499a4050f7c30e801d \ --hash=sha256:0e2785a84b34a72ba55fb5daf079a1003a34c05b22238da94fcae2bbe46f3544 \
--hash=sha256:93faf3fdb04768d44bf28693293f3904bbb555d076b781ad2530214ee53e3445 \ --hash=sha256:20c02215a080e3a2be3aa50506c67242df1c151eaba0dcbc1e4e557922a26075 \
--hash=sha256:98b7b9b9aedb65fe628c62a6dc57f6d5088ef2dfca37903a7d9ee374d03acca5 \ --hash=sha256:22a1748707dd62b58d2ae53562ffc4d7f8bcc727e8ac7cbc69c053ddc874d47e \
--hash=sha256:b9378e2c00146c44793c98b8d5a61039a048e31f429fb0eb546d93f4b000bedf \ --hash=sha256:62f0e1e988ad41c2a110edde6c398383a889d95b36b3e60bcf155f5164c4fdce \
--hash=sha256:baefc32840a9f00babd83251560e0ae1573e2f9d1b067719479bfb0e987c6357 \ --hash=sha256:6ca1e64b24a700ab5ce10133f7ccd956a04715463d30498e64ea8715236f9c9c \
--hash=sha256:c43a7682e24b4f576d93072216bf56eeff70d9140241f9edec0c104d0c515036 --hash=sha256:749b5f83198f1ca64345603118a6f01a4e99ad4bf9d103ddc5a3200cc4614adf \
--hash=sha256:7ab28cc197f1dd77a67e1c6f35cd1f8e8b73ed2217e4fc005f9e6a504e46e7ba \
--hash=sha256:8795a039bab805ff0c1dfdb8cd3344642c2b99b8e439d057aba30850b8d3423d \
--hash=sha256:a431a6f1ef14cf8c144c6b14793a23ec4eae3db28277c358136e79d7d062f62d \
--hash=sha256:c3ad2afadd1e9fea5cf99a45a822346971ede8685cc581ed9cd4d42eaf940986 \
--hash=sha256:d924eef3795cc89fecf6bedc6ed32b33ac13e8321344f6ddbf8ee89f706c05cb \
--hash=sha256:ed4482847168439651d3feee5833ccedbf6657e964572706a2adb1f7fa4dfe2e
mypy-extensions==1.1.0 \ mypy-extensions==1.1.0 \
--hash=sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505 \ --hash=sha256:1be4cccdb0f2482337c4743e60421de3a356cd97508abadd57d47403e94f5505 \
--hash=sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558 --hash=sha256:52e68efc3284861e772bbcd66823fde5ae21fd2fdb51c62a211403730b916558
# via # via
# black # black
# mypy # mypy
narwhals==1.39.1 \ narwhals==2.10.2 \
--hash=sha256:68d0f29c760f1a9419ada537f35f21ff202b0be1419e6d22135a0352c6d96deb \ --hash=sha256:059cd5c6751161b97baedcaf17a514c972af6a70f36a89af17de1a0caf519c43 \
--hash=sha256:cf15389e6f8c5321e8cd0ca8b5bace3b1aea5f5622fa59dfd64821998741d836 --hash=sha256:ff738a08bc993cbb792266bec15346c1d85cc68fdfe82a23283c3713f78bd354
# via plotly # via plotly
nodeenv==1.9.1 \ nodeenv==1.9.1 \
--hash=sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f \ --hash=sha256:6ec12890a2dab7946721edbfbcd91f3319c6ccc9aec47be7c7e6b7011ee6645f \
--hash=sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9 --hash=sha256:ba11c9782d29c27c70ffbdda2d7415098754709be8a7056d79a737cd901155c9
# via pre-commit # via pre-commit
numpy==2.2.5 \ numpy==2.3.4 \
--hash=sha256:02f226baeefa68f7d579e213d0f3493496397d8f1cff5e2b222af274c86a552a \ --hash=sha256:035796aaaddfe2f9664b9a9372f089cfc88bd795a67bd1bfe15e6e770934cf64 \
--hash=sha256:059b51b658f4414fff78c6d7b1b4e18283ab5fa56d270ff212d5ba0c561846f4 \ --hash=sha256:043885b4f7e6e232d7df4f51ffdef8c36320ee9d5f227b380ea636722c7ed12e \
--hash=sha256:1a161c2c79ab30fe4501d5a2bbfe8b162490757cf90b7f05be8b80bc02f7bb8e \ --hash=sha256:04a69abe45b49c5955923cf2c407843d1c85013b424ae8a560bba16c92fe44a0 \
--hash=sha256:261a1ef047751bb02f29dfe337230b5882b54521ca121fc7f62668133cb119c9 \ --hash=sha256:0f2bcc76f1e05e5ab58893407c63d90b2029908fa41f9f1cc51eecce936c3365 \
--hash=sha256:2ba321813a00e508d5421104464510cc962a6f791aa2fca1c97b1e65027da80d \ --hash=sha256:15eea9f306b98e0be91eb344a94c0e630689ef302e10c2ce5f7e11905c704f9c \
--hash=sha256:352d330048c055ea6db701130abc48a21bec690a8d38f8284e00fab256dc1376 \ --hash=sha256:15fb27364ed84114438fff8aaf998c9e19adbeba08c0b75409f8c452a8692c52 \
--hash=sha256:3d14b17b9be5f9c9301f43d2e2a4886a33b53f4e6fdf9ca2f4cc60aeeee76372 \ --hash=sha256:22758999b256b595cf0b1d102b133bb61866ba5ceecf15f759623b64c020c9ec \
--hash=sha256:4520caa3807c1ceb005d125a75e715567806fed67e315cea619d5ec6e75a4191 \ --hash=sha256:2ec646892819370cf3558f518797f16597b4e4669894a2ba712caccc9da53f1f \
--hash=sha256:47f9ed103af0bc63182609044b0490747e03bd20a67e391192dde119bf43d52f \ --hash=sha256:3634093d0b428e6c32c3a69b78e554f0cd20ee420dcad5a9f3b2a63762ce4197 \
--hash=sha256:54088a5a147ab71a8e7fdfd8c3601972751ded0739c6b696ad9cb0343e21ab73 \ --hash=sha256:3da3491cee49cf16157e70f607c03a217ea6647b1cea4819c4f48e53d49139b9 \
--hash=sha256:55f09e00d4dccd76b179c0f18a44f041e5332fd0e022886ba1c0bbf3ea4a18d0 \ --hash=sha256:40cc556d5abbc54aabe2b1ae287042d7bdb80c08edede19f0c0afb36ae586f37 \
--hash=sha256:8b4c0773b6ada798f51f0f8e30c054d32304ccc6e9c5d93d46cb26f3d385ab19 \ --hash=sha256:4ee6a571d1e4f0ea6d5f22d6e5fbd6ed1dc2b18542848e1e7301bd190500c9d7 \
--hash=sha256:8dfa94b6a4374e7851bbb6f35e6ded2120b752b063e6acdd3157e4d2bb922eba \ --hash=sha256:56209416e81a7893036eea03abcb91c130643eb14233b2515c90dcac963fe99d \
--hash=sha256:97c8425d4e26437e65e1d189d22dff4a079b747ff9c2788057bfb8114ce1e133 \ --hash=sha256:5e199c087e2aa71c8f9ce1cb7a8e10677dc12457e7cc1be4798632da37c3e86e \
--hash=sha256:a4cbdef3ddf777423060c6f81b5694bad2dc9675f110c4b2a60dc0181543fac7 \ --hash=sha256:62b2198c438058a20b6704351b35a1d7db881812d8512d67a69c9de1f18ca05f \
--hash=sha256:a9c0d994680cd991b1cb772e8b297340085466a6fe964bc9d4e80f5e2f43c291 \ --hash=sha256:6d9cd732068e8288dbe2717177320723ccec4fb064123f0caf9bbd90ab5be868 \
--hash=sha256:c26843fd58f65da9491165072da2cccc372530681de481ef670dcc8e27cfb066 \ --hash=sha256:7c26b0b2bf58009ed1f38a641f3db4be8d960a417ca96d14e5b06df1506d41ff \
--hash=sha256:c8b82a55ef86a2d8e81b63da85e55f5537d2157165be1cb2ce7cfa57b6aef38b \ --hash=sha256:817e719a868f0dacde4abdfc5c1910b301877970195db9ab6a5e2c4bd5b121f7 \
--hash=sha256:d403c84991b5ad291d3809bace5e85f4bbf44a04bdc9a88ed2bb1807b3360bb8 \ --hash=sha256:81c3e6d8c97295a7360d367f9f8553973651b76907988bb6066376bc2252f24e \
--hash=sha256:d8882a829fd779f0f43998e931c466802a77ca1ee0fe25a3abe50278616b1471 \ --hash=sha256:838f045478638b26c375ee96ea89464d38428c69170360b23a1a50fa4baa3562 \
--hash=sha256:e8b025c351b9f0e8b5436cf28a07fa4ac0204d67b38f01433ac7f9b870fa38c6 --hash=sha256:84f01a4d18b2cc4ade1814a08e5f3c907b079c847051d720fad15ce37aa930b6 \
--hash=sha256:85597b2d25ddf655495e2363fe044b0ae999b75bc4d630dc0d886484b03a5eb0 \
--hash=sha256:85d9fb2d8cd998c84d13a79a09cc0c1091648e848e4e6249b0ccd7f6b487fa26 \
--hash=sha256:85e071da78d92a214212cacea81c6da557cab307f2c34b5f85b628e94803f9c0 \
--hash=sha256:863e3b5f4d9915aaf1b8ec79ae560ad21f0b8d5e3adc31e73126491bb86dee1d \
--hash=sha256:86966db35c4040fdca64f0816a1c1dd8dbd027d90fca5a57e00e1ca4cd41b879 \
--hash=sha256:8b5a9a39c45d852b62693d9b3f3e0fe052541f804296ff401a72a1b60edafb29 \
--hash=sha256:8dc20bde86802df2ed8397a08d793da0ad7a5fd4ea3ac85d757bf5dd4ad7c252 \
--hash=sha256:962064de37b9aef801d33bc579690f8bfe6c5e70e29b61783f60bcba838a14d6 \
--hash=sha256:9cb177bc55b010b19798dc5497d540dea67fd13a8d9e882b2dae71de0cf09eb3 \
--hash=sha256:9d729d60f8d53a7361707f4b68a9663c968882dd4f09e0d58c044c8bf5faee7b \
--hash=sha256:a13fc473b6db0be619e45f11f9e81260f7302f8d180c49a22b6e6120022596b3 \
--hash=sha256:a700a4031bc0fd6936e78a752eefb79092cecad2599ea9c8039c548bc097f9bc \
--hash=sha256:a7d018bfedb375a8d979ac758b120ba846a7fe764911a64465fd87b8729f4a6a \
--hash=sha256:b6c231c9c2fadbae4011ca5e7e83e12dc4a5072f1a1d85a0a7b3ed754d145a40 \
--hash=sha256:bd0c630cf256b0a7fd9d0a11c9413b42fef5101219ce6ed5a09624f5a65392c7 \
--hash=sha256:c090d4860032b857d94144d1a9976b8e36709e40386db289aaf6672de2a81966 \
--hash=sha256:d5e081bc082825f8b139f9e9fe42942cb4054524598aaeb177ff476cc76d09d2 \
--hash=sha256:d7315ed1dab0286adca467377c8381cd748f3dc92235f22a7dfc42745644a96a \
--hash=sha256:e1708fac43ef8b419c975926ce1eaf793b0c13b7356cfab6ab0dc34c0a02ac0f \
--hash=sha256:e73d63fd04e3a9d6bc187f5455d81abfad05660b212c8804bf3b407e984cd2bc \
--hash=sha256:e8370eb6925bb8c1c4264fec52b0384b44f675f191df91cbe0140ec9f0955646 \
--hash=sha256:ecb63014bb7f4ce653f8be7f1df8cbc6093a5a2811211770f6606cc92b5a78fd \
--hash=sha256:fc8a63918b04b8571789688b2780ab2b4a33ab44bfe8ccea36d3eba51228c953 \
--hash=sha256:fea80f4f4cf83b54c3a051f2f727870ee51e22f0248d3114b8e755d160b38cfb
# via # via
# livegraphsdjango # livegraphsdjango
# pandas # pandas
@@ -270,67 +354,106 @@ packaging==25.0 \
# via # via
# black # black
# gunicorn # gunicorn
# kombu
# plotly # plotly
# pytest # pytest
pandas==2.2.3 \ pandas==2.3.3 \
--hash=sha256:15c0e1e02e93116177d29ff83e8b1619c93ddc9c49083f237d4312337a61165d \ --hash=sha256:0242fe9a49aa8b4d78a4fa03acb397a58833ef6199e9aa40a95f027bb3a1b6e7 \
--hash=sha256:1db71525a1538b30142094edb9adc10be3f3e176748cd7acc2240c2f2e5aa3a4 \ --hash=sha256:1611aedd912e1ff81ff41c745822980c49ce4a7907537be8692c8dbc31924593 \
--hash=sha256:22a9d949bfc9a502d320aa04e5d02feab689d61da4e7764b62c30b991c42c5f0 \ --hash=sha256:1b07204a219b3b7350abaae088f451860223a52cfb8a6c53358e7948735158e5 \
--hash=sha256:3508d914817e153ad359d7e069d752cdd736a247c322d932eb89e6bc84217f28 \ --hash=sha256:2462b1a365b6109d275250baaae7b760fd25c726aaca0054649286bcfbb3e8ec \
--hash=sha256:38cf8125c40dae9d5acc10fa66af8ea6fdf760b2714ee482ca691fc66e6fcb18 \ --hash=sha256:2e3ebdb170b5ef78f19bfb71b0dc5dc58775032361fa188e814959b74d726dd5 \
--hash=sha256:3b71f27954685ee685317063bf13c7709a7ba74fc996b84fc6821c59b0f06468 \ --hash=sha256:318d77e0e42a628c04dc56bcef4b40de67918f7041c2b061af1da41dcff670ac \
--hash=sha256:4f18ba62b61d7e192368b84517265a99b4d7ee8912f8708660fb4a366cc82667 \ --hash=sha256:3869faf4bd07b3b66a9f462417d0ca3a9df29a9f6abd5d0d0dbab15dac7abe87 \
--hash=sha256:61c5ad4043f791b61dd4752191d9f07f0ae412515d59ba8f005832a532f8736d \ --hash=sha256:4e0a175408804d566144e170d0476b15d78458795bb18f1304fb94160cabf40c \
--hash=sha256:6374c452ff3ec675a8f46fd9ab25c4ad0ba590b71cf0656f8b6daa5202bca3fb \ --hash=sha256:56851a737e3470de7fa88e6131f41281ed440d29a9268dcbf0002da5ac366713 \
--hash=sha256:800250ecdadb6d9c78eae4990da62743b857b470883fa27f652db8bdde7f6659 \ --hash=sha256:6253c72c6a1d990a410bc7de641d34053364ef8bcd3126f7e7450125887dffe3 \
--hash=sha256:ad5b65698ab28ed8d7f18790a0dc58005c7629f227be9ecc1072aa74c0c1d43a \ --hash=sha256:6435cb949cb34ec11cc9860246ccb2fdc9ecd742c12d3304989017d53f039a78 \
--hash=sha256:ba96630bc17c875161df3818780af30e43be9b166ce51c9a18c1feae342906c2 \ --hash=sha256:6d2cefc361461662ac48810cb14365a365ce864afe85ef1f447ff5a1e99ea81c \
--hash=sha256:f00d1345d84d8c86a63e476bb4955e46458b304b9575dcf71102b5c705320015 \ --hash=sha256:74ecdf1d301e812db96a465a525952f4dde225fdb6d8e5a521d47e1f42041e21 \
--hash=sha256:f3a255b2c19987fbbe62a9dfd6cff7ff2aa9ccab3fc75218fd4b7530f01efa24 --hash=sha256:75ea25f9529fdec2d2e93a42c523962261e567d250b0013b16210e1d40d7c2e5 \
--hash=sha256:900f47d8f20860de523a1ac881c4c36d65efcb2eb850e6948140fa781736e110 \
--hash=sha256:93c2d9ab0fc11822b5eece72ec9587e172f63cff87c00b062f6e37448ced4493 \
--hash=sha256:a21d830e78df0a515db2b3d2f5570610f5e6bd2e27749770e8bb7b524b89b450 \
--hash=sha256:a45c765238e2ed7d7c608fc5bc4a6f88b642f2f01e70c0c23d2224dd21829d86 \
--hash=sha256:bdcd9d1167f4885211e401b3036c0c8d9e274eee67ea8d0758a256d60704cfe8 \
--hash=sha256:c46467899aaa4da076d5abc11084634e2d197e9460643dd455ac3db5856b24d6 \
--hash=sha256:c4fc4c21971a1a9f4bdb4c73978c7f7256caa3e62b323f70d6cb80db583350bc \
--hash=sha256:d051c0e065b94b7a3cea50eb1ec32e912cd96dba41647eb24104b6c6c14c5788 \
--hash=sha256:e05e1af93b977f7eafa636d043f9f94c7ee3ac81af99c13508215942e64c993b \
--hash=sha256:e32e7cc9af0f1cc15548288a51a3b681cc2a219faa838e995f7dc53dbab1062d \
--hash=sha256:ee15f284898e7b246df8087fc82b87b01686f98ee67d85a17b7ab44143a3a9a0 \
--hash=sha256:ee67acbbf05014ea6c763beb097e03cd629961c8a632075eeb34247120abcb4b \
--hash=sha256:f8bfc0e12dc78f777f323f55c58649591b2cd0c43534e8355c51d3fede5f4dee
# via livegraphsdjango # via livegraphsdjango
pathspec==0.12.1 \ pathspec==0.12.1 \
--hash=sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08 \ --hash=sha256:a0d503e138a4c123b27490a4f7beda6a01c6f288df0e4a8b79c7eb0dc7b4cc08 \
--hash=sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712 --hash=sha256:a482d51503a1ab33b1c67a6c3813a26953dbdc71c31dacaef9a838c4e29f5712
# via black # via
pbr==6.1.1 \ # black
--hash=sha256:38d4daea5d9fa63b3f626131b9d34947fd0c8be9b05a29276870580050a25a76 \ # mypy
--hash=sha256:93ea72ce6989eb2eed99d0f75721474f69ad88128afdef5ac377eb797c4bf76b platformdirs==4.5.0 \
# via stevedore --hash=sha256:70ddccdd7c99fc5942e9fc25636a8b34d04c24b335100223152c2803e4063312 \
platformdirs==4.3.8 \ --hash=sha256:e578a81bb873cbb89a41fcc904c7ef523cc18284b7e3b3ccf06aca1403b7ebd3
--hash=sha256:3d512d96e16bcb959a814c9f348431070822a6496326a4be0911c40b5a74c2bc \
--hash=sha256:ff7059bb7eb1179e2685604f4aaf157cfd9535242bd23742eadc3c13542139b4
# via # via
# black # black
# virtualenv # virtualenv
plotly==6.1.0 \ plotly==6.4.0 \
--hash=sha256:a29d3ed523c9d7960095693af1ee52689830df0f9c6bae3e5e92c20c4f5684c3 \ --hash=sha256:68c6db2ed2180289ef978f087841148b7efda687552276da15a6e9b92107052a \
--hash=sha256:f13f497ccc2d97f06f771a30b27fab0cbd220f2975865f4ecbc75057135521de --hash=sha256:a1062eafbdc657976c2eedd276c90e184ccd6c21282a5e9ee8f20efca9c9a4c5
# via livegraphsdjango # via livegraphsdjango
pluggy==1.6.0 \ pluggy==1.6.0 \
--hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \ --hash=sha256:7dcc130b76258d33b90f61b658791dede3486c3e6bfb003ee5c9bfb396dd22f3 \
--hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746 --hash=sha256:e920276dd6813095e9377c0bc5566d94c932c33b27a3e3945d8389c374dd4746
# via pytest # via pytest
pre-commit==4.2.0 \ pre-commit==4.3.0 \
--hash=sha256:601283b9757afd87d40c4c4a9b2b5de9637a8ea02eaff7adc2d0fb4e04841146 \ --hash=sha256:2b0747ad7e6e967169136edffee14c16e148a778a54e4f967921aa1ebf2308d8 \
--hash=sha256:a009ca7205f1eb497d10b845e52c838a98b6cdd2102a6c8e4540e94ee75c58bd --hash=sha256:499fe450cc9d42e9d58e606262795ecb64dd05438943c62b66f6a8673da30b16
prompt-toolkit==3.0.51 \ prompt-toolkit==3.0.52 \
--hash=sha256:52742911fde84e2d423e2f9a4cf1de7d7ac4e51958f648d9540e0fb8db077b07 \ --hash=sha256:28cde192929c8e7321de85de1ddbe736f1375148b02f2e17edd840042b1be855 \
--hash=sha256:931a162e3b27fc90c86f1b48bb1fb2c528c2761475e57c9c06de13311c7b54ed --hash=sha256:9aac639a3bbd33284347de5ad8d68ecc044b91a762dc39b7c21095fcd6a19955
# via click-repl # via click-repl
pygments==2.19.1 \ psycopg2-binary==2.9.11 \
--hash=sha256:61c16d2a8576dc0649d9f39e089b5f02bcd27fba10d8fb4dcc28173f7a45151f \ --hash=sha256:04195548662fa544626c8ea0f06561eb6203f1984ba5b4562764fbeb4c3d14b1 \
--hash=sha256:9ea1544ad55cecf4b8242fab6dd35a93bbce657034b0611ee383099054ab6d8c --hash=sha256:32770a4d666fbdafab017086655bcddab791d7cb260a16679cc5a7338b64343b \
# via rich --hash=sha256:366df99e710a2acd90efed3764bb1e28df6c675d33a7fb40df9b7281694432ee \
pytest==8.3.5 \ --hash=sha256:4012c9c954dfaccd28f94e84ab9f94e12df76b4afb22331b1f0d3154893a6316 \
--hash=sha256:c69214aa47deac29fad6c2a4f590b9c4a9fdb16a403176fe154b79c0b4d4d820 \ --hash=sha256:47f212c1d3be608a12937cc131bd85502954398aaa1320cb4c14421a0ffccf4c \
--hash=sha256:f4efe70cc14e511565ac476b57c279e12a855b11f48f212af1080ef2263d3845 --hash=sha256:5c6ff3335ce08c75afaed19e08699e8aacf95d4a260b495a4a8545244fe2ceb3 \
--hash=sha256:84011ba3109e06ac412f95399b704d3d6950e386b7994475b231cf61eec2fc1f \
--hash=sha256:8c55b385daa2f92cb64b12ec4536c66954ac53654c7f15a203578da4e78105c0 \
--hash=sha256:92e3b669236327083a2e33ccfa0d320dd01b9803b3e14dd986a4fc54aa00f4e1 \
--hash=sha256:9b52a3f9bb540a3e4ec0f6ba6d31339727b2950c9772850d6545b7eae0b9d7c5 \
--hash=sha256:9bd81e64e8de111237737b29d68039b9c813bdf520156af36d26819c9a979e5f \
--hash=sha256:b31e90fdd0f968c2de3b26ab014314fe814225b6c324f770952f7d38abf17e3c \
--hash=sha256:b6aed9e096bf63f9e75edf2581aa9a7e7186d97ab5c177aa6c87797cd591236c \
--hash=sha256:b8fb3db325435d34235b044b199e56cdf9ff41223a4b9752e8576465170bb38c \
--hash=sha256:ba34475ceb08cccbdd98f6b46916917ae6eeb92b5ae111df10b544c3a4621dc4 \
--hash=sha256:c0377174bf1dd416993d16edc15357f6eb17ac998244cca19bc67cdc0e2e5766 \
--hash=sha256:c3cb3a676873d7506825221045bd70e0427c905b9c8ee8d6acd70cfcbd6e576d \
--hash=sha256:d526864e0f67f74937a8fce859bd56c979f5e2ec57ca7c627f5f1071ef7fee60 \
--hash=sha256:db4fd476874ccfdbb630a54426964959e58da4c61c9feba73e6094d51303d7d8 \
--hash=sha256:e0deeb03da539fa3577fcb0b3f2554a97f7e5477c246098dbb18091a4a01c16f \
--hash=sha256:e35b7abae2b0adab776add56111df1735ccc71406e56203515e228a8dc07089f \
--hash=sha256:efff12b432179443f54e230fdf60de1f6cc726b6c832db8701227d089310e8aa \
--hash=sha256:fcf21be3ce5f5659daefd2b3b3b6e4727b028221ddc94e6c1523425579664747
# via livegraphsdjango
pygments==2.19.2 \
--hash=sha256:636cb2477cec7f8952536970bc533bc43743542f70392ae026374600add5b887 \
--hash=sha256:86540386c03d588bb81d44bc3928634ff26449851e99741617ecb9037ee5ec0b
# via
# pytest
# rich
pytest==8.4.2 \
--hash=sha256:86c0d0b93306b961d58d62a4db4879f27fe25513d4b969df351abdddb3c30e01 \
--hash=sha256:872f880de3fc3a5bdc88a11b39c9710c3497a547cfa9320bc3c5e62fbf272e79
# via pytest-django # via pytest-django
pytest-django==4.11.1 \ pytest-django==4.11.1 \
--hash=sha256:1b63773f648aa3d8541000c26929c1ea63934be1cfa674c76436966d73fe6a10 \ --hash=sha256:1b63773f648aa3d8541000c26929c1ea63934be1cfa674c76436966d73fe6a10 \
--hash=sha256:a949141a1ee103cb0e7a20f1451d355f83f5e4a5d07bdd4dcfdd1fd0ff227991 --hash=sha256:a949141a1ee103cb0e7a20f1451d355f83f5e4a5d07bdd4dcfdd1fd0ff227991
python-crontab==3.2.0 \ python-crontab==3.3.0 \
--hash=sha256:40067d1dd39ade3460b2ad8557c7651514cd3851deffff61c5c60e1227c5c36b \ --hash=sha256:007c8aee68dddf3e04ec4dce0fac124b93bd68be7470fc95d2a9617a15de291b \
--hash=sha256:82cb9b6a312d41ff66fd3caf3eed7115c28c195bfb50711bc2b4b9592feb9fe5 --hash=sha256:739a778b1a771379b75654e53fd4df58e5c63a9279a63b5dfe44c0fcc3ee7884
# via django-celery-beat # via django-celery-beat
python-dateutil==2.9.0.post0 \ python-dateutil==2.9.0.post0 \
--hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \ --hash=sha256:37dd54208da7e1cd875388217d5e00ebd4179249f90fb72437e91a35459a0ad3 \
@@ -338,81 +461,100 @@ python-dateutil==2.9.0.post0 \
# via # via
# celery # celery
# pandas # pandas
# python-crontab python-dotenv==1.2.1 \
python-dotenv==1.1.0 \ --hash=sha256:42667e897e16ab0d66954af0e60a9caa94f0fd4ecf3aaf6d2d260eec1aa36ad6 \
--hash=sha256:41f90bc6f5f177fb41f53e87666db362025010eb28f60a01c9143bfa33a2b2d5 \ --hash=sha256:b81ee9561e9ca4004139c6cbba3a238c32b03e4894671e181b671e8cb8425d61
--hash=sha256:d7c01d9e2293916c18baf562d95698754b0dbbb5e74d457c45d4f6561fb9d55d
# via livegraphsdjango # via livegraphsdjango
pytokens==0.3.0 \
--hash=sha256:2f932b14ed08de5fcf0b391ace2642f858f1394c0857202959000b68ed7a458a \
--hash=sha256:95b2b5eaf832e469d141a378872480ede3f251a5a5041b8ec6e581d3ac71bbf3
# via black
pytz==2025.2 \ pytz==2025.2 \
--hash=sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3 \ --hash=sha256:360b9e3dbb49a209c21ad61809c7fb453643e048b38924c765813546746e81c3 \
--hash=sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00 --hash=sha256:5ddf76296dd8c44c26eb8f4b6f35488f3ccbf6fbbd7adee0b7262d43f0ec2f00
# via pandas # via pandas
pyyaml==6.0.2 \ pyyaml==6.0.3 \
--hash=sha256:0ffe8360bab4910ef1b9e87fb812d8bc0a308b0d0eef8c8f44e0254ab3b07133 \ --hash=sha256:00c4bdeba853cc34e7dd471f16b4114f4162dc03e6b7afcc2128711f0eca823c \
--hash=sha256:17e311b6c678207928d649faa7cb0d7b4c26a0ba73d41e99c4fff6b6c3276484 \ --hash=sha256:02893d100e99e03eda1c8fd5c441d8c60103fd175728e23e431db1b589cf5ab3 \
--hash=sha256:41e4e3953a79407c794916fa277a82531dd93aad34e29c2a514c2c0c5fe971cc \ --hash=sha256:0f29edc409a6392443abf94b9cf89ce99889a1dd5376d94316ae5145dfedd5d6 \
--hash=sha256:50187695423ffe49e2deacb8cd10510bc361faac997de9efef88badc3bb9e2d1 \ --hash=sha256:16249ee61e95f858e83976573de0f5b2893b3677ba71c9dd36b9cf8be9ac6d65 \
--hash=sha256:68ccc6023a3400877818152ad9a1033e3db8625d899c72eacb5a668902e4d652 \ --hash=sha256:2283a07e2c21a2aa78d9c4442724ec1eb15f5e42a723b99cb3d822d48f5f7ad1 \
--hash=sha256:70b189594dbe54f75ab3a1acec5f1e3faa7e8cf2f1e08d9b561cb41b845f69d5 \ --hash=sha256:34d5fcd24b8445fadc33f9cf348c1047101756fd760b4dacb5c3e99755703310 \
--hash=sha256:8388ee1976c416731879ac16da0aff3f63b286ffdd57cdeb95f3f2e085687563 \ --hash=sha256:4a2e8cebe2ff6ab7d1050ecd59c25d4c8bd7e6f400f5f82b96557ac0abafd0ac \
--hash=sha256:bc2fa7c6b47d6bc618dd7fb02ef6fdedb1090ec036abab80d4681424b84c1183 \ --hash=sha256:4ad1906908f2f5ae4e5a8ddfce73c320c2a1429ec52eafd27138b7f1cbe341c9 \
--hash=sha256:d584d9ec91ad65861cc08d42e834324ef890a082e591037abe114850ff7bbc3e \ --hash=sha256:501a031947e3a9025ed4405a168e6ef5ae3126c59f90ce0cd6f2bfc477be31b7 \
--hash=sha256:efdca5630322a10774e8e98e1af481aad470dd62c3170801852d752aa7a783ba --hash=sha256:5190d403f121660ce8d1d2c1bb2ef1bd05b5f68533fc5c2ea899bd15f4399b35 \
--hash=sha256:5498cd1645aa724a7c71c8f378eb29ebe23da2fc0d7a08071d89469bf1d2defb \
--hash=sha256:66e1674c3ef6f541c35191caae2d429b967b99e02040f5ba928632d9a7f0f065 \
--hash=sha256:6adc77889b628398debc7b65c073bcb99c4a0237b248cacaf3fe8a557563ef6c \
--hash=sha256:79005a0d97d5ddabfeeea4cf676af11e647e41d81c9a7722a193022accdb6b7c \
--hash=sha256:7c6610def4f163542a622a73fb39f534f8c101d690126992300bf3207eab9764 \
--hash=sha256:8d1fab6bb153a416f9aeb4b8763bc0f22a5586065f86f7664fc23339fc1c1fac \
--hash=sha256:8da9669d359f02c0b91ccc01cac4a67f16afec0dac22c2ad09f46bee0697eba8 \
--hash=sha256:93dda82c9c22deb0a405ea4dc5f2d0cda384168e466364dec6255b293923b2f3 \
--hash=sha256:a33284e20b78bd4a18c8c2282d549d10bc8408a2a7ff57653c0cf0b9be0afce5 \
--hash=sha256:a80cb027f6b349846a3bf6d73b5e95e782175e52f22108cfa17876aaeff93702 \
--hash=sha256:b3bc83488de33889877a0f2543ade9f70c67d66d9ebb4ac959502e12de895788 \
--hash=sha256:c1ff362665ae507275af2853520967820d9124984e0f7466736aea23d8611fba \
--hash=sha256:c458b6d084f9b935061bc36216e8a69a7e293a2f1e68bf956dcd9e6cbcd143f5 \
--hash=sha256:d0eae10f8159e8fdad514efdc92d74fd8d682c933a6dd088030f3834bc8e6b26 \
--hash=sha256:d76623373421df22fb4cf8817020cbb7ef15c725b9d5e45f17e189bfc384190f \
--hash=sha256:ebc55a14a21cb14062aa4162f906cd962b28e2e9ea38f9b4391244cd8de4ae0b \
--hash=sha256:eda16858a3cab07b80edaf74336ece1f986ba330fdb8ee0d6c0d68fe82bc96be \
--hash=sha256:ee2922902c45ae8ccada2c5b501ab86c36525b883eff4255313a253a3160861c \
--hash=sha256:f7057c9a337546edc7973c0d3ba84ddcdf0daa14533c2065749c9075001090e6
# via # via
# bandit # bandit
# pre-commit # pre-commit
redis==6.1.0 \ redis==7.0.1 \
--hash=sha256:3b72622f3d3a89df2a6041e82acd896b0e67d9f54e9bcd906d091d23ba5219f6 \ --hash=sha256:4977af3c7d67f8f0eb8b6fec0dafc9605db9343142f634041fb0235f67c0588a \
--hash=sha256:c928e267ad69d3069af28a9823a07726edf72c7e37764f43dc0123f37928c075 --hash=sha256:c949df947dca995dc68fdf5a7863950bf6df24f8d6022394585acc98e81624f1
# via livegraphsdjango # via livegraphsdjango
requests==2.32.3 \ requests==2.32.5 \
--hash=sha256:55365417734eb18255590a9ff9eb97e9e1da868d4ccd6402399eaf68af20a760 \ --hash=sha256:2462f94637a34fd532264295e186976db0f5d453d1cdd31473c85a6a161affb6 \
--hash=sha256:70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6 --hash=sha256:dbba0bac56e100853db0ea71b82b4dfd5fe2bf6d3754a8893c3af500cec7d7cf
# via livegraphsdjango # via livegraphsdjango
rich==14.0.0 \ rich==14.2.0 \
--hash=sha256:1c9491e1951aac09caffd42f448ee3d04e58923ffe14993f6e83068dc395d7e0 \ --hash=sha256:73ff50c7c0c1c77c8243079283f4edb376f0f6442433aecb8ce7e6d0b92d1fe4 \
--hash=sha256:82f1bc23a6a21ebca4ae0c45af9bdbc492ed20231dcb63f297d6d1021a9d5725 --hash=sha256:76bc51fe2e57d2b1be1f96c524b890b816e334ab4c1e45888799bfaab0021edd
# via bandit # via bandit
ruff==0.11.10 \ ruff==0.14.3 \
--hash=sha256:1067245bad978e7aa7b22f67113ecc6eb241dca0d9b696144256c3a879663bca \ --hash=sha256:0e2f8a0bbcffcfd895df39c9a4ecd59bb80dca03dc43f7fb63e647ed176b741e \
--hash=sha256:2f071b0deed7e9245d5820dac235cbdd4ef99d7b12ff04c330a241ad3534319f \ --hash=sha256:1ec1ac071e7e37e0221d2f2dbaf90897a988c531a8592a6a5959f0603a1ecf5e \
--hash=sha256:3afead355f1d16d95630df28d4ba17fb2cb9c8dfac8d21ced14984121f639bad \ --hash=sha256:26eb477ede6d399d898791d01961e16b86f02bc2486d0d1a7a9bb2379d055dc1 \
--hash=sha256:4a60e3a0a617eafba1f2e4186d827759d65348fa53708ca547e384db28406a0b \ --hash=sha256:3d6bc90307c469cb9d28b7cfad90aaa600b10d67c6e22026869f585e1e8a2db0 \
--hash=sha256:5a94acf798a82db188f6f36575d80609072b032105d114b0f98661e1679c9125 \ --hash=sha256:469e35872a09c0e45fecf48dd960bfbce056b5db2d5e6b50eca329b4f853ae20 \
--hash=sha256:5b6a9cc5b62c03cc1fea0044ed8576379dbaf751d5503d718c973d5418483641 \ --hash=sha256:4ff876d2ab2b161b6de0aa1f5bd714e8e9b4033dc122ee006925fbacc4f62153 \
--hash=sha256:5cc725fbb4d25b0f185cb42df07ab6b76c4489b4bfb740a175f3a59c70e8a224 \ --hash=sha256:678fdd7c7d2d94851597c23ee6336d25f9930b460b55f8598e011b57c74fd8c5 \
--hash=sha256:607ecbb6f03e44c9e0a93aedacb17b4eb4f3563d00e8b474298a201622677947 \ --hash=sha256:71ff6edca490c308f083156938c0c1a66907151263c4abdcb588602c6e696a14 \
--hash=sha256:7b3a522fa389402cd2137df9ddefe848f727250535c70dafa840badffb56b7a4 \ --hash=sha256:786ee3ce6139772ff9272aaf43296d975c0217ee1b97538a98171bf0d21f87ed \
--hash=sha256:859a7bfa7bc8888abbea31ef8a2b411714e6a80f0d173c2a82f9041ed6b50f58 \ --hash=sha256:7bfc42f81862749a7136267a343990f865e71fe2f99cf8d2958f684d23ce3dfa \
--hash=sha256:8b4564e9f99168c0f9195a0fd5fa5928004b33b377137f978055e40008a082c5 \ --hash=sha256:876b21e6c824f519446715c1342b8e60f97f93264012de9d8d10314f8a79c371 \
--hash=sha256:968220a57e09ea5e4fd48ed1c646419961a0570727c7e069842edd018ee8afed \ --hash=sha256:a497ec0c3d2c88561b6d90f9c29f5ae68221ac00d471f306fa21fa4264ce5fcd \
--hash=sha256:d522fb204b4959909ecac47da02830daec102eeb100fb50ea9554818d47a5fa6 \ --hash=sha256:a65e448cfd7e9c59fae8cf37f9221585d3354febaad9a07f29158af1528e165f \
--hash=sha256:da8ec977eaa4b7bf75470fb575bea2cb41a0e07c7ea9d5a0a97d13dbca697bf2 \ --hash=sha256:afcdc4b5335ef440d19e7df9e8ae2ad9f749352190e96d481dc501b753f0733e \
--hash=sha256:dc061a98d32a97211af7e7f3fa1d4ca2fcf919fb96c28f39551f35fc55bdbc19 \ --hash=sha256:b6fd8c79b457bedd2abf2702b9b472147cd860ed7855c73a5247fa55c9117654 \
--hash=sha256:ddf8967e08227d1bd95cc0851ef80d2ad9c7c0c5aab1eba31db49cf0a7b99523 \ --hash=sha256:cd6291d0061811c52b8e392f946889916757610d45d004e41140d81fb6cd5ddc \
--hash=sha256:ef69637b35fb8b210743926778d0e45e1bffa850a7c61e428c6b971549b5f5d1 \ --hash=sha256:d7b7006ac0756306db212fd37116cce2bd307e1e109375e1c6c106002df0ae5f \
--hash=sha256:f4854fd09c7aed5b1590e996a81aeff0c9ff51378b084eb5a0b9cd9518e6cff2 --hash=sha256:e231e1be58fc568950a04fbe6887c8e4b85310e7889727e2b81db205c45059eb \
setuptools==80.7.1 \ --hash=sha256:f3d91857d023ba93e14ed2d462ab62c3428f9bbf2b4fbac50a03ca66d31991f7
--hash=sha256:ca5cc1069b85dc23070a6628e6bcecb3292acac802399c7f8edc0100619f9009 \
--hash=sha256:f6ffc5f0142b1bd8d0ca94ee91b30c0ca862ffd50826da1ea85258a06fd94552
# via pbr
six==1.17.0 \ six==1.17.0 \
--hash=sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274 \ --hash=sha256:4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274 \
--hash=sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81 --hash=sha256:ff70335d468e7eb6ec65b95b99d3a2836546063f63acc5171de367e834932a81
# via python-dateutil # via python-dateutil
sqlalchemy==2.0.41 \ sqlalchemy==2.0.44 \
--hash=sha256:4eeb195cdedaf17aab6b247894ff2734dcead6c08f748e617bfe05bd5a218443 \ --hash=sha256:0ae7454e1ab1d780aee69fd2aae7d6b8670a581d8847f2d1e0f7ddfbf47e5a22 \
--hash=sha256:4f67766965996e63bb46cfbf2ce5355fc32d9dd3b8ad7e536a920ff9ee422e23 \ --hash=sha256:0b1af8392eb27b372ddb783b317dea0f650241cea5bd29199b22235299ca2e45 \
--hash=sha256:57df5dc6fdb5ed1a88a1ed2195fd31927e705cad62dedd86b46972752a80f576 \ --hash=sha256:15f3326f7f0b2bfe406ee562e17f43f36e16167af99c4c0df61db668de20002d \
--hash=sha256:82ca366a844eb551daff9d2e6e7a9e5e76d2612c8564f58db6c19a726869c1df \ --hash=sha256:19de7ca1246fbef9f9d1bff8f1ab25641569df226364a0e40457dc5457c54b05 \
--hash=sha256:a62448526dd9ed3e3beedc93df9bb6b55a436ed1474db31a2af13b313a70a7e1 \ --hash=sha256:1e77faf6ff919aa8cd63f1c4e561cac1d9a454a191bb864d5dd5e545935e5a40 \
--hash=sha256:bfc9064f6658a3d1cadeaa0ba07570b83ce6801a1314985bf98ec9b95d74e15f \ --hash=sha256:2b61188657e3a2b9ac4e8f04d6cf8e51046e28175f79464c67f2fd35bceb0976 \
--hash=sha256:c153265408d18de4cc5ded1941dcd8315894572cddd3c58df5d5b5705b3fa28d \ --hash=sha256:b87e7b91a5d5973dda5f00cd61ef72ad75a1db73a386b62877d4875a8840959c \
--hash=sha256:d4ae769b9c1c7757e4ccce94b0641bc203bbdf43ba7a2413ab2523d8d047d8dc \ --hash=sha256:c1c80faaee1a6c3428cecf40d16a2365bcf56c424c92c2b6f0f9ad204b899e9e \
--hash=sha256:dc56c9788617b8964ad02e8fcfeed4001c1f8ba91a9e1f31483c0dffb207002a \ --hash=sha256:ee51625c2d51f8baadf2829fae817ad0b66b140573939dd69284d2ba3553ae73 \
--hash=sha256:edba70118c4be3c2b1f90754d308d0b79c6fe2c0fdc52d8ddf603916f83f4db9 --hash=sha256:ff486e183d151e51b1d694c7aa1695747599bb00b9f5f604092b54b74c64a8e1
# via # via
# celery # kombu
# livegraphsdjango # livegraphsdjango
sqlparse==0.5.3 \ sqlparse==0.5.3 \
--hash=sha256:09f67787f56a0b16ecdbde1bfc7f5d9c3371ca683cfeaa8e6ff60b4807ec9272 \ --hash=sha256:09f67787f56a0b16ecdbde1bfc7f5d9c3371ca683cfeaa8e6ff60b4807ec9272 \
@@ -420,9 +562,9 @@ sqlparse==0.5.3 \
# via # via
# django # django
# django-debug-toolbar # django-debug-toolbar
stevedore==5.4.1 \ stevedore==5.5.0 \
--hash=sha256:3135b5ae50fe12816ef291baff420acb727fcd356106e3e9cbfa9e5985cd6f4b \ --hash=sha256:18363d4d268181e8e8452e71a38cd77630f345b2ef6b4a8d5614dac5ee0d18cf \
--hash=sha256:d10a31c7b86cba16c1f6e8d15416955fc797052351a56af15e608ad20811fcfe --hash=sha256:d31496a4f4df9825e1a1e4f1f74d19abb0154aff311c3b376fcc89dae8fccd73
# via bandit # via bandit
tinycss2==1.4.0 \ tinycss2==1.4.0 \
--hash=sha256:10c0972f6fc0fbee87c3edb76549357415e94548c1ae10ebccdea16fb404a9b7 \ --hash=sha256:10c0972f6fc0fbee87c3edb76549357415e94548c1ae10ebccdea16fb404a9b7 \
@@ -430,14 +572,34 @@ tinycss2==1.4.0 \
# via # via
# bleach # bleach
# livegraphsdjango # livegraphsdjango
types-pyyaml==6.0.12.20250516 \ ty==0.0.1a25 \
--hash=sha256:8478208feaeb53a34cb5d970c56a7cd76b72659442e733e268a94dc72b2d0530 \ --hash=sha256:0a90d897a7c1a5ae9b41a4c7b0a42262a06361476ad88d783dbedd7913edadbc \
--hash=sha256:9f21a70216fc0fa1b216a8176db5f9e0af6eb35d2f2932acb87689d03a5bf6ba --hash=sha256:168fc8aee396d617451acc44cd28baffa47359777342836060c27aa6f37e2445 \
--hash=sha256:1711dd587eccf04fd50c494dc39babe38f4cb345bc3901bf1d8149cac570e979 \
--hash=sha256:192edac94675a468bac7f6e04687a77a64698e4e1fe01f6a048bf9b6dde5b703 \
--hash=sha256:4a247061bd32bae3865a236d7f8b6c9916c80995db30ae1600999010f90623a9 \
--hash=sha256:5550b24b9dd0e0f8b4b2c1f0fcc608a55d0421dd67b6c364bc7bf25762334511 \
--hash=sha256:5f4c9b0cf7995e2e3de9bab4d066063dea92019f2f62673b7574e3612643dd35 \
--hash=sha256:93c7e7ab2859af0f866d34d27f4ae70dd4fb95b847387f082de1197f9f34e068 \
--hash=sha256:949523621f336e01bc7d687b7bd08fe838edadbdb6563c2c057ed1d264e820cf \
--hash=sha256:94f78f621458c05e59e890061021198197f29a7b51a33eda82bbb036e7ed73d7 \
--hash=sha256:a2fad3d8e92bb4d57a8872a6f56b1aef54539d36f23ebb01abe88ac4338efafb \
--hash=sha256:a9f3bbf523b49935bbd76e230408d858dce0d614f44f5807bbbd0954f64e0f01 \
--hash=sha256:d35b2c1f94a014a22875d2745aa0432761d2a9a8eb7212630d5caf547daeef6d \
--hash=sha256:d9656fca8062a2c6709c30d76d662c96d2e7dbfee8f70e55ec6b6afd67b5d447 \
--hash=sha256:dde2962d448ed87c48736e9a4bb13715a4cced705525e732b1c0dac1d4c66e3d \
--hash=sha256:eab6e33ebe202a71a50c3d5a5580e3bc1a85cda3ffcdc48cec3f1c693b7a873b \
--hash=sha256:f13ea9815f4a54a0a303ca7bf411b0650e3c2a24fc6c7889ffba2c94f5e97a6a \
--hash=sha256:f6b9a31da43424cdab483703a54a561b93aabba84630788505329fc5294a9c62
types-pyyaml==6.0.12.20250915 \
--hash=sha256:0f8b54a528c303f0e6f7165687dd33fafa81c807fcac23f632b63aa624ced1d3 \
--hash=sha256:e7d4d9e064e89a3b3cae120b4990cd370874d2bf12fa5f46c97018dd5d3c9ab6
# via django-stubs # via django-stubs
typing-extensions==4.13.2 \ typing-extensions==4.15.0 \
--hash=sha256:a439e7c04b49fec3e5d3e2beaa21755cadbbdc391694e28ccdd36ca4a1408f8c \ --hash=sha256:0cea48d173cc12fa28ecabc3b837ea3cf6f38c6d1136f85cbaaf598984861466 \
--hash=sha256:e6c81219bd689f51865d9e372991c540bda33a0379d5573cddb9a3a23f7caaef --hash=sha256:f0fa19c6845758ab08074a0cfa8b7aecb71c999ca73d62883bc25cc018c4e548
# via # via
# cron-descriptor
# django-stubs # django-stubs
# django-stubs-ext # django-stubs-ext
# mypy # mypy
@@ -450,9 +612,9 @@ tzdata==2025.2 \
# django-celery-beat # django-celery-beat
# kombu # kombu
# pandas # pandas
urllib3==2.4.0 \ urllib3==2.5.0 \
--hash=sha256:414bc6535b787febd7567804cc015fee39daab8ad86268f1310a9250697de466 \ --hash=sha256:3fc47733c7e419d4bc3f6b3dc2b4f890bb743906a30d56ba4a5bfa4bbff92760 \
--hash=sha256:4e16665048960a0900c702d4a66415956a584919c03361cac9f1df5c5dd7e813 --hash=sha256:e6b01673c0fa6a13e374b50871808eb3bf7046c4b125b216f6bf1cc604cff0dc
# via requests # via requests
vine==5.1.0 \ vine==5.1.0 \
--hash=sha256:40fdf3c48b2cfe1c38a49e9ae2da6fda88e4794c810050a728bd7413811fb1dc \ --hash=sha256:40fdf3c48b2cfe1c38a49e9ae2da6fda88e4794c810050a728bd7413811fb1dc \
@@ -461,13 +623,13 @@ vine==5.1.0 \
# amqp # amqp
# celery # celery
# kombu # kombu
virtualenv==20.31.2 \ virtualenv==20.35.4 \
--hash=sha256:36efd0d9650ee985f0cad72065001e66d49a6f24eb44d98980f630686243cf11 \ --hash=sha256:643d3914d73d3eeb0c552cbb12d7e82adf0e504dbf86a3182f8771a153a1971c \
--hash=sha256:e10c0a9d02835e592521be48b332b6caee6887f332c111aa79a09b9e79efc2af --hash=sha256:c21c9cede36c9753eeade68ba7d523529f228a403463376cf821eaae2b650f1b
# via pre-commit # via pre-commit
wcwidth==0.2.13 \ wcwidth==0.2.14 \
--hash=sha256:3da69048e4540d84af32131829ff948f1e022c1c6bdb8d6102117aac784f6859 \ --hash=sha256:4d478375d31bc5395a3c55c40ccdf3354688364cd61c4f6adacaa9215d0b3605 \
--hash=sha256:72ea0c06399eb286d978fdedb6923a9eb47e1c486ce63e9b4e64fc18303972b5 --hash=sha256:a7bb560c8aee30f9957e5f9895805edd20602f2d7f720186dfd906e82b4982e1
# via prompt-toolkit # via prompt-toolkit
webencodings==0.5.1 \ webencodings==0.5.1 \
--hash=sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78 \ --hash=sha256:a0af1213f3c2226497a97e2b3aa01a7e4bee4f403f95be16fc9acd2947514a78 \
@@ -475,11 +637,11 @@ webencodings==0.5.1 \
# via # via
# bleach # bleach
# tinycss2 # tinycss2
whitenoise==6.9.0 \ whitenoise==6.11.0 \
--hash=sha256:8c4a7c9d384694990c26f3047e118c691557481d624f069b7f7752a2f735d609 \ --hash=sha256:0f5bfce6061ae6611cd9396a8231e088722e4fc67bc13a111be74c738d99375f \
--hash=sha256:c8a489049b7ee9889617bb4c274a153f3d979e8f51d2efd0f5b403caf41c57df --hash=sha256:b2aeb45950597236f53b5342b3121c5de69c8da0109362aee506ce88e022d258
# via livegraphsdjango # via livegraphsdjango
xlsxwriter==3.2.3 \ xlsxwriter==3.2.9 \
--hash=sha256:593f8296e8a91790c6d0378ab08b064f34a642b3feb787cf6738236bd0a4860d \ --hash=sha256:254b1c37a368c444eac6e2f867405cc9e461b0ed97a3233b2ac1e574efb4140c \
--hash=sha256:ad6fd41bdcf1b885876b1f6b7087560aecc9ae5a9cc2ba97dcac7ab2e210d3d5 --hash=sha256:9a5db42bc5dff014806c58a20b9eae7322a134abb6fce3c92c181bfb275ec5b3
# via livegraphsdjango # via livegraphsdjango
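Every pin above is hash-locked: the installer (pip with --require-hashes, or uv) recomputes the SHA-256 of each downloaded artifact and rejects anything whose digest is missing from the pin's --hash set, which is why each version bump also rewrites a block of hashes. A minimal sketch of that digest check in Python (the wheel path and allowed-digest set are hypothetical examples, not part of the commit):

# Sketch of the per-artifact digest check behind --hash=sha256 pins.
import hashlib

def artifact_matches(path: str, allowed_sha256: set[str]) -> bool:
    # Hash the downloaded file in chunks and compare against the pinned set.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest() in allowed_sha256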

seed.spec.ts Normal file

@@ -0,0 +1,7 @@
import { test } from "@playwright/test";

test.describe("Test group", () => {
  test("seed", async ({ page: _page }) => {
    // generate code here.
  });
});
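The seed spec gives the Playwright test-generator agent a compilable anchor: generated steps replace the placeholder comment, and aliasing the fixture to _page keeps the unused-parameter lint quiet until then. Since the repo itself is Python, a rough pytest-playwright equivalent would look like this (the URL and assertion are illustrative assumptions, not part of the commit):

# Hypothetical pytest-playwright counterpart to seed.spec.ts.
# Setup assumed: pip install pytest-playwright && playwright install
from playwright.sync_api import Page

def test_seed(page: Page) -> None:
    # Generated steps would replace this body; the URL is an assumption.
    page.goto("http://localhost:8000/")
    assert page.title() != ""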


@@ -1,14 +1,18 @@
@@ -1,14 +1,18 @@
-#!/bin/bash
+#!/usr/bin/env bash
 # Set UV_LINK_MODE to copy to avoid hardlink warnings
 export UV_LINK_MODE=copy
 # Check if Redis is running
 if ! redis-cli ping >/dev/null 2>&1; then
     echo "Starting Redis server..."
     redis-server --daemonize yes
     sleep 1
     # Verify Redis is now running
     if redis-cli ping >/dev/null 2>&1; then
         echo "✅ Redis server is now running"
     else
@@ -22,6 +26,7 @@ else
 fi
 # Set environment variables for Redis if it's running
 if redis-cli ping >/dev/null 2>&1; then
     export CELERY_BROKER_URL=redis://localhost:6379/0
     export CELERY_RESULT_BACKEND=redis://localhost:6379/0
@@ -33,4 +38,5 @@ else
 fi
 # Start the application using foreman
 foreman start
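The script gates everything on redis-cli ping: Redis is started if absent, and the Celery broker/result URLs are exported only once it answers. The same availability probe can be made from Python with the redis client pinned above; a sketch assuming the broker URL the script exports:

# Availability probe equivalent to the script's `redis-cli ping` gate,
# using the pinned redis client. The URL mirrors CELERY_BROKER_URL above.
import redis

def redis_available(url: str = "redis://localhost:6379/0") -> bool:
    try:
        return bool(redis.Redis.from_url(url, socket_connect_timeout=1).ping())
    except redis.exceptions.RedisError:
        return False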

tsconfig.json Normal file

@@ -0,0 +1,29 @@
{
  "compilerOptions": {
    // Environment setup & latest features
    "lib": ["ESNext"],
    "target": "ESNext",
    "module": "Preserve",
    "moduleDetection": "force",
    "jsx": "react-jsx",
    "allowJs": true,

    // Bundler mode
    "moduleResolution": "bundler",
    "allowImportingTsExtensions": true,
    "verbatimModuleSyntax": true,
    "noEmit": true,

    // Best practices
    "strict": true,
    "skipLibCheck": true,
    "noFallthroughCasesInSwitch": true,
    "noUncheckedIndexedAccess": true,
    "noImplicitOverride": true,

    // Some stricter flags (disabled by default)
    "noUnusedLocals": false,
    "noUnusedParameters": false,
    "noPropertyAccessFromIndexSignature": false
  }
}

ty.toml Normal file

@@ -0,0 +1,26 @@
# ty Type Checker Configuration

[environment]
# Django project root for first-party module resolution
root = ["dashboard_project"]
# Python version (matches pyproject.toml requires-python)
python-version = "3.13"

[src]
# Include only the Django project directory
include = ["dashboard_project"]
# Exclude migrations, cache, and generated files
exclude = [
    "dashboard_project/migrations",
    "dashboard_project/*/migrations",
    "dashboard_project/**/__pycache__",
    "dashboard_project/**/*.pyc",
]
# Respect .gitignore files
respect-ignore-files = true

[terminal]
# Use concise output for cleaner CI/CD logs
output-format = "concise"
# Keep warnings non-fatal; set to true to fail CI runs on warnings
error-on-warning = false
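With root = ["dashboard_project"], ty resolves first-party imports from that directory and checks them as Python 3.13 code; warnings stay advisory until error-on-warning is flipped to true. For a feel of what a check under this config would report, a hypothetical module with a classic optional-handling bug:

# Hypothetical dashboard_project/example.py: the kind of defect a type
# checker run under this config flags, since int() does not accept None.
def session_count(raw: str | None) -> int:
    return int(raw)  # flagged: raw may be None here

print(session_count("42"))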

uv.lock generated

File diff suppressed because it is too large