minor updates
This commit is contained in:
72
CHANGELOG.md
Normal file
72
CHANGELOG.md
Normal file
@@ -0,0 +1,72 @@
|
||||
# Changelog
|
||||
|
||||
All notable changes to this project will be documented in this file.
|
||||
|
||||
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
|
||||
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
|
||||
|
||||
## [1.1.0] - 2025-06-07
|
||||
|
||||
### Added
|
||||
- Multi-column table of contents in README.md for improved navigation
|
||||
- Enhanced documentation structure with better organization
|
||||
- Improved visual layout for better user experience
|
||||
|
||||
### Changed
|
||||
- Updated README.md table of contents to use a two-column layout
|
||||
- Reorganized documentation sections for better logical flow
|
||||
- Enhanced readability of navigation structure
|
||||
|
||||
### Documentation
|
||||
- Restructured table of contents for better accessibility
|
||||
- Improved visual hierarchy in documentation
|
||||
- Enhanced user experience for documentation navigation
|
||||
|
||||
## [1.0.0] - 2024-12-01
|
||||
|
||||
### Added
|
||||
- Initial release of WiFi DensePose
|
||||
- Real-time WiFi-based human pose estimation using CSI data
|
||||
- DensePose neural network integration
|
||||
- RESTful API with comprehensive endpoints
|
||||
- WebSocket streaming for real-time data
|
||||
- Multi-person tracking capabilities
|
||||
- Fall detection and activity recognition
|
||||
- Healthcare, fitness, smart home, and security domain configurations
|
||||
- Comprehensive CLI interface
|
||||
- Docker and Kubernetes deployment support
|
||||
- 100% test coverage
|
||||
- Production-ready monitoring and logging
|
||||
- Hardware abstraction layer for multiple WiFi devices
|
||||
- Phase sanitization and signal processing
|
||||
- Authentication and rate limiting
|
||||
- Background task management
|
||||
- Database integration with PostgreSQL and Redis
|
||||
- Prometheus metrics and Grafana dashboards
|
||||
- Comprehensive documentation and examples
|
||||
|
||||
### Features
|
||||
- Privacy-preserving pose detection without cameras
|
||||
- Sub-50ms latency with 30 FPS processing
|
||||
- Support for up to 10 simultaneous person tracking
|
||||
- Enterprise-grade security and scalability
|
||||
- Cross-platform compatibility (Linux, macOS, Windows)
|
||||
- GPU acceleration support
|
||||
- Real-time analytics and alerting
|
||||
- Configurable confidence thresholds
|
||||
- Zone-based occupancy monitoring
|
||||
- Historical data analysis
|
||||
- Performance optimization tools
|
||||
- Load testing capabilities
|
||||
- Infrastructure as Code (Terraform, Ansible)
|
||||
- CI/CD pipeline integration
|
||||
- Comprehensive error handling and logging
|
||||
|
||||
### Documentation
|
||||
- Complete user guide and API reference
|
||||
- Deployment and troubleshooting guides
|
||||
- Hardware setup and calibration instructions
|
||||
- Performance benchmarks and optimization tips
|
||||
- Contributing guidelines and code standards
|
||||
- Security best practices
|
||||
- Example configurations and use cases
|
||||
123
README.md
123
README.md
@@ -24,60 +24,75 @@ A cutting-edge WiFi-based human pose estimation system that leverages Channel St
|
||||
|
||||
## 📋 Table of Contents
|
||||
|
||||
1. [🚀 Key Features](#-key-features)
|
||||
2. [🏗️ System Architecture](#️-system-architecture)
|
||||
- [Core Components](#core-components)
|
||||
3. [📦 Installation](#-installation)
|
||||
- [Using pip (Recommended)](#using-pip-recommended)
|
||||
- [From Source](#from-source)
|
||||
- [Using Docker](#using-docker)
|
||||
- [System Requirements](#system-requirements)
|
||||
4. [🚀 Quick Start](#-quick-start)
|
||||
- [Basic Setup](#1-basic-setup)
|
||||
- [Start the System](#2-start-the-system)
|
||||
- [Using the REST API](#3-using-the-rest-api)
|
||||
- [Real-time Streaming](#4-real-time-streaming)
|
||||
5. [🖥️ CLI Usage](#️-cli-usage)
|
||||
- [Installation](#cli-installation)
|
||||
- [Basic Commands](#basic-commands)
|
||||
- [Configuration Commands](#configuration-commands)
|
||||
- [Monitoring Commands](#monitoring-commands)
|
||||
- [Examples](#cli-examples)
|
||||
6. [📚 Documentation](#-documentation)
|
||||
- [Core Documentation](#-core-documentation)
|
||||
- [Quick Links](#-quick-links)
|
||||
- [API Overview](#-api-overview)
|
||||
7. [🔧 Hardware Setup](#-hardware-setup)
|
||||
- [Supported Hardware](#supported-hardware)
|
||||
- [Physical Setup](#physical-setup)
|
||||
- [Network Configuration](#network-configuration)
|
||||
- [Environment Calibration](#environment-calibration)
|
||||
8. [⚙️ Configuration](#️-configuration)
|
||||
- [Environment Variables](#environment-variables)
|
||||
- [Domain-Specific Configurations](#domain-specific-configurations)
|
||||
- [Advanced Configuration](#advanced-configuration)
|
||||
9. [🧪 Testing](#-testing)
|
||||
- [Running Tests](#running-tests)
|
||||
- [Test Categories](#test-categories)
|
||||
- [Mock Testing](#mock-testing)
|
||||
- [Continuous Integration](#continuous-integration)
|
||||
10. [🚀 Deployment](#-deployment)
|
||||
- [Production Deployment](#production-deployment)
|
||||
- [Infrastructure as Code](#infrastructure-as-code)
|
||||
- [Monitoring and Logging](#monitoring-and-logging)
|
||||
11. [📊 Performance Metrics](#-performance-metrics)
|
||||
- [Benchmark Results](#benchmark-results)
|
||||
- [Performance Optimization](#performance-optimization)
|
||||
- [Load Testing](#load-testing)
|
||||
12. [🤝 Contributing](#-contributing)
|
||||
- [Development Setup](#development-setup)
|
||||
- [Code Standards](#code-standards)
|
||||
- [Contribution Process](#contribution-process)
|
||||
- [Code Review Checklist](#code-review-checklist)
|
||||
- [Issue Templates](#issue-templates)
|
||||
13. [📄 License](#-license)
|
||||
14. [🙏 Acknowledgments](#-acknowledgments)
|
||||
15. [📞 Support](#-support)
|
||||
<table>
|
||||
<tr>
|
||||
<td width="50%">
|
||||
|
||||
**🚀 Getting Started**
|
||||
- [Key Features](#-key-features)
|
||||
- [System Architecture](#️-system-architecture)
|
||||
- [Installation](#-installation)
|
||||
- [Using pip (Recommended)](#using-pip-recommended)
|
||||
- [From Source](#from-source)
|
||||
- [Using Docker](#using-docker)
|
||||
- [System Requirements](#system-requirements)
|
||||
- [Quick Start](#-quick-start)
|
||||
- [Basic Setup](#1-basic-setup)
|
||||
- [Start the System](#2-start-the-system)
|
||||
- [Using the REST API](#3-using-the-rest-api)
|
||||
- [Real-time Streaming](#4-real-time-streaming)
|
||||
|
||||
**🖥️ Usage & Configuration**
|
||||
- [CLI Usage](#️-cli-usage)
|
||||
- [Installation](#cli-installation)
|
||||
- [Basic Commands](#basic-commands)
|
||||
- [Configuration Commands](#configuration-commands)
|
||||
- [Examples](#cli-examples)
|
||||
- [Documentation](#-documentation)
|
||||
- [Core Documentation](#-core-documentation)
|
||||
- [Quick Links](#-quick-links)
|
||||
- [API Overview](#-api-overview)
|
||||
- [Hardware Setup](#-hardware-setup)
|
||||
- [Supported Hardware](#supported-hardware)
|
||||
- [Physical Setup](#physical-setup)
|
||||
- [Network Configuration](#network-configuration)
|
||||
- [Environment Calibration](#environment-calibration)
|
||||
|
||||
</td>
|
||||
<td width="50%">
|
||||
|
||||
**⚙️ Advanced Topics**
|
||||
- [Configuration](#️-configuration)
|
||||
- [Environment Variables](#environment-variables)
|
||||
- [Domain-Specific Configurations](#domain-specific-configurations)
|
||||
- [Advanced Configuration](#advanced-configuration)
|
||||
- [Testing](#-testing)
|
||||
- [Running Tests](#running-tests)
|
||||
- [Test Categories](#test-categories)
|
||||
- [Mock Testing](#mock-testing)
|
||||
- [Continuous Integration](#continuous-integration)
|
||||
- [Deployment](#-deployment)
|
||||
- [Production Deployment](#production-deployment)
|
||||
- [Infrastructure as Code](#infrastructure-as-code)
|
||||
- [Monitoring and Logging](#monitoring-and-logging)
|
||||
|
||||
**📊 Performance & Community**
|
||||
- [Performance Metrics](#-performance-metrics)
|
||||
- [Benchmark Results](#benchmark-results)
|
||||
- [Performance Optimization](#performance-optimization)
|
||||
- [Load Testing](#load-testing)
|
||||
- [Contributing](#-contributing)
|
||||
- [Development Setup](#development-setup)
|
||||
- [Code Standards](#code-standards)
|
||||
- [Contribution Process](#contribution-process)
|
||||
- [Code Review Checklist](#code-review-checklist)
|
||||
- [License](#-license)
|
||||
- [Acknowledgments](#-acknowledgments)
|
||||
- [Support](#-support)
|
||||
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
||||
## 🏗️ System Architecture
|
||||
|
||||
|
||||
112
alembic.ini
Normal file
112
alembic.ini
Normal file
@@ -0,0 +1,112 @@
|
||||
# A generic, single database configuration.
|
||||
|
||||
[alembic]
|
||||
# path to migration scripts
|
||||
script_location = src/database/migrations
|
||||
|
||||
# template used to generate migration file names; The default value is %%(rev)s_%%(slug)s
|
||||
# Uncomment the line below if you want the files to be prepended with date and time
|
||||
# file_template = %%(year)d_%%(month).2d_%%(day).2d_%%(hour).2d%%(minute).2d-%%(rev)s_%%(slug)s
|
||||
|
||||
# sys.path path, will be prepended to sys.path if present.
|
||||
# defaults to the current working directory.
|
||||
prepend_sys_path = .
|
||||
|
||||
# timezone to use when rendering the date within the migration file
|
||||
# as well as the filename.
|
||||
# If specified, requires the python-dateutil library that can be
|
||||
# installed by adding `alembic[tz]` to the pip requirements
|
||||
# string value is passed to dateutil.tz.gettz()
|
||||
# leave blank for localtime
|
||||
# timezone =
|
||||
|
||||
# max length of characters to apply to the
|
||||
# "slug" field
|
||||
# truncate_slug_length = 40
|
||||
|
||||
# set to 'true' to run the environment during
|
||||
# the 'revision' command, regardless of autogenerate
|
||||
# revision_environment = false
|
||||
|
||||
# set to 'true' to allow .pyc and .pyo files without
|
||||
# a source .py file to be detected as revisions in the
|
||||
# versions/ directory
|
||||
# sourceless = false
|
||||
|
||||
# version number format
|
||||
version_num_format = %04d
|
||||
|
||||
# version path separator; As mentioned above, this is the character used to split
|
||||
# version_locations. The default within new alembic.ini files is "os", which uses
|
||||
# os.pathsep. If this key is omitted entirely, it falls back to the legacy
|
||||
# behavior of splitting on spaces and/or commas.
|
||||
# Valid values for version_path_separator are:
|
||||
#
|
||||
# version_path_separator = :
|
||||
# version_path_separator = ;
|
||||
# version_path_separator = space
|
||||
version_path_separator = os
|
||||
|
||||
# set to 'true' to search source files recursively
|
||||
# in each "version_locations" directory
|
||||
# new in Alembic version 1.10
|
||||
# recursive_version_locations = false
|
||||
|
||||
# the output encoding used when revision files
|
||||
# are written from script.py.mako
|
||||
# output_encoding = utf-8
|
||||
|
||||
sqlalchemy.url = sqlite:///./data/wifi_densepose_fallback.db
|
||||
|
||||
|
||||
[post_write_hooks]
|
||||
# post_write_hooks defines scripts or Python functions that are run
|
||||
# on newly generated revision scripts. See the documentation for further
|
||||
# detail and examples
|
||||
|
||||
# format using "black" - use the console_scripts runner, against the "black" entrypoint
|
||||
# hooks = black
|
||||
# black.type = console_scripts
|
||||
# black.entrypoint = black
|
||||
# black.options = -l 79 REVISION_SCRIPT_FILENAME
|
||||
|
||||
# lint with attempts to fix using "ruff" - use the exec runner, execute a binary
|
||||
# hooks = ruff
|
||||
# ruff.type = exec
|
||||
# ruff.executable = %(here)s/.venv/bin/ruff
|
||||
# ruff.options = --fix REVISION_SCRIPT_FILENAME
|
||||
|
||||
# Logging configuration
|
||||
[loggers]
|
||||
keys = root,sqlalchemy,alembic
|
||||
|
||||
[handlers]
|
||||
keys = console
|
||||
|
||||
[formatters]
|
||||
keys = generic
|
||||
|
||||
[logger_root]
|
||||
level = WARN
|
||||
handlers = console
|
||||
qualname =
|
||||
|
||||
[logger_sqlalchemy]
|
||||
level = WARN
|
||||
handlers =
|
||||
qualname = sqlalchemy.engine
|
||||
|
||||
[logger_alembic]
|
||||
level = INFO
|
||||
handlers =
|
||||
qualname = alembic
|
||||
|
||||
[handler_console]
|
||||
class = StreamHandler
|
||||
args = (sys.stderr,)
|
||||
level = NOTSET
|
||||
formatter = generic
|
||||
|
||||
[formatter_generic]
|
||||
format = %(levelname)-5.5s [%(name)s] %(message)s
|
||||
datefmt = %H:%M:%S
|
||||
0
data/wifi_densepose_fallback.db
Normal file
0
data/wifi_densepose_fallback.db
Normal file
32
example.env
32
example.env
@@ -41,18 +41,40 @@ CORS_ORIGINS=* # Use specific origins in production: https://example.com,https:
|
||||
# =============================================================================
|
||||
|
||||
# Database connection (optional - defaults to SQLite in development)
|
||||
# DATABASE_URL=postgresql://user:password@localhost:5432/wifi_densepose
|
||||
# DATABASE_POOL_SIZE=10
|
||||
# DATABASE_MAX_OVERFLOW=20
|
||||
# For PostgreSQL (recommended for production):
|
||||
DATABASE_URL=postgresql://wifi_user:wifi_password@localhost:5432/wifi_densepose
|
||||
DATABASE_POOL_SIZE=10
|
||||
DATABASE_MAX_OVERFLOW=20
|
||||
|
||||
# Alternative: Individual database connection parameters
|
||||
# DB_HOST=localhost
|
||||
# DB_PORT=5432
|
||||
# DB_NAME=wifi_densepose
|
||||
# DB_USER=wifi_user
|
||||
# DB_PASSWORD=wifi_password
|
||||
|
||||
# Database failsafe settings
|
||||
ENABLE_DATABASE_FAILSAFE=true
|
||||
SQLITE_FALLBACK_PATH=./data/wifi_densepose_fallback.db
|
||||
|
||||
# =============================================================================
|
||||
# REDIS SETTINGS (Optional - for caching and rate limiting)
|
||||
# =============================================================================
|
||||
|
||||
# Redis connection (optional - defaults to localhost in development)
|
||||
# REDIS_URL=redis://localhost:6379/0
|
||||
REDIS_URL=redis://localhost:6379/0
|
||||
# REDIS_PASSWORD=your-redis-password
|
||||
# REDIS_DB=0
|
||||
REDIS_DB=0
|
||||
REDIS_ENABLED=true
|
||||
REDIS_REQUIRED=false
|
||||
ENABLE_REDIS_FAILSAFE=true
|
||||
|
||||
# Redis connection settings
|
||||
REDIS_HOST=localhost
|
||||
REDIS_PORT=6379
|
||||
REDIS_MAX_CONNECTIONS=10
|
||||
REDIS_SOCKET_TIMEOUT=5
|
||||
REDIS_CONNECT_TIMEOUT=5
|
||||
|
||||
# =============================================================================
|
||||
# HARDWARE SETTINGS
|
||||
|
||||
@@ -4,7 +4,7 @@ build-backend = "setuptools.build_meta"
|
||||
|
||||
[project]
|
||||
name = "wifi-densepose"
|
||||
version = "1.0.0"
|
||||
version = "1.1.0"
|
||||
description = "WiFi-based human pose estimation using CSI data and DensePose neural networks"
|
||||
readme = "README.md"
|
||||
license = "MIT"
|
||||
|
||||
@@ -21,6 +21,16 @@ python-jose[cryptography]>=3.3.0
|
||||
python-multipart>=0.0.6
|
||||
passlib[bcrypt]>=1.7.4
|
||||
|
||||
# Database dependencies
|
||||
sqlalchemy>=2.0.0
|
||||
asyncpg>=0.28.0
|
||||
aiosqlite>=0.19.0
|
||||
redis>=4.5.0
|
||||
|
||||
# CLI dependencies
|
||||
click>=8.0.0
|
||||
alembic>=1.10.0
|
||||
|
||||
# Hardware interface dependencies
|
||||
asyncio-mqtt>=0.11.0
|
||||
aiohttp>=3.8.0
|
||||
|
||||
@@ -29,7 +29,7 @@ Author: WiFi-DensePose Team
|
||||
License: MIT
|
||||
"""
|
||||
|
||||
__version__ = "1.0.0"
|
||||
__version__ = "1.1.0"
|
||||
__author__ = "WiFi-DensePose Team"
|
||||
__email__ = "team@wifi-densepose.com"
|
||||
__license__ = "MIT"
|
||||
|
||||
17
src/app.py
17
src/app.py
@@ -310,4 +310,19 @@ def setup_root_endpoints(app: FastAPI, settings: Settings):
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Error resetting services: {e}")
|
||||
return {"error": str(e)}
|
||||
return {"error": str(e)}
|
||||
|
||||
|
||||
# Create default app instance for uvicorn
|
||||
def get_app() -> FastAPI:
|
||||
"""Get the default application instance."""
|
||||
from src.config.settings import get_settings
|
||||
from src.services.orchestrator import ServiceOrchestrator
|
||||
|
||||
settings = get_settings()
|
||||
orchestrator = ServiceOrchestrator(settings)
|
||||
return create_app(settings, orchestrator)
|
||||
|
||||
|
||||
# Default app instance for uvicorn
|
||||
app = get_app()
|
||||
109
src/cli.py
109
src/cli.py
@@ -212,6 +212,7 @@ def init(ctx, url: Optional[str]):
|
||||
from src.database.connection import get_database_manager
|
||||
from alembic.config import Config
|
||||
from alembic import command
|
||||
import os
|
||||
|
||||
# Get settings
|
||||
settings = get_settings_with_config(ctx.obj.get('config_file'))
|
||||
@@ -228,10 +229,19 @@ def init(ctx, url: Optional[str]):
|
||||
|
||||
asyncio.run(init_db())
|
||||
|
||||
# Run migrations
|
||||
alembic_cfg = Config("alembic.ini")
|
||||
command.upgrade(alembic_cfg, "head")
|
||||
logger.info("Database migrations applied successfully")
|
||||
# Run migrations if alembic.ini exists
|
||||
alembic_ini_path = "alembic.ini"
|
||||
if os.path.exists(alembic_ini_path):
|
||||
try:
|
||||
alembic_cfg = Config(alembic_ini_path)
|
||||
# Set the database URL in the config
|
||||
alembic_cfg.set_main_option("sqlalchemy.url", settings.get_database_url())
|
||||
command.upgrade(alembic_cfg, "head")
|
||||
logger.info("Database migrations applied successfully")
|
||||
except Exception as migration_error:
|
||||
logger.warning(f"Migration failed, but database is initialized: {migration_error}")
|
||||
else:
|
||||
logger.info("No alembic.ini found, skipping migrations")
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to initialize database: {e}")
|
||||
@@ -493,6 +503,97 @@ def validate(ctx):
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
@config.command()
|
||||
@click.option(
|
||||
'--format',
|
||||
type=click.Choice(['text', 'json']),
|
||||
default='text',
|
||||
help='Output format (default: text)'
|
||||
)
|
||||
@click.pass_context
|
||||
def failsafe(ctx, format: str):
|
||||
"""Show failsafe status and configuration."""
|
||||
|
||||
try:
|
||||
import json
|
||||
from src.database.connection import get_database_manager
|
||||
|
||||
# Get settings
|
||||
settings = get_settings_with_config(ctx.obj.get('config_file'))
|
||||
|
||||
async def check_failsafe_status():
|
||||
db_manager = get_database_manager(settings)
|
||||
|
||||
# Initialize database to check current state
|
||||
try:
|
||||
await db_manager.initialize()
|
||||
except Exception as e:
|
||||
logger.warning(f"Database initialization failed: {e}")
|
||||
|
||||
# Collect failsafe status
|
||||
failsafe_status = {
|
||||
"database": {
|
||||
"failsafe_enabled": settings.enable_database_failsafe,
|
||||
"using_sqlite_fallback": db_manager.is_using_sqlite_fallback(),
|
||||
"sqlite_fallback_path": settings.sqlite_fallback_path,
|
||||
"primary_database_url": settings.get_database_url() if not db_manager.is_using_sqlite_fallback() else None,
|
||||
},
|
||||
"redis": {
|
||||
"failsafe_enabled": settings.enable_redis_failsafe,
|
||||
"redis_enabled": settings.redis_enabled,
|
||||
"redis_required": settings.redis_required,
|
||||
"redis_available": db_manager.is_redis_available(),
|
||||
"redis_url": settings.get_redis_url() if settings.redis_enabled else None,
|
||||
},
|
||||
"overall_status": "healthy"
|
||||
}
|
||||
|
||||
# Determine overall status
|
||||
if failsafe_status["database"]["using_sqlite_fallback"] or not failsafe_status["redis"]["redis_available"]:
|
||||
failsafe_status["overall_status"] = "degraded"
|
||||
|
||||
# Output results
|
||||
if format == 'json':
|
||||
click.echo(json.dumps(failsafe_status, indent=2))
|
||||
else:
|
||||
click.echo("=== Failsafe Status ===\n")
|
||||
|
||||
# Database status
|
||||
click.echo("Database:")
|
||||
if failsafe_status["database"]["using_sqlite_fallback"]:
|
||||
click.echo(" ⚠️ Using SQLite fallback database")
|
||||
click.echo(f" Path: {failsafe_status['database']['sqlite_fallback_path']}")
|
||||
else:
|
||||
click.echo(" ✓ Using primary database (PostgreSQL)")
|
||||
|
||||
click.echo(f" Failsafe enabled: {'Yes' if failsafe_status['database']['failsafe_enabled'] else 'No'}")
|
||||
|
||||
# Redis status
|
||||
click.echo("\nRedis:")
|
||||
if not failsafe_status["redis"]["redis_enabled"]:
|
||||
click.echo(" - Redis disabled")
|
||||
elif not failsafe_status["redis"]["redis_available"]:
|
||||
click.echo(" ⚠️ Redis unavailable (failsafe active)")
|
||||
else:
|
||||
click.echo(" ✓ Redis available")
|
||||
|
||||
click.echo(f" Failsafe enabled: {'Yes' if failsafe_status['redis']['failsafe_enabled'] else 'No'}")
|
||||
click.echo(f" Required: {'Yes' if failsafe_status['redis']['redis_required'] else 'No'}")
|
||||
|
||||
# Overall status
|
||||
status_icon = "✓" if failsafe_status["overall_status"] == "healthy" else "⚠️"
|
||||
click.echo(f"\nOverall Status: {status_icon} {failsafe_status['overall_status'].upper()}")
|
||||
|
||||
if failsafe_status["overall_status"] == "degraded":
|
||||
click.echo("\nNote: System is running in degraded mode using failsafe configurations.")
|
||||
|
||||
asyncio.run(check_failsafe_status())
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to check failsafe status: {e}")
|
||||
sys.exit(1)
|
||||
|
||||
|
||||
@cli.command()
|
||||
def version():
|
||||
"""Show version information."""
|
||||
|
||||
@@ -63,6 +63,15 @@ class Settings(BaseSettings):
|
||||
redis_enabled: bool = Field(default=True, description="Enable Redis")
|
||||
redis_host: str = Field(default="localhost", description="Redis host")
|
||||
redis_port: int = Field(default=6379, description="Redis port")
|
||||
redis_required: bool = Field(default=False, description="Require Redis connection (fail if unavailable)")
|
||||
redis_max_connections: int = Field(default=10, description="Maximum Redis connections")
|
||||
redis_socket_timeout: int = Field(default=5, description="Redis socket timeout in seconds")
|
||||
redis_connect_timeout: int = Field(default=5, description="Redis connection timeout in seconds")
|
||||
|
||||
# Failsafe settings
|
||||
enable_database_failsafe: bool = Field(default=True, description="Enable automatic SQLite failsafe when PostgreSQL unavailable")
|
||||
enable_redis_failsafe: bool = Field(default=True, description="Enable automatic Redis failsafe (disable when unavailable)")
|
||||
sqlite_fallback_path: str = Field(default="./data/wifi_densepose_fallback.db", description="SQLite fallback database path")
|
||||
|
||||
# Hardware settings
|
||||
wifi_interface: str = Field(default="wlan0", description="WiFi interface name")
|
||||
@@ -88,6 +97,7 @@ class Settings(BaseSettings):
|
||||
description="Log format"
|
||||
)
|
||||
log_file: Optional[str] = Field(default=None, description="Log file path")
|
||||
log_directory: str = Field(default="./logs", description="Log directory path")
|
||||
log_max_size: int = Field(default=10485760, description="Max log file size in bytes (10MB)")
|
||||
log_backup_count: int = Field(default=5, description="Number of log backup files")
|
||||
|
||||
@@ -103,6 +113,7 @@ class Settings(BaseSettings):
|
||||
data_storage_path: str = Field(default="./data", description="Data storage directory")
|
||||
model_storage_path: str = Field(default="./models", description="Model storage directory")
|
||||
temp_storage_path: str = Field(default="./temp", description="Temporary storage directory")
|
||||
backup_directory: str = Field(default="./backups", description="Backup storage directory")
|
||||
max_storage_size_gb: int = Field(default=100, description="Maximum storage size in GB")
|
||||
|
||||
# API settings
|
||||
@@ -241,8 +252,16 @@ class Settings(BaseSettings):
|
||||
if self.is_development:
|
||||
return f"sqlite:///{self.data_storage_path}/wifi_densepose.db"
|
||||
|
||||
# SQLite failsafe for production if enabled
|
||||
if self.enable_database_failsafe:
|
||||
return f"sqlite:///{self.sqlite_fallback_path}"
|
||||
|
||||
raise ValueError("Database URL must be configured for non-development environments")
|
||||
|
||||
def get_sqlite_fallback_url(self) -> str:
|
||||
"""Get SQLite fallback database URL."""
|
||||
return f"sqlite:///{self.sqlite_fallback_path}"
|
||||
|
||||
def get_redis_url(self) -> Optional[str]:
|
||||
"""Get Redis URL with fallback."""
|
||||
if not self.redis_enabled:
|
||||
@@ -334,6 +353,8 @@ class Settings(BaseSettings):
|
||||
self.data_storage_path,
|
||||
self.model_storage_path,
|
||||
self.temp_storage_path,
|
||||
self.log_directory,
|
||||
self.backup_directory,
|
||||
]
|
||||
|
||||
for directory in directories:
|
||||
|
||||
@@ -8,7 +8,7 @@ from typing import Optional, Dict, Any, AsyncGenerator
|
||||
from contextlib import asynccontextmanager
|
||||
from datetime import datetime
|
||||
|
||||
from sqlalchemy import create_engine, event, pool
|
||||
from sqlalchemy import create_engine, event, pool, text
|
||||
from sqlalchemy.ext.asyncio import create_async_engine, AsyncSession, async_sessionmaker
|
||||
from sqlalchemy.orm import sessionmaker, Session
|
||||
from sqlalchemy.pool import QueuePool, NullPool
|
||||
@@ -65,12 +65,35 @@ class DatabaseManager:
|
||||
raise DatabaseConnectionError(f"Database initialization failed: {e}")
|
||||
|
||||
async def _initialize_postgresql(self):
|
||||
"""Initialize PostgreSQL connections."""
|
||||
"""Initialize PostgreSQL connections with SQLite failsafe."""
|
||||
postgresql_failed = False
|
||||
|
||||
try:
|
||||
# Try PostgreSQL first
|
||||
await self._initialize_postgresql_primary()
|
||||
logger.info("PostgreSQL connections initialized")
|
||||
return
|
||||
except Exception as e:
|
||||
postgresql_failed = True
|
||||
logger.error(f"PostgreSQL initialization failed: {e}")
|
||||
|
||||
if not self.settings.enable_database_failsafe:
|
||||
raise DatabaseConnectionError(f"PostgreSQL connection failed and failsafe disabled: {e}")
|
||||
|
||||
logger.warning("Falling back to SQLite database")
|
||||
|
||||
# Fallback to SQLite if PostgreSQL failed and failsafe is enabled
|
||||
if postgresql_failed and self.settings.enable_database_failsafe:
|
||||
await self._initialize_sqlite_fallback()
|
||||
logger.info("SQLite fallback database initialized")
|
||||
|
||||
async def _initialize_postgresql_primary(self):
|
||||
"""Initialize primary PostgreSQL connections."""
|
||||
# Build database URL
|
||||
if self.settings.database_url:
|
||||
if self.settings.database_url and "postgresql" in self.settings.database_url:
|
||||
db_url = self.settings.database_url
|
||||
async_db_url = self.settings.database_url.replace("postgresql://", "postgresql+asyncpg://")
|
||||
else:
|
||||
elif self.settings.db_host and self.settings.db_name and self.settings.db_user:
|
||||
db_url = (
|
||||
f"postgresql://{self.settings.db_user}:{self.settings.db_password}"
|
||||
f"@{self.settings.db_host}:{self.settings.db_port}/{self.settings.db_name}"
|
||||
@@ -79,6 +102,8 @@ class DatabaseManager:
|
||||
f"postgresql+asyncpg://{self.settings.db_user}:{self.settings.db_password}"
|
||||
f"@{self.settings.db_host}:{self.settings.db_port}/{self.settings.db_name}"
|
||||
)
|
||||
else:
|
||||
raise ValueError("PostgreSQL connection parameters not configured")
|
||||
|
||||
# Create async engine (don't specify poolclass for async engines)
|
||||
self._async_engine = create_async_engine(
|
||||
@@ -122,11 +147,65 @@ class DatabaseManager:
|
||||
|
||||
# Test connections
|
||||
await self._test_postgresql_connection()
|
||||
|
||||
async def _initialize_sqlite_fallback(self):
|
||||
"""Initialize SQLite fallback database."""
|
||||
import os
|
||||
|
||||
logger.info("PostgreSQL connections initialized")
|
||||
# Ensure directory exists
|
||||
sqlite_path = self.settings.sqlite_fallback_path
|
||||
os.makedirs(os.path.dirname(sqlite_path), exist_ok=True)
|
||||
|
||||
# Build SQLite URLs
|
||||
db_url = f"sqlite:///{sqlite_path}"
|
||||
async_db_url = f"sqlite+aiosqlite:///{sqlite_path}"
|
||||
|
||||
# Create async engine for SQLite
|
||||
self._async_engine = create_async_engine(
|
||||
async_db_url,
|
||||
echo=self.settings.db_echo,
|
||||
future=True,
|
||||
)
|
||||
|
||||
# Create sync engine for SQLite
|
||||
self._sync_engine = create_engine(
|
||||
db_url,
|
||||
poolclass=NullPool, # SQLite doesn't need connection pooling
|
||||
echo=self.settings.db_echo,
|
||||
future=True,
|
||||
)
|
||||
|
||||
# Create session factories
|
||||
self._async_session_factory = async_sessionmaker(
|
||||
self._async_engine,
|
||||
class_=AsyncSession,
|
||||
expire_on_commit=False,
|
||||
)
|
||||
|
||||
self._sync_session_factory = sessionmaker(
|
||||
self._sync_engine,
|
||||
expire_on_commit=False,
|
||||
)
|
||||
|
||||
# Add connection event listeners
|
||||
self._setup_connection_events()
|
||||
|
||||
# Test SQLite connection
|
||||
await self._test_sqlite_connection()
|
||||
|
||||
async def _test_sqlite_connection(self):
|
||||
"""Test SQLite connection."""
|
||||
try:
|
||||
async with self._async_engine.begin() as conn:
|
||||
result = await conn.execute(text("SELECT 1"))
|
||||
result.fetchone() # Don't await this - fetchone() is not async
|
||||
logger.debug("SQLite connection test successful")
|
||||
except Exception as e:
|
||||
logger.error(f"SQLite connection test failed: {e}")
|
||||
raise DatabaseConnectionError(f"SQLite connection test failed: {e}")
|
||||
|
||||
async def _initialize_redis(self):
|
||||
"""Initialize Redis connection."""
|
||||
"""Initialize Redis connection with failsafe."""
|
||||
if not self.settings.redis_enabled:
|
||||
logger.info("Redis disabled, skipping initialization")
|
||||
return
|
||||
@@ -160,10 +239,15 @@ class DatabaseManager:
|
||||
|
||||
except Exception as e:
|
||||
logger.error(f"Failed to initialize Redis: {e}")
|
||||
|
||||
if self.settings.redis_required:
|
||||
raise
|
||||
raise DatabaseConnectionError(f"Redis connection failed and is required: {e}")
|
||||
elif self.settings.enable_redis_failsafe:
|
||||
logger.warning("Redis initialization failed, continuing without Redis (failsafe enabled)")
|
||||
self._redis_client = None
|
||||
else:
|
||||
logger.warning("Redis initialization failed but not required, continuing without Redis")
|
||||
self._redis_client = None
|
||||
|
||||
def _setup_connection_events(self):
|
||||
"""Setup database connection event listeners."""
|
||||
@@ -195,8 +279,8 @@ class DatabaseManager:
|
||||
"""Test PostgreSQL connection."""
|
||||
try:
|
||||
async with self._async_engine.begin() as conn:
|
||||
result = await conn.execute("SELECT 1")
|
||||
await result.fetchone()
|
||||
result = await conn.execute(text("SELECT 1"))
|
||||
result.fetchone() # Don't await this - fetchone() is not async
|
||||
logger.debug("PostgreSQL connection test successful")
|
||||
except Exception as e:
|
||||
logger.error(f"PostgreSQL connection test failed: {e}")
|
||||
@@ -265,31 +349,48 @@ class DatabaseManager:
|
||||
async def health_check(self) -> Dict[str, Any]:
|
||||
"""Perform database health check."""
|
||||
health_status = {
|
||||
"postgresql": {"status": "unknown", "details": {}},
|
||||
"database": {"status": "unknown", "details": {}},
|
||||
"redis": {"status": "unknown", "details": {}},
|
||||
"overall": "unknown"
|
||||
}
|
||||
|
||||
# Check PostgreSQL
|
||||
# Check Database (PostgreSQL or SQLite)
|
||||
try:
|
||||
start_time = datetime.utcnow()
|
||||
async with self.get_async_session() as session:
|
||||
result = await session.execute("SELECT 1")
|
||||
await result.fetchone()
|
||||
result = await session.execute(text("SELECT 1"))
|
||||
result.fetchone() # Don't await this - fetchone() is not async
|
||||
|
||||
response_time = (datetime.utcnow() - start_time).total_seconds()
|
||||
|
||||
health_status["postgresql"] = {
|
||||
"status": "healthy",
|
||||
"details": {
|
||||
"response_time_ms": round(response_time * 1000, 2),
|
||||
# Determine database type and status
|
||||
is_sqlite = self.is_using_sqlite_fallback()
|
||||
db_type = "sqlite_fallback" if is_sqlite else "postgresql"
|
||||
|
||||
details = {
|
||||
"type": db_type,
|
||||
"response_time_ms": round(response_time * 1000, 2),
|
||||
}
|
||||
|
||||
# Add pool info for PostgreSQL
|
||||
if not is_sqlite and hasattr(self._async_engine, 'pool'):
|
||||
details.update({
|
||||
"pool_size": self._async_engine.pool.size(),
|
||||
"checked_out": self._async_engine.pool.checkedout(),
|
||||
"overflow": self._async_engine.pool.overflow(),
|
||||
}
|
||||
})
|
||||
|
||||
# Add failsafe info
|
||||
if is_sqlite:
|
||||
details["failsafe_active"] = True
|
||||
details["fallback_path"] = self.settings.sqlite_fallback_path
|
||||
|
||||
health_status["database"] = {
|
||||
"status": "healthy",
|
||||
"details": details
|
||||
}
|
||||
except Exception as e:
|
||||
health_status["postgresql"] = {
|
||||
health_status["database"] = {
|
||||
"status": "unhealthy",
|
||||
"details": {"error": str(e)}
|
||||
}
|
||||
@@ -324,15 +425,22 @@ class DatabaseManager:
|
||||
}
|
||||
|
||||
# Determine overall status
|
||||
postgresql_healthy = health_status["postgresql"]["status"] == "healthy"
|
||||
database_healthy = health_status["database"]["status"] == "healthy"
|
||||
redis_healthy = (
|
||||
health_status["redis"]["status"] in ["healthy", "disabled"] or
|
||||
not self.settings.redis_required
|
||||
)
|
||||
|
||||
if postgresql_healthy and redis_healthy:
|
||||
health_status["overall"] = "healthy"
|
||||
elif postgresql_healthy:
|
||||
# Check if using failsafe modes
|
||||
using_sqlite_fallback = self.is_using_sqlite_fallback()
|
||||
redis_unavailable = not self.is_redis_available() and self.settings.redis_enabled
|
||||
|
||||
if database_healthy and redis_healthy:
|
||||
if using_sqlite_fallback or redis_unavailable:
|
||||
health_status["overall"] = "degraded" # Working but using failsafe
|
||||
else:
|
||||
health_status["overall"] = "healthy"
|
||||
elif database_healthy:
|
||||
health_status["overall"] = "degraded"
|
||||
else:
|
||||
health_status["overall"] = "unhealthy"
|
||||
@@ -394,6 +502,36 @@ class DatabaseManager:
|
||||
self._initialized = False
|
||||
logger.info("Database connections closed")
|
||||
|
||||
def is_using_sqlite_fallback(self) -> bool:
|
||||
"""Check if currently using SQLite fallback database."""
|
||||
if not self._async_engine:
|
||||
return False
|
||||
return "sqlite" in str(self._async_engine.url)
|
||||
|
||||
def is_redis_available(self) -> bool:
|
||||
"""Check if Redis is available."""
|
||||
return self._redis_client is not None
|
||||
|
||||
async def test_connection(self) -> bool:
|
||||
"""Test database connection for CLI validation."""
|
||||
try:
|
||||
if not self._initialized:
|
||||
await self.initialize()
|
||||
|
||||
# Test database connection (PostgreSQL or SQLite)
|
||||
async with self.get_async_session() as session:
|
||||
result = await session.execute(text("SELECT 1"))
|
||||
result.fetchone() # Don't await this - fetchone() is not async
|
||||
|
||||
# Test Redis connection if enabled
|
||||
if self._redis_client:
|
||||
await self._redis_client.ping()
|
||||
|
||||
return True
|
||||
except Exception as e:
|
||||
logger.error(f"Database connection test failed: {e}")
|
||||
return False
|
||||
|
||||
async def reset_connections(self):
|
||||
"""Reset all database connections."""
|
||||
logger.info("Resetting database connections")
|
||||
@@ -438,8 +576,8 @@ class DatabaseHealthCheck:
|
||||
try:
|
||||
start_time = datetime.utcnow()
|
||||
async with self.db_manager.get_async_session() as session:
|
||||
result = await session.execute("SELECT version()")
|
||||
version = (await result.fetchone())[0]
|
||||
result = await session.execute(text("SELECT version()"))
|
||||
version = result.fetchone()[0] # Don't await this - fetchone() is not async
|
||||
|
||||
response_time = (datetime.utcnow() - start_time).total_seconds()
|
||||
|
||||
|
||||
109
src/database/migrations/env.py
Normal file
109
src/database/migrations/env.py
Normal file
@@ -0,0 +1,109 @@
|
||||
"""Alembic environment configuration for WiFi-DensePose API."""
|
||||
|
||||
import asyncio
|
||||
import os
|
||||
import sys
|
||||
from logging.config import fileConfig
|
||||
from pathlib import Path
|
||||
|
||||
from sqlalchemy import pool
|
||||
from sqlalchemy.engine import Connection
|
||||
from sqlalchemy.ext.asyncio import async_engine_from_config
|
||||
|
||||
from alembic import context
|
||||
|
||||
# Add the project root to the Python path
|
||||
project_root = Path(__file__).parent.parent.parent.parent
|
||||
sys.path.insert(0, str(project_root))
|
||||
|
||||
# Import the models and settings
|
||||
from src.database.models import Base
|
||||
from src.config.settings import get_settings
|
||||
|
||||
# this is the Alembic Config object, which provides
|
||||
# access to the values within the .ini file in use.
|
||||
config = context.config
|
||||
|
||||
# Interpret the config file for Python logging.
|
||||
# This line sets up loggers basically.
|
||||
if config.config_file_name is not None:
|
||||
fileConfig(config.config_file_name)
|
||||
|
||||
# add your model's MetaData object here
|
||||
# for 'autogenerate' support
|
||||
target_metadata = Base.metadata
|
||||
|
||||
# other values from the config, defined by the needs of env.py,
|
||||
# can be acquired:
|
||||
# my_important_option = config.get_main_option("my_important_option")
|
||||
# ... etc.
|
||||
|
||||
|
||||
def get_database_url():
|
||||
"""Get the database URL from settings."""
|
||||
try:
|
||||
settings = get_settings()
|
||||
return settings.get_database_url()
|
||||
except Exception:
|
||||
# Fallback to SQLite if settings can't be loaded
|
||||
return "sqlite:///./data/wifi_densepose_fallback.db"
|
||||
|
||||
|
||||
def run_migrations_offline() -> None:
|
||||
"""Run migrations in 'offline' mode.
|
||||
|
||||
This configures the context with just a URL
|
||||
and not an Engine, though an Engine is acceptable
|
||||
here as well. By skipping the Engine creation
|
||||
we don't even need a DBAPI to be available.
|
||||
|
||||
Calls to context.execute() here emit the given string to the
|
||||
script output.
|
||||
|
||||
"""
|
||||
url = get_database_url()
|
||||
context.configure(
|
||||
url=url,
|
||||
target_metadata=target_metadata,
|
||||
literal_binds=True,
|
||||
dialect_opts={"paramstyle": "named"},
|
||||
)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
|
||||
def do_run_migrations(connection: Connection) -> None:
|
||||
"""Run migrations with a database connection."""
|
||||
context.configure(connection=connection, target_metadata=target_metadata)
|
||||
|
||||
with context.begin_transaction():
|
||||
context.run_migrations()
|
||||
|
||||
|
||||
async def run_async_migrations() -> None:
|
||||
"""Run migrations in async mode."""
|
||||
configuration = config.get_section(config.config_ini_section)
|
||||
configuration["sqlalchemy.url"] = get_database_url()
|
||||
|
||||
connectable = async_engine_from_config(
|
||||
configuration,
|
||||
prefix="sqlalchemy.",
|
||||
poolclass=pool.NullPool,
|
||||
)
|
||||
|
||||
async with connectable.connect() as connection:
|
||||
await connection.run_sync(do_run_migrations)
|
||||
|
||||
await connectable.dispose()
|
||||
|
||||
|
||||
def run_migrations_online() -> None:
|
||||
"""Run migrations in 'online' mode."""
|
||||
asyncio.run(run_async_migrations())
|
||||
|
||||
|
||||
if context.is_offline_mode():
|
||||
run_migrations_offline()
|
||||
else:
|
||||
run_migrations_online()
|
||||
26
src/database/migrations/script.py.mako
Normal file
26
src/database/migrations/script.py.mako
Normal file
@@ -0,0 +1,26 @@
|
||||
"""${message}
|
||||
|
||||
Revision ID: ${up_revision}
|
||||
Revises: ${down_revision | comma,n}
|
||||
Create Date: ${create_date}
|
||||
|
||||
"""
|
||||
from alembic import op
|
||||
import sqlalchemy as sa
|
||||
${imports if imports else ""}
|
||||
|
||||
# revision identifiers, used by Alembic.
|
||||
revision = ${repr(up_revision)}
|
||||
down_revision = ${repr(down_revision)}
|
||||
branch_labels = ${repr(branch_labels)}
|
||||
depends_on = ${repr(depends_on)}
|
||||
|
||||
|
||||
def upgrade() -> None:
|
||||
"""Upgrade database schema."""
|
||||
${upgrades if upgrades else "pass"}
|
||||
|
||||
|
||||
def downgrade() -> None:
|
||||
"""Downgrade database schema."""
|
||||
${downgrades if downgrades else "pass"}
|
||||
@@ -160,7 +160,7 @@ class Session(Base, UUIDMixin, TimestampMixin):
|
||||
|
||||
# Metadata
|
||||
tags = Column(ARRAY(String), nullable=True)
|
||||
metadata = Column(JSON, nullable=True)
|
||||
meta_data = Column(JSON, nullable=True)
|
||||
|
||||
# Statistics
|
||||
total_frames = Column(Integer, default=0, nullable=False)
|
||||
@@ -191,7 +191,7 @@ class Session(Base, UUIDMixin, TimestampMixin):
|
||||
"config": self.config,
|
||||
"device_id": str(self.device_id),
|
||||
"tags": self.tags,
|
||||
"metadata": self.metadata,
|
||||
"metadata": self.meta_data,
|
||||
"total_frames": self.total_frames,
|
||||
"processed_frames": self.processed_frames,
|
||||
"error_count": self.error_count,
|
||||
@@ -240,7 +240,7 @@ class CSIData(Base, UUIDMixin, TimestampMixin):
|
||||
is_valid = Column(Boolean, default=True, nullable=False)
|
||||
|
||||
# Metadata
|
||||
metadata = Column(JSON, nullable=True)
|
||||
meta_data = Column(JSON, nullable=True)
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
@@ -278,7 +278,7 @@ class CSIData(Base, UUIDMixin, TimestampMixin):
|
||||
"processed_at": self.processed_at.isoformat() if self.processed_at else None,
|
||||
"quality_score": self.quality_score,
|
||||
"is_valid": self.is_valid,
|
||||
"metadata": self.metadata,
|
||||
"metadata": self.meta_data,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
}
|
||||
@@ -317,7 +317,7 @@ class PoseDetection(Base, UUIDMixin, TimestampMixin):
|
||||
is_valid = Column(Boolean, default=True, nullable=False)
|
||||
|
||||
# Metadata
|
||||
metadata = Column(JSON, nullable=True)
|
||||
meta_data = Column(JSON, nullable=True)
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
@@ -350,7 +350,7 @@ class PoseDetection(Base, UUIDMixin, TimestampMixin):
|
||||
"image_quality": self.image_quality,
|
||||
"pose_quality": self.pose_quality,
|
||||
"is_valid": self.is_valid,
|
||||
"metadata": self.metadata,
|
||||
"metadata": self.meta_data,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
}
|
||||
@@ -378,7 +378,7 @@ class SystemMetric(Base, UUIDMixin, TimestampMixin):
|
||||
|
||||
# Metadata
|
||||
description = Column(Text, nullable=True)
|
||||
metadata = Column(JSON, nullable=True)
|
||||
meta_data = Column(JSON, nullable=True)
|
||||
|
||||
# Constraints and indexes
|
||||
__table_args__ = (
|
||||
@@ -402,7 +402,7 @@ class SystemMetric(Base, UUIDMixin, TimestampMixin):
|
||||
"source": self.source,
|
||||
"component": self.component,
|
||||
"description": self.description,
|
||||
"metadata": self.metadata,
|
||||
"metadata": self.meta_data,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
}
|
||||
@@ -437,7 +437,7 @@ class AuditLog(Base, UUIDMixin, TimestampMixin):
|
||||
error_message = Column(Text, nullable=True)
|
||||
|
||||
# Metadata
|
||||
metadata = Column(JSON, nullable=True)
|
||||
meta_data = Column(JSON, nullable=True)
|
||||
tags = Column(ARRAY(String), nullable=True)
|
||||
|
||||
# Constraints and indexes
|
||||
@@ -467,7 +467,7 @@ class AuditLog(Base, UUIDMixin, TimestampMixin):
|
||||
"changes": self.changes,
|
||||
"success": self.success,
|
||||
"error_message": self.error_message,
|
||||
"metadata": self.metadata,
|
||||
"metadata": self.meta_data,
|
||||
"tags": self.tags,
|
||||
"created_at": self.created_at.isoformat() if self.created_at else None,
|
||||
"updated_at": self.updated_at.isoformat() if self.updated_at else None,
|
||||
|
||||
@@ -128,9 +128,8 @@ class HardwareService:
|
||||
mock_mode=self.settings.mock_hardware
|
||||
)
|
||||
|
||||
# Connect to router
|
||||
if not self.settings.mock_hardware:
|
||||
await router_interface.connect()
|
||||
# Connect to router (always connect, even in mock mode)
|
||||
await router_interface.connect()
|
||||
|
||||
self.router_interfaces[router_id] = router_interface
|
||||
self.logger.info(f"Router interface initialized: {router_id}")
|
||||
|
||||
@@ -58,7 +58,7 @@ class MonitoringTask:
|
||||
source=metric_data.get("source", self.name),
|
||||
component=metric_data.get("component"),
|
||||
description=metric_data.get("description"),
|
||||
metadata=metric_data.get("metadata"),
|
||||
meta_data=metric_data.get("metadata"),
|
||||
)
|
||||
session.add(metric)
|
||||
|
||||
|
||||
Reference in New Issue
Block a user