perf: fix API CPU saturation at 400+ devices
Root cause: stale NATS JetStream consumers accumulated across API restarts, leaving 13+ consumers fighting over messages in a single Python async event loop (100% CPU).

Fixes:
- Add performance indexes on devices(tenant_id, hostname), devices(tenant_id, status), and key_access_log(tenant_id, created_at); drops devices seq scans from 402k to 6 per interval
- Remove the redundant ORDER BY t.name from the fleet summary SQL (the tenant-name sort is done client-side, so it was forcing a cross-table sort)
- Bump the NATS memory limit from 128MB to 256MB (was at 118/128)
- Increase the dev poll interval from 60s to 120s for the 400+ device fleet

The stream purge + restart brought API CPU from 100% to 0.3%.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
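The composite indexes listed above can be sketched as follows. This is an illustration only, assuming plausible schemas for the tables named in the commit (the real database is presumably Postgres, given the seq_scans statistic; SQLite is used here just to show how a (tenant_id, hostname) index turns a per-tenant scan into an index search — index names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tenants (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE devices (
    id INTEGER PRIMARY KEY,
    tenant_id INTEGER REFERENCES tenants(id),
    hostname TEXT,
    status TEXT
);
CREATE TABLE key_access_log (
    id INTEGER PRIMARY KEY,
    tenant_id INTEGER,
    created_at TEXT
);

-- Composite indexes matching the filter/sort patterns from the commit
CREATE INDEX idx_devices_tenant_hostname ON devices(tenant_id, hostname);
CREATE INDEX idx_devices_tenant_status   ON devices(tenant_id, status);
CREATE INDEX idx_kal_tenant_created      ON key_access_log(tenant_id, created_at);
""")

# With the composite index, a per-tenant listing ordered by hostname
# becomes an index search instead of a full table scan.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT hostname FROM devices "
    "WHERE tenant_id = ? ORDER BY hostname", (1,)
).fetchall()
print(plan)
```

The index leads with tenant_id (the equality predicate) and follows with the sorted column, so the ORDER BY is satisfied directly from the index order.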
```diff
@@ -363,7 +363,7 @@ _FLEET_SUMMARY_SQL = """
     d.tenant_id, t.name AS tenant_name
 FROM devices d
 JOIN tenants t ON d.tenant_id = t.id
-ORDER BY t.name, d.hostname
+ORDER BY d.hostname
 """
 
 
```
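Per the commit message, the tenant-name ordering moves to the client, so the SQL only needs to order by hostname. A minimal sketch of that client-side sort (row shape and values are hypothetical, matching the columns in the diff):

```python
# Rows as the simplified query might return them:
# (tenant_id, tenant_name, hostname), ordered by hostname only.
rows = [
    (2, "beta", "db-01"),
    (1, "acme", "web-01"),
    (1, "acme", "web-02"),
]

# The cross-table sort by tenant name now happens in the client,
# sparing the database a join-wide sort on t.name.
rows.sort(key=lambda r: (r[1], r[2]))
print(rows)
```

Since the client re-sorts by tenant name anyway, keeping ORDER BY t.name in the SQL only forced the database to sort across the join for an ordering that was about to be discarded.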