enhance utilisation of postgres features
continuous-integration/drone/push Build is passing

2026-04-20 10:19:27 +10:00
parent 98e92a8264
commit 8ccf5a7009
28 changed files with 2836 additions and 422 deletions
+29 -2
@@ -113,6 +113,17 @@ The import command:
- Auto-creates runtime tables (hourly/daily/monthly snapshot tables and cache tables) when needed.
- Replaces existing data in imported Postgres tables during the run.
To run a one-time canonical aggregation benchmark (Go vs SQL cores) and then exit, use:
```shell
vctp -settings /path/to/vctp.yml -benchmark-aggregations -benchmark-runs 3
```
The benchmark command:
- Uses canonical cache sources (`vm_hourly_stats` for daily, `vm_daily_rollup` for monthly).
- Runs Go and SQL aggregation cores for the latest available daily/monthly windows.
- Writes results to startup logs and exits without changing scheduled defaults.
## Database Configuration
By default the app uses SQLite and creates/opens `db.sqlite3`.
@@ -205,8 +216,12 @@ Hourly and daily snapshot table retention can be configured in the settings file
## Runtime Environment Flags
These optional flags are read from the process environment (for example via `/etc/default/vctp`):
- `DAILY_AGG_GO`: set to `1` (default in `src/vctp.default`) to force Go for manual daily runs.
- `DAILY_AGG_SQL`: set to `1` to force legacy SQL fallback for manual daily runs.
- `MONTHLY_AGG_GO`: set to `1` (default in `src/vctp.default`) to force Go for manual monthly runs.
- `MONTHLY_AGG_SQL`: set to `1` to force legacy SQL fallback for manual monthly runs.
Scheduled aggregation engine selection is controlled by YAML (`settings.scheduled_aggregation_engine`), not these env vars.
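A minimal sketch of how a manual run could consume these flags. Only the flag names come from the list above; the helper names and the exact precedence between the `*_SQL` override and the `*_GO` default are illustrative assumptions:
```go
package main

import (
	"fmt"
	"os"
)

// enabled reports whether a /etc/default/vctp style flag is set to "1".
func enabled(name string) bool {
	return os.Getenv(name) == "1"
}

// manualDailyEngine picks the engine for a manual daily run. Giving the
// explicit SQL override priority over the Go default is an assumption.
func manualDailyEngine() string {
	if enabled("DAILY_AGG_SQL") {
		return "sql"
	}
	return "go" // DAILY_AGG_GO=1 ships as the default in src/vctp.default
}

func main() {
	fmt.Println("manual daily engine:", manualDailyEngine())
}
```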
## Authentication and Authorization
Authentication uses LDAP bind + JWT bearer tokens.
@@ -246,6 +261,16 @@ Debug endpoints:
- `/debug/pprof/*` handlers are only registered when `settings.enable_pprof: true`.
- When enabled, they require an authenticated `admin` token.
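The pattern described above (no registration at all unless the flag is set, plus an auth gate) can be illustrated with a standard `net/http` sketch; `requireAdmin` is a hypothetical stand-in for vCTP's JWT admin check:
```go
package main

import (
	"net/http"
	"net/http/pprof"
)

// requireAdmin is a hypothetical placeholder for the real bearer-token
// admin check; it only illustrates where the gate sits.
func requireAdmin(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		// ... validate the JWT and require the admin role here ...
		next.ServeHTTP(w, r)
	})
}

func registerPprof(mux *http.ServeMux, enablePprof bool) {
	if !enablePprof {
		return // handlers are never registered, so the routes simply 404
	}
	mux.Handle("/debug/pprof/", requireAdmin(http.HandlerFunc(pprof.Index)))
	mux.Handle("/debug/pprof/profile", requireAdmin(http.HandlerFunc(pprof.Profile)))
}

func main() {
	mux := http.NewServeMux()
	registerPprof(mux, false) // mirrors settings.enable_pprof
	_ = http.ListenAndServe("127.0.0.1:8080", mux)
}
```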
## Airgapped Static Assets
vCTP is safe for airgapped operation without internet/CDN dependencies for UI/docs assets:
- CSS, JS, and favicon assets are bundled into the binary via Go `embed` and served from local routes (`/assets/*`, `/favicon*`).
- Swagger UI is vendored under `server/router/swagger-ui-dist` and served locally from `/swagger/*`.
- Swagger spec is served locally from `/swagger.json` (`validatorUrl` is disabled in the initializer).
- Static responses include cache headers. In release builds, versioned assets are served with long-lived cache headers and immutable caching.
This means runtime access to external asset hosts is not required.
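A generic sketch of the embed-and-serve pattern behind this (the directory layout and names are illustrative, not vCTP's actual structure):
```go
package main

import (
	"embed"
	"io/fs"
	"net/http"
)

// The assets directory is compiled into the binary, so no CDN or internet
// access is needed at runtime. The "assets" path here is illustrative.
//
//go:embed assets
var assetFS embed.FS

func main() {
	sub, err := fs.Sub(assetFS, "assets")
	if err != nil {
		panic(err)
	}
	// Everything under /assets/* is answered from the binary itself.
	http.Handle("/assets/", http.StripPrefix("/assets/", http.FileServer(http.FS(sub))))
	_ = http.ListenAndServe(":8080", nil)
}
```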
## Credential Encryption Lifecycle
At startup, vCTP resolves `settings.vcenter_password` using this order:
@@ -343,6 +368,8 @@ Snapshots:
- `title_cell` (optional): explicit title cell; if omitted, derived from `pivot_range`
- `settings.hourly_snapshot_retry_seconds`: interval for retrying failed hourly snapshots (default: 300 seconds)
- `settings.hourly_snapshot_max_retries`: maximum retry attempts per vCenter snapshot (default: 3)
- `settings.postgres_vm_hourly_partitioning_enabled`: Postgres-only toggle to migrate/manage `vm_hourly_stats` as monthly range partitions (default: `false`)
- `settings.scheduled_aggregation_engine`: scheduled daily/monthly engine (`go` default, `sql` for canonical SQL rollout)
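How these two keys might be wired at startup, assuming the helper added in this commit lives in a `db` package; the `Settings` struct and `applyDBSettings` are illustrative assumptions:
```go
package main

import "vctp/db" // assumed home of SetVmHourlyStatsPostgresPartitioningEnabled

// Settings mirrors the two new YAML keys; the struct is an illustrative
// assumption, not vCTP's real settings type.
type Settings struct {
	PostgresVmHourlyPartitioningEnabled bool   `yaml:"postgres_vm_hourly_partitioning_enabled"`
	ScheduledAggregationEngine          string `yaml:"scheduled_aggregation_engine"` // "go" (default) or "sql"
}

func applyDBSettings(cfg Settings) {
	// Flip the process-wide toggle before any table-ensure code runs.
	db.SetVmHourlyStatsPostgresPartitioningEnabled(cfg.PostgresVmHourlyPartitioningEnabled)
}

func main() {
	applyDBSettings(Settings{PostgresVmHourlyPartitioningEnabled: true})
}
```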
Filters/chargeback:
- `settings.tenants_to_filter`: list of tenant name patterns to exclude
+5 -5
@@ -8,14 +8,14 @@ templ Header() {
<meta name="viewport" content="width=device-width, initial-scale=1.0"/> <meta name="viewport" content="width=device-width, initial-scale=1.0"/>
<meta name="description" content="vCTP dashboard and API endpoint"/> <meta name="description" content="vCTP dashboard and API endpoint"/>
<meta name="color-scheme" content="light"/> <meta name="color-scheme" content="light"/>
<meta name="theme-color" content="#1b61c9"/> <meta name="theme-color" content="#195fc8"/>
<title>vCTP API</title> <title>vCTP API</title>
<link rel="icon" href="/favicon.ico"/> <link rel="icon" href={ "/favicon.ico?v=" + version.Value }/>
<link rel="icon" type="image/png" sizes="16x16" href="/favicon-16x16.png"/> <link rel="icon" type="image/png" sizes="16x16" href={ "/favicon-16x16.png?v=" + version.Value }/>
<link rel="icon" type="image/png" sizes="32x32" href="/favicon-32x32.png"/> <link rel="icon" type="image/png" sizes="32x32" href={ "/favicon-32x32.png?v=" + version.Value }/>
<script src="/assets/js/htmx@v2.0.2.min.js"></script> <script src="/assets/js/htmx@v2.0.2.min.js"></script>
<script src={ "/assets/js/web3-charts.js?v=" + version.Value }></script> <script src={ "/assets/js/web3-charts.js?v=" + version.Value }></script>
<link href={ "/assets/css/output@" + version.Value + ".css" } rel="stylesheet"/> <link href={ "/assets/css/output@" + version.Value + ".css" } rel="stylesheet"/>
<link href="/assets/css/web3.css" rel="stylesheet"/> <link href={ "/assets/css/web3.css?v=" + version.Value } rel="stylesheet"/>
</head> </head>
} }
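The `?v=` query strings and the `output@<version>.css` filename above exist so asset URLs change on every release, which makes aggressive client caching safe. A sketch of the matching response-header side; the middleware shape and policy are illustrative, not vCTP's shipped handler:
```go
package main

import "net/http"

// immutableWhenVersioned sends long-lived cache headers only for requests
// that carry a cache-busting version query, matching the links emitted by
// the template above. The exact policy here is an illustrative assumption.
func immutableWhenVersioned(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Query().Get("v") != "" {
			w.Header().Set("Cache-Control", "public, max-age=31536000, immutable")
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	assets := http.FileServer(http.Dir("assets"))
	http.Handle("/assets/", http.StripPrefix("/assets/", immutableWhenVersioned(assets)))
	_ = http.ListenAndServe(":8080", nil)
}
```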
+60 -8
@@ -31,33 +31,85 @@ func Header() templ.Component {
templ_7745c5c3_Var1 = templ.NopComponent
}
ctx = templ.ClearChildren(ctx)
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 1, "<head><meta charset=\"UTF-8\"><meta name=\"viewport\" content=\"width=device-width, initial-scale=1.0\"><meta name=\"description\" content=\"vCTP dashboard and API endpoint\"><meta name=\"color-scheme\" content=\"light\"><meta name=\"theme-color\" content=\"#195fc8\"><title>vCTP API</title><link rel=\"icon\" href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var2 templ.SafeURL
templ_7745c5c3_Var2, templ_7745c5c3_Err = templ.JoinURLErrs("/favicon.ico?v=" + version.Value)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 13, Col: 59}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var2))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 2, "\"><link rel=\"icon\" type=\"image/png\" sizes=\"16x16\" href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var3 templ.SafeURL
templ_7745c5c3_Var3, templ_7745c5c3_Err = templ.JoinURLErrs("/favicon-16x16.png?v=" + version.Value)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 14, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var3))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 3, "\"><link rel=\"icon\" type=\"image/png\" sizes=\"32x32\" href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var4 templ.SafeURL
templ_7745c5c3_Var4, templ_7745c5c3_Err = templ.JoinURLErrs("/favicon-32x32.png?v=" + version.Value)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 15, Col: 96}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var4))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 4, "\"><script src=\"/assets/js/htmx@v2.0.2.min.js\"></script><script src=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var5 string
templ_7745c5c3_Var5, templ_7745c5c3_Err = templ.JoinStringErrs("/assets/js/web3-charts.js?v=" + version.Value)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 17, Col: 62}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var5))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 5, "\"></script><link href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var6 templ.SafeURL
templ_7745c5c3_Var6, templ_7745c5c3_Err = templ.JoinURLErrs("/assets/css/output@" + version.Value + ".css")
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 18, Col: 61}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var6))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 6, "\" rel=\"stylesheet\"><link href=\"")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
var templ_7745c5c3_Var7 templ.SafeURL
templ_7745c5c3_Var7, templ_7745c5c3_Err = templ.JoinURLErrs("/assets/css/web3.css?v=" + version.Value)
if templ_7745c5c3_Err != nil {
return templ.Error{Err: templ_7745c5c3_Err, FileName: `core/header.templ`, Line: 19, Col: 56}
}
_, templ_7745c5c3_Err = templ_7745c5c3_Buffer.WriteString(templ.EscapeString(templ_7745c5c3_Var7))
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 7, "\" rel=\"stylesheet\"></head>")
if templ_7745c5c3_Err != nil {
return templ_7745c5c3_Err
}
+17 -3
@@ -12,6 +12,20 @@ type SegmentedLink struct {
Class string
}
func actionLinkClass(class string) string {
if class == "" {
return "web2-button"
}
return class
}
func segmentedLinkClass(class string) string {
if class == "" {
return "web3-button"
}
return class
}
templ PageHeader(pill string, title string, subtitle string, actions []ActionLink) {
<div class="web2-page-head-row">
<div class="web2-head-copy">
@@ -26,7 +40,7 @@ templ PageHeader(pill string, title string, subtitle string, actions []ActionLin
if len(actions) > 0 {
<div class="web2-actions">
for _, action := range actions {
<a class={ actionLinkClass(action.Class) } href={ action.Href }>{ action.Label }</a>
}
</div>
}
@@ -37,7 +51,7 @@ templ SegmentedActions(actions []SegmentedLink) {
if len(actions) > 0 {
<div class="web3-button-group">
for _, action := range actions {
<a class={ segmentedLinkClass(action.Class) } href={ action.Href }>{ action.Label }</a>
}
</div>
}
@@ -45,7 +59,7 @@ templ SegmentedActions(actions []SegmentedLink) {
templ SectionHead(title string, badge string) {
<div class="web2-section-head">
<h2 class="web2-section-title">{ title }</h2>
if badge != "" {
<span class="web2-badge">{ badge }</span>
}
+17 -6
@@ -1,6 +1,9 @@
package views
import (
"strings"
"vctp/components/core"
)
type BuildInfo struct {
BuildTime string
@@ -8,6 +11,14 @@ type BuildInfo struct {
GoVersion string
}
func truncateSHA(sha string) string {
trimmed := strings.TrimSpace(sha)
if len(trimmed) <= 14 {
return trimmed
}
return trimmed[:14] + "..."
}
templ Index(info BuildInfo) {
<!DOCTYPE html>
<html lang="en">
@@ -41,15 +52,15 @@ templ Index(info BuildInfo) {
</div>
<div class="web2-card">
<p class="web2-kpi-label">SHA1 Version</p>
<p class="web2-kpi-value web2-kpi-value-mono web2-kpi-truncate" title={ info.SHA1Ver }>{ truncateSHA(info.SHA1Ver) }</p>
</div>
<div class="web2-card">
<p class="web2-kpi-label">Go Runtime</p>
<p class="web2-kpi-value">{ info.GoVersion }</p>
</div>
</section>
<section class="web2-index-sections">
<div class="web2-card web2-card-overview web2-index-overview">
<h2 class="mb-2">Overview</h2>
<p class="web2-page-subtitle">
vCTP is a vSphere Chargeback Tracking Platform.
@@ -61,7 +72,7 @@ templ Index(info BuildInfo) {
Use <code class="web2-code">/api/auth/me</code> to inspect active claims and roles during integration and diagnostics.
</p>
</div>
<div class="web2-card web2-card-featured web2-index-featured">
<h2 class="mb-2">Snapshots and Reports</h2>
<div class="web2-paragraphs web2-page-subtitle">
<p>Hourly snapshots capture inventory per vCenter (concurrency via <code class="web2-code">hourly_snapshot_concurrency</code>), then daily and monthly summaries are derived from those snapshots.</p>
@@ -76,7 +87,7 @@ templ Index(info BuildInfo) {
<p>Monthly aggregation reports include a Daily Totals sheet with full-day interval labels (YYYY-MM-DD to YYYY-MM-DD) and prorated totals.</p>
</div>
</div>
<div class="web2-card web2-index-wide">
<h2 class="mb-2">Prorating and Aggregation</h2>
<div class="web2-paragraphs web2-page-subtitle">
<p><code class="web2-code">SamplesPresent</code> is the count of snapshots in which the VM appears; <code class="web2-code">TotalSamples</code> is the count of unique snapshot times for that vCenter/day.</p>
File diff suppressed because one or more lines are too long
+256 -15
@@ -10,6 +10,7 @@ import (
"strconv" "strconv"
"strings" "strings"
"sync" "sync"
"sync/atomic"
"time" "time"
"vctp/db/queries" "vctp/db/queries"
@@ -49,6 +50,21 @@ type loggerContextKey struct{}
var ensureOnceRegistry sync.Map
var vmHourlyStatsPostgresPartitioningEnabled atomic.Bool
func init() {
vmHourlyStatsPostgresPartitioningEnabled.Store(false)
}
// SetVmHourlyStatsPostgresPartitioningEnabled toggles postgres monthly partitioning for vm_hourly_stats.
func SetVmHourlyStatsPostgresPartitioningEnabled(enabled bool) {
vmHourlyStatsPostgresPartitioningEnabled.Store(enabled)
}
func vmHourlyStatsPartitioningEnabled() bool {
return vmHourlyStatsPostgresPartitioningEnabled.Load()
}
// WithLoggerContext stores a logger in context for downstream DB helper logging.
func WithLoggerContext(ctx context.Context, logger *slog.Logger) context.Context {
if ctx == nil {
@@ -700,7 +716,18 @@ func CheckpointDatabase(ctx context.Context, dbConn *sqlx.DB) (string, error) {
// EnsureVmHourlyStats creates the shared per-snapshot cache table used by Go aggregations.
func EnsureVmHourlyStats(ctx context.Context, dbConn *sqlx.DB) error {
driver := strings.ToLower(dbConn.DriverName())
if (driver == "pgx" || driver == "postgres") && vmHourlyStatsPartitioningEnabled() {
return ensureOncePerDB(dbConn, "vm_hourly_stats_partitioned", func() error {
if err := ensureVmHourlyStatsPartitionedPostgres(ctx, dbConn); err != nil {
return err
}
return ensureVmHourlyStatsIndexes(ctx, dbConn, driver)
})
}
return ensureOncePerDB(dbConn, "vm_hourly_stats_unpartitioned", func() error {
ddl := `
CREATE TABLE IF NOT EXISTS vm_hourly_stats (
"SnapshotTime" BIGINT NOT NULL,
"Vcenter" TEXT NOT NULL,
@@ -721,27 +748,241 @@ CREATE TABLE IF NOT EXISTS vm_hourly_stats (
"SrmPlaceholder" TEXT, "SrmPlaceholder" TEXT,
PRIMARY KEY ("Vcenter","VmId","SnapshotTime") PRIMARY KEY ("Vcenter","VmId","SnapshotTime")
);` );`
return ensureOncePerDB(dbConn, "vm_hourly_stats", func() error {
if _, err := execLog(ctx, dbConn, ddl); err != nil { if _, err := execLog(ctx, dbConn, ddl); err != nil {
return err return err
} }
indexQueries := []string{ return ensureVmHourlyStatsIndexes(ctx, dbConn, driver)
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vmuuid_time_idx ON vm_hourly_stats ("VmUuid","SnapshotTime")`, })
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vmid_time_idx ON vm_hourly_stats ("VmId","SnapshotTime")`, }
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_name_time_idx ON vm_hourly_stats (lower("Name"),"SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_snapshottime_idx ON vm_hourly_stats ("SnapshotTime")`, // EnsureVmHourlyStatsPartitionForSnapshot creates the month partition for snapshotUnix when postgres partitioning is enabled.
func EnsureVmHourlyStatsPartitionForSnapshot(ctx context.Context, dbConn *sqlx.DB, snapshotUnix int64) error {
driver := strings.ToLower(dbConn.DriverName())
if driver != "pgx" && driver != "postgres" {
return nil
}
if !vmHourlyStatsPartitioningEnabled() || snapshotUnix <= 0 {
return nil
}
snapshotMonth := time.Unix(snapshotUnix, 0).UTC().Format("200601")
key := "vm_hourly_stats_partition_month_" + snapshotMonth
return ensureOncePerDB(dbConn, key, func() error {
partitioned, err := isVmHourlyStatsPartitioned(ctx, dbConn)
if err != nil {
return err
} }
failedIndexes := 0 if !partitioned {
for _, q := range indexQueries { return nil
if _, err := execLog(ctx, dbConn, q); err != nil { }
failedIndexes++ return ensureVmHourlyStatsMonthPartition(ctx, dbConn, time.Unix(snapshotUnix, 0).UTC())
})
}
func ensureVmHourlyStatsIndexes(ctx context.Context, dbConn *sqlx.DB, driver string) error {
indexQueries := []string{
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_snapshottime_idx ON vm_hourly_stats ("SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vmuuid_time_idx ON vm_hourly_stats ("VmUuid","SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vmid_time_idx ON vm_hourly_stats ("VmId","SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_name_time_idx ON vm_hourly_stats (lower("Name"),"SnapshotTime")`,
}
if driver == "pgx" || driver == "postgres" {
indexQueries = append(indexQueries,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vcenter_time_idx ON vm_hourly_stats ("Vcenter","SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vcenter_vmid_time_idx ON vm_hourly_stats ("Vcenter","VmId","SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vcenter_vmuuid_time_idx ON vm_hourly_stats ("Vcenter","VmUuid","SnapshotTime")`,
`CREATE INDEX IF NOT EXISTS vm_hourly_stats_vcenter_name_time_idx ON vm_hourly_stats ("Vcenter",lower("Name"),"SnapshotTime")`,
)
}
failedIndexes := 0
for _, q := range indexQueries {
if _, err := execLog(ctx, dbConn, q); err != nil {
failedIndexes++
}
}
if failedIndexes > 0 {
slog.Warn("vm_hourly_stats index ensure incomplete; continuing without retries until restart", "failed_indexes", failedIndexes)
}
return nil
}
func ensureVmHourlyStatsPartitionedPostgres(ctx context.Context, dbConn *sqlx.DB) error {
partitioned, err := isVmHourlyStatsPartitioned(ctx, dbConn)
if err != nil {
return err
}
if !partitioned {
exists := TableExists(ctx, dbConn, "vm_hourly_stats")
if exists {
if err := migrateVmHourlyStatsToPartitioned(ctx, dbConn); err != nil {
return err
}
} else {
if _, err := execLog(ctx, dbConn, vmHourlyStatsPartitionedDDL()); err != nil {
return err
} }
} }
if failedIndexes > 0 { }
slog.Warn("vm_hourly_stats index ensure incomplete; continuing without retries until restart", "failed_indexes", failedIndexes)
} if err := ensureVmHourlyStatsDefaultPartition(ctx, dbConn); err != nil {
return err
}
if err := ensureVmHourlyStatsPartitionsForExistingData(ctx, dbConn); err != nil {
return err
}
nowUTC := time.Now().UTC()
if err := ensureVmHourlyStatsMonthPartition(ctx, dbConn, nowUTC); err != nil {
return err
}
if err := ensureVmHourlyStatsMonthPartition(ctx, dbConn, nowUTC.AddDate(0, 1, 0)); err != nil {
return err
}
return nil
}
func vmHourlyStatsPartitionedDDL() string {
return `
CREATE TABLE IF NOT EXISTS vm_hourly_stats (
"SnapshotTime" BIGINT NOT NULL,
"Vcenter" TEXT NOT NULL,
"VmId" TEXT,
"VmUuid" TEXT,
"Name" TEXT,
"CreationTime" BIGINT,
"DeletionTime" BIGINT,
"ResourcePool" TEXT,
"Datacenter" TEXT,
"Cluster" TEXT,
"Folder" TEXT,
"ProvisionedDisk" REAL,
"VcpuCount" BIGINT,
"RamGB" BIGINT,
"IsTemplate" TEXT,
"PoweredOn" TEXT,
"SrmPlaceholder" TEXT,
CONSTRAINT vm_hourly_stats_partitioned_pk PRIMARY KEY ("Vcenter","VmId","SnapshotTime")
) PARTITION BY RANGE ("SnapshotTime");
`
}
func isVmHourlyStatsPartitioned(ctx context.Context, dbConn *sqlx.DB) (bool, error) {
var count int
err := getLog(ctx, dbConn, &count, `
SELECT COUNT(1)
FROM pg_partitioned_table pt
JOIN pg_class c ON c.oid = pt.partrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
AND c.relname = 'vm_hourly_stats'
`)
if err != nil {
return false, err
}
return count > 0, nil
}
func migrateVmHourlyStatsToPartitioned(ctx context.Context, dbConn *sqlx.DB) error {
tx, err := dbConn.BeginTxx(ctx, nil)
if err != nil {
return err
}
defer tx.Rollback()
if _, err := tx.ExecContext(ctx, `LOCK TABLE vm_hourly_stats IN ACCESS EXCLUSIVE MODE`); err != nil {
return err
}
var partitionedCount int
if err := tx.GetContext(ctx, &partitionedCount, `
SELECT COUNT(1)
FROM pg_partitioned_table pt
JOIN pg_class c ON c.oid = pt.partrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
AND c.relname = 'vm_hourly_stats'
`); err != nil {
return err
}
if partitionedCount > 0 {
return tx.Commit()
}
backupTable := fmt.Sprintf("vm_hourly_stats_unpartitioned_%d", time.Now().UTC().UnixNano())
if _, err := SafeTableName(backupTable); err != nil {
return err
}
if _, err := tx.ExecContext(ctx, fmt.Sprintf(`ALTER TABLE vm_hourly_stats RENAME TO %s`, backupTable)); err != nil {
return err
}
if _, err := tx.ExecContext(ctx, vmHourlyStatsPartitionedDDL()); err != nil {
return err
}
if _, err := tx.ExecContext(ctx, `CREATE TABLE IF NOT EXISTS vm_hourly_stats_default PARTITION OF vm_hourly_stats DEFAULT`); err != nil {
return err
}
if _, err := tx.ExecContext(ctx, fmt.Sprintf(`INSERT INTO vm_hourly_stats SELECT * FROM %s`, backupTable)); err != nil {
return err
}
if _, err := tx.ExecContext(ctx, fmt.Sprintf(`DROP TABLE %s`, backupTable)); err != nil {
return err
}
return tx.Commit()
}
func ensureVmHourlyStatsDefaultPartition(ctx context.Context, dbConn *sqlx.DB) error {
_, err := execLog(ctx, dbConn, `CREATE TABLE IF NOT EXISTS vm_hourly_stats_default PARTITION OF vm_hourly_stats DEFAULT`)
return err
}
func ensureVmHourlyStatsPartitionsForExistingData(ctx context.Context, dbConn *sqlx.DB) error {
var bounds struct {
Min sql.NullInt64 `db:"min_snapshot"`
Max sql.NullInt64 `db:"max_snapshot"`
}
if err := getLog(ctx, dbConn, &bounds, `
SELECT MIN("SnapshotTime") AS min_snapshot, MAX("SnapshotTime") AS max_snapshot
FROM vm_hourly_stats
`); err != nil {
return err
}
if !bounds.Min.Valid || !bounds.Max.Valid {
return nil
}
start := monthStartUTC(time.Unix(bounds.Min.Int64, 0).UTC())
end := monthStartUTC(time.Unix(bounds.Max.Int64, 0).UTC())
guard := 0
for m := start; !m.After(end); m = m.AddDate(0, 1, 0) {
if err := ensureVmHourlyStatsMonthPartition(ctx, dbConn, m); err != nil {
return err
}
guard++
if guard > 240 {
return fmt.Errorf("vm_hourly_stats partition range guard exceeded while creating existing-data partitions")
}
}
return nil
}
func ensureVmHourlyStatsMonthPartition(ctx context.Context, dbConn *sqlx.DB, month time.Time) error {
start := monthStartUTC(month.UTC())
end := start.AddDate(0, 1, 0)
partitionName := fmt.Sprintf("vm_hourly_stats_%s", start.Format("200601"))
if _, err := SafeTableName(partitionName); err != nil {
return err
}
query := fmt.Sprintf(
`CREATE TABLE IF NOT EXISTS %s PARTITION OF vm_hourly_stats FOR VALUES FROM (%d) TO (%d)`,
partitionName,
start.Unix(),
end.Unix(),
)
_, err := execLog(ctx, dbConn, query)
return err
}
func monthStartUTC(t time.Time) time.Time {
return time.Date(t.Year(), t.Month(), 1, 0, 0, 0, 0, time.UTC)
}
// EnsureVmLifecycleCache creates an upsert cache for first/last seen VM info.
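To make the partition lifecycle concrete, a hypothetical caller in the same package. Only `EnsureVmHourlyStats` and `EnsureVmHourlyStatsPartitionForSnapshot` are the real helpers above; `insertSnapshotRows` and the package wiring are assumptions:
```go
package db // assumed package of the helpers above

import (
	"context"

	"github.com/jmoiron/sqlx"
)

// writeHourlySnapshot sketches the intended call order: ensure the table
// (partitioned or not), ensure the month partition for this snapshot, then
// insert. insertSnapshotRows is a placeholder, not a real function.
func writeHourlySnapshot(ctx context.Context, dbConn *sqlx.DB, snapshotUnix int64) error {
	if err := EnsureVmHourlyStats(ctx, dbConn); err != nil {
		return err
	}
	// With partitioning enabled this creates e.g. vm_hourly_stats_202604
	// covering [month start, next month start) in unix seconds; otherwise
	// rows for a brand-new month would land in vm_hourly_stats_default.
	if err := EnsureVmHourlyStatsPartitionForSnapshot(ctx, dbConn, snapshotUnix); err != nil {
		return err
	}
	return insertSnapshotRows(ctx, dbConn, snapshotUnix)
}
```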
+89
@@ -0,0 +1,89 @@
# Design System Inspired by Airtable
## 1. Visual Theme & Atmosphere
Airtable's website is a clean, enterprise-friendly platform that communicates "sophisticated simplicity" through a white canvas with deep navy text (`#181d26`) and Airtable Blue (`#1b61c9`) as the primary interactive accent. The Haas font family (display + text variants) creates a Swiss-precision typography system with positive letter-spacing throughout.
**Key Characteristics:**
- White canvas with deep navy text (`#181d26`)
- Airtable Blue (`#1b61c9`) as primary CTA and link color
- Haas + Haas Groot Disp dual font system
- Positive letter-spacing on body text (0.08px–0.28px)
- 12px radius buttons, 16px–32px for cards
- Multi-layer blue-tinted shadow: `rgba(45,127,249,0.28) 0px 1px 3px`
- Semantic theme tokens: `--theme_*` CSS variable naming
## 2. Color Palette & Roles
### Primary
- **Deep Navy** (`#181d26`): Primary text
- **Airtable Blue** (`#1b61c9`): CTA buttons, links
- **White** (`#ffffff`): Primary surface
- **Spotlight** (`rgba(249,252,255,0.97)`): `--theme_button-text-spotlight`
### Semantic
- **Success Green** (`#006400`): `--theme_success-text`
- **Weak Text** (`rgba(4,14,32,0.69)`): `--theme_text-weak`
- **Secondary Active** (`rgba(7,12,20,0.82)`): `--theme_button-text-secondary-active`
### Neutral
- **Dark Gray** (`#333333`): Secondary text
- **Mid Blue** (`#254fad`): Link/accent blue variant
- **Border** (`#e0e2e6`): Card borders
- **Light Surface** (`#f8fafc`): Subtle surface
### Shadows
- **Blue-tinted** (`rgba(0,0,0,0.32) 0px 0px 1px, rgba(0,0,0,0.08) 0px 0px 2px, rgba(45,127,249,0.28) 0px 1px 3px, rgba(0,0,0,0.06) 0px 0px 0px 0.5px inset`)
- **Soft** (`rgba(15,48,106,0.05) 0px 0px 20px`)
## 3. Typography Rules
### Font Families
- **Primary**: `Haas`, fallbacks: `-apple-system, system-ui, Segoe UI, Roboto`
- **Display**: `Haas Groot Disp`, fallback: `Haas`
### Hierarchy
| Role | Font | Size | Weight | Line Height | Letter Spacing |
|------|------|------|--------|-------------|----------------|
| Display Hero | Haas | 48px | 400 | 1.15 | normal |
| Display Bold | Haas Groot Disp | 48px | 900 | 1.50 | normal |
| Section Heading | Haas | 40px | 400 | 1.25 | normal |
| Sub-heading | Haas | 32px | 400–500 | 1.15–1.25 | normal |
| Card Title | Haas | 24px | 400 | 1.20–1.30 | 0.12px |
| Feature | Haas | 20px | 400 | 1.25–1.50 | 0.1px |
| Body | Haas | 18px | 400 | 1.35 | 0.18px |
| Body Medium | Haas | 16px | 500 | 1.30 | 0.08–0.16px |
| Button | Haas | 16px | 500 | 1.25–1.30 | 0.08px |
| Caption | Haas | 14px | 400–500 | 1.25–1.35 | 0.07–0.28px |
## 4. Component Stylings
### Buttons
- **Primary Blue**: `#1b61c9`, white text, 16px 24px padding, 12px radius
- **White**: white bg, `#181d26` text, 12px radius, 1px border white
- **Cookie Consent**: `#1b61c9` bg, 2px radius (sharp)
### Cards: `1px solid #e0e2e6`, 16px–24px radius
### Inputs: Standard Haas styling
## 5. Layout
- Spacing: 1–48px (8px base)
- Radius: 2px (small), 12px (buttons), 16px (cards), 24px (sections), 32px (large), 50% (circles)
## 6. Depth
- Blue-tinted multi-layer shadow system
- Soft ambient: `rgba(15,48,106,0.05) 0px 0px 20px`
## 7. Do's and Don'ts
### Do: Use Airtable Blue for CTAs, Haas with positive tracking, 12px radius buttons
### Don't: Skip positive letter-spacing, use heavy shadows
## 8. Responsive Behavior
Breakpoints: 425–1664px (23 breakpoints)
## 9. Agent Prompt Guide
- Text: Deep Navy (`#181d26`)
- CTA: Airtable Blue (`#1b61c9`)
- Background: White (`#ffffff`)
- Border: `#e0e2e6`
+279 -58
@@ -1,89 +1,310 @@
# Design System Inspired by Cursor
## 1. Visual Theme & Atmosphere
Cursor's website is a study in warm minimalism meets code-editor elegance. The entire experience is built on a warm off-white canvas (`#f2f1ed`) with dark warm-brown text (`#26251e`) -- not pure black, not neutral gray, but a deeply warm near-black with a yellowish undertone that evokes old paper, ink, and craft. This warmth permeates every surface: backgrounds lean toward cream (`#e6e5e0`, `#ebeae5`), borders dissolve into transparent warm overlays using `oklab` color space, and even the error state (`#cf2d56`) carries warmth rather than clinical red. The result feels more like a premium print publication than a tech website.
The custom CursorGothic font is the typographic signature -- a gothic sans-serif with aggressive negative letter-spacing at display sizes (-2.16px at 72px) that creates a compressed, engineered feel. As a secondary voice, the jjannon serif font (with OpenType `"cswh"` contextual swash alternates) provides literary counterpoint for body copy and editorial passages. The monospace voice comes from berkeleyMono, a refined coding font that connects the marketing site to Cursor's core identity as a code editor. This three-font system (gothic display, serif body, mono code) gives Cursor one of the most typographically rich palettes in developer tooling.
The border system is particularly distinctive -- Cursor uses `oklab()` color space for border colors, applying warm brown at various alpha levels (0.1, 0.2, 0.55) to create borders that feel organic rather than mechanical. The signature border color `oklab(0.263084 -0.00230259 0.0124794 / 0.1)` is not a simple rgba value but a perceptually uniform color that maintains visual consistency across different backgrounds.
**Key Characteristics:**
- CursorGothic with aggressive negative letter-spacing (-2.16px at 72px, -0.72px at 36px) for compressed display headings
- jjannon serif for body text with OpenType `"cswh"` (contextual swash alternates)
- berkeleyMono for code and technical labels
- Warm off-white background (`#f2f1ed`) instead of pure white -- the entire system is warm-shifted
- Primary text color `#26251e` (warm near-black with yellow undertone)
- Accent orange `#f54e00` for brand highlight and links
- oklab-space borders at various alpha levels for perceptually uniform edge treatment
- Pill-shaped elements with extreme radius (33.5M px, effectively full-pill)
- 8px base spacing system with fine-grained sub-8px increments (1.5px, 2px, 2.5px, 3px, 4px, 5px, 6px)
- Any box containing text content (cards, panels, table shells, callouts) must use a white background (`#ffffff`) for readability and contrast.
## 2. Color Palette & Roles
### Primary
- **Cursor Dark** (`#26251e`): Primary text, headings, dark UI surfaces. A warm near-black with distinct yellow-brown undertone -- the defining color of the system.
- **Cursor Cream** (`#f2f1ed`): Page background, primary surface. Not white but a warm cream that sets the entire warm tone.
- **Cursor Light** (`#e6e5e0`): Secondary surface, button backgrounds, card fills. A slightly warmer, slightly darker cream.
- **Pure White** (`#ffffff`): Used sparingly for maximum contrast elements and specific surface highlights.
- **True Black** (`#000000`): Minimal use, specific code/console contexts.
### Accent
- **Cursor Orange** (`#f54e00`): Brand accent, `--color-accent`. A vibrant red-orange used for primary CTAs, active links, and brand moments. Warm and urgent.
- **Gold** (`#c08532`): Secondary accent, warm gold for premium or highlighted contexts.
### Semantic
- **Error** (`#cf2d56`): `--color-error`. A warm crimson-rose rather than cold red.
- **Success** (`#1f8a65`): `--color-success`. A muted teal-green, warm-shifted.
### Timeline / Feature Colors
- **Thinking** (`#dfa88f`): Warm peach for "thinking" state in AI timeline.
- **Grep** (`#9fc9a2`): Soft sage green for search/grep operations.
- **Read** (`#9fbbe0`): Soft blue for file reading operations.
- **Edit** (`#c0a8dd`): Soft lavender for editing operations.
### Surface Scale
- **Surface 100** (`#f7f7f4`): Lightest button/card surface, barely tinted.
- **Surface 200** (`#f2f1ed`): Primary page background.
- **Surface 300** (`#ebeae5`): Button default background, subtle emphasis.
- **Surface 400** (`#e6e5e0`): Card backgrounds, secondary surfaces.
- **Surface 500** (`#e1e0db`): Tertiary button background, deeper emphasis.
### Border Colors
- **Border Primary** (`oklab(0.263084 -0.00230259 0.0124794 / 0.1)`): Standard border, 10% warm brown in oklab space.
- **Border Medium** (`oklab(0.263084 -0.00230259 0.0124794 / 0.2)`): Emphasized border, 20% warm brown.
- **Border Strong** (`rgba(38, 37, 30, 0.55)`): Strong borders, table rules.
- **Border Solid** (`#26251e`): Full-opacity dark border for maximum contrast.
- **Border Light** (`#f2f1ed`): Light border matching page background.
### Shadows & Depth
- **Card Shadow** (`rgba(0,0,0,0.14) 0px 28px 70px, rgba(0,0,0,0.1) 0px 14px 32px, oklab(0.263084 -0.00230259 0.0124794 / 0.1) 0px 0px 0px 1px`): Heavy elevated card with warm oklab border ring.
- **Ambient Shadow** (`rgba(0,0,0,0.02) 0px 0px 16px, rgba(0,0,0,0.008) 0px 0px 8px`): Subtle ambient glow for floating elements.
## 3. Typography Rules
### Font Family
- **Display/Headlines**: `CursorGothic`, with fallbacks: `CursorGothic Fallback, system-ui, Helvetica Neue, Helvetica, Arial`
- **Body/Editorial**: `jjannon`, with fallbacks: `Iowan Old Style, Palatino Linotype, URW Palladio L, P052, ui-serif, Georgia, Cambria, Times New Roman, Times`
- **Code/Technical**: `berkeleyMono`, with fallbacks: `ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, Liberation Mono, Courier New`
- **UI/System**: `system-ui`, with fallbacks: `-apple-system, Segoe UI, Helvetica Neue, Arial`
- **Icons**: `CursorIcons16` (icon font at 14px and 12px)
- **OpenType Features**: `"cswh"` on jjannon body text, `"ss09"` on CursorGothic buttons/captions
### Hierarchy
| Role | Font | Size | Weight | Line Height | Letter Spacing | Notes |
|------|------|------|--------|-------------|----------------|-------|
| Display Hero | CursorGothic | 72px (4.50rem) | 400 | 1.10 (tight) | -2.16px | Maximum compression, hero statements |
| Section Heading | CursorGothic | 36px (2.25rem) | 400 | 1.20 (tight) | -0.72px | Feature sections, CTA headlines |
| Sub-heading | CursorGothic | 26px (1.63rem) | 400 | 1.25 (tight) | -0.325px | Card headings, sub-sections |
| Title Small | CursorGothic | 22px (1.38rem) | 400 | 1.30 (tight) | -0.11px | Smaller titles, list headings |
| Body Serif | jjannon | 19.2px (1.20rem) | 500 | 1.50 | normal | Editorial body with `"cswh"` |
| Body Serif SM | jjannon | 17.28px (1.08rem) | 400 | 1.35 | normal | Standard body text, descriptions |
| Body Sans | CursorGothic | 16px (1.00rem) | 400 | 1.50 | normal/0.08px | UI body text |
| Button Label | CursorGothic | 14px (0.88rem) | 400 | 1.00 (tight) | normal | Primary button text |
| Button Caption | CursorGothic | 14px (0.88rem) | 400 | 1.50 | 0.14px | Secondary button with `"ss09"` |
| Caption | CursorGothic | 11px (0.69rem) | 400-500 | 1.50 | normal | Small captions, metadata |
| System Heading | system-ui | 20px (1.25rem) | 700 | 1.55 | normal | System UI headings |
| System Caption | system-ui | 13px (0.81rem) | 500-600 | 1.33 | normal | System UI labels |
| System Micro | system-ui | 11px (0.69rem) | 500 | 1.27 (tight) | 0.048px | Uppercase micro labels |
| Mono Body | berkeleyMono | 12px (0.75rem) | 400 | 1.67 (relaxed) | normal | Code blocks |
| Mono Small | berkeleyMono | 11px (0.69rem) | 400 | 1.33 | -0.275px | Inline code, terminal |
| Lato Heading | Lato | 16px (1.00rem) | 600 | 1.33 | normal | Lato section headings |
| Lato Caption | Lato | 14px (0.88rem) | 400-600 | 1.33 | normal | Lato captions |
| Lato Micro | Lato | 12px (0.75rem) | 400-600 | 1.27 (tight) | 0.053px | Lato small labels |
### Principles
- **Gothic compression for impact**: CursorGothic at display sizes uses -2.16px letter-spacing at 72px, progressively relaxing: -0.72px at 36px, -0.325px at 26px, -0.11px at 22px, normal at 16px and below. The tracking creates a sense of precision engineering.
- **Serif for soul**: jjannon provides literary warmth. The `"cswh"` feature adds contextual swash alternates that give body text a calligraphic quality.
- **Three typographic voices**: Gothic (display/UI), serif (editorial/body), mono (code/technical). Each serves a distinct communication purpose.
- **Weight restraint**: CursorGothic uses weight 400 almost exclusively, relying on size and tracking for hierarchy rather than weight. System-ui components use 500-700 for functional emphasis.
## 4. Component Stylings
### Buttons
**Primary (Warm Surface)**
- Background: `#ebeae5` (Surface 300)
- Text: `#26251e` (Cursor Dark)
- Padding: 10px 12px 10px 14px
- Radius: 8px
- Outline: none
- Hover: text shifts to `var(--color-error)` (`#cf2d56`)
- Focus shadow: `rgba(0,0,0,0.1) 0px 4px 12px`
- Use: Primary actions, main CTAs
**Secondary Pill**
- Background: `#e6e5e0` (Surface 400)
- Text: `oklab(0.263 / 0.6)` (60% warm brown)
- Padding: 3px 8px
- Radius: full pill (33.5M px)
- Hover: text shifts to `var(--color-error)`
- Use: Tags, filters, secondary actions
**Tertiary Pill**
- Background: `#e1e0db` (Surface 500)
- Text: `oklab(0.263 / 0.6)` (60% warm brown)
- Radius: full pill
- Use: Active filter state, selected tags
**Ghost (Transparent)**
- Background: `rgba(38, 37, 30, 0.06)` (6% warm brown)
- Text: `rgba(38, 37, 30, 0.55)` (55% warm brown)
- Padding: 6px 12px
- Use: Tertiary actions, dismiss buttons
**Light Surface**
- Background: `#f7f7f4` (Surface 100) or `#f2f1ed` (Surface 200)
- Text: `#26251e` or `oklab(0.263 / 0.9)` (90%)
- Padding: 0px 8px 1px 12px
- Use: Dropdown triggers, subtle interactive elements
### Cards & Containers
- Background: `#ffffff` for any text-bearing card or panel
- Border: `1px solid oklab(0.263 / 0.1)` (warm brown at 10%)
- Radius: 8px (standard), 4px (compact), 10px (featured)
- Shadow: `rgba(0,0,0,0.14) 0px 28px 70px, rgba(0,0,0,0.1) 0px 14px 32px` for elevated cards
- Hover: shadow intensification
### Inputs & Forms
- Background: transparent or surface
- Text: `#26251e`
- Padding: 8px 8px 6px (textarea)
- Border: `1px solid oklab(0.263 / 0.1)`
- Focus: border shifts to `oklab(0.263 / 0.2)` or accent orange
### Navigation
- Clean horizontal nav on warm cream background
- Cursor logotype left-aligned (~96x24px)
- Links: 14px CursorGothic or system-ui, weight 500
- CTA button: warm surface with Cursor Dark text
- Tab navigation: bottom border `1px solid oklab(0.263 / 0.1)` with active tab differentiation
### Image Treatment
- Code editor screenshots with `1px solid oklab(0.263 / 0.1)` border
- Rounded corners: 8px standard
- AI chat/timeline screenshots dominate feature sections
- Warm gradient or solid cream backgrounds behind hero images
### Distinctive Components
**AI Timeline**
- Vertical timeline showing AI operations: thinking (peach), grep (sage), read (blue), edit (lavender)
- Each step uses its semantic color with matching text
- Connected with vertical lines
- Core visual metaphor for Cursor's AI-first coding experience
**Code Editor Previews**
- Dark code editor screenshots with warm cream border frame
- berkeleyMono for code text
- Syntax highlighting using timeline colors
**Pricing Cards**
- Warm surface backgrounds with bordered containers
- Feature lists using jjannon serif for readability
- CTA buttons with accent orange or primary dark styling
## 5. Layout Principles
### Spacing System
- Base unit: 8px
- Fine scale: 1.5px, 2px, 2.5px, 3px, 4px, 5px, 6px (sub-8px for micro-adjustments)
- Standard scale: 8px, 10px, 12px, 14px (derived from extraction)
- Extended scale (inferred): 16px, 24px, 32px, 48px, 64px, 96px
- Notable: fine-grained sub-8px increments for precise icon/text alignment
### Grid & Container
- Max content width: approximately 1200px
- Hero: centered single-column with generous top padding (80-120px)
- Feature sections: 2-3 column grids for cards and features
- Full-width sections with warm cream or slightly darker backgrounds
- Sidebar layouts for documentation and settings pages
### Whitespace Philosophy
- **Warm negative space**: The cream background means whitespace has warmth and texture, unlike cold white minimalism. Large empty areas feel cozy rather than clinical.
- **Compressed text, open layout**: Aggressive negative letter-spacing on CursorGothic headlines is balanced by generous surrounding margins. Text is dense; space around it breathes.
- **Section variation**: Alternating surface tones (cream → lighter cream → cream) create subtle section differentiation without harsh boundaries.
### Border Radius Scale
- Micro (1.5px): Fine detail elements
- Small (2px): Inline elements, code spans
- Medium (3px): Small containers, inline badges
- Standard (4px): Cards, images, compact buttons
- Comfortable (8px): Primary buttons, cards, menus
- Featured (10px): Larger containers, featured cards
- Full Pill (33.5M px / 9999px): Pill buttons, tags, badges
## 6. Depth & Elevation
| Level | Treatment | Use |
|-------|-----------|-----|
| Flat (Level 0) | No shadow | Page background, text blocks |
| Border Ring (Level 1) | `oklab(0.263 / 0.1) 0px 0px 0px 1px` | Standard card/container border (warm oklab) |
| Border Medium (Level 1b) | `oklab(0.263 / 0.2) 0px 0px 0px 1px` | Emphasized borders, active states |
| Ambient (Level 2) | `rgba(0,0,0,0.02) 0px 0px 16px, rgba(0,0,0,0.008) 0px 0px 8px` | Floating elements, subtle glow |
| Elevated Card (Level 3) | `rgba(0,0,0,0.14) 0px 28px 70px, rgba(0,0,0,0.1) 0px 14px 32px, oklab ring` | Modals, popovers, elevated cards |
| Focus | `rgba(0,0,0,0.1) 0px 4px 12px` on button focus | Interactive focus feedback |
**Shadow Philosophy**: Cursor's depth system is built around two ideas. First, borders use perceptually uniform oklab color space rather than rgba, ensuring warm brown borders look consistent across different background tones. Second, elevation shadows use dramatically large blur values (28px, 70px) with moderate opacity (0.14, 0.1), creating a diffused, atmospheric lift rather than hard-edged drop shadows. Cards don't feel like they float above the page -- they feel like the page has gently opened a space for them.
### Decorative Depth
- Warm cream surface variations create subtle tonal depth without shadows
- oklab borders at 10% and 20% create a spectrum of edge definition
- No harsh divider lines -- section separation through background tone shifts and spacing
## 7. Interaction & Motion
### Hover States
- Buttons: text color shifts to `--color-error` (`#cf2d56`) on hover -- a distinctive warm crimson that signals interactivity
- Links: color shift to accent orange (`#f54e00`) or underline decoration with `rgba(38, 37, 30, 0.4)`
- Cards: shadow intensification on hover (ambient → elevated)
### Focus States
- Shadow-based focus: `rgba(0,0,0,0.1) 0px 4px 12px` for depth-based focus indication
- Border focus: `oklab(0.263 / 0.2)` (20% border) for input/form focus
- Consistent warm tone in all focus states -- no cold blue focus rings
### Transitions
- Color transitions: 150ms ease for text/background color changes
- Shadow transitions: 200ms ease for elevation changes
- Transform: subtle scale or translate for interactive feedback
## 8. Responsive Behavior
### Breakpoints
| Name | Width | Key Changes |
|------|-------|-------------|
| Mobile | <600px | Single column, reduced padding, stacked navigation |
| Tablet Small | 600-768px | 2-column grids begin |
| Tablet | 768-900px | Expanded card grids, sidebar appears |
| Desktop Small | 900-1279px | Full layout forming |
| Desktop | >1279px | Full layout, maximum content width |
### Touch Targets
- Buttons use comfortable padding (6px-14px vertical, 8px-14px horizontal)
- Pill buttons maintain tap-friendly sizing with 3px-10px padding
- Navigation links at 14px with adequate spacing for touch
### Collapsing Strategy
- Hero: 72px CursorGothic → 36px → 26px on smaller screens, maintaining proportional letter-spacing
- Navigation: horizontal links → hamburger menu on mobile
- Feature cards: 3-column → 2-column → single column stacked
- Code editor screenshots: maintain aspect ratio, may shrink with border treatment preserved
- Timeline visualization: horizontal → vertical stacking
- Section spacing: 80px+ → 48px → 32px on mobile
### Image Behavior
- Editor screenshots maintain warm border treatment at all sizes
- AI timeline adapts from horizontal to vertical layout
- Product screenshots use responsive images with consistent border radius
- Full-width hero images scale proportionally
## 9. Agent Prompt Guide
### Quick Color Reference
- Primary CTA background: `#ebeae5` (warm cream button)
- Page background: `#f2f1ed` (warm off-white)
- Text color: `#26251e` (warm near-black)
- Secondary text: `rgba(38, 37, 30, 0.55)` (55% warm brown)
- Accent: `#f54e00` (orange)
- Error/hover: `#cf2d56` (warm crimson)
- Success: `#1f8a65` (muted teal)
- Border: `oklab(0.263084 -0.00230259 0.0124794 / 0.1)` or `rgba(38, 37, 30, 0.1)` as fallback
### Example Component Prompts
- "Create a hero section on `#f2f1ed` warm cream background. Headline at 72px CursorGothic weight 400, line-height 1.10, letter-spacing -2.16px, color `#26251e`. Subtitle at 17.28px jjannon weight 400, line-height 1.35, color `rgba(38,37,30,0.55)`. Primary CTA button (`#ebeae5` bg, 8px radius, 10px 14px padding) with hover text shift to `#cf2d56`."
- "Design a card: `#e6e5e0` background, border `1px solid rgba(38,37,30,0.1)`. Radius 8px. Title at 22px CursorGothic weight 400, letter-spacing -0.11px. Body at 17.28px jjannon weight 400, color `rgba(38,37,30,0.55)`. Use `#f54e00` for link accents."
- "Build a pill tag: `#e6e5e0` background, `rgba(38,37,30,0.6)` text, full-pill radius (9999px), 3px 8px padding, 14px CursorGothic weight 400."
- "Create navigation: sticky `#f2f1ed` background with backdrop-filter blur. 14px system-ui weight 500 for links, `#26251e` text. CTA button right-aligned with `#ebeae5` bg and 8px radius. Bottom border `1px solid rgba(38,37,30,0.1)`."
- "Design an AI timeline showing four steps: Thinking (`#dfa88f`), Grep (`#9fc9a2`), Read (`#9fbbe0`), Edit (`#c0a8dd`). Each step: 14px system-ui label + 16px CursorGothic description + vertical connecting line in `rgba(38,37,30,0.1)`."
### Iteration Guide
1. Always use warm tones -- `#f2f1ed` background, `#26251e` text, never pure white/black for primary surfaces
2. Letter-spacing scales with font size for CursorGothic: -2.16px at 72px, -0.72px at 36px, -0.325px at 26px, normal at 16px
3. Use `rgba(38, 37, 30, alpha)` as a CSS-compatible fallback for oklab borders
4. Three fonts, three voices: CursorGothic (display/UI), jjannon (editorial), berkeleyMono (code)
5. Pill shapes (9999px radius) for tags and filters; 8px radius for primary buttons and cards
6. Hover states use `#cf2d56` text color -- the warm crimson shift is a signature interaction
7. Shadows use large blur values (28px, 70px) for diffused atmospheric depth
8. The sub-8px spacing scale (1.5, 2, 2.5, 3, 4, 5, 6px) is critical for icon/text micro-alignment
+206 -89
@@ -1,32 +1,54 @@
:root {
--theme_text_primary: #0f1d33;
--theme_text_weak: rgba(15, 29, 51, 0.67);
--theme_text_inverse: #f6faff;
--theme_text_inverse_muted: rgba(246, 250, 255, 0.77);
--theme_text_secondary_active: rgba(13, 24, 42, 0.84);
--theme_text_button_spotlight: #f7fbff;
--theme_text_success: #0f6a35;
--theme_text_placeholder: rgba(15, 29, 51, 0.47);
--theme_accent_blue: #195fc8;
--theme_accent_blue_hover: #144ea7;
--theme_accent_blue_soft: #e8f1ff;
--theme_accent_blue_border: rgba(25, 95, 200, 0.44);
--theme_surface_primary: #ffffff;
--theme_surface_subtle: #f3f8ff;
--theme_surface_shell: #f7fbff;
--theme_surface_section: #ffffff;
--theme_surface_raised: #ffffff;
--theme_surface_code: #ecf3ff;
--theme_surface_table_head: rgba(233, 242, 255, 0.92);
--theme_surface_chart_tooltip: rgba(12, 21, 37, 0.96);
--theme_border: #d5e0ee;
--theme_border_medium: #bccde4;
--theme_border_strong: #4d6281;
--theme_border_focus: rgba(25, 95, 200, 0.78);
--theme_shadow_card: rgba(4, 16, 34, 0.08) 0 8px 22px, rgba(25, 95, 200, 0.09) 0 1px 3px;
--theme_shadow_ambient: rgba(19, 68, 142, 0.08) 0 18px 42px -24px;
--theme_shadow_soft: var(--theme_shadow_ambient);
--theme_shadow_button: rgba(25, 95, 200, 0.26) 0 2px 6px;
--theme_shadow_button_ring: 0 0 0 2px rgba(25, 95, 200, 0.28);
--theme_shadow_table_inset: inset rgba(25, 95, 200, 0.14) 0 0 0 1px;
--theme_shadow_tooltip: rgba(5, 14, 26, 0.28) 0 14px 32px;
--theme_gradient_hero_primary: radial-gradient(circle at 8% 4%, rgba(25, 95, 200, 0.09) 0, rgba(25, 95, 200, 0) 31%);
--theme_gradient_hero_secondary: radial-gradient(circle at 90% 12%, rgba(20, 78, 167, 0.08) 0, rgba(20, 78, 167, 0) 29%);
--theme_gradient_card_accent: linear-gradient(180deg, #195fc8 0%, #144ea7 100%);
--theme_focus_outline: 2px solid rgba(25, 95, 200, 0.58);
--theme_radius_button: 12px; --theme_radius_button: 12px;
--theme_radius_card: 16px; --theme_radius_card: 16px;
--theme_radius_section: 24px; --theme_radius_section: 24px;
--theme_radius_large: 32px; --theme_radius_large: 30px;
--theme_font_body: "Haas", "Neue Haas Grotesk Text Pro", "Avenir Next", "Segoe UI", "Helvetica Neue", Arial, sans-serif; --theme_radius_code: 8px;
--theme_font_display: "Haas Groot Disp", "Haas", "Neue Haas Grotesk Display Pro", "Avenir Next", "Segoe UI", "Helvetica Neue", Arial, sans-serif; --theme_radius_pill: 9999px;
--theme_font_code: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace; --theme_font_body: "jjannon", "Iowan Old Style", "Palatino Linotype", "URW Palladio L", "P052", ui-serif, Georgia, Cambria, "Times New Roman", Times, serif;
--theme_letter_body: 0.12px; --theme_font_display: "CursorGothic", "CursorGothic Fallback", system-ui, "Helvetica Neue", Helvetica, Arial, sans-serif;
--theme_letter_caption: 0.2px; --theme_font_code: "berkeleyMono", ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
--theme_letter_button: 0.08px; --theme_font_ui: system-ui, -apple-system, "Segoe UI", "Helvetica Neue", Arial, sans-serif;
--theme_letter_body: 0.012em;
--theme_letter_caption: 0.085em;
--theme_letter_button: 0.01em;
--theme_transition_fast: 150ms ease; --theme_transition_fast: 150ms ease;
--theme_transition_base: 220ms ease; --theme_transition_base: 200ms ease;
--web2-blue: var(--theme_accent_blue); --web2-blue: var(--theme_accent_blue);
--web2-slate: var(--theme_text_primary); --web2-slate: var(--theme_text_primary);
--web2-muted: var(--theme_text_weak); --web2-muted: var(--theme_text_weak);
@@ -55,8 +77,8 @@ body {
.web2-bg { .web2-bg {
background: background:
radial-gradient(circle at 8% 4%, rgba(27, 97, 201, 0.08) 0, rgba(27, 97, 201, 0) 30%), var(--theme_gradient_hero_primary),
radial-gradient(circle at 90% 12%, rgba(37, 79, 173, 0.08) 0, rgba(37, 79, 173, 0) 28%), var(--theme_gradient_hero_secondary),
var(--theme_surface_shell); var(--theme_surface_shell);
} }
@@ -93,12 +115,12 @@ body {
font-family: var(--theme_font_display); font-family: var(--theme_font_display);
font-size: clamp(1.95rem, 1.2rem + 1.9vw, 2.65rem); font-size: clamp(1.95rem, 1.2rem + 1.9vw, 2.65rem);
line-height: 1.15; line-height: 1.15;
letter-spacing: 0.06px; letter-spacing: -0.325px;
} }
.web2-page-subtitle { .web2-page-subtitle {
margin-top: 0.45rem; margin-top: 0.45rem;
font-size: 0.96rem; font-size: 1.08rem;
line-height: 1.45; line-height: 1.45;
color: var(--theme_text_weak); color: var(--theme_text_weak);
} }
@@ -126,28 +148,91 @@ body {
} }
.web2-kpi-label { .web2-kpi-label {
font-size: 0.72rem; font-size: 0.7rem;
font-weight: 600; font-family: var(--theme_font_ui);
font-weight: 500;
text-transform: uppercase; text-transform: uppercase;
letter-spacing: 0.22em; letter-spacing: 0.16em;
color: var(--theme_text_weak); color: var(--theme_text_weak);
} }
.web2-kpi-value { .web2-kpi-value {
margin-top: 0.55rem; margin-top: 0.55rem;
font-size: 1.3rem; font-size: 1.2rem;
font-weight: 600; font-family: var(--theme_font_display);
font-weight: 400;
line-height: 1.2; line-height: 1.2;
color: var(--theme_text_primary); color: var(--theme_text_primary);
letter-spacing: -0.11px;
}
.web2-kpi-value-mono {
font-family: var(--theme_font_code);
font-size: 0.98rem;
font-weight: 400;
letter-spacing: -0.015em;
}
.web2-kpi-truncate {
display: block;
max-width: 100%;
overflow: hidden;
text-overflow: ellipsis;
white-space: nowrap;
}
.web2-card-overview {
background: var(--theme_surface_primary);
}
.web2-card-featured {
background: var(--theme_surface_primary);
border-color: var(--theme_border_medium);
box-shadow: var(--theme_shadow_card), var(--theme_shadow_soft);
position: relative;
}
.web2-card-featured::before {
content: "";
position: absolute;
left: 0;
top: 0;
bottom: 0;
width: 3px;
background: var(--theme_gradient_card_accent);
border-top-left-radius: var(--theme_radius_card);
border-bottom-left-radius: var(--theme_radius_card);
}
.web2-index-sections {
display: grid;
gap: 1.35rem;
grid-template-columns: minmax(0, 1fr);
}
.web2-index-overview {
box-shadow: var(--theme_shadow_soft);
}
.web2-index-featured {
margin-top: 0.45rem;
border-width: 1px;
border-color: var(--theme_border_medium);
box-shadow: var(--theme_shadow_card), var(--theme_shadow_soft);
}
.web2-index-wide {
grid-column: 1 / -1;
} }
.web2-note { .web2-note {
padding: 0.78rem 0.92rem; padding: 0.78rem 0.92rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_card);
border: 1px solid rgba(27, 97, 201, 0.28); border: 1px solid var(--theme_border_medium);
background: rgba(27, 97, 201, 0.08); background: var(--theme_surface_subtle);
color: var(--theme_text_secondary_active); color: var(--theme_text_secondary_active);
font-size: 0.89rem; font-family: var(--theme_font_ui);
font-size: 0.86rem;
line-height: 1.45; line-height: 1.45;
} }
@@ -164,20 +249,23 @@ body {
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
border-radius: var(--theme_radius_card); border-radius: var(--theme_radius_card);
padding: 1.4rem 1.6rem; padding: 1.4rem 1.6rem;
box-shadow: var(--theme_shadow_card); box-shadow: var(--theme_shadow_card), var(--theme_shadow_soft);
} }
.web2-card h2 { .web2-card h2,
.web2-section-title {
position: relative; position: relative;
margin: 0;
padding-left: 0.9rem; padding-left: 0.9rem;
font-size: 1.1rem; font-size: 1.36rem;
font-family: var(--theme_font_display); font-family: var(--theme_font_display);
font-weight: 600; font-weight: 400;
letter-spacing: var(--theme_letter_button); letter-spacing: -0.11px;
color: var(--theme_text_primary); color: var(--theme_text_primary);
} }
.web2-card h2::before { .web2-card h2::before,
.web2-section-title::before {
content: ""; content: "";
position: absolute; position: absolute;
left: 0; left: 0;
@@ -187,7 +275,7 @@ body {
height: 72%; height: 72%;
background: var(--theme_accent_blue); background: var(--theme_accent_blue);
border-radius: 999px; border-radius: 999px;
box-shadow: 0 0 0 1px rgba(27, 97, 201, 0.22); box-shadow: none;
} }
.web2-pill { .web2-pill {
@@ -197,18 +285,19 @@ body {
background: var(--theme_surface_subtle); background: var(--theme_surface_subtle);
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
color: var(--theme_text_weak); color: var(--theme_text_weak);
padding: 0.28rem 0.72rem; padding: 0.22rem 0.78rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_pill);
font-size: 0.82rem; font-family: var(--theme_font_display);
font-weight: 500; font-size: 0.76rem;
letter-spacing: 0.24px; font-weight: 400;
letter-spacing: 0;
} }
.web2-code { .web2-code {
font-family: var(--theme_font_code); font-family: var(--theme_font_code);
background: rgba(27, 97, 201, 0.07); background: var(--theme_surface_code);
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
border-radius: 8px; border-radius: var(--theme_radius_code);
padding: 0.12rem 0.42rem; padding: 0.12rem 0.42rem;
font-size: 0.84em; font-size: 0.84em;
color: var(--theme_text_primary); color: var(--theme_text_primary);
@@ -221,23 +310,25 @@ body {
.web2-link { .web2-link {
color: var(--theme_accent_blue); color: var(--theme_accent_blue);
text-decoration: none; text-decoration: none;
font-weight: 600; font-weight: 500;
transition: color var(--theme_transition_fast), text-decoration-color var(--theme_transition_fast); transition: color var(--theme_transition_fast), text-decoration-color var(--theme_transition_fast);
} }
.web2-link:hover { .web2-link:hover {
color: var(--theme_accent_blue_hover); color: var(--theme_accent_blue_hover);
text-decoration: underline; text-decoration: underline;
text-decoration-color: var(--theme_border_medium);
} }
.web2-button { .web2-button {
background: var(--theme_accent_blue); background: var(--theme_accent_blue);
color: var(--theme_text_inverse); color: var(--theme_text_button_spotlight);
padding: 0.56rem 1rem; padding: 0.62rem 0.9rem 0.6rem 1rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_button);
border: 1px solid rgba(22, 79, 166, 0.95); border: 1px solid var(--theme_accent_blue_border);
box-shadow: var(--theme_shadow_button); box-shadow: var(--theme_shadow_button);
font-weight: 600; font-family: var(--theme_font_display);
font-weight: 400;
letter-spacing: var(--theme_letter_button); letter-spacing: var(--theme_letter_button);
text-decoration: none; text-decoration: none;
transition: transform var(--theme_transition_fast), background var(--theme_transition_fast), border-color var(--theme_transition_fast), box-shadow var(--theme_transition_fast); transition: transform var(--theme_transition_fast), background var(--theme_transition_fast), border-color var(--theme_transition_fast), box-shadow var(--theme_transition_fast);
@@ -245,22 +336,24 @@ body {
.web2-button:hover { .web2-button:hover {
background: var(--theme_accent_blue_hover); background: var(--theme_accent_blue_hover);
border-color: var(--theme_accent_blue_hover); color: var(--theme_text_button_spotlight);
box-shadow: rgba(37, 79, 173, 0.32) 0 3px 7px; border-color: var(--theme_accent_blue_border);
box-shadow: var(--theme_shadow_button);
transform: translateY(-1px); transform: translateY(-1px);
} }
.web2-button.secondary { .web2-button.secondary {
background: var(--theme_surface_primary); background: var(--theme_surface_raised);
color: var(--theme_text_secondary_active); color: var(--theme_text_secondary_active);
border-color: var(--theme_border); border-color: var(--theme_border);
box-shadow: none; box-shadow: none;
border-radius: var(--theme_radius_pill);
} }
.web2-button.secondary:hover { .web2-button.secondary:hover {
background: var(--theme_surface_subtle); background: var(--theme_surface_subtle);
border-color: #c7cdd6;
color: var(--theme_text_primary); color: var(--theme_text_primary);
border-color: var(--theme_border_medium);
transform: none; transform: none;
} }
@@ -274,13 +367,14 @@ body {
} }
.web3-button { .web3-button {
background: var(--theme_surface_subtle); background: var(--theme_surface_primary);
color: var(--theme_text_primary); color: var(--theme_text_primary);
padding: 0.52rem 1.02rem; padding: 0.52rem 1.02rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_button);
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
text-decoration: none; text-decoration: none;
font-weight: 600; font-family: var(--theme_font_display);
font-weight: 400;
letter-spacing: var(--theme_letter_button); letter-spacing: var(--theme_letter_button);
transition: background var(--theme_transition_fast), border-color var(--theme_transition_fast), color var(--theme_transition_fast), box-shadow var(--theme_transition_fast); transition: background var(--theme_transition_fast), border-color var(--theme_transition_fast), color var(--theme_transition_fast), box-shadow var(--theme_transition_fast);
display: inline-flex; display: inline-flex;
@@ -289,15 +383,16 @@ body {
} }
.web3-button:hover { .web3-button:hover {
background: #edf2fa; background: var(--theme_accent_blue_soft);
border-color: #c8d2e1; border-color: var(--theme_border_medium);
color: var(--theme_accent_blue_hover);
} }
.web3-button.active { .web3-button.active {
background: var(--theme_accent_blue_soft); background: var(--theme_accent_blue_soft);
border-color: rgba(27, 97, 201, 0.45); border-color: var(--theme_accent_blue_border);
color: var(--theme_accent_blue); color: var(--theme_accent_blue);
box-shadow: rgba(45, 127, 249, 0.28) 0 0 0 2px; box-shadow: var(--theme_shadow_button_ring);
} }
.web3-button-group { .web3-button-group {
@@ -313,6 +408,7 @@ body {
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
border-radius: var(--theme_radius_card); border-radius: var(--theme_radius_card);
background: var(--theme_surface_primary); background: var(--theme_surface_primary);
box-shadow: var(--theme_shadow_table_inset);
} }
.web2-list li { .web2-list li {
@@ -334,11 +430,12 @@ body {
.web2-table thead th { .web2-table thead th {
text-align: left; text-align: left;
padding: 0.8rem 0.55rem; padding: 0.8rem 0.55rem;
font-weight: 700; font-family: var(--theme_font_ui);
font-weight: 600;
color: var(--theme_text_weak); color: var(--theme_text_weak);
letter-spacing: var(--theme_letter_caption); letter-spacing: var(--theme_letter_caption);
border-bottom: 1px solid var(--theme_border); border-bottom: 1px solid var(--theme_border);
background: rgba(248, 250, 252, 0.95); background: var(--theme_surface_table_head);
} }
.web2-table tbody td { .web2-table tbody td {
@@ -347,15 +444,15 @@ body {
} }
.web2-table tbody tr:nth-child(odd) { .web2-table tbody tr:nth-child(odd) {
background: var(--theme_surface_subtle);
}
.web2-table tbody tr:nth-child(even) {
background: var(--theme_surface_primary); background: var(--theme_surface_primary);
} }
.web2-table tbody tr:nth-child(even) {
background: var(--theme_surface_subtle);
}
.web2-group-row td { .web2-group-row td {
background: #eaf1fb; background: var(--theme_surface_subtle);
color: var(--theme_text_primary); color: var(--theme_text_primary);
border-bottom: 1px solid var(--theme_border); border-bottom: 1px solid var(--theme_border);
padding: 0.7rem 0.55rem; padding: 0.7rem 0.55rem;
@@ -367,11 +464,12 @@ body {
gap: 0.25rem; gap: 0.25rem;
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
padding: 0.16rem 0.5rem; padding: 0.16rem 0.5rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_pill);
font-size: 0.8rem; font-family: var(--theme_font_display);
font-size: 0.74rem;
letter-spacing: var(--theme_letter_caption); letter-spacing: var(--theme_letter_caption);
color: var(--theme_text_weak); color: var(--theme_text_weak);
background: var(--theme_surface_subtle); background: var(--theme_surface_raised);
} }
.web2-form-grid { .web2-form-grid {
@@ -406,17 +504,17 @@ body {
} }
.web2-input::placeholder { .web2-input::placeholder {
color: rgba(4, 14, 32, 0.46); color: var(--theme_text_placeholder);
} }
.web2-input:hover { .web2-input:hover {
border-color: #c9d0db; border-color: var(--theme_border_medium);
} }
.web2-input:focus-visible { .web2-input:focus-visible {
outline: none; outline: none;
border-color: rgba(27, 97, 201, 0.8); border-color: var(--theme_border_focus);
box-shadow: rgba(45, 127, 249, 0.26) 0 0 0 3px; box-shadow: var(--theme_shadow_button_ring);
} }
.web2-form-actions { .web2-form-actions {
@@ -433,11 +531,12 @@ body {
padding: 0.95rem 1rem; padding: 0.95rem 1rem;
border-radius: var(--theme_radius_card); border-radius: var(--theme_radius_card);
border: 1px solid var(--theme_border); border: 1px solid var(--theme_border);
background: linear-gradient(180deg, rgba(255, 255, 255, 1) 0, rgba(248, 250, 252, 0.75) 100%); background: var(--theme_surface_raised);
} }
.web2-subcard-label { .web2-subcard-label {
font-size: 0.72rem; font-size: 0.72rem;
font-family: var(--theme_font_ui);
font-weight: 600; font-weight: 600;
text-transform: uppercase; text-transform: uppercase;
letter-spacing: 0.2em; letter-spacing: 0.2em;
@@ -446,8 +545,9 @@ body {
.web2-subcard-value { .web2-subcard-value {
margin-top: 0.45rem; margin-top: 0.45rem;
font-size: 1rem; font-size: 1.08rem;
font-weight: 600; font-family: var(--theme_font_display);
font-weight: 400;
color: var(--theme_text_primary); color: var(--theme_text_primary);
} }
@@ -463,6 +563,7 @@ body {
.web2-details-summary { .web2-details-summary {
cursor: pointer; cursor: pointer;
font-size: 0.86rem; font-size: 0.86rem;
font-family: var(--theme_font_ui);
font-weight: 600; font-weight: 600;
color: var(--theme_text_secondary_active); color: var(--theme_text_secondary_active);
} }
@@ -493,14 +594,14 @@ body {
top: 0; top: 0;
opacity: 0; opacity: 0;
pointer-events: none; pointer-events: none;
background: rgba(24, 29, 38, 0.97); background: var(--theme_surface_chart_tooltip);
color: var(--theme_text_inverse); color: var(--theme_text_inverse);
padding: 0.55rem 0.65rem; padding: 0.55rem 0.65rem;
border-radius: var(--theme_radius_button); border-radius: var(--theme_radius_button);
font-size: 0.75rem; font-size: 0.75rem;
line-height: 1.35; line-height: 1.35;
min-width: 170px; min-width: 170px;
box-shadow: 0 10px 30px rgba(2, 6, 23, 0.25); box-shadow: var(--theme_shadow_tooltip);
z-index: 20; z-index: 20;
transition: opacity 80ms linear; transition: opacity 80ms linear;
} }
@@ -511,7 +612,7 @@ body {
.web3-chart-tooltip-title { .web3-chart-tooltip-title {
font-weight: 700; font-weight: 700;
color: #e2e8f0; color: var(--theme_text_inverse);
margin-bottom: 0.35rem; margin-bottom: 0.35rem;
} }
@@ -525,12 +626,12 @@ body {
.web3-chart-tooltip-label { .web3-chart-tooltip-label {
display: inline-flex; display: inline-flex;
align-items: center; align-items: center;
color: #cbd5e1; color: var(--theme_text_inverse_muted);
} }
.web3-chart-tooltip-value { .web3-chart-tooltip-value {
font-weight: 700; font-weight: 700;
color: #f8fafc; color: var(--theme_text_inverse);
} }
.web3-chart-tooltip-swatch { .web3-chart-tooltip-swatch {
@@ -553,7 +654,7 @@ body {
padding-top: 0.75rem; padding-top: 0.75rem;
text-align: center; text-align: center;
font-size: 0.74rem; font-size: 0.74rem;
font-style: italic; font-style: normal;
letter-spacing: var(--theme_letter_caption); letter-spacing: var(--theme_letter_caption);
color: var(--theme_text_weak); color: var(--theme_text_weak);
} }
@@ -564,7 +665,7 @@ summary:focus-visible,
.web2-button:focus-visible, .web2-button:focus-visible,
.web3-button:focus-visible, .web3-button:focus-visible,
.web2-link:focus-visible { .web2-link:focus-visible {
outline: 2px solid rgba(27, 97, 201, 0.7); outline: var(--theme_focus_outline);
outline-offset: 2px; outline-offset: 2px;
} }
@@ -591,6 +692,12 @@ summary:focus-visible,
.web2-page-subtitle { .web2-page-subtitle {
font-size: 0.9rem; font-size: 0.9rem;
} }
.web2-index-sections {
gap: 1.15rem;
}
.web2-index-featured {
margin-top: 0.75rem;
}
.web2-actions { .web2-actions {
width: 100%; width: 100%;
} }
@@ -611,3 +718,13 @@ summary:focus-visible,
grid-template-columns: repeat(3, minmax(0, 1fr)); grid-template-columns: repeat(3, minmax(0, 1fr));
} }
} }
@media (min-width: 1024px) {
.web2-index-sections {
grid-template-columns: minmax(0, 4fr) minmax(0, 8fr);
gap: 1.5rem;
}
.web2-index-featured {
margin-top: 0;
}
}
+36
View File
@@ -31,6 +31,9 @@ const (
defaultAuthJWTIssuer = "vctp" defaultAuthJWTIssuer = "vctp"
defaultAuthJWTAudience = "vctp-api" defaultAuthJWTAudience = "vctp-api"
defaultAuthClockSkewSeconds = 60 defaultAuthClockSkewSeconds = 60
scheduledAggregationEngineGo = "go"
scheduledAggregationEngineSQL = "sql"
) )
type Settings struct { type Settings struct {
@@ -94,6 +97,11 @@ type SettingsYML struct {
HourlySnapshotTimeoutSeconds int `yaml:"hourly_snapshot_timeout_seconds"` HourlySnapshotTimeoutSeconds int `yaml:"hourly_snapshot_timeout_seconds"`
HourlySnapshotRetrySeconds int `yaml:"hourly_snapshot_retry_seconds"` HourlySnapshotRetrySeconds int `yaml:"hourly_snapshot_retry_seconds"`
HourlySnapshotMaxRetries int `yaml:"hourly_snapshot_max_retries"` HourlySnapshotMaxRetries int `yaml:"hourly_snapshot_max_retries"`
CaptureWriteBatchSize int `yaml:"capture_write_batch_size"`
SnapshotTableCompatMode *bool `yaml:"snapshot_table_compat_mode"`
AsyncReportGeneration *bool `yaml:"async_report_generation"`
PostgresVmHourlyPartitioning *bool `yaml:"postgres_vm_hourly_partitioning_enabled"`
ScheduledAggregationEngine string `yaml:"scheduled_aggregation_engine"`
DailyJobTimeoutSeconds int `yaml:"daily_job_timeout_seconds"` DailyJobTimeoutSeconds int `yaml:"daily_job_timeout_seconds"`
MonthlyJobTimeoutSeconds int `yaml:"monthly_job_timeout_seconds"` MonthlyJobTimeoutSeconds int `yaml:"monthly_job_timeout_seconds"`
MonthlyAggregationGranularity string `yaml:"monthly_aggregation_granularity"` MonthlyAggregationGranularity string `yaml:"monthly_aggregation_granularity"`
@@ -250,6 +258,29 @@ func applyDefaultsAndValidateSettings(cfg *SettingsYML) error {
if s.AuthClockSkewSeconds == 0 { if s.AuthClockSkewSeconds == 0 {
s.AuthClockSkewSeconds = defaultAuthClockSkewSeconds s.AuthClockSkewSeconds = defaultAuthClockSkewSeconds
} }
if s.CaptureWriteBatchSize <= 0 {
s.CaptureWriteBatchSize = 1000
}
if s.SnapshotTableCompatMode == nil {
v := true
s.SnapshotTableCompatMode = &v
}
if s.AsyncReportGeneration == nil {
v := true
s.AsyncReportGeneration = &v
}
if s.PostgresVmHourlyPartitioning == nil {
v := false
s.PostgresVmHourlyPartitioning = &v
}
s.ScheduledAggregationEngine = strings.ToLower(strings.TrimSpace(s.ScheduledAggregationEngine))
if s.ScheduledAggregationEngine == "" {
s.ScheduledAggregationEngine = scheduledAggregationEngineGo
}
s.MonthlyAggregationGranularity = strings.ToLower(strings.TrimSpace(s.MonthlyAggregationGranularity))
if s.MonthlyAggregationGranularity == "" {
s.MonthlyAggregationGranularity = "daily"
}
s.AuthJWTSigningKey = strings.TrimSpace(s.AuthJWTSigningKey) s.AuthJWTSigningKey = strings.TrimSpace(s.AuthJWTSigningKey)
s.LDAPBindAddress = strings.TrimSpace(s.LDAPBindAddress) s.LDAPBindAddress = strings.TrimSpace(s.LDAPBindAddress)
s.LDAPBaseDN = strings.TrimSpace(s.LDAPBaseDN) s.LDAPBaseDN = strings.TrimSpace(s.LDAPBaseDN)
@@ -265,6 +296,11 @@ func applyDefaultsAndValidateSettings(cfg *SettingsYML) error {
if s.AuthClockSkewSeconds < 0 { if s.AuthClockSkewSeconds < 0 {
return errors.New("settings.auth_clock_skew_seconds must be >= 0") return errors.New("settings.auth_clock_skew_seconds must be >= 0")
} }
switch s.ScheduledAggregationEngine {
case scheduledAggregationEngineGo, scheduledAggregationEngineSQL:
default:
return fmt.Errorf("settings.scheduled_aggregation_engine must be %q or %q", scheduledAggregationEngineGo, scheduledAggregationEngineSQL)
}
if len(s.AuthGroupRoleMappings) > 0 { if len(s.AuthGroupRoleMappings) > 0 {
normalized := make(map[string]string, len(s.AuthGroupRoleMappings)) normalized := make(map[string]string, len(s.AuthGroupRoleMappings))
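For reference, the engine normalization and validation added in this file reduces to a small trim/lowercase/default/validate step. A minimal standalone sketch (the function name is illustrative, not part of the package):

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// normalizeEngine mirrors the settings logic above: trim and lowercase the
// raw value, default an empty value to "go", and reject anything that is
// neither "go" nor "sql".
func normalizeEngine(raw string) (string, error) {
	engine := strings.ToLower(strings.TrimSpace(raw))
	if engine == "" {
		return "go", nil
	}
	switch engine {
	case "go", "sql":
		return engine, nil
	default:
		return "", errors.New(`settings.scheduled_aggregation_engine must be "go" or "sql"`)
	}
}

func main() {
	for _, raw := range []string{"", "  SQL ", "hybrid"} {
		engine, err := normalizeEngine(raw)
		fmt.Printf("%q -> engine=%q err=%v\n", raw, engine, err)
	}
}
```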
+39
View File
@@ -63,6 +63,45 @@ func TestReadYMLSettingsAppliesAuthDefaults(t *testing.T) {
if got.AuthClockSkewSeconds != defaultAuthClockSkewSeconds { if got.AuthClockSkewSeconds != defaultAuthClockSkewSeconds {
t.Fatalf("expected default auth_clock_skew_seconds=%d, got %d", defaultAuthClockSkewSeconds, got.AuthClockSkewSeconds) t.Fatalf("expected default auth_clock_skew_seconds=%d, got %d", defaultAuthClockSkewSeconds, got.AuthClockSkewSeconds)
} }
if got.CaptureWriteBatchSize != 1000 {
t.Fatalf("expected default capture_write_batch_size=1000, got %d", got.CaptureWriteBatchSize)
}
if got.SnapshotTableCompatMode == nil || !*got.SnapshotTableCompatMode {
t.Fatalf("expected default snapshot_table_compat_mode=true, got %#v", got.SnapshotTableCompatMode)
}
if got.AsyncReportGeneration == nil || !*got.AsyncReportGeneration {
t.Fatalf("expected default async_report_generation=true, got %#v", got.AsyncReportGeneration)
}
if got.PostgresVmHourlyPartitioning == nil || *got.PostgresVmHourlyPartitioning {
t.Fatalf("expected default postgres_vm_hourly_partitioning_enabled=false, got %#v", got.PostgresVmHourlyPartitioning)
}
if got.ScheduledAggregationEngine != scheduledAggregationEngineGo {
t.Fatalf("expected default scheduled_aggregation_engine=%q, got %q", scheduledAggregationEngineGo, got.ScheduledAggregationEngine)
}
if got.MonthlyAggregationGranularity != "daily" {
t.Fatalf("expected default monthly_aggregation_granularity=daily, got %q", got.MonthlyAggregationGranularity)
}
}
func TestReadYMLSettingsRejectsInvalidScheduledAggregationEngine(t *testing.T) {
tmpDir := t.TempDir()
settingsPath := filepath.Join(tmpDir, "vctp.yml")
content := `settings:
scheduled_aggregation_engine: "hybrid"
`
if err := os.WriteFile(settingsPath, []byte(content), 0o600); err != nil {
t.Fatalf("failed to write settings file: %v", err)
}
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
s := New(logger, settingsPath)
err := s.ReadYMLSettings()
if err == nil {
t.Fatal("expected invalid scheduled_aggregation_engine to fail")
}
if !strings.Contains(strings.ToLower(err.Error()), "scheduled_aggregation_engine") {
t.Fatalf("expected error to mention scheduled_aggregation_engine, got: %v", err)
}
} }
func TestReadYMLSettingsRejectsInvalidAuthMode(t *testing.T) { func TestReadYMLSettingsRejectsInvalidAuthMode(t *testing.T) {
+326
View File
@@ -0,0 +1,326 @@
package tasks
import (
"context"
"database/sql"
"fmt"
"slices"
"time"
"vctp/db"
"github.com/jmoiron/sqlx"
)
type AggregationBenchmarkStats struct {
Runs int
Min time.Duration
Median time.Duration
Avg time.Duration
Max time.Duration
}
type AggregationBenchmarkReport struct {
Runs int
DailyWindowStart time.Time
DailyWindowEnd time.Time
DailyGo AggregationBenchmarkStats
DailySQL AggregationBenchmarkStats
DailyGoRowsWritten int64
DailySQLRowsWritten int64
MonthlyWindowStart time.Time
MonthlyWindowEnd time.Time
MonthlyGo AggregationBenchmarkStats
MonthlySQL AggregationBenchmarkStats
MonthlyGoRowsWritten int64
MonthlySQLRowsWritten int64
}
// RunCanonicalAggregationBenchmark compares Go and SQL aggregation cores on canonical cache tables.
func (c *CronTask) RunCanonicalAggregationBenchmark(ctx context.Context, runs int) (AggregationBenchmarkReport, error) {
if runs <= 0 {
runs = 3
}
report := AggregationBenchmarkReport{Runs: runs}
dbConn := c.Database.DB()
hourlyStart, hourlyEnd, err := latestDailyWindowFromHourlyCache(ctx, dbConn)
if err != nil {
return report, err
}
if !hourlyStart.IsZero() {
report.DailyWindowStart = hourlyStart
report.DailyWindowEnd = hourlyEnd
goDurations := make([]time.Duration, 0, runs)
sqlDurations := make([]time.Duration, 0, runs)
var goRows, sqlRows int64
for i := 0; i < runs; i++ {
dur, rows, runErr := c.benchmarkDailyGoCore(ctx, hourlyStart, hourlyEnd)
if runErr != nil {
return report, fmt.Errorf("daily go benchmark run %d failed: %w", i+1, runErr)
}
goDurations = append(goDurations, dur)
goRows = rows
dur, rows, runErr = c.benchmarkDailySQLCore(ctx, hourlyStart, hourlyEnd)
if runErr != nil {
return report, fmt.Errorf("daily sql benchmark run %d failed: %w", i+1, runErr)
}
sqlDurations = append(sqlDurations, dur)
sqlRows = rows
}
report.DailyGo = summarizeDurations(goDurations)
report.DailySQL = summarizeDurations(sqlDurations)
report.DailyGoRowsWritten = goRows
report.DailySQLRowsWritten = sqlRows
}
monthlyStart, monthlyEnd, err := latestMonthlyWindowFromDailyRollup(ctx, dbConn)
if err != nil {
return report, err
}
if !monthlyStart.IsZero() {
report.MonthlyWindowStart = monthlyStart
report.MonthlyWindowEnd = monthlyEnd
goDurations := make([]time.Duration, 0, runs)
sqlDurations := make([]time.Duration, 0, runs)
var goRows, sqlRows int64
for i := 0; i < runs; i++ {
dur, rows, runErr := c.benchmarkMonthlyGoCore(ctx, monthlyStart, monthlyEnd)
if runErr != nil {
return report, fmt.Errorf("monthly go benchmark run %d failed: %w", i+1, runErr)
}
goDurations = append(goDurations, dur)
goRows = rows
dur, rows, runErr = c.benchmarkMonthlySQLCore(ctx, monthlyStart, monthlyEnd)
if runErr != nil {
return report, fmt.Errorf("monthly sql benchmark run %d failed: %w", i+1, runErr)
}
sqlDurations = append(sqlDurations, dur)
sqlRows = rows
}
report.MonthlyGo = summarizeDurations(goDurations)
report.MonthlySQL = summarizeDurations(sqlDurations)
report.MonthlyGoRowsWritten = goRows
report.MonthlySQLRowsWritten = sqlRows
}
if report.DailyWindowStart.IsZero() && report.MonthlyWindowStart.IsZero() {
return report, fmt.Errorf("no benchmarkable canonical windows found (vm_hourly_stats/vm_daily_rollup are empty)")
}
return report, nil
}
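A minimal caller sketch for the report type above; the `CronTask` wiring is app-specific and elided, and the import path is assumed from the module's `vctp/db` import rather than confirmed:

```go
// Illustrative consumer only; not part of the repository.
package benchexample

import (
	"context"
	"fmt"
	"log"

	"vctp/internal/tasks" // assumed import path for the tasks package shown above
)

func RunOnce(ctx context.Context, cron *tasks.CronTask) {
	report, err := cron.RunCanonicalAggregationBenchmark(ctx, 3)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("daily:   go median=%s sql median=%s (rows go=%d, sql=%d)\n",
		report.DailyGo.Median, report.DailySQL.Median,
		report.DailyGoRowsWritten, report.DailySQLRowsWritten)
	fmt.Printf("monthly: go median=%s sql median=%s (rows go=%d, sql=%d)\n",
		report.MonthlyGo.Median, report.MonthlySQL.Median,
		report.MonthlyGoRowsWritten, report.MonthlySQLRowsWritten)
}
```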
// benchmarkDailyGoCore times one Go-core daily aggregation run against a
// throwaway summary table and returns the elapsed time and rows written.
func (c *CronTask) benchmarkDailyGoCore(ctx context.Context, dayStart, dayEnd time.Time) (time.Duration, int64, error) {
tableName, err := benchmarkSummaryTableName("benchmark_daily_go")
if err != nil {
return 0, 0, err
}
dbConn := c.Database.DB()
if err := db.EnsureSummaryTable(ctx, dbConn, tableName); err != nil {
return 0, 0, err
}
defer dropSnapshotTable(ctx, dbConn, tableName)
started := time.Now()
aggMap, snapTimes, err := c.scanHourlyCache(ctx, dayStart, dayEnd)
if err != nil {
return 0, 0, err
}
if len(aggMap) == 0 || len(snapTimes) == 0 {
return 0, 0, fmt.Errorf("no daily rows found in canonical hourly cache")
}
totalSamplesByVcenter := sampleCountsByVcenter(aggMap)
if err := c.insertDailyAggregates(ctx, tableName, aggMap, len(snapTimes), totalSamplesByVcenter); err != nil {
return 0, 0, err
}
elapsed := time.Since(started)
rows, err := db.TableRowCount(ctx, dbConn, tableName)
if err != nil {
return 0, 0, err
}
return elapsed, rows, nil
}
func (c *CronTask) benchmarkDailySQLCore(ctx context.Context, dayStart, dayEnd time.Time) (time.Duration, int64, error) {
tableName, err := benchmarkSummaryTableName("benchmark_daily_sql")
if err != nil {
return 0, 0, err
}
dbConn := c.Database.DB()
if err := db.EnsureSummaryTable(ctx, dbConn, tableName); err != nil {
return 0, 0, err
}
defer dropSnapshotTable(ctx, dbConn, tableName)
insertQuery, err := db.BuildDailySummaryInsert(tableName, buildCanonicalHourlySummaryUnion(dayStart, dayEnd))
if err != nil {
return 0, 0, err
}
started := time.Now()
if _, err := dbConn.ExecContext(ctx, insertQuery); err != nil {
return 0, 0, err
}
elapsed := time.Since(started)
rows, err := db.TableRowCount(ctx, dbConn, tableName)
if err != nil {
return 0, 0, err
}
return elapsed, rows, nil
}
func (c *CronTask) benchmarkMonthlyGoCore(ctx context.Context, monthStart, monthEnd time.Time) (time.Duration, int64, error) {
tableName, err := benchmarkSummaryTableName("benchmark_monthly_go")
if err != nil {
return 0, 0, err
}
dbConn := c.Database.DB()
if err := db.EnsureSummaryTable(ctx, dbConn, tableName); err != nil {
return 0, 0, err
}
defer dropSnapshotTable(ctx, dbConn, tableName)
started := time.Now()
aggMap, err := c.scanDailyRollup(ctx, monthStart, monthEnd)
if err != nil {
return 0, 0, err
}
if len(aggMap) == 0 {
return 0, 0, fmt.Errorf("no monthly rows found in canonical daily rollup")
}
if err := c.insertMonthlyAggregates(ctx, tableName, aggMap); err != nil {
return 0, 0, err
}
elapsed := time.Since(started)
rows, err := db.TableRowCount(ctx, dbConn, tableName)
if err != nil {
return 0, 0, err
}
return elapsed, rows, nil
}
func (c *CronTask) benchmarkMonthlySQLCore(ctx context.Context, monthStart, monthEnd time.Time) (time.Duration, int64, error) {
tableName, err := benchmarkSummaryTableName("benchmark_monthly_sql")
if err != nil {
return 0, 0, err
}
dbConn := c.Database.DB()
if err := db.EnsureSummaryTable(ctx, dbConn, tableName); err != nil {
return 0, 0, err
}
defer dropSnapshotTable(ctx, dbConn, tableName)
insertQuery, err := db.BuildMonthlySummaryInsert(tableName, buildCanonicalDailyRollupSummaryUnion(monthStart, monthEnd))
if err != nil {
return 0, 0, err
}
started := time.Now()
if _, err := dbConn.ExecContext(ctx, insertQuery); err != nil {
return 0, 0, err
}
elapsed := time.Since(started)
rows, err := db.TableRowCount(ctx, dbConn, tableName)
if err != nil {
return 0, 0, err
}
return elapsed, rows, nil
}
// benchmarkSummaryTableName derives a unique, validated scratch-table name so
// benchmark runs never touch real summary tables.
func benchmarkSummaryTableName(prefix string) (string, error) {
return db.SafeTableName(fmt.Sprintf("%s_%d", prefix, time.Now().UTC().UnixNano()))
}
// latestDailyWindowFromHourlyCache returns the UTC day window [start, end)
// containing the newest vm_hourly_stats snapshot, or zero times when the
// cache is missing or empty.
func latestDailyWindowFromHourlyCache(ctx context.Context, dbConn *sqlx.DB) (time.Time, time.Time, error) {
if !db.TableExists(ctx, dbConn, "vm_hourly_stats") {
return time.Time{}, time.Time{}, nil
}
query := dbConn.Rebind(`
SELECT MAX("SnapshotTime")
FROM vm_hourly_stats
WHERE "SnapshotTime" > ?
`)
var maxSnapshot sql.NullInt64
if err := dbConn.GetContext(ctx, &maxSnapshot, query, 0); err != nil {
return time.Time{}, time.Time{}, err
}
if !maxSnapshot.Valid || maxSnapshot.Int64 <= 0 {
return time.Time{}, time.Time{}, nil
}
dayStart := time.Unix(maxSnapshot.Int64, 0).UTC()
dayStart = time.Date(dayStart.Year(), dayStart.Month(), dayStart.Day(), 0, 0, 0, 0, time.UTC)
dayEnd := dayStart.AddDate(0, 0, 1)
countQuery := dbConn.Rebind(`
SELECT COUNT(1)
FROM vm_hourly_stats
WHERE "SnapshotTime" >= ? AND "SnapshotTime" < ?
`)
var count int64
if err := dbConn.GetContext(ctx, &count, countQuery, dayStart.Unix(), dayEnd.Unix()); err != nil {
return time.Time{}, time.Time{}, err
}
if count == 0 {
return time.Time{}, time.Time{}, nil
}
return dayStart, dayEnd, nil
}
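A worked example of the window derivation above, with an illustrative epoch: the newest snapshot time is truncated to its UTC day and paired with a half-open one-day window:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	maxSnapshot := int64(1713571200) // illustrative value: 2024-04-20T00:00:00Z
	t := time.Unix(maxSnapshot, 0).UTC()
	dayStart := time.Date(t.Year(), t.Month(), t.Day(), 0, 0, 0, 0, time.UTC)
	dayEnd := dayStart.AddDate(0, 0, 1) // queried as SnapshotTime >= dayStart AND < dayEnd
	fmt.Println(dayStart.Unix(), dayEnd.Unix()) // 1713571200 1713657600
}
```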
// latestMonthlyWindowFromDailyRollup returns the UTC month window [start, end)
// containing the newest vm_daily_rollup entry, or zero times when the rollup
// is missing or empty.
func latestMonthlyWindowFromDailyRollup(ctx context.Context, dbConn *sqlx.DB) (time.Time, time.Time, error) {
if !db.TableExists(ctx, dbConn, "vm_daily_rollup") {
return time.Time{}, time.Time{}, nil
}
query := dbConn.Rebind(`
SELECT MAX("Date")
FROM vm_daily_rollup
WHERE "Date" > ?
`)
var maxDate sql.NullInt64
if err := dbConn.GetContext(ctx, &maxDate, query, 0); err != nil {
return time.Time{}, time.Time{}, err
}
if !maxDate.Valid || maxDate.Int64 <= 0 {
return time.Time{}, time.Time{}, nil
}
monthStart := time.Unix(maxDate.Int64, 0).UTC()
monthStart = time.Date(monthStart.Year(), monthStart.Month(), 1, 0, 0, 0, 0, time.UTC)
monthEnd := monthStart.AddDate(0, 1, 0)
countQuery := dbConn.Rebind(`
SELECT COUNT(1)
FROM vm_daily_rollup
WHERE "Date" >= ? AND "Date" < ?
`)
var count int64
if err := dbConn.GetContext(ctx, &count, countQuery, monthStart.Unix(), monthEnd.Unix()); err != nil {
return time.Time{}, time.Time{}, err
}
if count == 0 {
return time.Time{}, time.Time{}, nil
}
return monthStart, monthEnd, nil
}
// summarizeDurations reports min/median/avg/max over the run durations;
// even-length inputs take the mean of the two middle values as the median.
func summarizeDurations(values []time.Duration) AggregationBenchmarkStats {
if len(values) == 0 {
return AggregationBenchmarkStats{}
}
sorted := append([]time.Duration(nil), values...)
slices.Sort(sorted)
total := time.Duration(0)
for _, v := range sorted {
total += v
}
median := sorted[len(sorted)/2]
if len(sorted)%2 == 0 {
median = (sorted[(len(sorted)/2)-1] + sorted[len(sorted)/2]) / 2
}
return AggregationBenchmarkStats{
Runs: len(sorted),
Min: sorted[0],
Median: median,
Avg: total / time.Duration(len(sorted)),
Max: sorted[len(sorted)-1],
}
}
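To make the median rule concrete, here is a standalone copy of the reduction with a three-run example; an even-length input would average the two middle values instead:

```go
package main

import (
	"fmt"
	"slices"
	"time"
)

// summarize is a standalone copy of the reduction above: odd-length inputs
// take the middle element as the median; even-length inputs average the two
// middle values.
func summarize(values []time.Duration) (lo, median, avg, hi time.Duration) {
	sorted := append([]time.Duration(nil), values...)
	slices.Sort(sorted)
	var total time.Duration
	for _, v := range sorted {
		total += v
	}
	median = sorted[len(sorted)/2]
	if len(sorted)%2 == 0 {
		median = (sorted[len(sorted)/2-1] + sorted[len(sorted)/2]) / 2
	}
	return sorted[0], median, total / time.Duration(len(sorted)), sorted[len(sorted)-1]
}

func main() {
	lo, med, avg, hi := summarize([]time.Duration{90 * time.Millisecond, 40 * time.Millisecond, 60 * time.Millisecond})
	fmt.Println(lo, med, avg, hi) // 40ms 60ms 63.333333ms 90ms
}
```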
+275 -65
View File
@@ -16,6 +16,8 @@ import (
"vctp/internal/metrics" "vctp/internal/metrics"
"vctp/internal/report" "vctp/internal/report"
"vctp/internal/settings" "vctp/internal/settings"
"github.com/jmoiron/sqlx"
) )
// RunVcenterDailyAggregate summarizes hourly snapshots into a daily summary table. // RunVcenterDailyAggregate summarizes hourly snapshots into a daily summary table.
@@ -34,15 +36,15 @@ func (c *CronTask) RunVcenterDailyAggregate(ctx context.Context, logger *slog.Lo
targetTime := time.Now().AddDate(0, 0, -1) targetTime := time.Now().AddDate(0, 0, -1)
logger.Info("Daily summary job starting", "target_date", targetTime.Format("2006-01-02")) logger.Info("Daily summary job starting", "target_date", targetTime.Format("2006-01-02"))
// Always force regeneration on the scheduled run to refresh data even if a manual run happened earlier. // Always force regeneration on the scheduled run to refresh data even if a manual run happened earlier.
return c.aggregateDailySummary(jobCtx, targetTime, true) return c.aggregateDailySummaryWithMode(jobCtx, targetTime, true, true)
}) })
} }
func (c *CronTask) AggregateDailySummary(ctx context.Context, date time.Time, force bool) error { func (c *CronTask) AggregateDailySummary(ctx context.Context, date time.Time, force bool) error {
return c.aggregateDailySummary(ctx, date, force) return c.aggregateDailySummaryWithMode(ctx, date, force, false)
} }
func (c *CronTask) aggregateDailySummary(ctx context.Context, targetTime time.Time, force bool) error { func (c *CronTask) aggregateDailySummaryWithMode(ctx context.Context, targetTime time.Time, force bool, scheduled bool) error {
jobStart := time.Now() jobStart := time.Now()
dayStart := time.Date(targetTime.Year(), targetTime.Month(), targetTime.Day(), 0, 0, 0, 0, targetTime.Location()) dayStart := time.Date(targetTime.Year(), targetTime.Month(), targetTime.Day(), 0, 0, 0, 0, targetTime.Location())
dayEnd := dayStart.AddDate(0, 0, 1) dayEnd := dayStart.AddDate(0, 0, 1)
@@ -71,10 +73,31 @@ func (c *CronTask) aggregateDailySummary(ctx context.Context, targetTime time.Ti
} }
} }
// If enabled, use the Go fan-out/reduce path to parallelize aggregation. if scheduled && c.scheduledAggregationEngine() == "sql" {
if os.Getenv("DAILY_AGG_GO") == "1" { c.Logger.Info("scheduled_aggregation_engine=sql enabled; using canonical SQL daily aggregation path")
if err := c.aggregateDailySummarySQLCanonical(ctx, dayStart, dayEnd, summaryTable); err != nil {
c.Logger.Warn("scheduled canonical SQL daily aggregation failed; falling back to go path", "error", err)
} else {
metrics.RecordDailyAggregation(time.Since(jobStart), nil)
c.Logger.Debug("Finished daily inventory aggregation (SQL canonical path)", "summary_table", summaryTable)
return nil
}
}
// Canonical Go aggregation is the default for both scheduled and manual runs.
// Legacy SQL/union aggregation stays available as a manual fallback/backfill path.
forceGoAgg := os.Getenv("DAILY_AGG_GO") == "1"
forceSQLAgg := !scheduled && os.Getenv("DAILY_AGG_SQL") == "1"
useGoAgg := scheduled || forceGoAgg || !forceSQLAgg
if forceSQLAgg && !forceGoAgg {
c.Logger.Info("DAILY_AGG_SQL=1 enabled; using SQL fallback path for manual daily aggregation")
}
if useGoAgg {
c.Logger.Debug("Using go implementation of aggregation") c.Logger.Debug("Using go implementation of aggregation")
if err := c.aggregateDailySummaryGo(ctx, dayStart, dayEnd, summaryTable, force); err != nil { if err := c.aggregateDailySummaryGo(ctx, dayStart, dayEnd, summaryTable, force, scheduled); err != nil {
if scheduled {
return err
}
c.Logger.Warn("go-based daily aggregation failed, falling back to SQL path", "error", err) c.Logger.Warn("go-based daily aggregation failed, falling back to SQL path", "error", err)
} else { } else {
metrics.RecordDailyAggregation(time.Since(jobStart), nil) metrics.RecordDailyAggregation(time.Since(jobStart), nil)
@@ -200,7 +223,7 @@ func (c *CronTask) aggregateDailySummary(ctx context.Context, targetTime time.Ti
reportStart := time.Now() reportStart := time.Now()
c.Logger.Debug("Generating daily report", "table", summaryTable) c.Logger.Debug("Generating daily report", "table", summaryTable)
if err := c.generateReport(ctx, summaryTable); err != nil { if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate daily report", "error", err, "table", summaryTable) c.Logger.Warn("failed to generate daily report", "error", err, "table", summaryTable)
metrics.RecordDailyAggregation(time.Since(jobStart), err) metrics.RecordDailyAggregation(time.Since(jobStart), err)
return err return err
@@ -225,34 +248,106 @@ func dailySummaryTableName(t time.Time) (string, error) {
return db.SafeTableName(fmt.Sprintf("inventory_daily_summary_%s", t.Format("20060102"))) return db.SafeTableName(fmt.Sprintf("inventory_daily_summary_%s", t.Format("20060102")))
} }
// aggregateDailySummarySQLCanonical aggregates one day directly from the
// canonical vm_hourly_stats cache with set-based SQL, then applies lifecycle
// fixes, registers the snapshot, refreshes totals, and generates the report.
func (c *CronTask) aggregateDailySummarySQLCanonical(ctx context.Context, dayStart, dayEnd time.Time, summaryTable string) error {
jobStart := time.Now()
dbConn := c.Database.DB()
if !db.TableExists(ctx, dbConn, "vm_hourly_stats") {
return fmt.Errorf("vm_hourly_stats table not found for canonical SQL daily aggregation")
}
unionQuery := buildCanonicalHourlySummaryUnion(dayStart, dayEnd)
insertQuery, err := db.BuildDailySummaryInsert(summaryTable, unionQuery)
if err != nil {
return err
}
if _, err := dbConn.ExecContext(ctx, insertQuery); err != nil {
return err
}
if applied, err := db.ApplyLifecycleDeletionToSummary(ctx, dbConn, summaryTable, dayStart.Unix(), dayEnd.Unix()); err != nil {
c.Logger.Warn("failed to apply lifecycle deletions to daily summary (SQL canonical)", "error", err, "table", summaryTable)
} else {
c.Logger.Info("Daily aggregation deletion times", "source_lifecycle_cache", applied)
}
if applied, err := db.ApplyLifecycleCreationToSummary(ctx, dbConn, summaryTable); err != nil {
c.Logger.Warn("failed to apply lifecycle creations to daily summary (SQL canonical)", "error", err, "table", summaryTable)
} else {
c.Logger.Info("Daily aggregation creation times", "source_lifecycle_cache", applied)
}
if err := db.RefineCreationDeletionFromUnion(ctx, dbConn, summaryTable, buildHourlyCacheLifecycleUnion(dayStart, dayEnd)); err != nil {
c.Logger.Warn("failed to refine creation/deletion times (SQL canonical)", "error", err, "table", summaryTable)
}
if err := db.UpdateSummaryPresenceByWindow(ctx, dbConn, summaryTable, dayStart.Unix(), dayEnd.Unix()); err != nil {
c.Logger.Warn("failed to update daily AvgIsPresent from lifecycle window (SQL canonical)", "error", err, "table", summaryTable)
}
db.AnalyzeTableIfPostgres(ctx, dbConn, summaryTable)
rowCount, err := db.TableRowCount(ctx, dbConn, summaryTable)
if err != nil {
c.Logger.Warn("unable to count daily summary rows (SQL canonical)", "error", err, "table", summaryTable)
}
if rowCount == 0 {
return fmt.Errorf("no VM records aggregated for %s", dayStart.Format("2006-01-02"))
}
logMissingCreationSummary(ctx, c.Logger, c.Database, summaryTable, rowCount)
if err := report.RegisterSnapshot(ctx, c.Database, "daily", summaryTable, dayStart, rowCount); err != nil {
c.Logger.Warn("failed to register daily snapshot (SQL canonical)", "error", err, "table", summaryTable)
}
if refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "daily", dayStart.Unix()); err != nil {
c.Logger.Warn("failed to refresh vcenter daily aggregate totals cache (SQL canonical)", "error", err, "table", summaryTable)
} else {
c.Logger.Debug("refreshed vcenter daily aggregate totals cache", "table", summaryTable, "rows", refreshed)
}
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate daily report (SQL canonical)", "error", err, "table", summaryTable)
return err
}
driver := strings.ToLower(dbConn.DriverName())
action, checkpointErr := db.CheckpointDatabase(ctx, dbConn)
if checkpointErr != nil {
c.Logger.Warn("failed to run database checkpoint after daily aggregation (SQL canonical)", "driver", driver, "action", action, "error", checkpointErr)
}
c.Logger.Debug("Finished daily inventory aggregation (SQL canonical path)", "summary_table", summaryTable, "duration", time.Since(jobStart))
return nil
}
// buildCanonicalHourlySummaryUnion shapes vm_hourly_stats rows for the
// [start, end) window into the column layout expected by the summary insert.
func buildCanonicalHourlySummaryUnion(start, end time.Time) string {
return fmt.Sprintf(`
SELECT
NULL AS "InventoryId",
COALESCE("Name",'') AS "Name",
COALESCE("Vcenter",'') AS "Vcenter",
COALESCE("VmId",'') AS "VmId",
NULL AS "EventKey",
NULL AS "CloudId",
COALESCE("CreationTime",0) AS "CreationTime",
COALESCE("DeletionTime",0) AS "DeletionTime",
COALESCE("ResourcePool",'') AS "ResourcePool",
COALESCE("Datacenter",'') AS "Datacenter",
COALESCE("Cluster",'') AS "Cluster",
COALESCE("Folder",'') AS "Folder",
COALESCE("ProvisionedDisk",0) AS "ProvisionedDisk",
COALESCE("VcpuCount",0) AS "VcpuCount",
COALESCE("RamGB",0) AS "RamGB",
COALESCE("IsTemplate",'') AS "IsTemplate",
COALESCE("PoweredOn",'') AS "PoweredOn",
COALESCE("SrmPlaceholder",'') AS "SrmPlaceholder",
COALESCE("VmUuid",'') AS "VmUuid",
"SnapshotTime"
FROM vm_hourly_stats
WHERE "SnapshotTime" >= %d
AND "SnapshotTime" < %d
AND %s
`, start.Unix(), end.Unix(), templateExclusionFilter())
}
// aggregateDailySummaryGo performs daily aggregation by reading hourly tables in parallel, // aggregateDailySummaryGo performs daily aggregation by reading hourly tables in parallel,
// reducing in Go, and writing the summary table. It mirrors the outputs of the SQL path // reducing in Go, and writing the summary table. It mirrors the outputs of the SQL path
// as closely as possible while improving CPU utilization on multi-core hosts. // as closely as possible while improving CPU utilization on multi-core hosts.
func (c *CronTask) aggregateDailySummaryGo(ctx context.Context, dayStart, dayEnd time.Time, summaryTable string, force bool) error { func (c *CronTask) aggregateDailySummaryGo(ctx context.Context, dayStart, dayEnd time.Time, summaryTable string, force bool, canonicalOnly bool) error {
jobStart := time.Now() jobStart := time.Now()
dbConn := c.Database.DB() dbConn := c.Database.DB()
hourlyTables := make([]string, 0, 64)
hourlySnapshots, err := report.SnapshotRecordsWithFallback(ctx, c.Database, "hourly", "inventory_hourly_", "epoch", dayStart, dayEnd) unionQuery := ""
if err != nil {
return err
}
hourlySnapshots = filterRecordsInRange(hourlySnapshots, dayStart, dayEnd)
hourlySnapshots = filterSnapshotsWithRows(ctx, dbConn, hourlySnapshots)
c.Logger.Info("Daily aggregation hourly snapshot count (go path)", "count", len(hourlySnapshots), "date", dayStart.Format("2006-01-02"))
if len(hourlySnapshots) == 0 {
return fmt.Errorf("no hourly snapshot tables found for %s", dayStart.Format("2006-01-02"))
} else {
c.Logger.Debug("Found hourly snapshot tables for daily aggregation", "date", dayStart.Format("2006-01-02"), "tables", len(hourlySnapshots))
}
hourlyTables := make([]string, 0, len(hourlySnapshots))
for _, snapshot := range hourlySnapshots {
hourlyTables = append(hourlyTables, snapshot.TableName)
}
unionQuery, err := buildUnionQuery(hourlyTables, summaryUnionColumns, templateExclusionFilter())
if err != nil {
return err
}
// Clear existing summary if forcing. // Clear existing summary if forcing.
if rowsExist, err := db.TableHasRows(ctx, dbConn, summaryTable); err != nil { if rowsExist, err := db.TableHasRows(ctx, dbConn, summaryTable); err != nil {
@@ -266,41 +361,75 @@ func (c *CronTask) aggregateDailySummaryGo(ctx context.Context, dayStart, dayEnd
} }
} }
totalSamples := len(hourlyTables) totalSamples := 0
var ( var (
aggMap map[dailyAggKey]*dailyAggVal aggMap map[dailyAggKey]*dailyAggVal
snapTimes []int64 snapTimes []int64
) )
if db.TableExists(ctx, dbConn, "vm_hourly_stats") { if canonicalOnly {
if !db.TableExists(ctx, dbConn, "vm_hourly_stats") {
return fmt.Errorf("vm_hourly_stats table not found for canonical daily aggregation")
}
cacheAgg, cacheTimes, cacheErr := c.scanHourlyCache(ctx, dayStart, dayEnd) cacheAgg, cacheTimes, cacheErr := c.scanHourlyCache(ctx, dayStart, dayEnd)
if cacheErr != nil { if cacheErr != nil {
c.Logger.Warn("failed to use hourly cache, falling back to table scans", "error", cacheErr) return cacheErr
} else if len(cacheAgg) > 0 {
c.Logger.Debug("using hourly cache for daily aggregation", "date", dayStart.Format("2006-01-02"), "snapshots", len(cacheTimes), "vm_count", len(cacheAgg))
aggMap = cacheAgg
snapTimes = cacheTimes
totalSamples = len(cacheTimes)
} }
} if len(cacheAgg) == 0 {
if aggMap == nil {
var errScan error
aggMap, errScan = c.scanHourlyTablesParallel(ctx, hourlySnapshots)
if errScan != nil {
return errScan
}
c.Logger.Debug("scanned hourly tables for daily aggregation", "date", dayStart.Format("2006-01-02"), "tables", len(hourlySnapshots), "vm_count", len(aggMap))
if len(aggMap) == 0 {
return fmt.Errorf("no VM records aggregated for %s", dayStart.Format("2006-01-02")) return fmt.Errorf("no VM records aggregated for %s", dayStart.Format("2006-01-02"))
} }
c.Logger.Debug("using canonical hourly cache for daily aggregation", "date", dayStart.Format("2006-01-02"), "snapshots", len(cacheTimes), "vm_count", len(cacheAgg))
// Build ordered list of snapshot times for deletion inference. aggMap = cacheAgg
snapTimes = make([]int64, 0, len(hourlySnapshots)) snapTimes = cacheTimes
for _, snap := range hourlySnapshots { totalSamples = len(cacheTimes)
snapTimes = append(snapTimes, snap.SnapshotTime.Unix()) unionQuery = buildHourlyCacheLifecycleUnion(dayStart, dayEnd)
} else {
hourlySnapshots, err := report.SnapshotRecordsWithFallback(ctx, c.Database, "hourly", "inventory_hourly_", "epoch", dayStart, dayEnd)
if err != nil {
return err
}
hourlySnapshots = filterRecordsInRange(hourlySnapshots, dayStart, dayEnd)
hourlySnapshots = filterSnapshotsWithRows(ctx, dbConn, hourlySnapshots)
c.Logger.Info("Daily aggregation hourly snapshot count (go path)", "count", len(hourlySnapshots), "date", dayStart.Format("2006-01-02"))
if len(hourlySnapshots) == 0 {
return fmt.Errorf("no hourly snapshot tables found for %s", dayStart.Format("2006-01-02"))
}
for _, snapshot := range hourlySnapshots {
hourlyTables = append(hourlyTables, snapshot.TableName)
}
unionQuery, err = buildUnionQuery(hourlyTables, summaryUnionColumns, templateExclusionFilter())
if err != nil {
return err
}
totalSamples = len(hourlyTables)
if db.TableExists(ctx, dbConn, "vm_hourly_stats") {
cacheAgg, cacheTimes, cacheErr := c.scanHourlyCache(ctx, dayStart, dayEnd)
if cacheErr != nil {
c.Logger.Warn("failed to use hourly cache, falling back to table scans", "error", cacheErr)
} else if len(cacheAgg) > 0 {
c.Logger.Debug("using hourly cache for daily aggregation", "date", dayStart.Format("2006-01-02"), "snapshots", len(cacheTimes), "vm_count", len(cacheAgg))
aggMap = cacheAgg
snapTimes = cacheTimes
totalSamples = len(cacheTimes)
}
}
if aggMap == nil {
var errScan error
aggMap, errScan = c.scanHourlyTablesParallel(ctx, hourlySnapshots)
if errScan != nil {
return errScan
}
c.Logger.Debug("scanned hourly tables for daily aggregation", "date", dayStart.Format("2006-01-02"), "tables", len(hourlySnapshots), "vm_count", len(aggMap))
if len(aggMap) == 0 {
return fmt.Errorf("no VM records aggregated for %s", dayStart.Format("2006-01-02"))
}
// Build ordered list of snapshot times for deletion inference.
snapTimes = make([]int64, 0, len(hourlySnapshots))
for _, snap := range hourlySnapshots {
snapTimes = append(snapTimes, snap.SnapshotTime.Unix())
}
slices.Sort(snapTimes)
} }
slices.Sort(snapTimes)
} }
lifecycleDeletions := c.applyLifecycleDeletions(ctx, aggMap, dayStart, dayEnd) lifecycleDeletions := c.applyLifecycleDeletions(ctx, aggMap, dayStart, dayEnd)
@@ -316,22 +445,36 @@ func (c *CronTask) aggregateDailySummaryGo(ctx context.Context, dayStart, dayEnd
c.Logger.Info("Daily aggregation creation times", "source_inventory", inventoryCreations) c.Logger.Info("Daily aggregation creation times", "source_inventory", inventoryCreations)
// Get the first hourly snapshot on/after dayEnd to help confirm deletions that happen on the last snapshot of the day. // Get the first hourly snapshot on/after dayEnd to help confirm deletions that happen on the last snapshot of the day.
var nextSnapshotTable string var (
nextSnapshotQuery := dbConn.Rebind(` nextSnapshotTable string
nextSnapshotTime int64
)
nextPresenceByVcenter := make(map[string]map[string]struct{}, 8)
if canonicalOnly {
presence, snapshotTime, err := loadNextHourlyCachePresence(ctx, dbConn, dayEnd)
if err != nil {
c.Logger.Warn("failed to load next-hourly presence from canonical cache", "error", err)
} else {
nextPresenceByVcenter = presence
nextSnapshotTime = snapshotTime
}
} else {
nextSnapshotQuery := dbConn.Rebind(`
SELECT table_name SELECT table_name
FROM snapshot_registry FROM snapshot_registry
WHERE snapshot_type = 'hourly' AND snapshot_time >= ? WHERE snapshot_type = 'hourly' AND snapshot_time >= ?
ORDER BY snapshot_time ASC ORDER BY snapshot_time ASC
LIMIT 1 LIMIT 1
`) `)
nextSnapshotRows, nextErr := c.Database.DB().QueryxContext(ctx, nextSnapshotQuery, dayEnd.Unix()) nextSnapshotRows, nextErr := c.Database.DB().QueryxContext(ctx, nextSnapshotQuery, dayEnd.Unix())
if nextErr == nil { if nextErr == nil {
if nextSnapshotRows.Next() { if nextSnapshotRows.Next() {
if scanErr := nextSnapshotRows.Scan(&nextSnapshotTable); scanErr != nil { if scanErr := nextSnapshotRows.Scan(&nextSnapshotTable); scanErr != nil {
nextSnapshotTable = "" nextSnapshotTable = ""
}
} }
nextSnapshotRows.Close()
} }
nextSnapshotRows.Close()
} }
// Build per-vCenter snapshot timelines from observed VM samples so deletion // Build per-vCenter snapshot timelines from observed VM samples so deletion
@@ -362,7 +505,6 @@ LIMIT 1
vcenterSnapTimes[vcenter] = times vcenterSnapTimes[vcenter] = times
} }
nextPresenceByVcenter := make(map[string]map[string]struct{}, 8)
if nextSnapshotTable != "" && db.TableExists(ctx, dbConn, nextSnapshotTable) { if nextSnapshotTable != "" && db.TableExists(ctx, dbConn, nextSnapshotTable) {
rows, err := querySnapshotRows(ctx, dbConn, nextSnapshotTable, []string{"Vcenter", "VmId", "VmUuid", "Name"}, "") rows, err := querySnapshotRows(ctx, dbConn, nextSnapshotTable, []string{"Vcenter", "VmId", "VmUuid", "Name"}, "")
if err == nil { if err == nil {
@@ -439,7 +581,7 @@ LIMIT 1
if !presentByID && !presentByUUID && !presentByName { if !presentByID && !presentByUUID && !presentByName {
v.deletion = firstMiss v.deletion = firstMiss
inferredDeletions++ inferredDeletions++
c.Logger.Debug("cross-day deletion inferred from next snapshot", "vcenter", v.key.Vcenter, "vm_id", v.key.VmId, "vm_uuid", v.key.VmUuid, "name", v.key.Name, "deletion", firstMiss, "next_table", nextSnapshotTable) c.Logger.Debug("cross-day deletion inferred from next snapshot", "vcenter", v.key.Vcenter, "vm_id", v.key.VmId, "vm_uuid", v.key.VmUuid, "name", v.key.Name, "deletion", firstMiss, "next_table", nextSnapshotTable, "next_snapshot_time", nextSnapshotTime)
} }
} }
if v.deletion == 0 { if v.deletion == 0 {
@@ -521,7 +663,7 @@ LIMIT 1
} }
reportStart := time.Now() reportStart := time.Now()
c.Logger.Debug("Generating daily report", "table", summaryTable) c.Logger.Debug("Generating daily report", "table", summaryTable)
if err := c.generateReport(ctx, summaryTable); err != nil { if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate daily report", "error", err, "table", summaryTable) c.Logger.Warn("failed to generate daily report", "error", err, "table", summaryTable)
return err return err
} }
@@ -1115,6 +1257,74 @@ WHERE "SnapshotTime" >= ? AND "SnapshotTime" < ?`
return agg, snapTimes, rows.Err() return agg, snapTimes, rows.Err()
} }
// buildHourlyCacheLifecycleUnion selects the lifecycle-relevant columns from
// vm_hourly_stats for the [start, end) window, feeding creation/deletion refinement.
func buildHourlyCacheLifecycleUnion(start, end time.Time) string {
return fmt.Sprintf(`
SELECT
"VmId","VmUuid","Name","Vcenter","CreationTime","DeletionTime","SnapshotTime"
FROM vm_hourly_stats
WHERE "SnapshotTime" >= %d AND "SnapshotTime" < %d
`, start.Unix(), end.Unix())
}
// loadNextHourlyCachePresence builds, per vCenter, the set of VMs seen in its
// first hourly-cache snapshot at or after dayEnd (keyed by id:/uuid:/name:),
// and returns the earliest such snapshot time.
func loadNextHourlyCachePresence(ctx context.Context, dbConn *sqlx.DB, dayEnd time.Time) (map[string]map[string]struct{}, int64, error) {
presence := make(map[string]map[string]struct{}, 8)
query := dbConn.Rebind(`
WITH next_by_vcenter AS (
SELECT "Vcenter", MIN("SnapshotTime") AS snapshot_time
FROM vm_hourly_stats
WHERE "SnapshotTime" >= ?
GROUP BY "Vcenter"
)
SELECT h."Vcenter", h."VmId", h."VmUuid", h."Name", n.snapshot_time
FROM next_by_vcenter n
JOIN vm_hourly_stats h
ON h."Vcenter" = n."Vcenter"
AND h."SnapshotTime" = n.snapshot_time
`)
rows, err := dbConn.QueryxContext(ctx, query, dayEnd.Unix())
if err != nil {
return nil, 0, err
}
defer rows.Close()
var minSnapshotTime int64
for rows.Next() {
var (
vcenter string
vmID, vmUUID sql.NullString
name sql.NullString
snapshotTime sql.NullInt64
)
if err := rows.Scan(&vcenter, &vmID, &vmUUID, &name, &snapshotTime); err != nil {
continue
}
if strings.TrimSpace(vcenter) == "" {
continue
}
if snapshotTime.Valid && snapshotTime.Int64 > 0 && (minSnapshotTime == 0 || snapshotTime.Int64 < minSnapshotTime) {
minSnapshotTime = snapshotTime.Int64
}
vcPresence := presence[vcenter]
if vcPresence == nil {
vcPresence = make(map[string]struct{}, 1024)
presence[vcenter] = vcPresence
}
if vmID.Valid && strings.TrimSpace(vmID.String) != "" {
vcPresence["id:"+strings.TrimSpace(vmID.String)] = struct{}{}
}
if vmUUID.Valid && strings.TrimSpace(vmUUID.String) != "" {
vcPresence["uuid:"+strings.TrimSpace(vmUUID.String)] = struct{}{}
}
if name.Valid && strings.TrimSpace(name.String) != "" {
vcPresence["name:"+strings.ToLower(strings.TrimSpace(name.String))] = struct{}{}
}
}
if err := rows.Err(); err != nil {
return nil, 0, err
}
return presence, minSnapshotTime, nil
}
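The presence map is consumed by the deletion-inference loop earlier in this file; a hypothetical lookup helper (ours, not the repository's) shows how the id:/uuid:/name: keys are intended to be matched:

```go
package sketch // hypothetical helper; the real inference code lives elsewhere in this file

import "strings"

// presentInNextSnapshot reports whether any identifier for a VM appears in
// its vCenter's next-snapshot presence set, checking id, then uuid, then
// lowercased name, mirroring the key formats built above.
func presentInNextSnapshot(presence map[string]map[string]struct{}, vcenter, vmID, vmUUID, name string) bool {
	vc := presence[vcenter]
	if vc == nil {
		return false
	}
	keys := make([]string, 0, 3)
	if vmID != "" {
		keys = append(keys, "id:"+vmID)
	}
	if vmUUID != "" {
		keys = append(keys, "uuid:"+vmUUID)
	}
	if name != "" {
		keys = append(keys, "name:"+strings.ToLower(name))
	}
	for _, key := range keys {
		if _, ok := vc[key]; ok {
			return true
		}
	}
	return false
}
```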
func (c *CronTask) insertDailyAggregates(ctx context.Context, table string, agg map[dailyAggKey]*dailyAggVal, totalSamples int, totalSamplesByVcenter map[string]int) error { func (c *CronTask) insertDailyAggregates(ctx context.Context, table string, agg map[dailyAggKey]*dailyAggVal, totalSamples int, totalSamplesByVcenter map[string]int) error {
dbConn := c.Database.DB() dbConn := c.Database.DB()
tx, err := dbConn.Beginx() tx, err := dbConn.Beginx()
+193
View File
@@ -3,6 +3,7 @@ package tasks
import (
"context"
"fmt"
"strconv"
"strings"
"vctp/db"
@@ -18,6 +19,15 @@ func insertHourlyCache(ctx context.Context, dbConn *sqlx.DB, rows []InventorySna
return err
}
driver := strings.ToLower(dbConn.DriverName())
if isPostgresDriver(driver) {
if len(rows) > 0 {
if err := db.EnsureVmHourlyStatsPartitionForSnapshot(ctx, dbConn, rows[0].SnapshotTime); err != nil {
return err
}
}
return insertHourlyCachePostgresMultiRow(ctx, dbConn, rows)
}
conflict := ""
verb := "INSERT INTO"
if driver == "sqlite" {
@@ -73,10 +83,64 @@ func insertHourlyCache(ctx context.Context, dbConn *sqlx.DB, rows []InventorySna
return tx.Commit()
}
func insertHourlyCachePostgresMultiRow(ctx context.Context, dbConn *sqlx.DB, rows []InventorySnapshotRow) error {
cols := []string{
"SnapshotTime", "Vcenter", "VmId", "VmUuid", "Name", "CreationTime", "DeletionTime", "ResourcePool",
"Datacenter", "Cluster", "Folder", "ProvisionedDisk", "VcpuCount", "RamGB", "IsTemplate", "PoweredOn", "SrmPlaceholder",
}
conflict := ` ON CONFLICT ("Vcenter","VmId","SnapshotTime") DO UPDATE SET
"VmUuid"=EXCLUDED."VmUuid",
"Name"=EXCLUDED."Name",
"CreationTime"=EXCLUDED."CreationTime",
"DeletionTime"=EXCLUDED."DeletionTime",
"ResourcePool"=EXCLUDED."ResourcePool",
"Datacenter"=EXCLUDED."Datacenter",
"Cluster"=EXCLUDED."Cluster",
"Folder"=EXCLUDED."Folder",
"ProvisionedDisk"=EXCLUDED."ProvisionedDisk",
"VcpuCount"=EXCLUDED."VcpuCount",
"RamGB"=EXCLUDED."RamGB",
"IsTemplate"=EXCLUDED."IsTemplate",
"PoweredOn"=EXCLUDED."PoweredOn",
"SrmPlaceholder"=EXCLUDED."SrmPlaceholder"`
tx, err := dbConn.BeginTxx(ctx, nil)
if err != nil {
return err
}
maxRows := postgresMaxRowsPerStatement(len(cols))
for start := 0; start < len(rows); start += maxRows {
end := min(start+maxRows, len(rows))
chunk := rows[start:end]
args := make([]any, 0, len(chunk)*len(cols))
for _, row := range chunk {
args = append(args,
row.SnapshotTime, row.Vcenter, row.VmId, row.VmUuid, row.Name, row.CreationTime, row.DeletionTime, row.ResourcePool,
row.Datacenter, row.Cluster, row.Folder, row.ProvisionedDisk, row.VcpuCount, row.RamGB, row.IsTemplate, row.PoweredOn, row.SrmPlaceholder,
)
}
stmt := buildPostgresMultiRowInsertSQL("vm_hourly_stats", cols, len(chunk), conflict)
if _, err := tx.ExecContext(ctx, stmt, args...); err != nil {
tx.Rollback()
return err
}
}
return tx.Commit()
}
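Aside: to make the chunk sizing concrete, a hypothetical helper showing the plan this insert follows. With the 17-column cache insert, PostgreSQL's 65,535 bind-parameter ceiling permits 65535/17 = 3,855 rows per statement, so a 10,000-row batch executes as chunks of 3,855, 3,855, and 2,290 (`postgresMaxRowsPerStatement` is defined further down in this file):

```go
package tasks

// chunkSizes (hypothetical) reports how many rows land in each INSERT when a
// batch is split to stay under PostgreSQL's bind-parameter ceiling.
func chunkSizes(totalRows, colCount int) []int {
	maxRows := postgresMaxRowsPerStatement(colCount)
	sizes := make([]int, 0, totalRows/maxRows+1)
	for start := 0; start < totalRows; start += maxRows {
		sizes = append(sizes, min(start+maxRows, totalRows)-start)
	}
	return sizes
}
```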
func insertHourlyBatch(ctx context.Context, dbConn *sqlx.DB, tableName string, rows []InventorySnapshotRow) error {
if len(rows) == 0 {
return nil
}
if _, err := db.SafeTableName(tableName); err != nil {
return err
}
driver := strings.ToLower(dbConn.DriverName())
if isPostgresDriver(driver) {
return insertHourlyBatchPostgresMultiRow(ctx, dbConn, tableName, rows)
}
tx, err := dbConn.BeginTxx(ctx, nil)
if err != nil {
return err
}
@@ -168,6 +232,135 @@ func insertHourlyBatch(ctx context.Context, dbConn *sqlx.DB, tableName string, r
return tx.Commit()
}
func insertHourlyBatchPostgresMultiRow(ctx context.Context, dbConn *sqlx.DB, tableName string, rows []InventorySnapshotRow) error {
baseCols := []string{
"InventoryId", "Name", "Vcenter", "VmId", "EventKey", "CloudId", "CreationTime", "DeletionTime",
"ResourcePool", "Datacenter", "Cluster", "Folder", "ProvisionedDisk", "VcpuCount",
"RamGB", "IsTemplate", "PoweredOn", "SrmPlaceholder", "VmUuid", "SnapshotTime",
}
err := execHourlySnapshotInsertPostgres(ctx, dbConn, tableName, baseCols, rows, false)
if err == nil {
return nil
}
if !isLegacyIsPresentError(err) {
return err
}
withLegacy := append(append([]string{}, baseCols...), "IsPresent")
if legacyErr := execHourlySnapshotInsertPostgres(ctx, dbConn, tableName, withLegacy, rows, true); legacyErr != nil {
return legacyErr
}
return nil
}
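Design note: the batch is attempted optimistically without the legacy `IsPresent` column; only when the driver reports an error naming that column (see `isLegacyIsPresentError` below) is the batch retried with the column included, presumably to avoid a schema probe on every batch while still supporting hourly tables created before the column was retired.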
func execHourlySnapshotInsertPostgres(ctx context.Context, dbConn *sqlx.DB, tableName string, cols []string, rows []InventorySnapshotRow, includeLegacyIsPresent bool) error {
tx, err := dbConn.BeginTxx(ctx, nil)
if err != nil {
return err
}
maxRows := postgresMaxRowsPerStatement(len(cols))
for start := 0; start < len(rows); start += maxRows {
end := min(start+maxRows, len(rows))
chunk := rows[start:end]
args := make([]any, 0, len(chunk)*len(cols))
for _, row := range chunk {
args = append(args,
row.InventoryId,
row.Name,
row.Vcenter,
row.VmId,
row.EventKey,
row.CloudId,
row.CreationTime,
row.DeletionTime,
row.ResourcePool,
row.Datacenter,
row.Cluster,
row.Folder,
row.ProvisionedDisk,
row.VcpuCount,
row.RamGB,
row.IsTemplate,
row.PoweredOn,
row.SrmPlaceholder,
row.VmUuid,
row.SnapshotTime,
)
if includeLegacyIsPresent {
args = append(args, "TRUE")
}
}
stmt := buildPostgresMultiRowInsertSQL(tableName, cols, len(chunk), "")
if _, err := tx.ExecContext(ctx, stmt, args...); err != nil {
tx.Rollback()
return err
}
}
return tx.Commit()
}
func isPostgresDriver(driver string) bool {
switch strings.ToLower(strings.TrimSpace(driver)) {
case "pgx", "postgres":
return true
default:
return false
}
}
func postgresMaxRowsPerStatement(colCount int) int {
if colCount <= 0 {
return 1
}
const maxBindParams = 65535
rows := maxBindParams / colCount
if rows <= 0 {
return 1
}
return rows
}
func buildPostgresMultiRowInsertSQL(tableName string, cols []string, rowCount int, suffix string) string {
if rowCount <= 0 {
return ""
}
var b strings.Builder
b.WriteString(`INSERT INTO `)
b.WriteString(tableName)
b.WriteString(` ("`)
b.WriteString(strings.Join(cols, `","`))
b.WriteString(`") VALUES `)
param := 1
for row := 0; row < rowCount; row++ {
if row > 0 {
b.WriteString(`,`)
}
b.WriteString(`(`)
for col := 0; col < len(cols); col++ {
if col > 0 {
b.WriteString(`,`)
}
b.WriteString(`$`)
b.WriteString(strconv.Itoa(param))
param++
}
b.WriteString(`)`)
}
if suffix != "" {
b.WriteString(suffix)
}
return b.String()
}
func isLegacyIsPresentError(err error) bool {
if err == nil {
return false
}
return strings.Contains(strings.ToLower(err.Error()), "ispresent")
}
func dropSnapshotTable(ctx context.Context, dbConn *sqlx.DB, table string) error {
if _, err := db.SafeTableName(table); err != nil {
return err
+53
View File
@@ -0,0 +1,53 @@
package tasks
import "testing"
func TestPostgresMaxRowsPerStatement(t *testing.T) {
tests := []struct {
name string
cols int
expect int
}{
{name: "zero columns", cols: 0, expect: 1},
{name: "hourly cache columns", cols: 17, expect: 3855},
{name: "hourly snapshot columns", cols: 20, expect: 3276},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := postgresMaxRowsPerStatement(tc.cols)
if got != tc.expect {
t.Fatalf("unexpected max rows: cols=%d got=%d want=%d", tc.cols, got, tc.expect)
}
})
}
}
func TestBuildPostgresMultiRowInsertSQL(t *testing.T) {
got := buildPostgresMultiRowInsertSQL("vm_hourly_stats", []string{"A", "B"}, 2, "")
want := `INSERT INTO vm_hourly_stats ("A","B") VALUES ($1,$2),($3,$4)`
if got != want {
t.Fatalf("unexpected SQL\nwant: %s\ngot: %s", want, got)
}
withSuffix := buildPostgresMultiRowInsertSQL("vm_hourly_stats", []string{"A"}, 1, ` ON CONFLICT ("A") DO NOTHING`)
wantSuffix := `INSERT INTO vm_hourly_stats ("A") VALUES ($1) ON CONFLICT ("A") DO NOTHING`
if withSuffix != wantSuffix {
t.Fatalf("unexpected SQL with suffix\nwant: %s\ngot: %s", wantSuffix, withSuffix)
}
}
func TestIsLegacyIsPresentError(t *testing.T) {
if !isLegacyIsPresentError(assertErr(`null value in column "IsPresent" violates not-null constraint`)) {
t.Fatal("expected legacy IsPresent error to be detected")
}
if isLegacyIsPresentError(assertErr("duplicate key value violates unique constraint")) {
t.Fatal("expected non-IsPresent errors to be ignored")
}
}
type testErr string
func (e testErr) Error() string { return string(e) }
func assertErr(msg string) error { return testErr(msg) }
+16 -12
View File
@@ -247,13 +247,15 @@ func updateDeletionTimeInHourlyCache(ctx context.Context, dbConn *sqlx.DB, vcent
}
// markMissingFromPrevious marks VMs that were present in the previous snapshot but missing now.
// When updateCompatSnapshot is true, legacy hourly snapshot tables are updated as well.
func (c *CronTask) markMissingFromPrevious(ctx context.Context, dbConn *sqlx.DB, prevTable string, vcenter string, snapshotTime time.Time,
currentByID map[string]InventorySnapshotRow, currentByUuid map[string]struct{}, currentByName map[string]struct{},
-invByID map[string]queries.Inventory, invByUuid map[string]queries.Inventory, invByName map[string]queries.Inventory) (int, bool) {
invByID map[string]queries.Inventory, invByUuid map[string]queries.Inventory, invByName map[string]queries.Inventory, updateCompatSnapshot bool) (int, bool) {
if err := db.ValidateTableName(prevTable); err != nil {
return 0, false
}
prevSnapUnix, _ := parseSnapshotTime(prevTable)
type prevRow struct {
VmId sql.NullString `db:"VmId"`
@@ -342,17 +344,19 @@ func (c *CronTask) markMissingFromPrevious(ctx context.Context, dbConn *sqlx.DB,
if err := db.MarkVmDeletedWithDetails(ctx, dbConn, vcenter, inv.VmId.String, vmUUID, inv.Name, inv.Cluster.String, delTime.Int64); err != nil {
c.Logger.Warn("failed to mark lifecycle cache deleted from previous snapshot", "error", err, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter)
}
-if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, prevTable, vcenter, inv.VmId.String, vmUUID, inv.Name, delTime.Int64); err != nil {
-c.Logger.Warn("failed to update hourly snapshot deletion time", "error", err, "table", prevTable, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter)
-} else if rowsAffected > 0 {
-tableUpdated = true
-c.Logger.Debug("updated hourly snapshot deletion time", "table", prevTable, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter, "deletion_time", delTime.Int64)
-if snapUnix, ok := parseSnapshotTime(prevTable); ok {
-if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, vcenter, inv.VmId.String, vmUUID, inv.Name, snapUnix, delTime.Int64); err != nil {
-c.Logger.Warn("failed to update hourly cache deletion time", "error", err, "snapshot_time", snapUnix, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter)
-} else if cacheRows > 0 {
-c.Logger.Debug("updated hourly cache deletion time", "snapshot_time", snapUnix, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter, "deletion_time", delTime.Int64)
-}
-}
-}
if prevSnapUnix > 0 {
if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, vcenter, inv.VmId.String, vmUUID, inv.Name, prevSnapUnix, delTime.Int64); err != nil {
c.Logger.Warn("failed to update hourly cache deletion time", "error", err, "snapshot_time", prevSnapUnix, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter)
} else if cacheRows > 0 {
c.Logger.Debug("updated hourly cache deletion time", "snapshot_time", prevSnapUnix, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter, "deletion_time", delTime.Int64)
}
}
if updateCompatSnapshot {
if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, prevTable, vcenter, inv.VmId.String, vmUUID, inv.Name, delTime.Int64); err != nil {
c.Logger.Warn("failed to update hourly snapshot deletion time", "error", err, "table", prevTable, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter)
} else if rowsAffected > 0 {
tableUpdated = true
c.Logger.Debug("updated hourly snapshot deletion time", "table", prevTable, "vm_id", inv.VmId.String, "vm_uuid", vmUUID, "vcenter", vcenter, "deletion_time", delTime.Int64)
}
}
c.Logger.Debug("Detected VM missing compared to previous snapshot", "name", inv.Name, "vm_id", inv.VmId.String, "vm_uuid", inv.VmUuid.String, "vcenter", vcenter, "snapshot_time", snapshotTime, "prev_table", prevTable)
+14 -12
View File
@@ -29,7 +29,7 @@ func presenceKeys(vmID, vmUUID, name string) []string {
// backfillLifecycleDeletionsToday looks for VMs in the lifecycle cache that are not in the current inventory,
// have no DeletedAt, and determines their deletion time from today's hourly snapshots, optionally checking the next snapshot (next day) to confirm.
// It returns any hourly snapshot tables that were updated with deletion times.
-func backfillLifecycleDeletionsToday(ctx context.Context, logger *slog.Logger, dbConn *sqlx.DB, vcenter string, snapshotTime time.Time, present map[string]InventorySnapshotRow) ([]string, error) {
func backfillLifecycleDeletionsToday(ctx context.Context, logger *slog.Logger, dbConn *sqlx.DB, vcenter string, snapshotTime time.Time, present map[string]InventorySnapshotRow, updateCompatSnapshot bool) ([]string, error) {
dayStart := truncateDate(snapshotTime)
dayEnd := dayStart.Add(24 * time.Hour)
@@ -68,17 +68,19 @@ func backfillLifecycleDeletionsToday(ctx context.Context, logger *slog.Logger, d
continue
}
if lastSeenTable != "" {
-if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, lastSeenTable, vcenter, cand.vmID, cand.vmUUID, cand.name, deletion); err != nil {
-logger.Warn("lifecycle backfill failed to update hourly snapshot deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "cluster", cand.cluster, "table", lastSeenTable, "deletion", deletion, "error", err)
-} else if rowsAffected > 0 {
-updatedTables[lastSeenTable] = struct{}{}
-logger.Debug("lifecycle backfill updated hourly snapshot deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "cluster", cand.cluster, "table", lastSeenTable, "deletion", deletion)
-if snapUnix, ok := parseSnapshotTime(lastSeenTable); ok {
-if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, vcenter, cand.vmID, cand.vmUUID, cand.name, snapUnix, deletion); err != nil {
-logger.Warn("lifecycle backfill failed to update hourly cache deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "snapshot_time", snapUnix, "deletion", deletion, "error", err)
-} else if cacheRows > 0 {
-logger.Debug("lifecycle backfill updated hourly cache deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "snapshot_time", snapUnix, "deletion", deletion)
-}
-}
-}
if snapUnix, ok := parseSnapshotTime(lastSeenTable); ok {
if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, vcenter, cand.vmID, cand.vmUUID, cand.name, snapUnix, deletion); err != nil {
logger.Warn("lifecycle backfill failed to update hourly cache deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "snapshot_time", snapUnix, "deletion", deletion, "error", err)
} else if cacheRows > 0 {
logger.Debug("lifecycle backfill updated hourly cache deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "snapshot_time", snapUnix, "deletion", deletion)
}
}
if updateCompatSnapshot {
if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, lastSeenTable, vcenter, cand.vmID, cand.vmUUID, cand.name, deletion); err != nil {
logger.Warn("lifecycle backfill failed to update hourly snapshot deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "cluster", cand.cluster, "table", lastSeenTable, "deletion", deletion, "error", err)
} else if rowsAffected > 0 {
updatedTables[lastSeenTable] = struct{}{}
logger.Debug("lifecycle backfill updated hourly snapshot deletion time", "vcenter", vcenter, "vm_id", cand.vmID, "vm_uuid", cand.vmUUID, "name", cand.name, "cluster", cand.cluster, "table", lastSeenTable, "deletion", deletion)
}
}
}
+245 -80
View File
@@ -121,6 +121,7 @@ func (c *CronTask) RunVcenterSnapshotHourly(ctx context.Context, logger *slog.Lo
if err := c.Settings.ReadYMLSettings(); err != nil {
return err
}
db.SetVmHourlyStatsPostgresPartitioningEnabled(c.postgresVmHourlyPartitioningEnabled())
ctx = settings.MarkReloadedInContext(ctx, c.Settings)
if c.FirstHourlySnapshotCheck {
@@ -143,15 +144,20 @@ func (c *CronTask) RunVcenterSnapshotHourly(ctx context.Context, logger *slog.Lo
c.FirstHourlySnapshotCheck = false
}
-tableName, err := hourlyInventoryTableName(startTime)
-if err != nil {
-return err
-}
dbConn := c.Database.DB()
db.ApplySQLiteTuning(ctx, dbConn)
-if err := ensureDailyInventoryTable(ctx, dbConn, tableName); err != nil {
-return err
-}
compatMode := c.snapshotTableCompatModeEnabled()
tableName := ""
if compatMode {
tableName, err = hourlyInventoryTableName(startTime)
if err != nil {
return err
}
if err := ensureDailyInventoryTable(ctx, dbConn, tableName); err != nil {
return err
}
} else {
c.Logger.Info("Snapshot table compatibility mode disabled; writing canonical hourly cache only")
}
var wg sync.WaitGroup
@@ -202,17 +208,21 @@ func (c *CronTask) RunVcenterSnapshotHourly(ctx context.Context, logger *slog.Lo
return err
}
-rowCount, err := db.TableRowCount(ctx, dbConn, tableName)
-if err != nil {
-c.Logger.Warn("unable to count hourly snapshot rows", "error", err, "table", tableName)
-rowCount = -1
-}
-if err := report.RegisterSnapshot(ctx, c.Database, "hourly", tableName, startTime, rowCount); err != nil {
-c.Logger.Warn("failed to register hourly snapshot", "error", err, "table", tableName)
-}
rowCount := int64(-1)
if tableName != "" {
var countErr error
rowCount, countErr = db.TableRowCount(ctx, dbConn, tableName)
if countErr != nil {
c.Logger.Warn("unable to count hourly snapshot rows", "error", countErr, "table", tableName)
rowCount = -1
}
if err := report.RegisterSnapshot(ctx, c.Database, "hourly", tableName, startTime, rowCount); err != nil {
c.Logger.Warn("failed to register hourly snapshot", "error", err, "table", tableName)
}
}
metrics.RecordHourlySnapshot(startTime, rowCount, err)
-var deferredTables []string
deferredTables := make([]string, 0, 8)
deferredReportTables.Range(func(key, _ any) bool {
name, ok := key.(string)
if ok && strings.TrimSpace(name) != "" && name != tableName {
@@ -220,17 +230,31 @@ func (c *CronTask) RunVcenterSnapshotHourly(ctx context.Context, logger *slog.Lo
}
return true
})
-sort.Strings(deferredTables)
-for _, reportTable := range deferredTables {
-if err := c.generateReport(ctx, reportTable); err != nil {
-c.Logger.Warn("failed to regenerate deferred hourly report after deletions", "error", err, "table", reportTable)
-} else {
-c.Logger.Debug("Regenerated deferred hourly report after deletions", "table", reportTable)
-}
-}
-if err := c.generateReport(ctx, tableName); err != nil {
-c.Logger.Warn("failed to generate hourly report", "error", err, "table", tableName)
-}
if tableName != "" {
deferredTables = append(deferredTables, tableName)
}
deferredTables = normalizeReportTables(deferredTables)
reportStageStart := time.Now()
reportMode := "sync"
if c.asyncReportGenerationEnabled() {
reportMode = "async"
c.queueReportGeneration(deferredTables)
} else {
for _, reportTable := range deferredTables {
if err := c.generateReport(ctx, reportTable); err != nil {
c.Logger.Warn("failed to regenerate deferred hourly report after deletions", "error", err, "table", reportTable)
} else {
c.Logger.Debug("Regenerated deferred hourly report after deletions", "table", reportTable)
}
}
}
c.Logger.Info(
"Hourly snapshot stage complete",
"stage", "report_generation",
"mode", reportMode,
"tables", len(deferredTables),
"duration", time.Since(reportStageStart),
)
c.Logger.Debug("Finished hourly vcenter snapshot", "vcenter_count", len(c.Settings.Values.Settings.VcenterAddresses), "table", tableName, "row_count", rowCount)
return nil
@@ -631,6 +655,13 @@ func intWithDefault(value int, fallback int) int {
return value
}
func boolWithDefault(value *bool, fallback bool) bool {
if value == nil {
return fallback
}
return *value
}
func durationFromSeconds(seconds int, fallback time.Duration) time.Duration {
if seconds > 0 {
return time.Duration(seconds) * time.Second
@@ -665,6 +696,96 @@ func (c *CronTask) reportsDir() string {
return "/var/lib/vctp/reports" return "/var/lib/vctp/reports"
} }
func (c *CronTask) captureWriteBatchSize() int {
if c.Settings != nil && c.Settings.Values != nil {
return intWithDefault(c.Settings.Values.Settings.CaptureWriteBatchSize, 1000)
}
return 1000
}
func (c *CronTask) snapshotTableCompatModeEnabled() bool {
if c.Settings != nil && c.Settings.Values != nil {
return boolWithDefault(c.Settings.Values.Settings.SnapshotTableCompatMode, true)
}
return true
}
func (c *CronTask) asyncReportGenerationEnabled() bool {
if c.Settings != nil && c.Settings.Values != nil {
return boolWithDefault(c.Settings.Values.Settings.AsyncReportGeneration, true)
}
return true
}
func (c *CronTask) postgresVmHourlyPartitioningEnabled() bool {
if c.Settings != nil && c.Settings.Values != nil {
return boolWithDefault(c.Settings.Values.Settings.PostgresVmHourlyPartitioning, false)
}
return false
}
func (c *CronTask) scheduledAggregationEngine() string {
if c.Settings == nil || c.Settings.Values == nil {
return "go"
}
engine := strings.ToLower(strings.TrimSpace(c.Settings.Values.Settings.ScheduledAggregationEngine))
if engine == "" {
return "go"
}
switch engine {
case "go", "sql":
return engine
default:
return "go"
}
}
func normalizeReportTables(tables []string) []string {
if len(tables) == 0 {
return nil
}
seen := make(map[string]struct{}, len(tables))
out := make([]string, 0, len(tables))
for _, table := range tables {
trimmed := strings.TrimSpace(table)
if trimmed == "" {
continue
}
if _, ok := seen[trimmed]; ok {
continue
}
seen[trimmed] = struct{}{}
out = append(out, trimmed)
}
sort.Strings(out)
return out
}
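Aside: a quick illustration of the normalization; the table names follow the `inventory_hourly_<epoch>` and `inventory_daily_summary_<yyyymmdd>` patterns used elsewhere in this file, and the values are made up. This would live in a `_test.go` file (assumes `import "fmt"`):

```go
func Example_normalizeReportTables() {
	got := normalizeReportTables([]string{
		"inventory_hourly_1745100000",
		" inventory_hourly_1745100000 ", // duplicate once trimmed
		"",                              // blank entries are dropped
		"inventory_daily_summary_20260419",
	})
	fmt.Println(got)
	// Output: [inventory_daily_summary_20260419 inventory_hourly_1745100000]
}
```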
func (c *CronTask) queueReportGeneration(tables []string) {
tables = normalizeReportTables(tables)
if len(tables) == 0 {
return
}
c.Logger.Info("Queueing async report generation", "tables", len(tables))
go func(reportTables []string) {
ctx, cancel := context.WithTimeout(context.Background(), 60*time.Minute)
defer cancel()
for _, reportTable := range reportTables {
if err := c.generateReport(ctx, reportTable); err != nil {
c.Logger.Warn("failed to generate async report", "table", reportTable, "error", err)
}
}
}(append([]string(nil), tables...))
}
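Design note: the worker goroutine runs on a fresh `context.Background()` with a 60-minute cap, so report generation neither blocks the snapshot run nor dies with the caller's context, and the input slice is copied (`append([]string(nil), tables...)`) so the goroutine never aliases the caller's backing array.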
func (c *CronTask) generateReportWithPolicy(ctx context.Context, table string) error {
if c.asyncReportGenerationEnabled() {
c.queueReportGeneration([]string{table})
return nil
}
return c.generateReport(ctx, table)
}
func (c *CronTask) generateReport(ctx context.Context, tableName string) error {
dest := c.reportsDir()
start := time.Now()
@@ -1332,6 +1453,7 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
log := c.Logger.With("vcenter", url)
ctx = db.WithLoggerContext(ctx, log)
started := time.Now()
captureStageStart := time.Now()
log.Debug("connecting to vcenter for hourly snapshot", "url", url) log.Debug("connecting to vcenter for hourly snapshot", "url", url)
vc, resources, cleanup, err := c.initVcenterResources(ctx, log, url, startTime, started) vc, resources, cleanup, err := c.initVcenterResources(ctx, log, url, startTime, started)
if err != nil { if err != nil {
@@ -1365,12 +1487,54 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
for _, row := range presentSnapshots {
batch = append(batch, row)
}
log.Info(
"Hourly snapshot stage complete",
"stage", "capture",
"duration", time.Since(captureStageStart),
"present_rows", len(presentSnapshots),
"inventory_rows", len(inventoryRows),
"batch_rows", len(batch),
)
log.Debug("inserting hourly snapshot batch", "vcenter", url, "rows", len(batch))
writeBatchSize := c.captureWriteBatchSize()
for start := 0; start < len(batch); start += writeBatchSize {
end := min(start+writeBatchSize, len(batch))
chunk := batch[start:end]
if err := insertHourlyCache(ctx, dbConn, chunk); err != nil {
log.Warn("failed to insert hourly cache rows", "vcenter", url, "error", err, "chunk_start", start, "chunk_size", len(chunk))
}
if tableName != "" {
if err := insertHourlyBatch(ctx, dbConn, tableName, chunk); err != nil {
metrics.RecordVcenterSnapshot(url, time.Since(started), totals.VmCount, err)
if upErr := db.UpsertSnapshotRun(ctx, c.Database.DB(), url, startTime, false, err.Error()); upErr != nil {
log.Warn("failed to record snapshot run", "url", url, "error", upErr)
}
return err
}
}
}
// Record per-vCenter totals snapshot.
totalsStageStart := time.Now()
if err := db.InsertVcenterTotals(ctx, dbConn, url, startTime, totals.VmCount, totals.VcpuTotal, totals.RamTotal); err != nil {
slog.Warn("failed to insert vcenter totals", "vcenter", url, "snapshot_time", startTime.Unix(), "error", err)
}
log.Info(
"Hourly snapshot stage complete",
"stage", "totals_refresh",
"duration", time.Since(totalsStageStart),
"vm_count", totals.VmCount,
)
log.Debug("checking inventory for missing VMs") log.Debug("checking inventory for missing VMs")
reconcileStageStart := time.Now()
missingCount, deletionsMarked, candidates := prepareDeletionCandidates(ctx, log, dbConn, q, url, inventoryRows, presentSnapshots, presentByUuid, presentByName, startTime)
newCount := 0
prevTableName := ""
reportTables := make(map[string]struct{})
compatSnapshotUpdates := strings.TrimSpace(tableName) != ""
// If deletions detected, refine deletion time using vCenter events in a small window.
if missingCount > 0 {
@@ -1461,18 +1625,20 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
if name == "" { if name == "" {
name = snapRow.Name name = snapRow.Name
} }
if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, snapTable, url, cand.vmID, vmUUID, name, delTs.Int64); err != nil { if snapUnix, ok := parseSnapshotTime(snapTable); ok {
log.Warn("failed to update hourly snapshot deletion time from event", "table", snapTable, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "error", err) if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, url, cand.vmID, vmUUID, name, snapUnix, delTs.Int64); err != nil {
} else if rowsAffected > 0 { log.Warn("failed to update hourly cache deletion time from event", "snapshot_time", snapUnix, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "error", err)
reportTables[snapTable] = struct{}{} } else if cacheRows > 0 {
deletionsMarked = true log.Debug("updated hourly cache deletion time from event", "snapshot_time", snapUnix, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "event_time", t)
log.Debug("updated hourly snapshot deletion time from event", "table", snapTable, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "event_time", t) }
if snapUnix, ok := parseSnapshotTime(snapTable); ok { }
if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, url, cand.vmID, vmUUID, name, snapUnix, delTs.Int64); err != nil { if compatSnapshotUpdates {
log.Warn("failed to update hourly cache deletion time from event", "snapshot_time", snapUnix, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "error", err) if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, snapTable, url, cand.vmID, vmUUID, name, delTs.Int64); err != nil {
} else if cacheRows > 0 { log.Warn("failed to update hourly snapshot deletion time from event", "table", snapTable, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "error", err)
log.Debug("updated hourly cache deletion time from event", "snapshot_time", snapUnix, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "event_time", t) } else if rowsAffected > 0 {
} reportTables[snapTable] = struct{}{}
deletionsMarked = true
log.Debug("updated hourly snapshot deletion time from event", "table", snapTable, "vm_id", cand.vmID, "vm_uuid", vmUUID, "vcenter", url, "event_time", t)
} }
} }
} }
@@ -1496,27 +1662,9 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
}
}
log.Debug("inserting hourly snapshot batch", "vcenter", url, "rows", len(batch))
if err := insertHourlyCache(ctx, dbConn, batch); err != nil {
log.Warn("failed to insert hourly cache rows", "vcenter", url, "error", err)
}
if err := insertHourlyBatch(ctx, dbConn, tableName, batch); err != nil {
metrics.RecordVcenterSnapshot(url, time.Since(started), totals.VmCount, err)
if upErr := db.UpsertSnapshotRun(ctx, c.Database.DB(), url, startTime, false, err.Error()); upErr != nil {
log.Warn("failed to record snapshot run", "url", url, "error", upErr)
}
return err
}
// Record per-vCenter totals snapshot.
if err := db.InsertVcenterTotals(ctx, dbConn, url, startTime, totals.VmCount, totals.VcpuTotal, totals.RamTotal); err != nil {
slog.Warn("failed to insert vcenter totals", "vcenter", url, "snapshot_time", startTime.Unix(), "error", err)
}
// Discover previous snapshots once per run (serial) to avoid concurrent probes across vCenters.
var prevTableTouched bool
-prevTableName, newCount, missingCount, prevTableTouched = c.compareWithPreviousSnapshot(ctx, dbConn, url, startTime, presentSnapshots, presentByUuid, presentByName, inventoryByVmID, inventoryByUuid, inventoryByName, missingCount)
prevTableName, newCount, missingCount, prevTableTouched = c.compareWithPreviousSnapshot(ctx, dbConn, url, startTime, presentSnapshots, presentByUuid, presentByName, inventoryByVmID, inventoryByUuid, inventoryByName, missingCount, compatSnapshotUpdates)
if prevTableTouched && prevTableName != "" {
reportTables[prevTableName] = struct{}{}
deletionsMarked = true
@@ -1527,15 +1675,6 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
// Fallback: locate a previous table only if we didn't already find one.
if prevTableName == "" {
if prevTable, err := latestHourlySnapshotBefore(ctx, dbConn, startTime, loggerFromCtx(ctx, c.Logger)); err == nil && prevTable != "" {
-moreMissing, tableUpdated := c.markMissingFromPrevious(ctx, dbConn, prevTable, url, startTime, presentSnapshots, presentByUuid, presentByName, inventoryByVmID, inventoryByUuid, inventoryByName)
-if moreMissing > 0 {
-missingCount += moreMissing
-}
-if tableUpdated {
-reportTables[prevTable] = struct{}{}
-deletionsMarked = true
-}
-// Reuse this table name for later snapshot lookups when correlating deletion events.
prevTableName = prevTable
}
}
@@ -1599,18 +1738,20 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
tableToUpdate = prevTableName
}
if tableToUpdate != "" {
-if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, tableToUpdate, url, vmID, inv.VmUuid.String, inv.Name, delTs.Int64); err != nil {
-c.Logger.Warn("count-drop: failed to update hourly snapshot deletion time from event", "table", tableToUpdate, "vm_id", vmID, "vcenter", url, "error", err)
-} else if rowsAffected > 0 {
-reportTables[tableToUpdate] = struct{}{}
-deletionsMarked = true
-c.Logger.Debug("count-drop: updated hourly snapshot deletion time from event", "table", tableToUpdate, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "event_time", t)
-if snapUnix, ok := parseSnapshotTime(tableToUpdate); ok {
-if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, url, vmID, inv.VmUuid.String, inv.Name, snapUnix, delTs.Int64); err != nil {
-c.Logger.Warn("count-drop: failed to update hourly cache deletion time", "snapshot_time", snapUnix, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "error", err)
-} else if cacheRows > 0 {
-c.Logger.Debug("count-drop: updated hourly cache deletion time", "snapshot_time", snapUnix, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "event_time", t)
-}
-}
-}
if snapUnix, ok := parseSnapshotTime(tableToUpdate); ok {
if cacheRows, err := updateDeletionTimeInHourlyCache(ctx, dbConn, url, vmID, inv.VmUuid.String, inv.Name, snapUnix, delTs.Int64); err != nil {
c.Logger.Warn("count-drop: failed to update hourly cache deletion time", "snapshot_time", snapUnix, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "error", err)
} else if cacheRows > 0 {
c.Logger.Debug("count-drop: updated hourly cache deletion time", "snapshot_time", snapUnix, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "event_time", t)
}
}
if compatSnapshotUpdates {
if rowsAffected, err := updateDeletionTimeInSnapshot(ctx, dbConn, tableToUpdate, url, vmID, inv.VmUuid.String, inv.Name, delTs.Int64); err != nil {
c.Logger.Warn("count-drop: failed to update hourly snapshot deletion time from event", "table", tableToUpdate, "vm_id", vmID, "vcenter", url, "error", err)
} else if rowsAffected > 0 {
reportTables[tableToUpdate] = struct{}{}
deletionsMarked = true
c.Logger.Debug("count-drop: updated hourly snapshot deletion time from event", "table", tableToUpdate, "vm_id", vmID, "vm_uuid", inv.VmUuid.String, "vcenter", url, "event_time", t)
}
}
}
}
@@ -1621,7 +1762,7 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
}
// Backfill lifecycle deletions for VMs missing from inventory and without DeletedAt.
-if backfillTables, err := backfillLifecycleDeletionsToday(ctx, log, dbConn, url, startTime, presentSnapshots); err != nil {
if backfillTables, err := backfillLifecycleDeletionsToday(ctx, log, dbConn, url, startTime, presentSnapshots, compatSnapshotUpdates); err != nil {
log.Warn("failed to backfill lifecycle deletions for today", "vcenter", url, "error", err)
} else if len(backfillTables) > 0 {
for _, table := range backfillTables {
@@ -1629,6 +1770,14 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
}
deletionsMarked = true
}
log.Info(
"Hourly snapshot stage complete",
"stage", "reconcile",
"duration", time.Since(reconcileStageStart),
"missing_marked", missingCount,
"created_since_prev", newCount,
"tables_touched", len(reportTables),
)
log.Info("Hourly snapshot summary", log.Info("Hourly snapshot summary",
"vcenter", url, "vcenter", url,
@@ -1644,25 +1793,40 @@ func (c *CronTask) captureHourlySnapshotForVcenter(ctx context.Context, startTim
if upErr := db.UpsertSnapshotRun(ctx, c.Database.DB(), url, startTime, true, ""); upErr != nil {
log.Warn("failed to record snapshot run", "url", url, "error", upErr)
}
reportStageStart := time.Now()
queuedReports := 0
generatedReports := 0
if deletionsMarked {
-if len(reportTables) == 0 {
if len(reportTables) == 0 && strings.TrimSpace(tableName) != "" {
reportTables[tableName] = struct{}{}
}
if deferredReportTables != nil {
for reportTable := range reportTables {
deferredReportTables.Store(reportTable, struct{}{})
queuedReports++
}
log.Debug("Queued hourly report regeneration after deletions", "tables", len(reportTables))
} else {
for reportTable := range reportTables {
-if err := c.generateReport(ctx, reportTable); err != nil {
if err := c.generateReportWithPolicy(ctx, reportTable); err != nil {
log.Warn("failed to regenerate hourly report after deletions", "error", err, "table", reportTable)
} else {
generatedReports++
log.Debug("Regenerated hourly report after deletions", "table", reportTable)
}
}
}
}
log.Info(
"Hourly snapshot stage complete",
"stage", "report_generation",
"duration", time.Since(reportStageStart),
"deletions_marked", deletionsMarked,
"tables", len(reportTables),
"queued_tables", queuedReports,
"generated_tables", generatedReports,
"deferred", deferredReportTables != nil,
)
return nil
}
@@ -1680,6 +1844,7 @@ func (c *CronTask) compareWithPreviousSnapshot(
inventoryByUuid map[string]queries.Inventory,
inventoryByName map[string]queries.Inventory,
missingCount int,
updateCompatSnapshot bool,
) (string, int, int, bool) {
prevTableName, prevTableErr := latestHourlySnapshotBefore(ctx, dbConn, startTime, loggerFromCtx(ctx, c.Logger))
if prevTableErr != nil {
@@ -1691,7 +1856,7 @@ func (c *CronTask) compareWithPreviousSnapshot(
newCount := 0
prevTableTouched := false
if prevTableName != "" {
-moreMissing, tableUpdated := c.markMissingFromPrevious(ctx, dbConn, prevTableName, url, startTime, presentSnapshots, presentByUuid, presentByName, inventoryByVmID, inventoryByUuid, inventoryByName)
moreMissing, tableUpdated := c.markMissingFromPrevious(ctx, dbConn, prevTableName, url, startTime, presentSnapshots, presentByUuid, presentByName, inventoryByVmID, inventoryByUuid, inventoryByName, updateCompatSnapshot)
missingCount += moreMissing
if tableUpdated {
prevTableTouched = true
+192 -49
View File
@@ -32,15 +32,15 @@ func (c *CronTask) RunVcenterMonthlyAggregate(ctx context.Context, logger *slog.
now := time.Now()
firstOfThisMonth := time.Date(now.Year(), now.Month(), 1, 0, 0, 0, 0, now.Location())
targetMonth := firstOfThisMonth.AddDate(0, -1, 0)
-return c.aggregateMonthlySummary(jobCtx, targetMonth, false)
return c.aggregateMonthlySummaryWithMode(jobCtx, targetMonth, false, true)
})
}
func (c *CronTask) AggregateMonthlySummary(ctx context.Context, month time.Time, force bool) error {
-return c.aggregateMonthlySummary(ctx, month, force)
return c.aggregateMonthlySummaryWithMode(ctx, month, force, false)
}
-func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time.Time, force bool) error {
func (c *CronTask) aggregateMonthlySummaryWithMode(ctx context.Context, targetMonth time.Time, force bool, scheduled bool) error {
jobStart := time.Now()
if err := report.EnsureSnapshotRegistry(ctx, c.Database); err != nil {
return err
@@ -48,11 +48,14 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
granularity := strings.ToLower(strings.TrimSpace(c.Settings.Values.Settings.MonthlyAggregationGranularity))
if granularity == "" {
-granularity = "hourly"
granularity = "daily"
}
if scheduled {
granularity = "daily"
}
if granularity != "hourly" && granularity != "daily" {
-c.Logger.Warn("unknown monthly aggregation granularity; defaulting to hourly", "granularity", granularity)
-granularity = "hourly"
c.Logger.Warn("unknown monthly aggregation granularity; defaulting to daily", "granularity", granularity)
granularity = "daily"
}
monthStart := time.Date(targetMonth.Year(), targetMonth.Month(), 1, 0, 0, 0, 0, targetMonth.Location())
@@ -60,7 +63,14 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
dbConn := c.Database.DB()
db.SetPostgresWorkMem(ctx, dbConn, c.Settings.Values.Settings.PostgresWorkMemMB)
driver := strings.ToLower(dbConn.DriverName())
-useGoAgg := os.Getenv("MONTHLY_AGG_GO") == "1"
// Canonical Go aggregation is the default for both scheduled and manual runs.
// Legacy SQL/union aggregation stays available as a manual fallback/backfill path.
forceGoAgg := os.Getenv("MONTHLY_AGG_GO") == "1"
forceSQLAgg := !scheduled && os.Getenv("MONTHLY_AGG_SQL") == "1"
useGoAgg := scheduled || forceGoAgg || !forceSQLAgg
if forceSQLAgg && !forceGoAgg {
c.Logger.Info("MONTHLY_AGG_SQL=1 enabled; using SQL fallback path for manual monthly aggregation")
}
if !useGoAgg && granularity == "hourly" && driver == "sqlite" {
c.Logger.Warn("SQL monthly aggregation is slow on sqlite; overriding to Go path", "granularity", granularity)
useGoAgg = true
@@ -68,26 +78,28 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
var snapshots []report.SnapshotRecord
var unionColumns []string
if !scheduled {
if granularity == "daily" {
dailySnapshots, err := report.SnapshotRecordsWithFallback(ctx, c.Database, "daily", "inventory_daily_summary_", "20060102", monthStart, monthEnd)
if err != nil {
return err
}
dailySnapshots = filterRecordsInRange(dailySnapshots, monthStart, monthEnd)
dailySnapshots = filterSnapshotsWithRows(ctx, dbConn, dailySnapshots)
snapshots = dailySnapshots
unionColumns = monthlyUnionColumns
} else {
hourlySnapshots, err := report.SnapshotRecordsWithFallback(ctx, c.Database, "hourly", "inventory_hourly_", "epoch", monthStart, monthEnd)
if err != nil {
return err
}
hourlySnapshots = filterRecordsInRange(hourlySnapshots, monthStart, monthEnd)
hourlySnapshots = filterSnapshotsWithRows(ctx, dbConn, hourlySnapshots)
snapshots = hourlySnapshots
unionColumns = summaryUnionColumns
}
}
-if len(snapshots) == 0 {
if !scheduled && len(snapshots) == 0 {
return fmt.Errorf("no %s snapshot tables found for %s", granularity, targetMonth.Format("2006-01"))
}
@@ -110,12 +122,26 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
}
}
if scheduled && c.scheduledAggregationEngine() == "sql" {
c.Logger.Info("scheduled_aggregation_engine=sql enabled; using canonical SQL monthly aggregation path")
if err := c.aggregateMonthlySummarySQLCanonical(ctx, monthStart, monthEnd, monthlyTable); err != nil {
c.Logger.Warn("scheduled canonical SQL monthly aggregation failed; falling back to go path", "error", err)
} else {
metrics.RecordMonthlyAggregation(time.Since(jobStart), nil)
c.Logger.Debug("Finished monthly inventory aggregation (SQL canonical path)", "summary_table", monthlyTable)
return nil
}
}
// Optional Go-based aggregation path.
if useGoAgg {
switch granularity {
case "daily":
c.Logger.Debug("Using go implementation of monthly aggregation (daily)")
-if err := c.aggregateMonthlySummaryGo(ctx, monthStart, monthEnd, monthlyTable, snapshots); err != nil {
if err := c.aggregateMonthlySummaryGo(ctx, monthStart, monthEnd, monthlyTable, snapshots, scheduled); err != nil {
if scheduled {
return err
}
c.Logger.Warn("go-based monthly aggregation failed, falling back to SQL path", "error", err)
} else {
metrics.RecordMonthlyAggregation(time.Since(jobStart), nil)
@@ -123,6 +149,9 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
return nil
}
case "hourly":
if scheduled {
return fmt.Errorf("scheduled monthly aggregation does not support hourly source mode")
}
c.Logger.Debug("Using go implementation of monthly aggregation (hourly)") c.Logger.Debug("Using go implementation of monthly aggregation (hourly)")
if err := c.aggregateMonthlySummaryGoHourly(ctx, monthStart, monthEnd, monthlyTable, snapshots); err != nil { if err := c.aggregateMonthlySummaryGoHourly(ctx, monthStart, monthEnd, monthlyTable, snapshots); err != nil {
c.Logger.Warn("go-based monthly aggregation failed, falling back to SQL path", "error", err) c.Logger.Warn("go-based monthly aggregation failed, falling back to SQL path", "error", err)
@@ -135,6 +164,9 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
c.Logger.Warn("MONTHLY_AGG_GO is set but granularity is unsupported; using SQL path", "granularity", granularity) c.Logger.Warn("MONTHLY_AGG_GO is set but granularity is unsupported; using SQL path", "granularity", granularity)
} }
} }
if scheduled {
return fmt.Errorf("scheduled monthly aggregation requires go daily-rollup path")
}
tables := make([]string, 0, len(snapshots))
for _, snapshot := range snapshots {
@@ -190,7 +222,7 @@ func (c *CronTask) aggregateMonthlySummary(ctx context.Context, targetMonth time
db.AnalyzeTableIfPostgres(ctx, dbConn, monthlyTable)
-if err := c.generateReport(ctx, monthlyTable); err != nil {
if err := c.generateReportWithPolicy(ctx, monthlyTable); err != nil {
c.Logger.Warn("failed to generate monthly report", "error", err, "table", monthlyTable)
metrics.RecordMonthlyAggregation(time.Since(jobStart), err)
return err
@@ -205,6 +237,52 @@ func monthlySummaryTableName(t time.Time) (string, error) {
return db.SafeTableName(fmt.Sprintf("inventory_monthly_summary_%s", t.Format("200601")))
}
func (c *CronTask) aggregateMonthlySummarySQLCanonical(ctx context.Context, monthStart, monthEnd time.Time, summaryTable string) error {
jobStart := time.Now()
dbConn := c.Database.DB()
if !db.TableExists(ctx, dbConn, "vm_daily_rollup") {
return fmt.Errorf("vm_daily_rollup table not found for canonical SQL monthly aggregation")
}
unionQuery := buildCanonicalDailyRollupSummaryUnion(monthStart, monthEnd)
insertQuery, err := db.BuildMonthlySummaryInsert(summaryTable, unionQuery)
if err != nil {
return err
}
if _, err := dbConn.ExecContext(ctx, insertQuery); err != nil {
return err
}
if applied, err := db.ApplyLifecycleDeletionToSummary(ctx, dbConn, summaryTable, monthStart.Unix(), monthEnd.Unix()); err != nil {
c.Logger.Warn("failed to apply lifecycle deletions to monthly summary (SQL canonical)", "error", err, "table", summaryTable)
} else {
c.Logger.Info("Monthly aggregation deletion times", "source_lifecycle_cache", applied)
}
if err := db.RefineCreationDeletionFromUnion(ctx, dbConn, summaryTable, buildDailyRollupLifecycleUnion(monthStart, monthEnd)); err != nil {
c.Logger.Warn("failed to refine creation/deletion times (monthly SQL canonical)", "error", err, "table", summaryTable)
}
if err := db.UpdateSummaryPresenceByWindow(ctx, dbConn, summaryTable, monthStart.Unix(), monthEnd.Unix()); err != nil {
c.Logger.Warn("failed to update monthly AvgIsPresent from lifecycle window (SQL canonical)", "error", err, "table", summaryTable)
}
db.AnalyzeTableIfPostgres(ctx, dbConn, summaryTable)
rowCount, err := db.TableRowCount(ctx, dbConn, summaryTable)
if err != nil {
c.Logger.Warn("unable to count monthly summary rows (SQL canonical)", "error", err, "table", summaryTable)
}
if rowCount == 0 {
return fmt.Errorf("no VM records aggregated for %s", monthStart.Format("2006-01"))
}
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot (SQL canonical)", "error", err, "table", summaryTable)
}
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate monthly report (SQL canonical)", "error", err, "table", summaryTable)
return err
}
c.Logger.Debug("Finished monthly inventory aggregation (SQL canonical path)", "summary_table", summaryTable, "duration", time.Since(jobStart))
return nil
}
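Design note: this path aggregates purely from `vm_daily_rollup` and bails out early when that table is absent; since the scheduled caller above treats any error here as a soft failure and falls through to the Go path, enabling `scheduled_aggregation_engine: sql` should not, on its own, be able to abort a scheduled run.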
// aggregateMonthlySummaryGoHourly aggregates hourly snapshots directly into the monthly summary table.
func (c *CronTask) aggregateMonthlySummaryGoHourly(ctx context.Context, monthStart, monthEnd time.Time, summaryTable string, hourlySnapshots []report.SnapshotRecord) error {
jobStart := time.Now()
@@ -311,7 +389,7 @@ func (c *CronTask) aggregateMonthlySummaryGoHourly(ctx context.Context, monthSta
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot (Go hourly)", "error", err, "table", summaryTable)
}
-if err := c.generateReport(ctx, summaryTable); err != nil {
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate monthly report (Go hourly)", "error", err, "table", summaryTable)
return err
}
@@ -328,7 +406,7 @@ func (c *CronTask) aggregateMonthlySummaryGoHourly(ctx context.Context, monthSta
// aggregateMonthlySummaryGo mirrors the SQL-based monthly aggregation but performs the work in Go,
// reading daily summaries in parallel and reducing them to a single monthly summary table.
-func (c *CronTask) aggregateMonthlySummaryGo(ctx context.Context, monthStart, monthEnd time.Time, summaryTable string, dailySnapshots []report.SnapshotRecord) error {
func (c *CronTask) aggregateMonthlySummaryGo(ctx context.Context, monthStart, monthEnd time.Time, summaryTable string, dailySnapshots []report.SnapshotRecord, canonicalOnly bool) error {
jobStart := time.Now()
dbConn := c.Database.DB()
@@ -336,26 +414,39 @@ func (c *CronTask) aggregateMonthlySummaryGo(ctx context.Context, monthStart, mo
return err
}
unionQuery := ""
var (
aggMap map[monthlyAggKey]*monthlyAggVal
err error
)
if canonicalOnly {
aggMap, err = c.scanDailyRollup(ctx, monthStart, monthEnd)
if err != nil {
return err
}
unionQuery = buildDailyRollupLifecycleUnion(monthStart, monthEnd)
} else {
// Build union query for lifecycle refinement after inserts.
dailyTables := make([]string, 0, len(dailySnapshots))
for _, snapshot := range dailySnapshots {
dailyTables = append(dailyTables, snapshot.TableName)
}
unionQuery, err = buildUnionQuery(dailyTables, monthlyUnionColumns, templateExclusionFilter())
if err != nil {
return err
}
aggMap, err = c.scanDailyTablesParallel(ctx, dailySnapshots)
if err != nil {
return err
}
if len(aggMap) == 0 {
cacheAgg, cacheErr := c.scanDailyRollup(ctx, monthStart, monthEnd)
if cacheErr == nil && len(cacheAgg) > 0 {
aggMap = cacheAgg
} else if cacheErr != nil {
c.Logger.Warn("failed to read daily rollup cache; using table scan", "error", cacheErr)
}
}
}
if len(aggMap) == 0 {
@@ -387,7 +478,7 @@ func (c *CronTask) aggregateMonthlySummaryGo(ctx context.Context, monthStart, mo
	if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
		c.Logger.Warn("failed to register monthly snapshot", "error", err, "table", summaryTable)
	}
-	if err := c.generateReport(ctx, summaryTable); err != nil {
+	if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
		c.Logger.Warn("failed to generate monthly report (Go)", "error", err, "table", summaryTable)
		return err
	}
@@ -666,6 +757,58 @@ WHERE "Date" >= ? AND "Date" < ?
	return agg, rows.Err()
}
func buildDailyRollupLifecycleUnion(start, end time.Time) string {
return fmt.Sprintf(`
SELECT
"VmId","VmUuid","Name","Vcenter","CreationTime","DeletionTime","Date" AS "SnapshotTime"
FROM vm_daily_rollup
WHERE "Date" >= %d AND "Date" < %d
`, start.Unix(), end.Unix())
}
func buildCanonicalDailyRollupSummaryUnion(start, end time.Time) string {
return fmt.Sprintf(`
SELECT
NULL AS "InventoryId",
COALESCE("Name",'') AS "Name",
COALESCE("Vcenter",'') AS "Vcenter",
COALESCE("VmId",'') AS "VmId",
NULL AS "EventKey",
NULL AS "CloudId",
COALESCE("CreationTime",0) AS "CreationTime",
COALESCE("DeletionTime",0) AS "DeletionTime",
COALESCE("LastResourcePool",'') AS "ResourcePool",
COALESCE("LastDatacenter",'') AS "Datacenter",
COALESCE("LastCluster",'') AS "Cluster",
COALESCE("LastFolder",'') AS "Folder",
COALESCE("LastProvisionedDisk",0) AS "ProvisionedDisk",
COALESCE("LastVcpuCount",0) AS "VcpuCount",
COALESCE("LastRamGB",0) AS "RamGB",
COALESCE("IsTemplate",'') AS "IsTemplate",
COALESCE("PoweredOn",'') AS "PoweredOn",
COALESCE("SrmPlaceholder",'') AS "SrmPlaceholder",
COALESCE("VmUuid",'') AS "VmUuid",
COALESCE("SamplesPresent",0) AS "SamplesPresent",
CASE WHEN COALESCE("TotalSamples",0) > 0 THEN 1.0 * COALESCE("SumVcpu",0) / "TotalSamples" ELSE NULL END AS "AvgVcpuCount",
CASE WHEN COALESCE("TotalSamples",0) > 0 THEN 1.0 * COALESCE("SumRam",0) / "TotalSamples" ELSE NULL END AS "AvgRamGB",
CASE WHEN COALESCE("TotalSamples",0) > 0 THEN 1.0 * COALESCE("SumDisk",0) / "TotalSamples" ELSE NULL END AS "AvgProvisionedDisk",
CASE WHEN COALESCE("TotalSamples",0) > 0 THEN 1.0 * COALESCE("SamplesPresent",0) / "TotalSamples" ELSE NULL END AS "AvgIsPresent",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("TinHits",0) / "SamplesPresent" ELSE NULL END AS "PoolTinPct",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("BronzeHits",0) / "SamplesPresent" ELSE NULL END AS "PoolBronzePct",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("SilverHits",0) / "SamplesPresent" ELSE NULL END AS "PoolSilverPct",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("GoldHits",0) / "SamplesPresent" ELSE NULL END AS "PoolGoldPct",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("TinHits",0) / "SamplesPresent" ELSE NULL END AS "Tin",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("BronzeHits",0) / "SamplesPresent" ELSE NULL END AS "Bronze",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("SilverHits",0) / "SamplesPresent" ELSE NULL END AS "Silver",
CASE WHEN COALESCE("SamplesPresent",0) > 0 THEN 100.0 * COALESCE("GoldHits",0) / "SamplesPresent" ELSE NULL END AS "Gold",
"Date" AS "SnapshotTime"
FROM vm_daily_rollup
WHERE "Date" >= %d
AND "Date" < %d
AND %s
`, start.Unix(), end.Unix(), templateExclusionFilter())
}
func (c *CronTask) insertMonthlyAggregates(ctx context.Context, summaryTable string, aggMap map[monthlyAggKey]*monthlyAggVal) error {
	dbConn := c.Database.DB()
	columns := []string{
+62
View File
@@ -55,6 +55,8 @@ func main() {
	dbCleanup := flag.Bool("db-cleanup", false, "Run a one-time cleanup to drop low-value hourly snapshot indexes and exit")
	backfillVcenterCache := flag.Bool("backfill-vcenter-cache", false, "Run a one-time backfill for vcenter latest+aggregate cache tables and exit")
	importSQLite := flag.String("import-sqlite", "", "Import a SQLite database file/DSN into the configured Postgres database and exit")
+	benchmarkAggregations := flag.Bool("benchmark-aggregations", false, "Run a one-time canonical aggregation benchmark (Go vs SQL) and exit")
+	benchmarkRuns := flag.Int("benchmark-runs", 3, "Number of benchmark iterations per mode when -benchmark-aggregations is set")
	flag.Parse()

	bootstrapLogger := log.New(log.LevelInfo, log.OutputText)
@@ -74,6 +76,7 @@ func main() {
		log.ToOutput(strings.ToLower(strings.TrimSpace(s.Values.Settings.LogOutput))),
	)
	s.Logger = logger
+	db.SetVmHourlyStatsPostgresPartitioningEnabled(boolWithDefault(s.Values.Settings.PostgresVmHourlyPartitioning, false))
	logger.Info("vCTP starting", "build_time", buildTime, "sha1_version", sha1ver, "go_version", runtime.Version(), "settings_file", *settingsPath)
	warnDeprecatedPollingSettings(logger, s.Values)
@@ -191,6 +194,58 @@ func main() {
		)
		return
	}
if *benchmarkAggregations {
logger.Info("Running one-shot canonical aggregation benchmark",
"runs_per_mode", *benchmarkRuns,
"driver", normalizedDriver,
"scheduled_aggregation_engine", strings.ToLower(strings.TrimSpace(s.Values.Settings.ScheduledAggregationEngine)),
)
ct := &tasks.CronTask{
Logger: logger,
Database: database,
Settings: s,
FirstHourlySnapshotCheck: true,
}
benchReport, err := ct.RunCanonicalAggregationBenchmark(ctx, *benchmarkRuns)
if err != nil {
logger.Error("canonical aggregation benchmark failed", "error", err)
os.Exit(1)
}
if !benchReport.DailyWindowStart.IsZero() {
logger.Info("daily canonical benchmark",
"window_start", benchReport.DailyWindowStart.Format(time.RFC3339),
"window_end", benchReport.DailyWindowEnd.Format(time.RFC3339),
"go_min", benchReport.DailyGo.Min,
"go_median", benchReport.DailyGo.Median,
"go_avg", benchReport.DailyGo.Avg,
"go_max", benchReport.DailyGo.Max,
"go_rows", benchReport.DailyGoRowsWritten,
"sql_min", benchReport.DailySQL.Min,
"sql_median", benchReport.DailySQL.Median,
"sql_avg", benchReport.DailySQL.Avg,
"sql_max", benchReport.DailySQL.Max,
"sql_rows", benchReport.DailySQLRowsWritten,
)
}
if !benchReport.MonthlyWindowStart.IsZero() {
logger.Info("monthly canonical benchmark",
"window_start", benchReport.MonthlyWindowStart.Format(time.RFC3339),
"window_end", benchReport.MonthlyWindowEnd.Format(time.RFC3339),
"go_min", benchReport.MonthlyGo.Min,
"go_median", benchReport.MonthlyGo.Median,
"go_avg", benchReport.MonthlyGo.Avg,
"go_max", benchReport.MonthlyGo.Max,
"go_rows", benchReport.MonthlyGoRowsWritten,
"sql_min", benchReport.MonthlySQL.Min,
"sql_median", benchReport.MonthlySQL.Median,
"sql_avg", benchReport.MonthlySQL.Avg,
"sql_max", benchReport.MonthlySQL.Max,
"sql_rows", benchReport.MonthlySQLRowsWritten,
)
}
logger.Info("Canonical aggregation benchmark complete; exiting")
return
}
	// Determine bind IP
	bindIP := strings.TrimSpace(s.Values.Settings.BindIP)
@@ -459,6 +514,13 @@ func durationFromSeconds(value int, fallback int) time.Duration {
	return time.Second * time.Duration(value)
}
func boolWithDefault(value *bool, fallback bool) bool {
if value == nil {
return fallback
}
return *value
}
func resolveVcenterPassword(logger *slog.Logger, cipher *secrets.Secrets, legacyDecryptKeys [][]byte, raw string) ([]byte, string, error) {
	if strings.TrimSpace(raw) == "" {
		return nil, "", fmt.Errorf("vcenter password is empty")
+100
View File
@@ -0,0 +1,100 @@
# Phase 0 Baseline and Regression Snapshot
Date captured: 2026-04-20 (Australia/Sydney)
## Baseline metrics (local `db.sqlite3` + `reports/`)
| Area | Metric | Baseline |
| --- | --- | --- |
| Hourly capture | `snapshot_registry` hourly entries | `930` |
| Hourly capture | Hourly compatibility tables (`inventory_hourly_%`) | `930` |
| Hourly capture | Canonical cache rows (`vm_hourly_stats`) | `489865` |
| Hourly capture | Latest hourly snapshot row count (`snapshot_count`) | `52` |
| Hourly capture | Latest hourly snapshot table | `inventory_hourly_1776635926` |
| Daily aggregation | `snapshot_registry` daily entries | `39` |
| Daily aggregation | Daily summary tables (`inventory_daily_summary_%`) | `40` |
| Daily aggregation | Canonical daily rollup rows (`vm_daily_rollup`) | `1779` |
| Daily aggregation | Latest daily summary table | `inventory_daily_summary_20260419` |
| Daily aggregation | Latest daily snapshot row count (`snapshot_count`) | `52` |
| Monthly aggregation | `snapshot_registry` monthly entries | `1` |
| Monthly aggregation | Latest monthly summary table | `inventory_monthly_summary_202601` |
| Monthly aggregation | Latest monthly snapshot row count (`snapshot_count`) | `62` |
| Report generation | Files present in `reports/` | `10339` |
| Report generation | Most recent files | `inventory_hourly_1776635926.xlsx`, `inventory_daily_summary_20260419.xlsx`, `inventory_hourly_1776635626.xlsx` |
Notes:
- `snapshot_runs` rows: `10254`, success distribution: `TRUE=10254`, attempts min/max/avg: `1/2/1.0001`.
- Runtime histograms/counters for long-running jobs are emitted on `/metrics` and are not persisted in SQLite:
  - Hourly per-vCenter duration: `vctp_vcenter_snapshot_duration_seconds`
  - Daily duration: `vctp_daily_aggregation_duration_seconds`
  - Monthly duration: `vctp_monthly_aggregation_duration_seconds`
  - Reports available gauge: `vctp_reports_available`
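
For a quick check that these series are being emitted, a minimal scrape sketch; the bind address is an assumption, adjust it for your deployment:

```go
// Scrape /metrics and print the aggregation-duration series only.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	resp, err := http.Get("http://localhost:8080/metrics") // assumed bind address
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		line := scanner.Text()
		if strings.HasPrefix(line, "vctp_daily_aggregation_duration_seconds") ||
			strings.HasPrefix(line, "vctp_monthly_aggregation_duration_seconds") {
			fmt.Println(line)
		}
	}
}
```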
## API/endpoint contract regression snapshot
Source of truth: `server/router/router.go`.
Unauthenticated/public routes:
- `/`
- `/vm/trace`
- `/vcenters`
- `/vcenters/totals`
- `/vcenters/totals/daily`
- `/vcenters/totals/hourly`
- `/snapshots/hourly`
- `/snapshots/daily`
- `/snapshots/monthly`
- `/metrics`
- `/api/auth/login`
- `/assets/*`, `/favicon*`, `/reports/*`, `/swagger*`
Viewer routes (Bearer auth, viewer/admin role):
- `/api/report/inventory`
- `/api/report/updates`
- `/api/report/snapshot`
- `/api/diagnostics/daily-creation`
Admin routes (Bearer auth, admin role):
- `/api/event/vm/create`
- `/api/event/vm/modify`
- `/api/event/vm/move`
- `/api/event/vm/delete`
- `/api/import/vm`
- `/api/inventory/vm/delete`
- `/api/inventory/vm/update`
- `/api/cleanup/updates`
- `/api/snapshots/aggregate`
- `/api/snapshots/hourly/force`
- `/api/snapshots/migrate`
- `/api/snapshots/repair`
- `/api/snapshots/repair/all`
- `/api/snapshots/regenerate-hourly-reports`
- `/api/vcenters/cache/rebuild`
- `/api/encrypt`
- `/debug/pprof/*` (only when enabled)
`/api/auth/me` route:
- Protected by auth middleware (`withAuth`) but no explicit role gate.
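
As an illustrative smoke test of this contract, a minimal client sketch; the base URL, credentials, and the `token` response field name are assumptions, not confirmed API shapes (check the login handler for the real response):

```go
// Log in via /api/auth/login, then call a viewer route with the Bearer token.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	base := "http://localhost:8080" // assumed bind address
	body, _ := json.Marshal(map[string]string{"username": "viewer1", "password": "secret"})
	resp, err := http.Post(base+"/api/auth/login", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var login struct {
		Token string `json:"token"` // assumed field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&login); err != nil {
		panic(err)
	}
	req, _ := http.NewRequest(http.MethodGet, base+"/api/report/inventory", nil)
	req.Header.Set("Authorization", "Bearer "+login.Token)
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println("status:", res.Status)
}
```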
## Report filename behavior regression snapshot
Source of truth: `server/handler/reportDownload.go`, `server/handler/snapshots.go`, `internal/report/snapshots.go`.
HTTP download endpoints:
- `GET /api/report/inventory` -> `Content-Disposition: attachment; filename="inventory_report.xlsx"`
- `GET /api/report/updates` -> `Content-Disposition: attachment; filename="updates_report.xlsx"`
- `GET /api/report/snapshot?table=<tableName>` -> `Content-Disposition: attachment; filename="<tableName>.xlsx"`
On-disk generated report filename:
- `SaveTableReport(...)` writes `<reports_dir>/<tableName>.xlsx`
- Snapshot list pages link to `/reports/<tableName>.xlsx`
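
A hypothetical download helper that mirrors the `<tableName>.xlsx` naming; the base URL and token handling are assumptions (see the login sketch above):

```go
package example

import (
	"io"
	"net/http"
	"net/url"
	"os"
)

// downloadSnapshot fetches a snapshot workbook and saves it under the same
// <tableName>.xlsx convention the server uses in Content-Disposition.
func downloadSnapshot(base, token, table string) error {
	req, err := http.NewRequest(http.MethodGet, base+"/api/report/snapshot?table="+url.QueryEscape(table), nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, err := os.Create(table + ".xlsx") // mirrors the attachment filename
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, resp.Body)
	return err
}
```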
## Migration guardrails confirmation
- No auth-model changes: route auth wrappers remain unchanged (`withAuth`, `withAuthRole` usage in router).
- SQLite support retained:
  - settings default driver remains sqlite (`src/vctp.yml`, `README.md`).
  - hourly canonical write path still has SQLite transactional upsert path (`insertHourlyCache`, `insertHourlyBatch`).
- Compatibility mode enabled by default:
  - `settings.snapshot_table_compat_mode` default is `true` in settings defaults.
  - runtime check falls back to enabled when unset (`snapshotTableCompatModeEnabled()`).
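
The unset-means-enabled check follows the same tri-state pattern as the `boolWithDefault` helper in `main.go`; a minimal sketch, where the standalone function shape is an assumption about how `snapshotTableCompatModeEnabled()` behaves:

```go
package example

// A nil pointer means the key was absent from YAML, so compatibility mode
// stays on; an explicit false is required to disable it.
func snapshotTableCompatMode(configured *bool) bool {
	if configured == nil {
		return true
	}
	return *configured
}
```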
+61
View File
@@ -269,6 +269,67 @@ The target architecture is:
- Retain explicit backfill and rebuild commands for compatibility tables and reports.
- Clean up obsolete styling rules and duplicated visual patterns once the new UI system is fully adopted.
## Implementation Checklist
### 0. Baseline and Guardrails
- [x] Capture baseline metrics for hourly capture, daily aggregation, monthly aggregation, and report generation.
- [x] Confirm current API/endpoint contract and report filename behavior with a regression snapshot.
- [x] Add new settings with defaults and config wiring:
  - [x] `settings.capture_write_batch_size=1000`
  - [x] `settings.snapshot_table_compat_mode=true`
  - [x] `settings.async_report_generation=true`
- [x] Add/confirm stage-level logging and timing around capture, reconcile, totals refresh, and report generation.
- [x] Document migration guardrails: no auth-model changes, SQLite support retained, compatibility mode enabled by default.
- Evidence snapshot: see `phase0-baseline.md` for metrics, API/report contract snapshot, and guardrail verification.
### 1. Phase 1: Hot-Path Runtime Wins
- [x] Implement batched hourly writes for canonical tables in capture flow.
- [x] Add PostgreSQL multi-row insert/upsert path (or `COPY`) for `vm_hourly_stats`.
- [x] Keep SQLite transactional batched upsert path without PostgreSQL-only ingestion features.
- [x] Decouple XLSX/report generation from capture hot path via async/deferred stage.
- [x] Ensure scheduled daily aggregation reads canonical data from `vm_hourly_stats` only.
- [x] Ensure scheduled monthly aggregation reads canonical data from `vm_daily_rollup` only.
- [x] Keep legacy compatibility tables enabled during this phase.
- [x] Introduce UI token layer (`--theme_*`) and map shared component primitives before page-specific redesign.
### 2. Phase 2: Canonical Dataflow
- [x] Refactor capture/reconcile ordering so canonical caches are updated first.
- [x] Move deletion/event reconciliation to one post-capture phase per vCenter.
- [x] Remove prior-snapshot table mutations from capture hot path (except explicit compatibility needs).
- [x] Keep SQL union/legacy scan paths available only for fallback, repair, and backfill.
- [x] Verify `snapshot_registry` logical hourly registration remains correct without normal hourly table scans.
- [x] Implement shared Templ page shell improvements across header/footer/cards/buttons/tables/forms.
- [x] Refresh dashboard, snapshots, vCenter totals, and VM trace views to the tokenized design system.
### 3. Phase 3: Postgres-Ready Scale-Up
- [x] Validate/add canonical `vm_hourly_stats` indexes for snapshot time, vCenter+time, VM identity+time, and trace lookup.
- [x] Add PostgreSQL monthly partitioning for `vm_hourly_stats` behind migration controls.
- [ ] Benchmark Go vs SQL on canonical Postgres tables using representative production-scale data.
  - Benchmark harness implemented via `-benchmark-aggregations` and `-benchmark-runs`; production-scale Postgres run pending (see the statistics sketch after this phase's items).
- [x] Keep Go as scheduled default unless SQL shows clear and repeatable runtime wins.
- [x] If SQL wins, roll out behind a controlled flag before any default switch.
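
For reference, a sketch of the per-mode summary statistics the benchmark logs report (`go_min`/`go_median`/`go_avg`/`go_max` and the SQL equivalents); the harness's own computation inside `RunCanonicalAggregationBenchmark` may differ in detail:

```go
package example

import (
	"sort"
	"time"
)

// summarize reduces the per-run durations of one mode to the four figures
// surfaced in the benchmark log lines.
func summarize(runs []time.Duration) (min, median, avg, max time.Duration) {
	if len(runs) == 0 {
		return
	}
	sorted := append([]time.Duration(nil), runs...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	min, max = sorted[0], sorted[len(sorted)-1]
	median = sorted[len(sorted)/2] // upper median for even run counts
	var total time.Duration
	for _, d := range sorted {
		total += d
	}
	avg = total / time.Duration(len(sorted))
	return
}
```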
### 4. Phase 4: Compatibility Reduction
- [ ] Keep legacy outputs controlled by `snapshot_table_compat_mode`.
- [ ] Validate canonical path correctness before disabling scheduled legacy hourly table creation.
- [ ] Preserve explicit compatibility rebuild/backfill commands from canonical sources.
- [ ] Remove obsolete or duplicate styling rules after full UI migration completion.
### 5. Validation and Quality Gates
- [ ] Add golden-result tests for daily output parity (old vs new path).
- [ ] Add golden-result tests for monthly output parity (old vs new path).
- [ ] Add lifecycle edge-case coverage (partial presence, missing create times, deletion refinement, pool and resource changes).
- [ ] Add integration tests for canonical write/read paths and totals cache correctness.
- [ ] Add compatibility tests for legacy table generation, reports, and rebuild flows.
- [ ] Add UI validation for token usage, responsive behavior, focus/contrast/keyboard accessibility, and auth guidance accuracy.
- [ ] Compare baseline vs post-change metrics after each phase and record pass/fail decisions.
### 6. Rollout and Documentation
- [ ] Update operator docs for new settings and default behavior.
- [ ] Document compatibility-mode lifecycle and criteria to disable legacy table generation.
- [ ] Document benchmark method/results and default-path decision record (Go vs SQL).
- [ ] Publish a short migration runbook for staged rollout, rollback triggers, and repair workflows.
## Test Plan
### Correctness Tests
+23 -3
View File
@@ -1,18 +1,38 @@
package middleware

import (
-	"vctp/version"
	"net/http"
+	"strings"
+	"time"
+	"vctp/version"
)

// CacheMiddleware sets the Cache-Control header based on the version.
func CacheMiddleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if version.Value == "dev" {
-			w.Header().Set("Cache-Control", "no-cache")
+			w.Header().Set("Cache-Control", "no-cache, no-store, must-revalidate")
+			w.Header().Set("Pragma", "no-cache")
+			w.Header().Set("Expires", "0")
		} else {
-			w.Header().Set("Cache-Control", "public, max-age=31536000")
+			cacheControl := "public, max-age=31536000"
+			if isVersionedAssetRequest(r) {
+				cacheControl += ", immutable"
+			}
+			w.Header().Set("Cache-Control", cacheControl)
+			w.Header().Set("Expires", time.Now().UTC().Add(365*24*time.Hour).Format(http.TimeFormat))
		}
+		w.Header().Set("Vary", "Accept-Encoding")
		next.ServeHTTP(w, r)
	})
}

+func isVersionedAssetRequest(r *http.Request) bool {
+	if r == nil {
+		return false
+	}
+	if r.URL.Query().Get("v") != "" {
+		return true
+	}
+	return strings.Contains(r.URL.Path, "@")
+}
+83
View File
@@ -0,0 +1,83 @@
package middleware
import (
"net/http"
"net/http/httptest"
"testing"
"vctp/version"
)
func TestCacheMiddlewareDev(t *testing.T) {
orig := version.Value
version.Value = "dev"
defer func() { version.Value = orig }()
rr := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/assets/css/web3.css", nil)
h := CacheMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}))
h.ServeHTTP(rr, req)
if got := rr.Header().Get("Cache-Control"); got != "no-cache, no-store, must-revalidate" {
t.Fatalf("unexpected Cache-Control: %q", got)
}
if got := rr.Header().Get("Pragma"); got != "no-cache" {
t.Fatalf("unexpected Pragma: %q", got)
}
if got := rr.Header().Get("Expires"); got != "0" {
t.Fatalf("unexpected Expires: %q", got)
}
if got := rr.Header().Get("Vary"); got != "Accept-Encoding" {
t.Fatalf("unexpected Vary: %q", got)
}
}
func TestCacheMiddlewareProd(t *testing.T) {
orig := version.Value
version.Value = "1.2.3"
defer func() { version.Value = orig }()
rr := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/assets/css/web3.css?v=1.2.3", nil)
h := CacheMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}))
h.ServeHTTP(rr, req)
if got := rr.Header().Get("Cache-Control"); got != "public, max-age=31536000, immutable" {
t.Fatalf("unexpected Cache-Control: %q", got)
}
if rr.Header().Get("Expires") == "" {
t.Fatalf("expected Expires header")
}
if got := rr.Header().Get("Vary"); got != "Accept-Encoding" {
t.Fatalf("unexpected Vary: %q", got)
}
if got := rr.Header().Get("Pragma"); got != "" {
t.Fatalf("expected no Pragma in prod, got %q", got)
}
}
func TestCacheMiddlewareProdUnversionedStillCached(t *testing.T) {
orig := version.Value
version.Value = "1.2.3"
defer func() { version.Value = orig }()
rr := httptest.NewRecorder()
req := httptest.NewRequest(http.MethodGet, "/swagger/swagger-ui.css", nil)
h := CacheMiddleware(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(http.StatusOK)
}))
h.ServeHTTP(rr, req)
if got := rr.Header().Get("Cache-Control"); got != "public, max-age=31536000" {
t.Fatalf("unexpected Cache-Control: %q", got)
}
if rr.Header().Get("Expires") == "" {
t.Fatalf("expected Expires header")
}
}
+114
View File
@@ -0,0 +1,114 @@
package router
import (
"net/http"
"net/http/httptest"
"regexp"
"strings"
"testing"
"vctp/version"
)
var externalAssetRefPattern = regexp.MustCompile(`\b(?:src|href)=["']https?://`)
func TestHomePageUsesLocalVersionedAssets(t *testing.T) {
orig := version.Value
version.Value = "1.2.3"
defer func() { version.Value = orig }()
app := testRouter(t, testRouterSettings(t, false))
req := httptest.NewRequest(http.MethodGet, "/", nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d", http.StatusOK, rr.Code)
}
body := rr.Body.String()
for _, want := range []string{
`href="/favicon.ico?v=1.2.3"`,
`href="/favicon-16x16.png?v=1.2.3"`,
`href="/favicon-32x32.png?v=1.2.3"`,
`src="/assets/js/htmx@v2.0.2.min.js"`,
`src="/assets/js/web3-charts.js?v=1.2.3"`,
`href="/assets/css/output@1.2.3.css"`,
`href="/assets/css/web3.css?v=1.2.3"`,
} {
if !strings.Contains(body, want) {
t.Fatalf("expected response body to contain %q", want)
}
}
if externalAssetRefPattern.MatchString(body) {
t.Fatalf("home page contains external asset URL: %s", body)
}
}
func TestSwaggerUIUsesLocalAssetsOnly(t *testing.T) {
app := testRouter(t, testRouterSettings(t, false))
req := httptest.NewRequest(http.MethodGet, "/swagger/", nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d", http.StatusOK, rr.Code)
}
body := rr.Body.String()
for _, want := range []string{
`href="./swagger-ui.css"`,
`src="./swagger-ui-bundle.js"`,
`src="./swagger-ui-standalone-preset.js"`,
`src="./swagger-initializer.js"`,
} {
if !strings.Contains(body, want) {
t.Fatalf("expected swagger index to contain %q", want)
}
}
if externalAssetRefPattern.MatchString(body) {
t.Fatalf("swagger index contains external asset URL: %s", body)
}
}
func TestStaticResourcesAreCacheableInReleaseMode(t *testing.T) {
orig := version.Value
version.Value = "1.2.3"
defer func() { version.Value = orig }()
app := testRouter(t, testRouterSettings(t, false))
tests := []struct {
path string
wantCacheControl string
}{
{path: "/assets/css/web3.css?v=1.2.3", wantCacheControl: "public, max-age=31536000, immutable"},
{path: "/assets/js/htmx@v2.0.2.min.js", wantCacheControl: "public, max-age=31536000, immutable"},
{path: "/favicon.ico?v=1.2.3", wantCacheControl: "public, max-age=31536000, immutable"},
{path: "/swagger/swagger-ui.css", wantCacheControl: "public, max-age=31536000"},
{path: "/swagger.json", wantCacheControl: "public, max-age=31536000"},
}
for _, tc := range tests {
t.Run(tc.path, func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, tc.path, nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d for %s, got %d", http.StatusOK, tc.path, rr.Code)
}
if got := rr.Header().Get("Cache-Control"); got != tc.wantCacheControl {
t.Fatalf("unexpected Cache-Control for %s: got %q want %q", tc.path, got, tc.wantCacheControl)
}
if got := rr.Header().Get("Vary"); got != "Accept-Encoding" {
t.Fatalf("unexpected Vary for %s: %q", tc.path, got)
}
if rr.Header().Get("Expires") == "" {
t.Fatalf("expected Expires for %s", tc.path)
}
})
}
}
+5 -5
View File
@@ -2,12 +2,12 @@ CPE_OPTS='-settings /etc/dtms/vctp.yml'
# Aggregation engine selection (default: Go paths enabled).
# DAILY_AGG_GO=1:
-#   Use the Go fan-out/reduce daily aggregation path.
+#   Force the Go fan-out/reduce daily aggregation path for manual runs.
# MONTHLY_AGG_GO=1:
-#   Use the Go monthly aggregation path for both monthly modes
-#   (hourly or daily source tables, controlled by settings.monthly_aggregation_granularity).
-# Set either option to 0 to prefer the SQL implementation for that layer.
-# If a Go aggregation run fails, vCTP automatically falls back to SQL for that run.
+#   Force the Go monthly aggregation path for manual runs.
+# DAILY_AGG_SQL=1 / MONTHLY_AGG_SQL=1:
+#   Force legacy SQL fallback for manual runs.
+# Scheduled aggregation selection is controlled by settings.scheduled_aggregation_engine in YAML.
DAILY_AGG_GO=1
MONTHLY_AGG_GO=1
# Additional runtime behavior is configured in the YAML file (`/etc/dtms/vctp.yml` by default).
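
A minimal sketch of how a manual daily run could resolve these flags; the actual precedence between `DAILY_AGG_GO` and `DAILY_AGG_SQL` lives in the cron task code, so the SQL-wins ordering here is an assumption:

```go
package example

import "os"

// manualDailyEngine picks the engine for a manual daily aggregation run.
func manualDailyEngine() string {
	if os.Getenv("DAILY_AGG_SQL") == "1" {
		return "sql" // explicit legacy SQL fallback
	}
	return "go" // DAILY_AGG_GO=1 is the shipped default
}
```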
+8 -1
View File
@@ -49,11 +49,18 @@ settings:
  snapshot_cleanup_cron: "30 2 * * *"
  hourly_snapshot_retry_seconds: 300
  hourly_snapshot_max_retries: 3
+  capture_write_batch_size: 1000
+  snapshot_table_compat_mode: true
+  async_report_generation: true
+  # Postgres-only: when true, vm_hourly_stats is migrated/managed as monthly range partitions.
+  postgres_vm_hourly_partitioning_enabled: false
+  # Scheduled aggregation engine: go (default) or sql (canonical Postgres SQL path rollout flag).
+  scheduled_aggregation_engine: "go"
  hourly_job_timeout_seconds: 1200
  hourly_snapshot_timeout_seconds: 600
  daily_job_timeout_seconds: 900
  monthly_job_timeout_seconds: 1200
-  monthly_aggregation_granularity: "hourly"
+  monthly_aggregation_granularity: "daily"
  monthly_aggregation_cron: "10 3 1 * *"
  # Optional: override Summary worksheet pivot layout in daily/monthly XLSX reports.
  # metric values: avg_vcpu, avg_ram, prorated_vm_count, vm_name_count
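
A sketch of normalising the scheduled engine value; the `ToLower`/`TrimSpace` treatment mirrors how the setting is logged at startup, and treating anything other than `sql` as `go` is an assumption consistent with the comment above:

```go
package example

import "strings"

// scheduledEngine normalises settings.scheduled_aggregation_engine,
// defaulting to the Go path.
func scheduledEngine(raw string) string {
	if strings.ToLower(strings.TrimSpace(raw)) == "sql" {
		return "sql"
	}
	return "go" // documented default
}
```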