Compare commits

...

12 Commits

| Author | SHA1 | Message | CI | Date |
|---|---|---|---|---|
| Nathan Coad | fb7e9bdca4 | dont include groups in JWT | drone/push: passing | 2026-04-21 14:54:19 +10:00 |
| Nathan Coad | 35840697fa | improve ldap | drone/push: passing | 2026-04-21 14:40:10 +10:00 |
| Nathan Coad | 4fca10795e | add user/group DNs to config | drone/push: passing | 2026-04-21 14:24:16 +10:00 |
| Nathan Coad | 14d242c8d1 | optimising ldap lookup | drone/push: passing | 2026-04-21 13:50:07 +10:00 |
| Nathan Coad | a8e38784d9 | more ldap logging | drone/push: passing | 2026-04-21 13:21:32 +10:00 |
| Nathan Coad | d2a7145a4c | bugfix ldap | drone/push: passing | 2026-04-21 13:03:08 +10:00 |
| Nathan Coad | 4b1b985862 | update ldap | drone/push: passing | 2026-04-21 11:00:40 +10:00 |
| Nathan Coad | 361ba7719b | more auth logging | drone/push: passing | 2026-04-21 10:35:10 +10:00 |
| nathan | 2c3167a1a0 | more updates | drone/push: passing | 2026-04-20 19:40:01 +10:00 |
| nathan | 916b0b5054 | more tests | drone/push: passing | 2026-04-20 18:38:12 +10:00 |
| nathan | 27cab61e89 | improve title overflow | drone/push: passing | 2026-04-20 17:10:58 +10:00 |
| nathan | 11df6e0560 | golden parity + lifecycle edge-case coverage in internal/tasks | | 2026-04-20 17:09:38 +10:00 |
19 changed files with 2448 additions and 97 deletions
+129 -1
@@ -124,6 +124,16 @@ The benchmark command:
 - Runs Go and SQL aggregation cores for the latest available daily/monthly windows.
 - Writes results to startup logs and exits without changing scheduled defaults.
+
+### Benchmark method and decision record
+
+- Run the benchmark on the target environment and database profile before deciding defaults:
+  - `vctp -settings /path/to/vctp.yml -benchmark-aggregations -benchmark-runs 3`
+- Current local comparison snapshot (2026-04-20) is recorded in `phase-metrics-2026-04-20.md`.
+- Latest tuned Postgres snapshot (2026-04-21, `runs=3`) showed:
+  - Daily window (`2026-04-21` to `2026-04-22` UTC): Go avg `2.261369712s` vs SQL avg `1m31.738727387s` (Go ~`40.57x` faster).
+  - Monthly window (`2026-04-01` to `2026-05-01` UTC): Go avg `3.705308832s` vs SQL avg `3.065612298s` (SQL ~`1.21x` faster).
+- Default-path decision remains `settings.scheduled_aggregation_engine: go`.
+- Promote SQL only when representative production-scale **Postgres** runs show clear, repeatable wins.
 ## Database Configuration

 By default the app uses SQLite and creates/opens `db.sqlite3`.
@@ -204,6 +214,80 @@ Validate connectivity before starting vCTP:
 psql "postgres://vctp_user:change-this-password@db-hostname:5432/vctp?sslmode=disable"
 ```
+
+### PostgreSQL tuning baseline (20 vCPU / 64 GB host)
+
+If your PostgreSQL instance is still running near-default settings, use this as a practical starting profile for vCTP workloads (hourly ingest + daily/monthly aggregation).
+
+Choose one profile:
+
+- Dedicated DB host (PostgreSQL is the primary service on this machine): use the `dedicated` values.
+- Shared host (vCTP app + PostgreSQL on same machine): use the `shared` values.
+
+Recommended `postgresql.conf` starting points:
+
+```conf
+# Memory
+shared_buffers = 16GB            # dedicated
+# shared_buffers = 12GB          # shared
+effective_cache_size = 48GB      # dedicated
+# effective_cache_size = 36GB    # shared
+work_mem = 32MB                  # dedicated
+# work_mem = 16MB                # shared
+maintenance_work_mem = 2GB       # dedicated
+# maintenance_work_mem = 1GB     # shared
+
+# WAL / checkpoints
+wal_compression = on
+checkpoint_timeout = 15min
+checkpoint_completion_target = 0.9
+max_wal_size = 16GB
+min_wal_size = 2GB
+
+# Parallelism and connections
+max_connections = 120
+max_worker_processes = 20
+max_parallel_workers = 20
+max_parallel_workers_per_gather = 4
+max_parallel_maintenance_workers = 4
+
+# Planner / IO (SSD/NVMe)
+random_page_cost = 1.1
+effective_io_concurrency = 200
+default_statistics_target = 200
+
+# Autovacuum for high-write canonical tables
+autovacuum_max_workers = 6
+autovacuum_naptime = 30s
+autovacuum_vacuum_scale_factor = 0.02
+autovacuum_analyze_scale_factor = 0.01
+autovacuum_vacuum_cost_limit = 2000
+
+# Useful diagnostics
+track_io_timing = on
+log_temp_files = 32MB
+```
+
+Apply and validate:
+
+- Reload config (`SELECT pg_reload_conf();`) or restart PostgreSQL if required by your platform.
+- Confirm active values with:
+
+```sql
+SHOW shared_buffers;
+SHOW effective_cache_size;
+SHOW work_mem;
+SHOW maintenance_work_mem;
+SHOW max_wal_size;
+SHOW autovacuum_vacuum_scale_factor;
+```
+
+After tuning, rerun the canonical benchmark and compare against your pre-tuning snapshot:
+
+```shell
+vctp -settings /path/to/vctp.yml -benchmark-aggregations -benchmark-runs 3
+```
+
+Notes:
+
+- `work_mem` is per sort/hash operation, not per session; avoid setting it too high globally.
+- Keep `settings.scheduled_aggregation_engine: go` as default unless repeated production-scale benchmarks show SQL is consistently faster on your canonical Postgres data.
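To make the `work_mem` caveat concrete, a back-of-envelope Go sketch of worst-case memory if every connection ran several sort/hash operations at once. The 120-connection and 32MB figures come from the `dedicated` profile above; the two-operations-per-query factor is an assumption for illustration, not a PostgreSQL-internal calculation:

```go
package main

import "fmt"

// worstCaseWorkMemMB gives a rough upper bound on memory that work_mem
// allocations could consume if every connection ran opsPerQuery
// sort/hash operations simultaneously.
func worstCaseWorkMemMB(maxConnections, workMemMB, opsPerQuery int) int {
	return maxConnections * workMemMB * opsPerQuery
}

func main() {
	// 120 connections x 32MB work_mem x 2 concurrent sort/hash nodes per query.
	mb := worstCaseWorkMemMB(120, 32, 2)
	fmt.Printf("worst-case work_mem usage: %d MB (~%.1f GB)\n", mb, float64(mb)/1024)
}
```

On a 64 GB host this worst case stays well under total RAM, which is why the profile can afford 32MB globally; double the connection count or the per-query operation count and the margin shrinks quickly.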
 PostgreSQL migrations live in `db/migrations_postgres`, while SQLite migrations remain in
 `db/migrations`.
@@ -269,6 +353,8 @@ settings:
   auth_mode: required
   ldap_bind_address: ldaps://ad01.example.com:636
   ldap_base_dn: DC=example,DC=com
+  # Optional user lookup scope; defaults to ldap_base_dn when omitted.
+  ldap_user_base_dn: OU=Users,DC=example,DC=com
   auth_group_role_mappings:
     "CN=vctp-viewers,OU=Groups,DC=example,DC=com": viewer
     "CN=vctp-admins,OU=Groups,DC=example,DC=com": admin
@@ -351,6 +437,44 @@ These endpoints are considered legacy and are disabled by default unless `settin
 When disabled, they return HTTP `410 Gone` with JSON error payload.
+
+## Compatibility mode lifecycle (`snapshot_table_compat_mode`)
+
+- Default is `true` during migration phases.
+- `true`: scheduled hourly capture continues writing legacy `inventory_hourly_*` outputs in addition to canonical tables.
+- `false`: scheduled hourly capture writes canonical hourly cache and lifecycle/totals caches only.
+- Disable criteria:
+  - parity/integration/compatibility test gates are passing
+  - baseline-vs-post-change metrics comparison is recorded and accepted
+  - repair/backfill workflows are validated in the target environment
+- Rollback to legacy hourly output is immediate: set `snapshot_table_compat_mode: true` and restart the service.
+- Compatibility repair/backfill workflows remain available through:
+  - `POST /api/snapshots/aggregate`
+  - `POST /api/snapshots/repair`
+  - `POST /api/snapshots/repair/all`
+  - `POST /api/snapshots/regenerate-hourly-reports`
+  - `POST /api/vcenters/cache/rebuild`
+  - `vctp -settings /path/to/vctp.yml -backfill-vcenter-cache`
+## Migration runbook (staged rollout, rollback, repair)
+
+1. Baseline: capture current metrics/state (`phase0-baseline.md` style snapshot) and verify auth/report contracts.
+2. Enable canonical runtime settings (already defaulted): `capture_write_batch_size: 1000`, `snapshot_table_compat_mode: true`, `async_report_generation: true`, `scheduled_aggregation_engine: go`.
+3. Deploy and monitor: review `/metrics`, `snapshot_runs`, `cron_status`, and generated reports for at least one full hourly/daily cycle.
+4. Validate canonicity gates: run parity/integration/compatibility suites and compare baseline vs post-change metrics.
+5. Optional compatibility reduction: set `snapshot_table_compat_mode: false` only after step 4 passes and repair workflows are validated.
+6. SQL default switch gate: only evaluate after production-scale Postgres benchmark evidence; otherwise keep `scheduled_aggregation_engine: go`.
+
+Rollback triggers:
+
+- sustained increase in `vctp_*_failed_total` metrics
+- missing/stale summary tables or report outputs
+- material mismatch between totals endpoints and expected aggregates
+- repeated job timeout or cron failure indicators
+
+Rollback actions:
+
+1. Set `scheduled_aggregation_engine: go` (if changed) and restart.
+2. Set `snapshot_table_compat_mode: true` and restart.
+3. Run `POST /api/snapshots/repair/all`.
+4. Run `POST /api/snapshots/regenerate-hourly-reports` and/or `-backfill-vcenter-cache` as needed.
+5. Re-check `/metrics`, `snapshot_runs`, and endpoint/report correctness before closing the incident.
 ## Settings Reference

 All configuration lives under the top-level `settings:` key in `vctp.yml`.
@@ -388,7 +512,8 @@ Authentication:
 - A user must belong to at least one mapped group to receive any role and log in.
 - `settings.ldap_groups` empty/omitted means no allowlist filter, but mapped-role requirement still applies.
 - `settings.ldap_bind_address`: LDAP/LDAPS URL used for authentication.
-- `settings.ldap_base_dn`: LDAP base DN for user/group lookups.
+- `settings.ldap_base_dn`: LDAP base DN fallback used for user lookup when `settings.ldap_user_base_dn` is not set.
+- `settings.ldap_user_base_dn`: optional user lookup base DN; defaults to `settings.ldap_base_dn`.
 - `settings.ldap_trust_cert_file`: optional CA cert file for LDAP TLS.
 - `settings.ldap_disable_validation`: disables LDAP TLS cert validation.
 - `settings.ldap_insecure`: insecure LDAP TLS mode.
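The fallback behaviour of `settings.ldap_user_base_dn` can be stated as a tiny pure function. This is a sketch of the documented defaulting rule, not the actual vctp implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// resolveUserBaseDN applies the documented default: when
// settings.ldap_user_base_dn is empty, user lookups fall back to
// settings.ldap_base_dn.
func resolveUserBaseDN(baseDN, userBaseDN string) string {
	if v := strings.TrimSpace(userBaseDN); v != "" {
		return v
	}
	return strings.TrimSpace(baseDN)
}

func main() {
	// Unset user base DN falls back to the base DN.
	fmt.Println(resolveUserBaseDN("DC=example,DC=com", ""))
	// A configured user base DN narrows the lookup scope.
	fmt.Println(resolveUserBaseDN("DC=example,DC=com", "OU=Users,DC=example,DC=com"))
}
```

Scoping user lookups to an OU this way keeps subtree searches cheaper on large directories while leaving existing configs (which only set `ldap_base_dn`) working unchanged.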
@@ -417,6 +542,9 @@ Snapshots:
 - `settings.hourly_index_max_age_days`: age gate for keeping per-hourly-table indexes (`-1` disables cleanup, `0` trims all)
 - `settings.snapshot_cleanup_cron`: cron expression for cleanup job
 - `settings.reports_dir`: directory to store generated XLSX reports (default: `/var/lib/vctp/reports`)
+- `settings.capture_write_batch_size`: hourly canonical write batch size (default: `1000`)
+- `settings.snapshot_table_compat_mode`: keep writing legacy hourly snapshot tables during migration (default: `true`)
+- `settings.async_report_generation`: defer report generation from the hourly capture hot path (default: `true`)
 - `settings.report_summary_pivots`: optional list to override Summary worksheet pivot titles/names/ranges in daily/monthly XLSX reports
   - `metric`: one of `avg_vcpu`, `avg_ram`, `prorated_vm_count`, `vm_name_count`
   - `title`: pivot title text shown on Summary sheet
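For illustration, a minimal `report_summary_pivots` override using only the two fields documented above (`metric` and `title`); the name/range keys mentioned above take additional entries that are not reproduced here:

```yaml
settings:
  report_summary_pivots:
    - metric: avg_vcpu
      title: "Average vCPU"
    - metric: prorated_vm_count
      title: "Prorated VM Count"
```

Omitting the list entirely keeps the built-in pivot titles.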
+1 -1
@@ -473,7 +473,7 @@ func VcenterTotalsPage(vcenter string, entries []VcenterTotalsEntry, chart Vcent
 	if templ_7745c5c3_Err != nil {
 		return templ_7745c5c3_Err
 	}
-	templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "\"></canvas><div id=\"vcenter-totals-tooltip\" class=\"web3-chart-tooltip\" aria-hidden=\"true\"></div></div><script>\n\t\t\t\t\t\t\t\twindow.Web3Charts.renderFromDataset({\n\t\t\t\t\t\t\t\t\tcanvasId: \"vcenter-totals-chart\",\n\t\t\t\t\t\t\t\t\ttooltipId: \"vcenter-totals-tooltip\",\n\t\t\t\t\t\t\t\t})\n\t\t\t\t\t\t\t</script></div>")
+	templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 26, "\"></canvas><div id=\"vcenter-totals-tooltip\" class=\"web3-chart-tooltip\" aria-hidden=\"true\"></div></div><script>\r\n\t\t\t\t\t\t\t\twindow.Web3Charts.renderFromDataset({\r\n\t\t\t\t\t\t\t\t\tcanvasId: \"vcenter-totals-chart\",\r\n\t\t\t\t\t\t\t\t\ttooltipId: \"vcenter-totals-tooltip\",\r\n\t\t\t\t\t\t\t\t})\r\n\t\t\t\t\t\t\t</script></div>")
 	if templ_7745c5c3_Err != nil {
 		return templ_7745c5c3_Err
 	}
+1 -1
@@ -194,7 +194,7 @@ func VmTracePage(query string, display_query string, vm_id string, vm_uuid strin
 	if templ_7745c5c3_Err != nil {
 		return templ_7745c5c3_Err
 	}
-	templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "\"></canvas><div id=\"vm-trace-tooltip\" class=\"web3-chart-tooltip\" aria-hidden=\"true\"></div></div><script>\n\t\t\t\t\t\t\t\twindow.Web3Charts.renderFromDataset({\n\t\t\t\t\t\t\t\t\tcanvasId: \"vm-trace-chart\",\n\t\t\t\t\t\t\t\t\ttooltipId: \"vm-trace-tooltip\",\n\t\t\t\t\t\t\t\t})\n\t\t\t\t\t\t\t</script></div>")
+	templ_7745c5c3_Err = templruntime.WriteString(templ_7745c5c3_Buffer, 10, "\"></canvas><div id=\"vm-trace-tooltip\" class=\"web3-chart-tooltip\" aria-hidden=\"true\"></div></div><script>\r\n\t\t\t\t\t\t\t\twindow.Web3Charts.renderFromDataset({\r\n\t\t\t\t\t\t\t\t\tcanvasId: \"vm-trace-chart\",\r\n\t\t\t\t\t\t\t\t\ttooltipId: \"vm-trace-tooltip\",\r\n\t\t\t\t\t\t\t\t})\r\n\t\t\t\t\t\t\t</script></div>")
 	if templ_7745c5c3_Err != nil {
 		return templ_7745c5c3_Err
 	}
+20 -20
@@ -89,7 +89,7 @@ body {
 }

 .web2-shell-wide {
-	max-width: 1420px;
+	max-width: min(1760px, calc(100vw - 2rem));
 }

 .web2-page-head {
@@ -101,21 +101,26 @@ body {
 .web2-page-head-row {
 	display: flex;
 	flex-wrap: wrap;
-	align-items: center;
+	align-items: flex-start;
 	justify-content: space-between;
 	gap: 1rem;
 }

 .web2-head-copy {
+	flex: 1 1 740px;
+	min-width: 0;
 	max-width: 72ch;
 }

 .web2-page-title {
 	margin-top: 0.6rem;
 	font-family: var(--theme_font_display);
-	font-size: clamp(1.95rem, 1.2rem + 1.9vw, 2.65rem);
+	font-size: clamp(1.7rem, 1.1rem + 1.6vw, 2.35rem);
 	line-height: 1.15;
 	letter-spacing: -0.325px;
+	overflow-wrap: anywhere;
+	word-break: break-word;
+	hyphens: auto;
 }

 .web2-page-subtitle {
@@ -129,6 +134,8 @@ body {
 	display: flex;
 	flex-wrap: wrap;
 	align-items: center;
+	justify-content: flex-end;
+	flex: 0 0 auto;
 	gap: 0.5rem;
 }

@@ -357,15 +364,6 @@ body {
 	transform: none;
 }

-.web2-button-group {
-	display: flex;
-	flex-wrap: wrap;
-}
-
-.web2-button-group .web2-button {
-	margin: 0 0.5rem 0.5rem 0;
-}
-
 .web3-button {
 	background: var(--theme_surface_primary);
 	color: var(--theme_text_primary);
@@ -411,14 +409,6 @@ body {
 	box-shadow: var(--theme_shadow_table_inset);
 }

-.web2-list li {
-	background: var(--theme_surface_primary);
-	border: 1px solid var(--theme_border);
-	border-radius: var(--theme_radius_card);
-	padding: 0.75rem 1rem;
-	box-shadow: var(--theme_shadow_card);
-}
-
 .web2-table {
 	width: 100%;
 	border-collapse: collapse;
@@ -710,6 +700,16 @@ summary:focus-visible,
 	}
 }

+@media (min-width: 1500px) {
+	.web2-shell {
+		padding-left: 1rem;
+		padding-right: 1rem;
+	}
+
+	.web2-shell-wide {
+		max-width: min(1860px, calc(100vw - 1.25rem));
+	}
+}
+
 @media (min-width: 780px) {
 	.web2-kpi-grid {
 		grid-template-columns: repeat(3, minmax(0, 1fr));
+2 -1
@@ -104,7 +104,8 @@ func (s *JWTService) IssueToken(subject string, roles []string, groups []string)
 	claims := Claims{
 		Subject:  subject,
 		Roles:    compactTrimmedStrings(roles),
-		Groups:   compactTrimmedStrings(groups),
+		// Intentionally omit LDAP groups from JWTs; role claims are sufficient for authorization.
+		Groups:   nil,
 		Issuer:   s.issuer,
 		Audience: s.audience,
 		IssuedAt: now.Unix(),
+15
@@ -57,6 +57,21 @@ func TestIssueAndVerifyTokenRoundTrip(t *testing.T) {
 	if issuedClaims.ID == "" {
 		t.Fatal("expected jti to be populated")
 	}
+	if len(issuedClaims.Groups) != 0 {
+		t.Fatalf("expected groups to be omitted from issued claims, got %#v", issuedClaims.Groups)
+	}
+	parts := strings.Split(token, ".")
+	if len(parts) != 3 {
+		t.Fatalf("expected jwt to have 3 parts, got %d", len(parts))
+	}
+	payloadJSON, err := base64.RawURLEncoding.DecodeString(parts[1])
+	if err != nil {
+		t.Fatalf("failed to decode jwt payload: %v", err)
+	}
+	if strings.Contains(string(payloadJSON), `"groups"`) {
+		t.Fatalf("expected jwt payload to omit groups claim, got payload: %s", string(payloadJSON))
+	}

 	verifiedClaims, err := svc.VerifyToken(token)
 	if err != nil {
+245 -45
@@ -25,6 +25,7 @@ var (
 type LDAPConfig struct {
 	BindAddress       string
 	BaseDN            string
+	UserBaseDN        string
 	TrustCertFile     string
 	DisableValidation bool
 	Insecure          bool
@@ -35,11 +36,17 @@ type LDAPIdentity struct {
 	Username string
 	UserDN   string
 	Groups   []string
+	BindDuration                  time.Duration
+	UserLookupDuration            time.Duration
+	GroupMembershipLookupDuration time.Duration
+	// Diagnostics contains non-sensitive LDAP processing notes useful for debugging auth decisions.
+	Diagnostics []string
 }

 type LDAPAuthenticator struct {
 	bindAddress       string
 	baseDN            string
+	userBaseDN        string
 	trustCertFile     string
 	disableValidation bool
 	insecure          bool
@@ -49,6 +56,7 @@ type LDAPAuthenticator struct {
 func NewLDAPAuthenticator(cfg LDAPConfig) (*LDAPAuthenticator, error) {
 	bindAddress := strings.TrimSpace(cfg.BindAddress)
 	baseDN := strings.TrimSpace(cfg.BaseDN)
+	userBaseDN := strings.TrimSpace(cfg.UserBaseDN)
 	trustCertFile := strings.TrimSpace(cfg.TrustCertFile)

 	if bindAddress == "" {
@@ -57,6 +65,9 @@ func NewLDAPAuthenticator(cfg LDAPConfig) (*LDAPAuthenticator, error) {
 	if baseDN == "" {
 		return nil, fmt.Errorf("%w: base DN is required", ErrInvalidLDAPConfig)
 	}
+	if userBaseDN == "" {
+		userBaseDN = baseDN
+	}
 	if _, err := url.ParseRequestURI(bindAddress); err != nil {
 		return nil, fmt.Errorf("%w: bind address must be a valid URL: %v", ErrInvalidLDAPConfig, err)
 	}
@@ -69,6 +80,7 @@ func NewLDAPAuthenticator(cfg LDAPConfig) (*LDAPAuthenticator, error) {
 	return &LDAPAuthenticator{
 		bindAddress:       bindAddress,
 		baseDN:            baseDN,
+		userBaseDN:        userBaseDN,
 		trustCertFile:     trustCertFile,
 		disableValidation: cfg.DisableValidation,
 		insecure:          cfg.Insecure,
@@ -77,13 +89,14 @@ func NewLDAPAuthenticator(cfg LDAPConfig) (*LDAPAuthenticator, error) {
 }

 func (a *LDAPAuthenticator) AuthenticateAndFetchGroups(ctx context.Context, username string, password string) (LDAPIdentity, error) {
-	username = strings.TrimSpace(username)
-	if username == "" || password == "" {
+	inputUsername := strings.TrimSpace(username)
+	if inputUsername == "" || password == "" {
 		return LDAPIdentity{}, ErrLDAPInvalidCredentials
 	}
 	if err := ctxErr(ctx); err != nil {
 		return LDAPIdentity{}, err
 	}
+	bindUsername, rewrittenToUPN := normalizeBindUsername(inputUsername, a.baseDN)

 	conn, err := a.connect()
 	if err != nil {
@@ -91,26 +104,54 @@ func (a *LDAPAuthenticator) AuthenticateAndFetchGroups(ctx context.Context, user
 	}
 	defer conn.Close()

-	if err := conn.Bind(username, password); err != nil {
+	bindStartedAt := time.Now()
+	err = conn.Bind(bindUsername, password)
+	bindDuration := time.Since(bindStartedAt)
+	if err != nil {
 		if ldap.IsErrorWithCode(err, ldap.LDAPResultInvalidCredentials) {
-			return LDAPIdentity{}, ErrLDAPInvalidCredentials
+			return LDAPIdentity{}, fmt.Errorf("%w: ldap bind rejected credentials (bind_duration=%s)", ErrLDAPInvalidCredentials, bindDuration)
 		}
-		return LDAPIdentity{}, fmt.Errorf("%w: bind failed: %v", ErrLDAPOperationFailed, err)
+		return LDAPIdentity{}, fmt.Errorf("%w: bind failed: %v (bind_duration=%s)", ErrLDAPOperationFailed, err, bindDuration)
 	}
 	if err := ctxErr(ctx); err != nil {
 		return LDAPIdentity{}, err
 	}

 	identity := LDAPIdentity{
-		Username: username,
-		UserDN:   username,
+		Username:     inputUsername,
+		UserDN:       bindUsername,
+		BindDuration: bindDuration,
+	}
+	identity.Diagnostics = append(identity.Diagnostics, fmt.Sprintf("bind_duration_ms=%d", bindDuration.Milliseconds()))
+	if rewrittenToUPN {
+		identity.Diagnostics = append(identity.Diagnostics, "bind_username_rewritten_to_upn")
+	}
+	identity.Diagnostics = append(identity.Diagnostics,
+		"user_lookup_base_dn="+a.userBaseDN,
+	)
+	if whoami, err := conn.WhoAmI(nil); err != nil {
+		identity.Diagnostics = append(identity.Diagnostics, fmt.Sprintf("whoami_failed:%v", err))
+	} else if boundDN := parseWhoAmIDN(whoami.AuthzID); boundDN != "" {
+		identity.UserDN = boundDN
+		identity.Diagnostics = append(identity.Diagnostics, "whoami_dn_resolved")
+	} else if strings.TrimSpace(whoami.AuthzID) == "" {
+		identity.Diagnostics = append(identity.Diagnostics, "whoami_dn_empty")
+	} else {
+		identity.Diagnostics = append(identity.Diagnostics, "whoami_non_dn_authzid")
 	}

-	entry, err := a.lookupUserEntry(conn, username)
+	userLookupStartedAt := time.Now()
+	entry, lookupStrategy, err := a.lookupUserEntry(conn, inputUsername, identity.UserDN)
+	identity.UserLookupDuration = time.Since(userLookupStartedAt)
+	identity.Diagnostics = append(identity.Diagnostics, fmt.Sprintf("user_lookup_duration_ms=%d", identity.UserLookupDuration.Milliseconds()))
 	if err != nil {
-		return LDAPIdentity{}, err
+		return LDAPIdentity{}, fmt.Errorf("%w: %v (bind_duration=%s user_lookup_duration=%s)", ErrLDAPOperationFailed, err, identity.BindDuration, identity.UserLookupDuration)
 	}
 	if entry != nil {
+		if lookupStrategy == "" {
+			lookupStrategy = "unknown"
+		}
+		identity.Diagnostics = append(identity.Diagnostics, "user_entry_found:"+lookupStrategy)
 		if strings.TrimSpace(entry.DN) != "" {
 			identity.UserDN = entry.DN
 		}
@@ -122,9 +163,12 @@ func (a *LDAPAuthenticator) AuthenticateAndFetchGroups(ctx context.Context, user
 		); v != "" {
 			identity.Username = v
 		}
+	} else {
+		identity.Diagnostics = append(identity.Diagnostics, "user_entry_not_found")
 	}

 	groupSet := make(map[string]struct{})
+	groupLookupStartedAt := time.Now()
 	if entry != nil {
 		for _, groupDN := range entry.GetAttributeValues("memberOf") {
 			groupDN = strings.TrimSpace(groupDN)
@@ -135,30 +179,14 @@ func (a *LDAPAuthenticator) AuthenticateAndFetchGroups(ctx context.Context, user
 		}
 	}

-	groupEntries, err := conn.Search(ldap.NewSearchRequest(
-		a.baseDN,
-		ldap.ScopeWholeSubtree,
-		ldap.NeverDerefAliases,
-		0,
-		0,
-		false,
-		fmt.Sprintf("(|(member=%s)(uniqueMember=%s)(memberUid=%s))",
-			ldap.EscapeFilter(identity.UserDN),
-			ldap.EscapeFilter(identity.UserDN),
-			ldap.EscapeFilter(username),
-		),
-		[]string{"dn"},
-		nil,
-	))
-	if err == nil {
-		for _, e := range groupEntries.Entries {
-			if dn := strings.TrimSpace(e.DN); dn != "" {
-				groupSet[dn] = struct{}{}
-			}
-		}
-	}
+	// Intentionally skip subtree group membership search for now.
+	// Authorization is based only on direct group membership values present in the user entry (memberOf).
+	identity.GroupMembershipLookupDuration = time.Since(groupLookupStartedAt)
+	identity.Diagnostics = append(identity.Diagnostics, fmt.Sprintf("group_lookup_duration_ms=%d", identity.GroupMembershipLookupDuration.Milliseconds()))
+	identity.Diagnostics = append(identity.Diagnostics, "group_search_skipped_direct_memberof_only")

 	identity.Groups = mapKeysSorted(groupSet)
+	identity.Diagnostics = compactTrimmedStrings(identity.Diagnostics)
 	return identity, nil
 }
@@ -261,10 +289,27 @@ func (a *LDAPAuthenticator) buildTLSConfig() (*tls.Config, error) {
 	return tlsConfig, nil
 }

-func (a *LDAPAuthenticator) lookupUserEntry(conn *ldap.Conn, username string) (*ldap.Entry, error) {
+func (a *LDAPAuthenticator) lookupUserEntry(conn *ldap.Conn, username string, userDNHint string) (*ldap.Entry, string, error) {
+	dnCandidates := make([]string, 0, 2)
+	if looksLikeDN(userDNHint) {
+		dnCandidates = append(dnCandidates, strings.TrimSpace(userDNHint))
+	}
 	if looksLikeDN(username) {
+		dnCandidates = append(dnCandidates, strings.TrimSpace(username))
+	}
+	seenDN := make(map[string]struct{}, len(dnCandidates))
+	for _, dn := range dnCandidates {
+		key := normalizeDN(dn)
+		if key == "" {
+			continue
+		}
+		if _, ok := seenDN[key]; ok {
+			continue
+		}
+		seenDN[key] = struct{}{}
 		searchRes, err := conn.Search(ldap.NewSearchRequest(
-			username,
+			dn,
 			ldap.ScopeBaseObject,
 			ldap.NeverDerefAliases,
 			1,
@@ -275,32 +320,70 @@ func (a *LDAPAuthenticator) lookupUserEntry(conn *ldap.Conn, username string) (*
 			nil,
 		))
 		if err != nil {
-			return nil, fmt.Errorf("%w: unable to load user entry: %v", ErrLDAPOperationFailed, err)
+			if ldap.IsErrorWithCode(err, ldap.LDAPResultNoSuchObject) {
+				continue
+			}
+			return nil, "", fmt.Errorf("%w: unable to load user entry by dn: %v", ErrLDAPOperationFailed, err)
 		}
-		if len(searchRes.Entries) == 0 {
+		if len(searchRes.Entries) > 0 {
+			return searchRes.Entries[0], "dn", nil
+		}
+	}
+
+	for _, principal := range principalCandidates(username) {
+		if strings.Contains(principal, "@") {
+			entry, err := a.searchUserByAttribute(conn, "userPrincipalName", principal)
+			if err != nil {
+				return nil, "", err
+			}
+			if entry != nil {
+				return entry, "principal_upn", nil
+			}
+			// For UPN principals, avoid fallback attribute probes that are unlikely to match
+			// and can be expensive on large directory trees.
+			continue
+		}
+		entry, err := a.searchUserByAttribute(conn, "sAMAccountName", principal)
+		if err != nil {
+			return nil, "", err
+		}
+		if entry != nil {
+			return entry, "principal_samaccountname", nil
+		}
+		// Keep uid lookup as a fallback for non-AD LDAP directories.
+		entry, err = a.searchUserByAttribute(conn, "uid", principal)
+		if err != nil {
+			return nil, "", err
+		}
+		if entry != nil {
+			return entry, "principal_uid", nil
+		}
+	}
+	return nil, "", nil
+}
+
+func (a *LDAPAuthenticator) searchUserByAttribute(conn *ldap.Conn, attribute string, value string) (*ldap.Entry, error) {
+	attribute = strings.TrimSpace(attribute)
+	value = strings.TrimSpace(value)
+	if attribute == "" || value == "" {
 		return nil, nil
 	}
-	return searchRes.Entries[0], nil
-	}

 	searchRes, err := conn.Search(ldap.NewSearchRequest(
-		a.baseDN,
+		a.userBaseDN,
 		ldap.ScopeWholeSubtree,
 		ldap.NeverDerefAliases,
 		2,
 		0,
 		false,
-		fmt.Sprintf("(|(uid=%s)(cn=%s)(sAMAccountName=%s)(userPrincipalName=%s))",
-			ldap.EscapeFilter(username),
-			ldap.EscapeFilter(username),
-			ldap.EscapeFilter(username),
-			ldap.EscapeFilter(username),
-		),
+		fmt.Sprintf("(%s=%s)", attribute, ldap.EscapeFilter(value)),
 		[]string{"uid", "sAMAccountName", "userPrincipalName", "cn", "memberOf"},
 		nil,
 	))
 	if err != nil {
-		return nil, fmt.Errorf("%w: user lookup failed: %v", ErrLDAPOperationFailed, err)
+		return nil, fmt.Errorf("%w: user lookup failed (%s): %v", ErrLDAPOperationFailed, attribute, err)
 	}
 	if len(searchRes.Entries) == 0 {
 		return nil, nil
@@ -341,6 +424,123 @@ func looksLikeDN(value string) bool {
return strings.Contains(value, "=") && strings.Contains(value, ",") return strings.Contains(value, "=") && strings.Contains(value, ",")
} }
func parseWhoAmIDN(authzID string) string {
authzID = strings.TrimSpace(authzID)
if authzID == "" {
return ""
}
lower := strings.ToLower(authzID)
if strings.HasPrefix(lower, "dn:") {
authzID = strings.TrimSpace(authzID[3:])
}
if !looksLikeDN(authzID) {
return ""
}
return authzID
}
func normalizeBindUsername(username string, baseDN string) (string, bool) {
username = strings.TrimSpace(username)
if username == "" {
return "", false
}
if looksLikeDN(username) || strings.Contains(username, "@") {
return username, false
}
// Convert DOMAIN\user to user before UPN rewrite.
if idx := strings.LastIndex(username, `\`); idx >= 0 && idx < len(username)-1 {
username = strings.TrimSpace(username[idx+1:])
}
domain := upnDomainFromBaseDN(baseDN)
if domain == "" {
return username, false
}
if strings.Contains(username, "@") {
return username, false
}
return username + "@" + domain, true
}
func upnDomainFromBaseDN(baseDN string) string {
baseDN = strings.TrimSpace(baseDN)
if baseDN == "" {
return ""
}
parts := strings.Split(baseDN, ",")
labels := make([]string, 0, len(parts))
for _, part := range parts {
part = strings.TrimSpace(part)
if len(part) < 3 || !strings.EqualFold(part[:3], "dc=") {
continue
}
label := strings.TrimSpace(part[3:])
if label == "" {
continue
}
labels = append(labels, label)
}
if len(labels) == 0 {
return ""
}
return strings.Join(labels, ".")
}
func principalCandidates(username string) []string {
username = strings.TrimSpace(username)
if username == "" {
return nil
}
seen := make(map[string]struct{}, 4)
candidates := make([]string, 0, 4)
add := func(value string) {
value = strings.TrimSpace(value)
if value == "" {
return
}
key := strings.ToLower(value)
if _, ok := seen[key]; ok {
return
}
seen[key] = struct{}{}
candidates = append(candidates, value)
}
add(username)
if idx := strings.LastIndex(username, `\`); idx >= 0 && idx < len(username)-1 {
add(username[idx+1:])
}
if idx := strings.Index(username, "@"); idx > 0 {
add(username[:idx])
}
return candidates
}
func buildGroupMembershipFilter(userDN string, principals []string) string {
clauses := make([]string, 0, 2+len(principals))
userDN = strings.TrimSpace(userDN)
if userDN != "" {
escapedDN := ldap.EscapeFilter(userDN)
clauses = append(clauses, "(member="+escapedDN+")", "(uniqueMember="+escapedDN+")")
}
for _, principal := range principals {
principal = strings.TrimSpace(principal)
if principal == "" {
continue
}
clauses = append(clauses, "(memberUid="+ldap.EscapeFilter(principal)+")")
}
if len(clauses) == 0 {
return "(objectClass=group)"
}
return "(|" + strings.Join(clauses, "") + ")"
}
func ctxErr(ctx context.Context) error {
if ctx == nil {
return nil
@@ -37,3 +37,174 @@ func TestHasAnyGroup(t *testing.T) {
t.Fatal("expected empty required groups to allow")
}
}
func TestPrincipalCandidates(t *testing.T) {
tests := []struct {
name string
username string
want []string
}{
{
name: "upn adds local part",
username: "L075239@corpau.wbcau.westpac.com.au",
want: []string{"L075239@corpau.wbcau.westpac.com.au", "L075239"},
},
{
name: "domain slash user adds sam",
username: `CORPAU\L075239`,
want: []string{`CORPAU\L075239`, "L075239"},
},
{
name: "plain username unchanged",
username: "L075239",
want: []string{"L075239"},
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := principalCandidates(tc.username)
if len(got) != len(tc.want) {
t.Fatalf("unexpected candidate count: got=%d want=%d (%#v)", len(got), len(tc.want), got)
}
for i := range tc.want {
if got[i] != tc.want[i] {
t.Fatalf("unexpected candidate at %d: got=%q want=%q", i, got[i], tc.want[i])
}
}
})
}
}
func TestBuildGroupMembershipFilter(t *testing.T) {
filter := buildGroupMembershipFilter(
"CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
[]string{"L075239@corpau.wbcau.westpac.com.au", "L075239"},
)
expected := "(|(member=CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au)(uniqueMember=CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au)(memberUid=L075239@corpau.wbcau.westpac.com.au)(memberUid=L075239))"
if filter != expected {
t.Fatalf("unexpected group filter:\n got: %s\nwant: %s", filter, expected)
}
}
func TestParseWhoAmIDN(t *testing.T) {
tests := []struct {
name string
authzID string
wantDN string
}{
{
name: "dn prefix",
authzID: "dn:CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
wantDN: "CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
},
{
name: "dn prefix upper",
authzID: "DN:CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
wantDN: "CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
},
{
name: "non dn authzid",
authzID: "u:L075239@corpau.wbcau.westpac.com.au",
wantDN: "",
},
{
name: "plain non dn",
authzID: "L075239@corpau.wbcau.westpac.com.au",
wantDN: "",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := parseWhoAmIDN(tc.authzID)
if got != tc.wantDN {
t.Fatalf("unexpected whoami dn parse: got=%q want=%q", got, tc.wantDN)
}
})
}
}
func TestUPNDomainFromBaseDN(t *testing.T) {
tests := []struct {
name string
baseDN string
want string
}{
{
name: "standard dc chain",
baseDN: "dc=corpau,dc=wbcau,dc=westpac,dc=com,dc=au",
want: "corpau.wbcau.westpac.com.au",
},
{
name: "mixed dn parts",
baseDN: "ou=Users,dc=example,dc=com",
want: "example.com",
},
{
name: "no dc parts",
baseDN: "ou=Users,ou=Org",
want: "",
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
got := upnDomainFromBaseDN(tc.baseDN)
if got != tc.want {
t.Fatalf("unexpected upn domain from base dn: got=%q want=%q", got, tc.want)
}
})
}
}
func TestNormalizeBindUsername(t *testing.T) {
tests := []struct {
name string
username string
baseDN string
wantUser string
wantRewrite bool
}{
{
name: "plain sam rewritten",
username: "L075239",
baseDN: "dc=corpau,dc=wbcau,dc=westpac,dc=com,dc=au",
wantUser: "L075239@corpau.wbcau.westpac.com.au",
wantRewrite: true,
},
{
name: "domain user rewritten",
username: `CORPAU\L075239`,
baseDN: "dc=corpau,dc=wbcau,dc=westpac,dc=com,dc=au",
wantUser: "L075239@corpau.wbcau.westpac.com.au",
wantRewrite: true,
},
{
name: "upn unchanged",
username: "L075239@corpau.wbcau.westpac.com.au",
baseDN: "dc=corpau,dc=wbcau,dc=westpac,dc=com,dc=au",
wantUser: "L075239@corpau.wbcau.westpac.com.au",
wantRewrite: false,
},
{
name: "dn unchanged",
username: "CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
baseDN: "dc=corpau,dc=wbcau,dc=westpac,dc=com,dc=au",
wantUser: "CN=User,OU=Users,DC=corpau,DC=wbcau,DC=westpac,DC=com,DC=au",
wantRewrite: false,
},
}
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
gotUser, gotRewrite := normalizeBindUsername(tc.username, tc.baseDN)
if gotUser != tc.wantUser {
t.Fatalf("unexpected normalized bind username: got=%q want=%q", gotUser, tc.wantUser)
}
if gotRewrite != tc.wantRewrite {
t.Fatalf("unexpected rewrite flag: got=%v want=%v", gotRewrite, tc.wantRewrite)
}
})
}
}
@@ -79,6 +79,7 @@ type SettingsYML struct {
LDAPGroups []string `yaml:"ldap_groups"`
LDAPBindAddress string `yaml:"ldap_bind_address"`
LDAPBaseDN string `yaml:"ldap_base_dn"`
LDAPUserBaseDN string `yaml:"ldap_user_base_dn"`
LDAPTrustCertFile string `yaml:"ldap_trust_cert_file"`
LDAPDisableValidation bool `yaml:"ldap_disable_validation"`
LDAPInsecure bool `yaml:"ldap_insecure"`
@@ -284,6 +285,7 @@ func applyDefaultsAndValidateSettings(cfg *SettingsYML) error {
s.AuthJWTSigningKey = strings.TrimSpace(s.AuthJWTSigningKey)
s.LDAPBindAddress = strings.TrimSpace(s.LDAPBindAddress)
s.LDAPBaseDN = strings.TrimSpace(s.LDAPBaseDN)
s.LDAPUserBaseDN = strings.TrimSpace(s.LDAPUserBaseDN)
s.LDAPTrustCertFile = strings.TrimSpace(s.LDAPTrustCertFile)
s.LDAPGroups = compactTrimmedStrings(s.LDAPGroups)
@@ -340,6 +342,9 @@ func applyDefaultsAndValidateSettings(cfg *SettingsYML) error {
if s.LDAPBaseDN == "" {
return errors.New("settings.ldap_base_dn is required when settings.auth_enabled=true")
}
if s.LDAPUserBaseDN == "" {
s.LDAPUserBaseDN = s.LDAPBaseDN
}
if len(s.AuthGroupRoleMappings) == 0 {
return errors.New("settings.auth_group_role_mappings must define at least one mapping when settings.auth_enabled=true")
}
@@ -193,6 +193,9 @@ func TestReadYMLSettingsAcceptsValidAuthConfigAndNormalizesMappings(t *testing.T
if len(got.LDAPGroups) != 1 || got.LDAPGroups[0] != "cn=vctp-viewers,ou=groups,dc=example,dc=com" {
t.Fatalf("expected ldap_groups to be compacted+trimmed, got %#v", got.LDAPGroups)
}
if got.LDAPUserBaseDN != "dc=example,dc=com" {
t.Fatalf("expected default ldap_user_base_dn to fall back to ldap_base_dn, got %q", got.LDAPUserBaseDN)
}
if got.AuthGroupRoleMappings["cn=vctp-admins,ou=groups,dc=example,dc=com"] != authRoleAdmin {
t.Fatalf("expected admin mapping to normalize role to %q, got %#v", authRoleAdmin, got.AuthGroupRoleMappings)
}
@@ -0,0 +1,511 @@
package tasks
import (
"context"
"database/sql"
"fmt"
"testing"
"time"
"vctp/db"
"vctp/internal/settings"
"github.com/jmoiron/sqlx"
)
func TestCanonicalDailyFlow_WritesRollupAndTotalsCache(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTask(dbConn)
if err := db.EnsureVmHourlyStats(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_hourly_stats: %v", err)
}
dayStart := time.Date(2026, time.March, 10, 0, 0, 0, 0, time.UTC)
dayEnd := dayStart.AddDate(0, 0, 1)
t1 := dayStart.Add(1 * time.Hour).Unix()
t2 := dayStart.Add(2 * time.Hour).Unix()
t3 := dayStart.Add(3 * time.Hour).Unix()
seeds := []hourlySeedRow{
{SnapshotTime: t1, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 100, VcpuCount: 2, RamGB: 8, CreationTime: dayStart.Add(-1 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1", ResourcePool: "Gold", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 120, VcpuCount: 4, RamGB: 8, CreationTime: dayStart.Add(-1 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-a2", Vcenter: "vc-a", VmID: "vm-a2", VmUUID: "uuid-a2", ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 40, VcpuCount: 1, RamGB: 4, CreationTime: dayStart.Add(-2 * time.Hour).Unix(), DeletionTime: dayStart.Add(4 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t1, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
}
for _, seed := range seeds {
if err := insertHourlyCacheSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert hourly seed row: %v", err)
}
}
aggMap, snapTimes, err := task.scanHourlyCache(ctx, dayStart, dayEnd)
if err != nil {
t.Fatalf("scanHourlyCache failed: %v", err)
}
if len(aggMap) != 3 {
t.Fatalf("unexpected daily agg key count: got %d want %d", len(aggMap), 3)
}
if len(snapTimes) != 3 {
t.Fatalf("unexpected snapshot time count: got %d want %d", len(snapTimes), 3)
}
totalSamplesByVcenter := sampleCountsByVcenter(aggMap)
if totalSamplesByVcenter["vc-a"] != 2 || totalSamplesByVcenter["vc-b"] != 3 {
t.Fatalf("unexpected per-vcenter sample counts: %#v", totalSamplesByVcenter)
}
summaryTable, err := db.SafeTableName("test_daily_canonical_integration_summary")
if err != nil {
t.Fatalf("failed to build summary table name: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, summaryTable); err != nil {
t.Fatalf("failed to ensure summary table: %v", err)
}
if err := task.insertDailyAggregates(ctx, summaryTable, aggMap, len(snapTimes), totalSamplesByVcenter); err != nil {
t.Fatalf("insertDailyAggregates failed: %v", err)
}
if err := task.persistDailyRollup(ctx, dayStart.Unix(), aggMap, len(snapTimes), totalSamplesByVcenter); err != nil {
t.Fatalf("persistDailyRollup failed: %v", err)
}
rollupAgg, err := task.scanDailyRollup(ctx, dayStart, dayEnd)
if err != nil {
t.Fatalf("scanDailyRollup failed: %v", err)
}
if len(rollupAgg) != len(aggMap) {
t.Fatalf("unexpected rollup agg key count: got %d want %d", len(rollupAgg), len(aggMap))
}
refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "daily", dayStart.Unix())
if err != nil {
t.Fatalf("ReplaceVcenterAggregateTotalsFromSummary(daily) failed: %v", err)
}
if refreshed != 2 {
t.Fatalf("unexpected daily refreshed vcenter rows: got %d want %d", refreshed, 2)
}
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "daily", dayStart.Unix())
assertRollupTotalSamplesForVcenter(t, ctx, dbConn, dayStart.Unix(), "vc-a", 2)
assertRollupTotalSamplesForVcenter(t, ctx, dbConn, dayStart.Unix(), "vc-b", 3)
}
func TestCanonicalMonthlyFlow_WritesSummaryAndTotalsCache(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTask(dbConn)
if err := db.EnsureVmDailyRollup(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_daily_rollup: %v", err)
}
monthStart := time.Date(2026, time.April, 1, 0, 0, 0, 0, time.UTC)
monthEnd := monthStart.AddDate(0, 1, 0)
day1 := monthStart.AddDate(0, 0, 5).Unix()
day2 := monthStart.AddDate(0, 0, 6).Unix()
rollupSeeds := []dailySeedRow{
{
SnapshotTime: day1, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1",
ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod",
ProvisionedDisk: 120, VcpuCount: 4, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 6, SumRam: 12, SumDisk: 240, BronzeHits: 2,
},
{
SnapshotTime: day2, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1",
ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod",
ProvisionedDisk: 110, VcpuCount: 2, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 4, SumRam: 16, SumDisk: 220, TinHits: 2,
},
{
SnapshotTime: day1, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1",
ResourcePool: "Gold", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod",
ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: monthStart.Add(-10 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 16, SumRam: 64, SumDisk: 400, GoldHits: 2,
},
{
SnapshotTime: day2, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1",
ResourcePool: "Gold", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod",
ProvisionedDisk: 210, VcpuCount: 8, RamGB: 32, CreationTime: monthStart.Add(-10 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 16, SumRam: 64, SumDisk: 420, GoldHits: 2,
},
}
for _, seed := range rollupSeeds {
if err := insertDailyRollupSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert daily rollup seed row: %v", err)
}
}
aggMap, err := task.scanDailyRollup(ctx, monthStart, monthEnd)
if err != nil {
t.Fatalf("scanDailyRollup failed: %v", err)
}
if len(aggMap) != 2 {
t.Fatalf("unexpected monthly agg key count: got %d want %d", len(aggMap), 2)
}
summaryTable, err := db.SafeTableName("test_monthly_canonical_integration_summary")
if err != nil {
t.Fatalf("failed to build monthly summary table name: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, summaryTable); err != nil {
t.Fatalf("failed to ensure monthly summary table: %v", err)
}
if err := task.insertMonthlyAggregates(ctx, summaryTable, aggMap); err != nil {
t.Fatalf("insertMonthlyAggregates failed: %v", err)
}
refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "monthly", monthStart.Unix())
if err != nil {
t.Fatalf("ReplaceVcenterAggregateTotalsFromSummary(monthly) failed: %v", err)
}
if refreshed != 2 {
t.Fatalf("unexpected monthly refreshed vcenter rows: got %d want %d", refreshed, 2)
}
monthlyRows, err := loadMonthlySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load monthly summary rows: %v", err)
}
if len(monthlyRows) != 2 {
t.Fatalf("unexpected monthly summary row count: got %d want %d", len(monthlyRows), 2)
}
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "monthly", monthStart.Unix())
}
func TestScheduledCanonicalDailyTaskFlow_WritesSummaryRollupRegistryAndTotalsCache(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTaskForAggregateFlow(t, dbConn)
if err := db.EnsureVmHourlyStats(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_hourly_stats: %v", err)
}
dayStart := time.Date(2026, time.March, 12, 0, 0, 0, 0, time.UTC)
t1 := dayStart.Add(1 * time.Hour).Unix()
t2 := dayStart.Add(2 * time.Hour).Unix()
t3 := dayStart.Add(3 * time.Hour).Unix()
seeds := []hourlySeedRow{
{SnapshotTime: t1, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 100, VcpuCount: 2, RamGB: 8, CreationTime: dayStart.Add(-1 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1", ResourcePool: "Gold", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 120, VcpuCount: 4, RamGB: 8, CreationTime: dayStart.Add(-1 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-a2", Vcenter: "vc-a", VmID: "vm-a2", VmUUID: "uuid-a2", ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 40, VcpuCount: 1, RamGB: 4, CreationTime: dayStart.Add(-2 * time.Hour).Unix(), DeletionTime: dayStart.Add(4 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t1, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1", ResourcePool: "Silver", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod", ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: dayStart.Add(-3 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
}
for _, seed := range seeds {
if err := insertHourlyCacheSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert hourly seed row: %v", err)
}
}
if err := task.aggregateDailySummaryWithMode(ctx, dayStart, true, true); err != nil {
t.Fatalf("aggregateDailySummaryWithMode failed: %v", err)
}
summaryTable, err := dailySummaryTableName(dayStart)
if err != nil {
t.Fatalf("failed to build summary table name: %v", err)
}
rows, err := loadDailySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load daily summary rows: %v", err)
}
if len(rows) != 3 {
t.Fatalf("unexpected daily summary row count: got %d want %d", len(rows), 3)
}
assertSnapshotRegistryRow(t, ctx, dbConn, "daily", summaryTable, dayStart.Unix(), int64(len(rows)))
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "daily", dayStart.Unix())
assertRollupTotalSamplesForVcenter(t, ctx, dbConn, dayStart.Unix(), "vc-a", 2)
assertRollupTotalSamplesForVcenter(t, ctx, dbConn, dayStart.Unix(), "vc-b", 3)
}
func TestScheduledCanonicalMonthlyTaskFlow_WritesSummaryRegistryAndTotalsCache(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTaskForAggregateFlow(t, dbConn)
if err := db.EnsureVmDailyRollup(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_daily_rollup: %v", err)
}
targetMonth := time.Date(2026, time.April, 20, 0, 0, 0, 0, time.UTC)
monthStart := time.Date(targetMonth.Year(), targetMonth.Month(), 1, 0, 0, 0, 0, targetMonth.Location())
day1 := monthStart.AddDate(0, 0, 5).Unix()
day2 := monthStart.AddDate(0, 0, 6).Unix()
rollupSeeds := []dailySeedRow{
{
SnapshotTime: day1, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1",
ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod",
ProvisionedDisk: 120, VcpuCount: 4, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 6, SumRam: 12, SumDisk: 240, BronzeHits: 2,
},
{
SnapshotTime: day2, Name: "vm-a1", Vcenter: "vc-a", VmID: "vm-a1", VmUUID: "uuid-a1",
ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod",
ProvisionedDisk: 110, VcpuCount: 2, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 4, SumRam: 16, SumDisk: 220, TinHits: 2,
},
{
SnapshotTime: day1, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1",
ResourcePool: "Gold", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod",
ProvisionedDisk: 200, VcpuCount: 8, RamGB: 32, CreationTime: monthStart.Add(-10 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 16, SumRam: 64, SumDisk: 400, GoldHits: 2,
},
{
SnapshotTime: day2, Name: "vm-b1", Vcenter: "vc-b", VmID: "vm-b1", VmUUID: "uuid-b1",
ResourcePool: "Gold", Datacenter: "dc-b", Cluster: "cluster-b", Folder: "/prod",
ProvisionedDisk: 210, VcpuCount: 8, RamGB: 32, CreationTime: monthStart.Add(-10 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, TotalSamples: 2, SumVcpu: 16, SumRam: 64, SumDisk: 420, GoldHits: 2,
},
}
for _, seed := range rollupSeeds {
if err := insertDailyRollupSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert daily rollup seed row: %v", err)
}
}
if err := task.aggregateMonthlySummaryWithMode(ctx, targetMonth, true, true); err != nil {
t.Fatalf("aggregateMonthlySummaryWithMode failed: %v", err)
}
summaryTable, err := monthlySummaryTableName(targetMonth)
if err != nil {
t.Fatalf("failed to build monthly summary table name: %v", err)
}
rows, err := loadMonthlySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load monthly summary rows: %v", err)
}
if len(rows) != 2 {
t.Fatalf("unexpected monthly summary row count: got %d want %d", len(rows), 2)
}
assertSnapshotRegistryRow(t, ctx, dbConn, "monthly", summaryTable, monthStart.Unix(), int64(len(rows)))
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "monthly", monthStart.Unix())
}
func TestScheduledCanonicalDailyTaskFlow_LifecycleEdgeCases(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTaskForAggregateFlow(t, dbConn)
if err := db.EnsureVmHourlyStats(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_hourly_stats: %v", err)
}
if err := db.EnsureVmLifecycleCache(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_lifecycle_cache: %v", err)
}
dayStart := time.Date(2026, time.March, 13, 0, 0, 0, 0, time.UTC)
dayEnd := dayStart.AddDate(0, 0, 1)
t1 := dayStart.Add(1 * time.Hour).Unix()
t2 := dayStart.Add(2 * time.Hour).Unix()
t3 := dayStart.Add(3 * time.Hour).Unix()
seeds := []hourlySeedRow{
// Deleted VM: appears only once; deletion should be inferred at first missing snapshot (t2).
{SnapshotTime: t1, Name: "vm-gone", Vcenter: "vc-a", VmID: "vm-g", VmUUID: "uuid-g", ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 80, VcpuCount: 4, RamGB: 16, CreationTime: dayStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
// Resource-change VM: verify last pool + averaged CPU/RAM/disk + pool mix percentages.
{SnapshotTime: t1, Name: "vm-change", Vcenter: "vc-a", VmID: "vm-c", VmUUID: "uuid-c", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 100, VcpuCount: 2, RamGB: 8, CreationTime: dayStart.Add(-48 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-change", Vcenter: "vc-a", VmID: "vm-c", VmUUID: "uuid-c", ResourcePool: "Silver", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 120, VcpuCount: 4, RamGB: 16, CreationTime: dayStart.Add(-48 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-change", Vcenter: "vc-a", VmID: "vm-c", VmUUID: "uuid-c", ResourcePool: "Gold", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 140, VcpuCount: 6, RamGB: 24, CreationTime: dayStart.Add(-48 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
// Missing-creation VM: snapshot rows lack CreationTime; lifecycle cache should backfill FirstSeen (t2).
{SnapshotTime: t2, Name: "vm-partial", Vcenter: "vc-a", VmID: "vm-p", VmUUID: "uuid-p", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 60, VcpuCount: 2, RamGB: 8, CreationTime: 0, IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-partial", Vcenter: "vc-a", VmID: "vm-p", VmUUID: "uuid-p", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 60, VcpuCount: 2, RamGB: 8, CreationTime: 0, IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
}
for _, seed := range seeds {
if err := insertHourlyCacheSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert hourly edge-case seed row: %v", err)
}
}
if err := db.UpsertVmLifecycleCache(ctx, dbConn, "vc-a", "vm-p", "uuid-p", "vm-partial", "cluster-a", time.Unix(t2, 0), sql.NullInt64{}); err != nil {
t.Fatalf("failed to upsert lifecycle cache for vm-partial: %v", err)
}
if err := task.aggregateDailySummaryWithMode(ctx, dayStart, true, true); err != nil {
t.Fatalf("aggregateDailySummaryWithMode failed: %v", err)
}
summaryTable, err := dailySummaryTableName(dayStart)
if err != nil {
t.Fatalf("failed to build summary table name: %v", err)
}
rows, err := loadDailySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load daily summary rows: %v", err)
}
if len(rows) != 3 {
t.Fatalf("unexpected daily summary row count: got %d want %d", len(rows), 3)
}
byKey := mapRowsByKeyDaily(rows)
partial := byKey["vc-a|vm-p|uuid-p|vm-partial"]
if partial.CreationTime != t2 {
t.Fatalf("expected vm-partial creation to be backfilled from lifecycle FirstSeen: got %d want %d", partial.CreationTime, t2)
}
wantPartialPresence := float64(dayEnd.Unix()-t2) / float64(dayEnd.Unix()-dayStart.Unix())
if !approxEqual(partial.AvgIsPresent, wantPartialPresence, 1e-9) {
t.Fatalf("unexpected vm-partial AvgIsPresent after lifecycle creation backfill: got %.12f want %.12f", partial.AvgIsPresent, wantPartialPresence)
}
gone := byKey["vc-a|vm-g|uuid-g|vm-gone"]
if gone.DeletionTime != t2 {
t.Fatalf("expected vm-gone deletion to be inferred from consecutive misses: got %d want %d", gone.DeletionTime, t2)
}
wantGonePresence := float64(t2-dayStart.Unix()) / float64(dayEnd.Unix()-dayStart.Unix())
if !approxEqual(gone.AvgIsPresent, wantGonePresence, 1e-9) {
t.Fatalf("unexpected vm-gone AvgIsPresent after inferred deletion: got %.12f want %.12f", gone.AvgIsPresent, wantGonePresence)
}
change := byKey["vc-a|vm-c|uuid-c|vm-change"]
if change.ResourcePool != "Gold" {
t.Fatalf("unexpected vm-change ResourcePool: got %q want %q", change.ResourcePool, "Gold")
}
if !approxEqual(change.AvgVcpuCount, 4.0, 1e-9) {
t.Fatalf("unexpected vm-change AvgVcpuCount: got %.12f want %.12f", change.AvgVcpuCount, 4.0)
}
if !approxEqual(change.AvgRamGB, 16.0, 1e-9) {
t.Fatalf("unexpected vm-change AvgRamGB: got %.12f want %.12f", change.AvgRamGB, 16.0)
}
if !approxEqual(change.AvgProvisionedDisk, 120.0, 1e-9) {
t.Fatalf("unexpected vm-change AvgProvisionedDisk: got %.12f want %.12f", change.AvgProvisionedDisk, 120.0)
}
if !approxEqual(change.PoolTinPct, 100.0/3.0, 1e-9) || !approxEqual(change.PoolSilverPct, 100.0/3.0, 1e-9) || !approxEqual(change.PoolGoldPct, 100.0/3.0, 1e-9) {
t.Fatalf("unexpected vm-change pool percentages: tin=%.12f silver=%.12f gold=%.12f", change.PoolTinPct, change.PoolSilverPct, change.PoolGoldPct)
}
}
type summaryTotalsByVcenter struct {
Vcenter string `db:"vcenter"`
VmCount int64 `db:"vm_count"`
VcpuTotal int64 `db:"vcpu_total"`
RamTotal int64 `db:"ram_total"`
}
func newTasksTestCronTaskForAggregateFlow(t *testing.T, dbConn *sqlx.DB) *CronTask {
t.Helper()
task := newTasksTestCronTask(dbConn)
cfg := &settings.Settings{Values: &settings.SettingsYML{}}
asyncReports := false
cfg.Values.Settings.AsyncReportGeneration = &asyncReports
cfg.Values.Settings.ReportsDir = t.TempDir()
cfg.Values.Settings.MonthlyAggregationGranularity = "daily"
cfg.Values.Settings.ScheduledAggregationEngine = "go"
task.Settings = cfg
return task
}
func assertSummaryCacheMatchesByVcenter(t *testing.T, ctx context.Context, dbConn *sqlx.DB, summaryTable, snapshotType string, snapshotTime int64) {
t.Helper()
sql := fmt.Sprintf(`
SELECT
"Vcenter" AS vcenter,
COUNT(1) AS vm_count,
CAST(COALESCE(SUM(COALESCE("AvgVcpuCount","VcpuCount")),0) AS BIGINT) AS vcpu_total,
CAST(COALESCE(SUM(COALESCE("AvgRamGB","RamGB")),0) AS BIGINT) AS ram_total
FROM %s
GROUP BY "Vcenter"
`, summaryTable)
var expected []summaryTotalsByVcenter
if err := dbConn.SelectContext(ctx, &expected, sql); err != nil {
t.Fatalf("failed to load expected summary totals: %v", err)
}
if len(expected) == 0 {
t.Fatal("expected non-empty summary totals")
}
cacheCountQuery := dbConn.Rebind(`
SELECT COUNT(1)
FROM vcenter_aggregate_totals
WHERE "SnapshotType" = ? AND "SnapshotTime" = ?
`)
var cacheCount int
if err := dbConn.GetContext(ctx, &cacheCount, cacheCountQuery, snapshotType, snapshotTime); err != nil {
t.Fatalf("failed to count cache rows: %v", err)
}
if cacheCount != len(expected) {
t.Fatalf("unexpected cache row count: got %d want %d", cacheCount, len(expected))
}
for _, exp := range expected {
rows, err := db.ListVcenterAggregateTotals(ctx, dbConn, exp.Vcenter, snapshotType, 10)
if err != nil {
t.Fatalf("ListVcenterAggregateTotals failed for %s/%s: %v", exp.Vcenter, snapshotType, err)
}
var got *db.VcenterTotalRow
for i := range rows {
if rows[i].SnapshotTime == snapshotTime {
got = &rows[i]
break
}
}
if got == nil {
t.Fatalf("missing cache row for vcenter=%s snapshot_type=%s snapshot_time=%d", exp.Vcenter, snapshotType, snapshotTime)
}
if got.VmCount != exp.VmCount || got.VcpuTotal != exp.VcpuTotal || got.RamTotalGB != exp.RamTotal {
t.Fatalf(
"cache mismatch for vcenter=%s snapshot_type=%s: got(vm=%d vcpu=%d ram=%d) want(vm=%d vcpu=%d ram=%d)",
exp.Vcenter, snapshotType,
got.VmCount, got.VcpuTotal, got.RamTotalGB,
exp.VmCount, exp.VcpuTotal, exp.RamTotal,
)
}
}
}
func assertRollupTotalSamplesForVcenter(t *testing.T, ctx context.Context, dbConn *sqlx.DB, dayUnix int64, vcenter string, wantTotalSamples int64) {
t.Helper()
query := dbConn.Rebind(`
SELECT "TotalSamples"
FROM vm_daily_rollup
WHERE "Date" = ? AND "Vcenter" = ?
`)
var got []int64
if err := dbConn.SelectContext(ctx, &got, query, dayUnix, vcenter); err != nil {
t.Fatalf("failed to read rollup total samples for %s: %v", vcenter, err)
}
if len(got) == 0 {
t.Fatalf("no rollup rows found for vcenter=%s date=%d", vcenter, dayUnix)
}
for _, value := range got {
if value != wantTotalSamples {
t.Fatalf("unexpected rollup TotalSamples for vcenter=%s: got %d want %d (rows=%v)", vcenter, value, wantTotalSamples, got)
}
}
}
func assertSnapshotRegistryRow(t *testing.T, ctx context.Context, dbConn *sqlx.DB, snapshotType, tableName string, snapshotTime int64, snapshotCount int64) {
t.Helper()
var row struct {
SnapshotType string `db:"snapshot_type"`
TableName string `db:"table_name"`
SnapshotTime int64 `db:"snapshot_time"`
SnapshotCount int64 `db:"snapshot_count"`
}
query := dbConn.Rebind(`
SELECT snapshot_type, table_name, snapshot_time, snapshot_count
FROM snapshot_registry
WHERE table_name = ?
`)
if err := dbConn.GetContext(ctx, &row, query, tableName); err != nil {
t.Fatalf("failed to load snapshot_registry row for table %s: %v", tableName, err)
}
if row.SnapshotType != snapshotType {
t.Fatalf("unexpected snapshot type for table %s: got %s want %s", tableName, row.SnapshotType, snapshotType)
}
if row.SnapshotTime != snapshotTime {
t.Fatalf("unexpected snapshot time for table %s: got %d want %d", tableName, row.SnapshotTime, snapshotTime)
}
if row.SnapshotCount != snapshotCount {
t.Fatalf("unexpected snapshot count for table %s: got %d want %d", tableName, row.SnapshotCount, snapshotCount)
}
}
+591
@@ -0,0 +1,591 @@
package tasks
import (
"context"
"fmt"
"io"
"log/slog"
"math"
"testing"
"time"
"vctp/db"
"vctp/db/queries"
"github.com/jmoiron/sqlx"
)
type tasksTestDatabase struct {
dbConn *sqlx.DB
logger *slog.Logger
querier db.Querier
}
func (d *tasksTestDatabase) DB() *sqlx.DB { return d.dbConn }
func (d *tasksTestDatabase) Queries() db.Querier { return d.querier }
func (d *tasksTestDatabase) Logger() *slog.Logger {
if d.logger != nil {
return d.logger
}
return slog.New(slog.NewTextHandler(io.Discard, nil))
}
func (d *tasksTestDatabase) Close() error { return d.dbConn.Close() }
type dailySummaryRow struct {
Name string `db:"Name"`
Vcenter string `db:"Vcenter"`
VmId string `db:"VmId"`
VmUuid string `db:"VmUuid"`
ResourcePool string `db:"ResourcePool"`
CreationTime int64 `db:"CreationTime"`
DeletionTime int64 `db:"DeletionTime"`
SnapshotTime int64 `db:"SnapshotTime"`
SamplesPresent int64 `db:"SamplesPresent"`
AvgVcpuCount float64 `db:"AvgVcpuCount"`
AvgRamGB float64 `db:"AvgRamGB"`
AvgProvisionedDisk float64 `db:"AvgProvisionedDisk"`
AvgIsPresent float64 `db:"AvgIsPresent"`
PoolTinPct float64 `db:"PoolTinPct"`
PoolBronzePct float64 `db:"PoolBronzePct"`
PoolSilverPct float64 `db:"PoolSilverPct"`
PoolGoldPct float64 `db:"PoolGoldPct"`
}
type monthlySummaryRow struct {
Name string `db:"Name"`
Vcenter string `db:"Vcenter"`
VmId string `db:"VmId"`
VmUuid string `db:"VmUuid"`
ResourcePool string `db:"ResourcePool"`
CreationTime int64 `db:"CreationTime"`
DeletionTime int64 `db:"DeletionTime"`
SamplesPresent int64 `db:"SamplesPresent"`
AvgVcpuCount float64 `db:"AvgVcpuCount"`
AvgRamGB float64 `db:"AvgRamGB"`
AvgProvisionedDisk float64 `db:"AvgProvisionedDisk"`
AvgIsPresent float64 `db:"AvgIsPresent"`
PoolTinPct float64 `db:"PoolTinPct"`
PoolBronzePct float64 `db:"PoolBronzePct"`
PoolSilverPct float64 `db:"PoolSilverPct"`
PoolGoldPct float64 `db:"PoolGoldPct"`
}
type hourlySeedRow struct {
SnapshotTime int64
Name string
Vcenter string
VmID string
VmUUID string
ResourcePool string
Datacenter string
Cluster string
Folder string
ProvisionedDisk float64
VcpuCount int64
RamGB int64
CreationTime int64
DeletionTime int64
IsTemplate string
PoweredOn string
SrmPlaceholder string
}
type dailySeedRow struct {
SnapshotTime int64
Name string
Vcenter string
VmID string
VmUUID string
ResourcePool string
Datacenter string
Cluster string
Folder string
ProvisionedDisk float64
VcpuCount int64
RamGB int64
CreationTime int64
DeletionTime int64
IsTemplate string
PoweredOn string
SrmPlaceholder string
SamplesPresent int64
AvgVcpuCount float64
AvgRamGB float64
AvgProvisionedDisk float64
AvgIsPresent float64
PoolTinPct float64
PoolBronzePct float64
PoolSilverPct float64
PoolGoldPct float64
Tin float64
Bronze float64
Silver float64
Gold float64
TotalSamples int64
SumVcpu int64
SumRam int64
SumDisk float64
TinHits int64
BronzeHits int64
SilverHits int64
GoldHits int64
}
func TestDailyGoldenParity_SQLUnionVsGoCanonical(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTask(dbConn)
if err := db.EnsureVmHourlyStats(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_hourly_stats: %v", err)
}
dayStart := time.Date(2026, time.January, 15, 0, 0, 0, 0, time.UTC)
dayEnd := dayStart.AddDate(0, 0, 1)
t1 := dayStart.Add(1 * time.Hour).Unix()
t2 := dayStart.Add(2 * time.Hour).Unix()
t3 := dayStart.Add(3 * time.Hour).Unix()
rows := []hourlySeedRow{
{SnapshotTime: t1, Name: "vm-alpha", Vcenter: "vc-a", VmID: "vm-1", VmUUID: "uuid-1", ResourcePool: "Tin", Datacenter: "dc-1", Cluster: "cluster-1", Folder: "/prod", ProvisionedDisk: 100, VcpuCount: 2, RamGB: 8, CreationTime: 0, IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-alpha", Vcenter: "vc-a", VmID: "vm-1", VmUUID: "uuid-1", ResourcePool: "Gold", Datacenter: "dc-1", Cluster: "cluster-1", Folder: "/prod", ProvisionedDisk: 120, VcpuCount: 4, RamGB: 16, CreationTime: dayStart.Add(30 * time.Minute).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-bravo", Vcenter: "vc-a", VmID: "vm-2", VmUUID: "uuid-2", ResourcePool: "Bronze", Datacenter: "dc-1", Cluster: "cluster-1", Folder: "/prod", ProvisionedDisk: 30, VcpuCount: 1, RamGB: 2, CreationTime: dayStart.Add(-2 * time.Hour).Unix(), DeletionTime: dayStart.Add(4 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t1, Name: "vm-charlie", Vcenter: "vc-a", VmID: "vm-3", VmUUID: "uuid-3", ResourcePool: "Silver", Datacenter: "dc-1", Cluster: "cluster-2", Folder: "/prod2", ProvisionedDisk: 50, VcpuCount: 2, RamGB: 4, CreationTime: dayStart.Add(-5 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-charlie", Vcenter: "vc-a", VmID: "vm-3", VmUUID: "uuid-3", ResourcePool: "Silver", Datacenter: "dc-1", Cluster: "cluster-2", Folder: "/prod2", ProvisionedDisk: 50, VcpuCount: 2, RamGB: 4, CreationTime: dayStart.Add(-5 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t3, Name: "vm-template", Vcenter: "vc-a", VmID: "vm-t", VmUUID: "uuid-t", ResourcePool: "Tin", Datacenter: "dc-1", Cluster: "cluster-3", Folder: "/templates", ProvisionedDisk: 500, VcpuCount: 16, RamGB: 64, CreationTime: dayStart.Add(-10 * time.Hour).Unix(), IsTemplate: "TRUE", PoweredOn: "FALSE", SrmPlaceholder: "FALSE"},
}
for _, row := range rows {
if err := insertHourlyCacheSeedRow(ctx, dbConn, row); err != nil {
t.Fatalf("failed to insert vm_hourly_stats row: %v", err)
}
}
hourlyTableTimes := []int64{t1, t2, t3}
hourlyTables := make([]string, 0, len(hourlyTableTimes))
for _, ts := range hourlyTableTimes {
tableName, err := hourlyInventoryTableName(time.Unix(ts, 0).UTC())
if err != nil {
t.Fatalf("failed to build hourly table name: %v", err)
}
hourlyTables = append(hourlyTables, tableName)
if err := db.EnsureSnapshotTable(ctx, dbConn, tableName); err != nil {
t.Fatalf("failed to ensure snapshot table %s: %v", tableName, err)
}
}
for _, row := range rows {
tableName, err := hourlyInventoryTableName(time.Unix(row.SnapshotTime, 0).UTC())
if err != nil {
t.Fatalf("failed to build per-row hourly table name: %v", err)
}
if err := insertHourlySnapshotSeedRow(ctx, dbConn, tableName, row); err != nil {
t.Fatalf("failed to insert snapshot row for table %s: %v", tableName, err)
}
}
oldSummaryTable, err := db.SafeTableName("test_daily_sql_union_summary")
if err != nil {
t.Fatalf("failed to build old summary table name: %v", err)
}
newSummaryTable, err := db.SafeTableName("test_daily_go_cache_summary")
if err != nil {
t.Fatalf("failed to build new summary table name: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, oldSummaryTable); err != nil {
t.Fatalf("failed to ensure old summary table: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, newSummaryTable); err != nil {
t.Fatalf("failed to ensure new summary table: %v", err)
}
unionQuery, err := buildUnionQuery(hourlyTables, summaryUnionColumns, templateExclusionFilter())
if err != nil {
t.Fatalf("failed to build union query: %v", err)
}
insertSQL, err := db.BuildDailySummaryInsert(oldSummaryTable, unionQuery)
if err != nil {
t.Fatalf("failed to build daily sql insert: %v", err)
}
if _, err := dbConn.ExecContext(ctx, insertSQL); err != nil {
t.Fatalf("failed to execute daily sql insert: %v", err)
}
aggMap, snapTimes, err := task.scanHourlyCache(ctx, dayStart, dayEnd)
if err != nil {
t.Fatalf("scanHourlyCache failed: %v", err)
}
totalSamplesByVcenter := sampleCountsByVcenter(aggMap)
if err := task.insertDailyAggregates(ctx, newSummaryTable, aggMap, len(snapTimes), totalSamplesByVcenter); err != nil {
t.Fatalf("insertDailyAggregates failed: %v", err)
}
oldRows, err := loadDailySummaryRows(ctx, dbConn, oldSummaryTable)
if err != nil {
t.Fatalf("failed to load old daily rows: %v", err)
}
newRows, err := loadDailySummaryRows(ctx, dbConn, newSummaryTable)
if err != nil {
t.Fatalf("failed to load new daily rows: %v", err)
}
assertDailySummaryParity(t, oldRows, newRows)
byKey := mapRowsByKeyDaily(newRows)
alpha := byKey["vc-a|vm-1|uuid-1|vm-alpha"]
if !approxEqual(alpha.AvgIsPresent, 2.0/3.0, 1e-9) {
t.Fatalf("unexpected alpha AvgIsPresent: got %.12f want %.12f", alpha.AvgIsPresent, 2.0/3.0)
}
if alpha.CreationTime != dayStart.Add(30*time.Minute).Unix() {
t.Fatalf("unexpected alpha CreationTime: got %d want %d", alpha.CreationTime, dayStart.Add(30*time.Minute).Unix())
}
if alpha.ResourcePool != "Gold" {
t.Fatalf("unexpected alpha ResourcePool: got %q want %q", alpha.ResourcePool, "Gold")
}
if alpha.SnapshotTime != t3 {
t.Fatalf("unexpected alpha SnapshotTime: got %d want %d", alpha.SnapshotTime, t3)
}
if !approxEqual(alpha.PoolTinPct, 50.0, 1e-9) || !approxEqual(alpha.PoolGoldPct, 50.0, 1e-9) {
t.Fatalf("unexpected alpha pool mix: tin=%.6f gold=%.6f", alpha.PoolTinPct, alpha.PoolGoldPct)
}
bravo := byKey["vc-a|vm-2|uuid-2|vm-bravo"]
if bravo.DeletionTime != dayStart.Add(4*time.Hour).Unix() {
t.Fatalf("unexpected bravo DeletionTime: got %d want %d", bravo.DeletionTime, dayStart.Add(4*time.Hour).Unix())
}
if !approxEqual(bravo.AvgIsPresent, 1.0/3.0, 1e-9) {
t.Fatalf("unexpected bravo AvgIsPresent: got %.12f want %.12f", bravo.AvgIsPresent, 1.0/3.0)
}
}
func TestMonthlyGoldenParity_SQLDailyUnionVsGoDailyRollup(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTask(dbConn)
if err := db.EnsureVmDailyRollup(ctx, dbConn); err != nil {
t.Fatalf("failed to ensure vm_daily_rollup: %v", err)
}
monthStart := time.Date(2026, time.February, 1, 0, 0, 0, 0, time.UTC)
monthEnd := monthStart.AddDate(0, 1, 0)
day1 := time.Date(2026, time.February, 3, 0, 0, 0, 0, time.UTC)
day2 := day1.AddDate(0, 0, 1)
day1Table, err := dailySummaryTableName(day1)
if err != nil {
t.Fatalf("failed to build day1 table name: %v", err)
}
day2Table, err := dailySummaryTableName(day2)
if err != nil {
t.Fatalf("failed to build day2 table name: %v", err)
}
for _, table := range []string{day1Table, day2Table} {
if err := db.EnsureSummaryTable(ctx, dbConn, table); err != nil {
t.Fatalf("failed to ensure daily summary table %s: %v", table, err)
}
}
seeds := []dailySeedRow{
{
SnapshotTime: day1.Unix(), Name: "vm-alpha", Vcenter: "vc-a", VmID: "vm-1", VmUUID: "uuid-1",
ResourcePool: "Bronze", Datacenter: "dc-1", Cluster: "cluster-1", Folder: "/prod",
ProvisionedDisk: 100, VcpuCount: 4, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, AvgVcpuCount: 3, AvgRamGB: 6, AvgProvisionedDisk: 90, AvgIsPresent: 1.0,
PoolBronzePct: 100, Bronze: 100,
TotalSamples: 2, SumVcpu: 6, SumRam: 12, SumDisk: 180, BronzeHits: 2,
},
{
SnapshotTime: day2.Unix(), Name: "vm-alpha", Vcenter: "vc-a", VmID: "vm-1", VmUUID: "uuid-1",
ResourcePool: "Tin", Datacenter: "dc-1", Cluster: "cluster-1", Folder: "/prod",
ProvisionedDisk: 110, VcpuCount: 2, RamGB: 8, CreationTime: monthStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE",
SamplesPresent: 2, AvgVcpuCount: 2, AvgRamGB: 8, AvgProvisionedDisk: 110, AvgIsPresent: 1.0,
PoolTinPct: 100, Tin: 100,
TotalSamples: 2, SumVcpu: 4, SumRam: 16, SumDisk: 220, TinHits: 2,
},
}
for _, seed := range seeds {
targetTable := day1Table
if seed.SnapshotTime == day2.Unix() {
targetTable = day2Table
}
if err := insertDailySummarySeedRow(ctx, dbConn, targetTable, seed); err != nil {
t.Fatalf("failed to insert daily summary seed row: %v", err)
}
if err := insertDailyRollupSeedRow(ctx, dbConn, seed); err != nil {
t.Fatalf("failed to insert daily rollup seed row: %v", err)
}
}
oldMonthlyTable, err := db.SafeTableName("test_monthly_sql_union_summary")
if err != nil {
t.Fatalf("failed to build old monthly table name: %v", err)
}
newMonthlyTable, err := db.SafeTableName("test_monthly_go_rollup_summary")
if err != nil {
t.Fatalf("failed to build new monthly table name: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, oldMonthlyTable); err != nil {
t.Fatalf("failed to ensure old monthly table: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, newMonthlyTable); err != nil {
t.Fatalf("failed to ensure new monthly table: %v", err)
}
unionQuery, err := buildUnionQuery([]string{day1Table, day2Table}, monthlyUnionColumns, templateExclusionFilter())
if err != nil {
t.Fatalf("failed to build monthly union query: %v", err)
}
insertSQL, err := db.BuildMonthlySummaryInsert(oldMonthlyTable, unionQuery)
if err != nil {
t.Fatalf("failed to build monthly sql insert: %v", err)
}
if _, err := dbConn.ExecContext(ctx, insertSQL); err != nil {
t.Fatalf("failed to execute monthly sql insert: %v", err)
}
aggMap, err := task.scanDailyRollup(ctx, monthStart, monthEnd)
if err != nil {
t.Fatalf("scanDailyRollup failed: %v", err)
}
if err := task.insertMonthlyAggregates(ctx, newMonthlyTable, aggMap); err != nil {
t.Fatalf("insertMonthlyAggregates failed: %v", err)
}
oldRows, err := loadMonthlySummaryRows(ctx, dbConn, oldMonthlyTable)
if err != nil {
t.Fatalf("failed to load old monthly rows: %v", err)
}
newRows, err := loadMonthlySummaryRows(ctx, dbConn, newMonthlyTable)
if err != nil {
t.Fatalf("failed to load new monthly rows: %v", err)
}
assertMonthlySummaryParity(t, oldRows, newRows)
byKey := mapRowsByKeyMonthly(newRows)
alpha := byKey["vc-a|vm-1|uuid-1|vm-alpha"]
if !approxEqual(alpha.AvgVcpuCount, 2.5, 1e-9) {
t.Fatalf("unexpected alpha AvgVcpuCount: got %.6f want %.6f", alpha.AvgVcpuCount, 2.5)
}
if !approxEqual(alpha.AvgIsPresent, 1.0, 1e-9) {
t.Fatalf("unexpected alpha AvgIsPresent: got %.6f want %.6f", alpha.AvgIsPresent, 1.0)
}
if alpha.ResourcePool != "Tin" {
t.Fatalf("unexpected alpha ResourcePool: got %q want %q", alpha.ResourcePool, "Tin")
}
if !approxEqual(alpha.PoolTinPct, 50.0, 1e-9) || !approxEqual(alpha.PoolBronzePct, 50.0, 1e-9) {
t.Fatalf("unexpected alpha monthly pool mix: tin=%.6f bronze=%.6f", alpha.PoolTinPct, alpha.PoolBronzePct)
}
}
func newTasksTestCronTask(dbConn *sqlx.DB) *CronTask {
logger := slog.New(slog.NewTextHandler(io.Discard, nil))
return &CronTask{
Logger: logger,
Database: &tasksTestDatabase{dbConn: dbConn, logger: logger, querier: queries.New(dbConn.DB)},
}
}
func insertHourlyCacheSeedRow(ctx context.Context, dbConn *sqlx.DB, row hourlySeedRow) error {
_, err := dbConn.ExecContext(ctx, `
INSERT INTO vm_hourly_stats (
"SnapshotTime","Vcenter","VmId","VmUuid","Name","CreationTime","DeletionTime","ResourcePool",
"Datacenter","Cluster","Folder","ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`,
row.SnapshotTime, row.Vcenter, row.VmID, row.VmUUID, row.Name, row.CreationTime, row.DeletionTime, row.ResourcePool,
row.Datacenter, row.Cluster, row.Folder, row.ProvisionedDisk, row.VcpuCount, row.RamGB, row.IsTemplate, row.PoweredOn, row.SrmPlaceholder,
)
return err
}
func insertHourlySnapshotSeedRow(ctx context.Context, dbConn *sqlx.DB, table string, row hourlySeedRow) error {
sql := fmt.Sprintf(`
INSERT INTO %s (
"Name","Vcenter","VmId","VmUuid","EventKey","CloudId","CreationTime","DeletionTime","ResourcePool",
"Datacenter","Cluster","Folder","ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder","SnapshotTime"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`, table)
_, err := dbConn.ExecContext(ctx, sql,
row.Name, row.Vcenter, row.VmID, row.VmUUID, nil, nil, row.CreationTime, row.DeletionTime, row.ResourcePool,
row.Datacenter, row.Cluster, row.Folder, row.ProvisionedDisk, row.VcpuCount, row.RamGB, row.IsTemplate, row.PoweredOn, row.SrmPlaceholder, row.SnapshotTime,
)
return err
}
func insertDailySummarySeedRow(ctx context.Context, dbConn *sqlx.DB, table string, row dailySeedRow) error {
sql := fmt.Sprintf(`
INSERT INTO %s (
"Name","Vcenter","VmId","VmUuid","EventKey","CloudId","CreationTime","DeletionTime","ResourcePool",
"Datacenter","Cluster","Folder","ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder",
"SnapshotTime","SamplesPresent","AvgVcpuCount","AvgRamGB","AvgProvisionedDisk","AvgIsPresent",
"PoolTinPct","PoolBronzePct","PoolSilverPct","PoolGoldPct","Tin","Bronze","Silver","Gold"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`, table)
_, err := dbConn.ExecContext(ctx, sql,
row.Name, row.Vcenter, row.VmID, row.VmUUID, nil, nil, row.CreationTime, row.DeletionTime, row.ResourcePool,
row.Datacenter, row.Cluster, row.Folder, row.ProvisionedDisk, row.VcpuCount, row.RamGB, row.IsTemplate, row.PoweredOn, row.SrmPlaceholder,
row.SnapshotTime, row.SamplesPresent, row.AvgVcpuCount, row.AvgRamGB, row.AvgProvisionedDisk, row.AvgIsPresent,
row.PoolTinPct, row.PoolBronzePct, row.PoolSilverPct, row.PoolGoldPct, row.Tin, row.Bronze, row.Silver, row.Gold,
)
return err
}
func insertDailyRollupSeedRow(ctx context.Context, dbConn *sqlx.DB, row dailySeedRow) error {
_, err := dbConn.ExecContext(ctx, `
INSERT INTO vm_daily_rollup (
"Date","Vcenter","VmId","VmUuid","Name","CreationTime","DeletionTime","SamplesPresent","TotalSamples",
"SumVcpu","SumRam","SumDisk","TinHits","BronzeHits","SilverHits","GoldHits",
"LastResourcePool","LastDatacenter","LastCluster","LastFolder","LastProvisionedDisk","LastVcpuCount","LastRamGB","IsTemplate","PoweredOn","SrmPlaceholder"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`,
row.SnapshotTime, row.Vcenter, row.VmID, row.VmUUID, row.Name, row.CreationTime, row.DeletionTime, row.SamplesPresent, row.TotalSamples,
row.SumVcpu, row.SumRam, row.SumDisk, row.TinHits, row.BronzeHits, row.SilverHits, row.GoldHits,
row.ResourcePool, row.Datacenter, row.Cluster, row.Folder, row.ProvisionedDisk, row.VcpuCount, row.RamGB, row.IsTemplate, row.PoweredOn, row.SrmPlaceholder,
)
return err
}
func loadDailySummaryRows(ctx context.Context, dbConn *sqlx.DB, table string) ([]dailySummaryRow, error) {
sql := fmt.Sprintf(`
SELECT
COALESCE("Name",'') AS "Name",
COALESCE("Vcenter",'') AS "Vcenter",
COALESCE("VmId",'') AS "VmId",
COALESCE("VmUuid",'') AS "VmUuid",
COALESCE("ResourcePool",'') AS "ResourcePool",
COALESCE("CreationTime",0) AS "CreationTime",
COALESCE("DeletionTime",0) AS "DeletionTime",
COALESCE("SnapshotTime",0) AS "SnapshotTime",
COALESCE("SamplesPresent",0) AS "SamplesPresent",
COALESCE("AvgVcpuCount",0) AS "AvgVcpuCount",
COALESCE("AvgRamGB",0) AS "AvgRamGB",
COALESCE("AvgProvisionedDisk",0) AS "AvgProvisionedDisk",
COALESCE("AvgIsPresent",0) AS "AvgIsPresent",
COALESCE("PoolTinPct",0) AS "PoolTinPct",
COALESCE("PoolBronzePct",0) AS "PoolBronzePct",
COALESCE("PoolSilverPct",0) AS "PoolSilverPct",
COALESCE("PoolGoldPct",0) AS "PoolGoldPct"
FROM %s
ORDER BY "Vcenter", "VmId", "VmUuid", "Name"
`, table)
var out []dailySummaryRow
return out, dbConn.SelectContext(ctx, &out, sql)
}
func loadMonthlySummaryRows(ctx context.Context, dbConn *sqlx.DB, table string) ([]monthlySummaryRow, error) {
sql := fmt.Sprintf(`
SELECT
COALESCE("Name",'') AS "Name",
COALESCE("Vcenter",'') AS "Vcenter",
COALESCE("VmId",'') AS "VmId",
COALESCE("VmUuid",'') AS "VmUuid",
COALESCE("ResourcePool",'') AS "ResourcePool",
COALESCE("CreationTime",0) AS "CreationTime",
COALESCE("DeletionTime",0) AS "DeletionTime",
COALESCE("SamplesPresent",0) AS "SamplesPresent",
COALESCE("AvgVcpuCount",0) AS "AvgVcpuCount",
COALESCE("AvgRamGB",0) AS "AvgRamGB",
COALESCE("AvgProvisionedDisk",0) AS "AvgProvisionedDisk",
COALESCE("AvgIsPresent",0) AS "AvgIsPresent",
COALESCE("PoolTinPct",0) AS "PoolTinPct",
COALESCE("PoolBronzePct",0) AS "PoolBronzePct",
COALESCE("PoolSilverPct",0) AS "PoolSilverPct",
COALESCE("PoolGoldPct",0) AS "PoolGoldPct"
FROM %s
ORDER BY "Vcenter", "VmId", "VmUuid", "Name"
`, table)
var out []monthlySummaryRow
return out, dbConn.SelectContext(ctx, &out, sql)
}
func mapRowsByKeyDaily(rows []dailySummaryRow) map[string]dailySummaryRow {
out := make(map[string]dailySummaryRow, len(rows))
for _, row := range rows {
out[dailyRowKey(row)] = row
}
return out
}
func mapRowsByKeyMonthly(rows []monthlySummaryRow) map[string]monthlySummaryRow {
out := make(map[string]monthlySummaryRow, len(rows))
for _, row := range rows {
out[monthlyRowKey(row)] = row
}
return out
}
func dailyRowKey(r dailySummaryRow) string {
return fmt.Sprintf("%s|%s|%s|%s", r.Vcenter, r.VmId, r.VmUuid, r.Name)
}
func monthlyRowKey(r monthlySummaryRow) string {
return fmt.Sprintf("%s|%s|%s|%s", r.Vcenter, r.VmId, r.VmUuid, r.Name)
}
func assertDailySummaryParity(t *testing.T, oldRows, newRows []dailySummaryRow) {
t.Helper()
if len(oldRows) != len(newRows) {
t.Fatalf("daily row count mismatch: old=%d new=%d", len(oldRows), len(newRows))
}
oldByKey := mapRowsByKeyDaily(oldRows)
newByKey := mapRowsByKeyDaily(newRows)
for key, oldRow := range oldByKey {
newRow, ok := newByKey[key]
if !ok {
t.Fatalf("missing key in new daily output: %s", key)
}
if oldRow.ResourcePool != newRow.ResourcePool ||
oldRow.CreationTime != newRow.CreationTime ||
oldRow.DeletionTime != newRow.DeletionTime ||
oldRow.SnapshotTime != newRow.SnapshotTime ||
oldRow.SamplesPresent != newRow.SamplesPresent {
t.Fatalf("daily scalar mismatch key=%s old=%+v new=%+v", key, oldRow, newRow)
}
assertFloatClose(t, "AvgVcpuCount", key, oldRow.AvgVcpuCount, newRow.AvgVcpuCount, 1e-9)
assertFloatClose(t, "AvgRamGB", key, oldRow.AvgRamGB, newRow.AvgRamGB, 1e-9)
assertFloatClose(t, "AvgProvisionedDisk", key, oldRow.AvgProvisionedDisk, newRow.AvgProvisionedDisk, 1e-9)
assertFloatClose(t, "AvgIsPresent", key, oldRow.AvgIsPresent, newRow.AvgIsPresent, 1e-9)
assertFloatClose(t, "PoolTinPct", key, oldRow.PoolTinPct, newRow.PoolTinPct, 1e-9)
assertFloatClose(t, "PoolBronzePct", key, oldRow.PoolBronzePct, newRow.PoolBronzePct, 1e-9)
assertFloatClose(t, "PoolSilverPct", key, oldRow.PoolSilverPct, newRow.PoolSilverPct, 1e-9)
assertFloatClose(t, "PoolGoldPct", key, oldRow.PoolGoldPct, newRow.PoolGoldPct, 1e-9)
}
}
func assertMonthlySummaryParity(t *testing.T, oldRows, newRows []monthlySummaryRow) {
t.Helper()
if len(oldRows) != len(newRows) {
t.Fatalf("monthly row count mismatch: old=%d new=%d", len(oldRows), len(newRows))
}
oldByKey := mapRowsByKeyMonthly(oldRows)
newByKey := mapRowsByKeyMonthly(newRows)
for key, oldRow := range oldByKey {
newRow, ok := newByKey[key]
if !ok {
t.Fatalf("missing key in new monthly output: %s", key)
}
if oldRow.ResourcePool != newRow.ResourcePool ||
oldRow.CreationTime != newRow.CreationTime ||
oldRow.DeletionTime != newRow.DeletionTime ||
oldRow.SamplesPresent != newRow.SamplesPresent {
t.Fatalf("monthly scalar mismatch key=%s old=%+v new=%+v", key, oldRow, newRow)
}
assertFloatClose(t, "AvgVcpuCount", key, oldRow.AvgVcpuCount, newRow.AvgVcpuCount, 1e-9)
assertFloatClose(t, "AvgRamGB", key, oldRow.AvgRamGB, newRow.AvgRamGB, 1e-9)
assertFloatClose(t, "AvgProvisionedDisk", key, oldRow.AvgProvisionedDisk, newRow.AvgProvisionedDisk, 1e-9)
assertFloatClose(t, "AvgIsPresent", key, oldRow.AvgIsPresent, newRow.AvgIsPresent, 1e-9)
assertFloatClose(t, "PoolTinPct", key, oldRow.PoolTinPct, newRow.PoolTinPct, 1e-9)
assertFloatClose(t, "PoolBronzePct", key, oldRow.PoolBronzePct, newRow.PoolBronzePct, 1e-9)
assertFloatClose(t, "PoolSilverPct", key, oldRow.PoolSilverPct, newRow.PoolSilverPct, 1e-9)
assertFloatClose(t, "PoolGoldPct", key, oldRow.PoolGoldPct, newRow.PoolGoldPct, 1e-9)
}
}
func assertFloatClose(t *testing.T, field, key string, oldVal, newVal, eps float64) {
t.Helper()
if !approxEqual(oldVal, newVal, eps) {
t.Fatalf("%s mismatch key=%s old=%.12f new=%.12f", field, key, oldVal, newVal)
}
}
func approxEqual(a, b, eps float64) bool {
return math.Abs(a-b) <= eps
}
@@ -0,0 +1,212 @@
package tasks
import (
"context"
"os"
"path/filepath"
"testing"
"time"
"vctp/db"
"vctp/internal/settings"
)
func TestSnapshotTableCompatModeSettingControlsTaskBehaviorFlag(t *testing.T) {
task := &CronTask{}
if !task.snapshotTableCompatModeEnabled() {
t.Fatal("expected default snapshot_table_compat_mode=true when settings are absent")
}
task.Settings = &settings.Settings{Values: &settings.SettingsYML{}}
if !task.snapshotTableCompatModeEnabled() {
t.Fatal("expected default snapshot_table_compat_mode=true when value is unset")
}
disabled := false
task.Settings.Values.Settings.SnapshotTableCompatMode = &disabled
if task.snapshotTableCompatModeEnabled() {
t.Fatal("expected snapshot_table_compat_mode=false to disable legacy snapshot-table writes")
}
enabled := true
task.Settings.Values.Settings.SnapshotTableCompatMode = &enabled
if !task.snapshotTableCompatModeEnabled() {
t.Fatal("expected snapshot_table_compat_mode=true to enable legacy snapshot-table writes")
}
}
func TestManualDailyAggregate_SQLFallback_LegacyTablesAndReport(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTaskForAggregateFlow(t, dbConn)
t.Setenv("DAILY_AGG_SQL", "1")
t.Setenv("DAILY_AGG_GO", "")
dayStart := time.Date(2026, time.March, 15, 0, 0, 0, 0, time.UTC)
t1 := dayStart.Add(1 * time.Hour).Unix()
t2 := dayStart.Add(2 * time.Hour).Unix()
table1, err := hourlyInventoryTableName(time.Unix(t1, 0).UTC())
if err != nil {
t.Fatalf("failed to build first hourly table name: %v", err)
}
table2, err := hourlyInventoryTableName(time.Unix(t2, 0).UTC())
if err != nil {
t.Fatalf("failed to build second hourly table name: %v", err)
}
for _, table := range []string{table1, table2} {
if err := db.EnsureSnapshotTable(ctx, dbConn, table); err != nil {
t.Fatalf("failed to ensure hourly snapshot table %s: %v", table, err)
}
}
seeds := []hourlySeedRow{
{SnapshotTime: t1, Name: "vm-a", Vcenter: "vc-a", VmID: "vm-a", VmUUID: "uuid-a", ResourcePool: "Tin", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 100, VcpuCount: 2, RamGB: 8, CreationTime: dayStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-a", Vcenter: "vc-a", VmID: "vm-a", VmUUID: "uuid-a", ResourcePool: "Gold", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 120, VcpuCount: 4, RamGB: 8, CreationTime: dayStart.Add(-24 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
{SnapshotTime: t2, Name: "vm-b", Vcenter: "vc-a", VmID: "vm-b", VmUUID: "uuid-b", ResourcePool: "Bronze", Datacenter: "dc-a", Cluster: "cluster-a", Folder: "/prod", ProvisionedDisk: 40, VcpuCount: 1, RamGB: 4, CreationTime: dayStart.Add(-48 * time.Hour).Unix(), IsTemplate: "FALSE", PoweredOn: "TRUE", SrmPlaceholder: "FALSE"},
}
for _, row := range seeds {
table, tableErr := hourlyInventoryTableName(time.Unix(row.SnapshotTime, 0).UTC())
if tableErr != nil {
t.Fatalf("failed to build hourly table for seed row: %v", tableErr)
}
if err := insertHourlySnapshotSeedRow(ctx, dbConn, table, row); err != nil {
t.Fatalf("failed to insert hourly snapshot seed row: %v", err)
}
}
if err := task.aggregateDailySummaryWithMode(ctx, dayStart, true, false); err != nil {
t.Fatalf("aggregateDailySummaryWithMode (legacy SQL fallback) failed: %v", err)
}
summaryTable, err := dailySummaryTableName(dayStart)
if err != nil {
t.Fatalf("failed to build daily summary table name: %v", err)
}
rows, err := loadDailySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load daily summary rows: %v", err)
}
if len(rows) != 2 {
t.Fatalf("unexpected daily summary row count: got %d want %d", len(rows), 2)
}
assertSnapshotRegistryRow(t, ctx, dbConn, "daily", summaryTable, dayStart.Unix(), int64(len(rows)))
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "daily", dayStart.Unix())
reportPath := filepath.Join(task.Settings.Values.Settings.ReportsDir, summaryTable+".xlsx")
if _, err := os.Stat(reportPath); err != nil {
t.Fatalf("expected daily report file at %s: %v", reportPath, err)
}
}
func TestManualMonthlyAggregate_SQLFallback_LegacyTablesAndReport(t *testing.T) {
ctx := context.Background()
dbConn := newTasksTestDB(t)
task := newTasksTestCronTaskForAggregateFlow(t, dbConn)
t.Setenv("MONTHLY_AGG_SQL", "1")
t.Setenv("MONTHLY_AGG_GO", "")
monthStart := time.Date(2026, time.April, 1, 0, 0, 0, 0, time.UTC)
day1 := monthStart.AddDate(0, 0, 2)
day2 := monthStart.AddDate(0, 0, 3)
day1Table, err := dailySummaryTableName(day1)
if err != nil {
t.Fatalf("failed to build day1 summary table name: %v", err)
}
day2Table, err := dailySummaryTableName(day2)
if err != nil {
t.Fatalf("failed to build day2 summary table name: %v", err)
}
for _, table := range []string{day1Table, day2Table} {
if err := db.EnsureSummaryTable(ctx, dbConn, table); err != nil {
t.Fatalf("failed to ensure daily summary table %s: %v", table, err)
}
}
seeds := []dailySeedRow{
{
SnapshotTime: day1.Unix(),
Name: "vm-a",
Vcenter: "vc-a",
VmID: "vm-a",
VmUUID: "uuid-a",
ResourcePool: "Bronze",
Datacenter: "dc-a",
Cluster: "cluster-a",
Folder: "/prod",
ProvisionedDisk: 100,
VcpuCount: 2,
RamGB: 8,
CreationTime: monthStart.Add(-72 * time.Hour).Unix(),
IsTemplate: "FALSE",
PoweredOn: "TRUE",
SrmPlaceholder: "FALSE",
SamplesPresent: 2,
AvgVcpuCount: 2,
AvgRamGB: 8,
AvgProvisionedDisk: 100,
AvgIsPresent: 1.0,
PoolBronzePct: 100,
Bronze: 100,
},
{
SnapshotTime: day2.Unix(),
Name: "vm-a",
Vcenter: "vc-a",
VmID: "vm-a",
VmUUID: "uuid-a",
ResourcePool: "Tin",
Datacenter: "dc-a",
Cluster: "cluster-a",
Folder: "/prod",
ProvisionedDisk: 120,
VcpuCount: 4,
RamGB: 12,
CreationTime: monthStart.Add(-72 * time.Hour).Unix(),
IsTemplate: "FALSE",
PoweredOn: "TRUE",
SrmPlaceholder: "FALSE",
SamplesPresent: 2,
AvgVcpuCount: 4,
AvgRamGB: 12,
AvgProvisionedDisk: 120,
AvgIsPresent: 1.0,
PoolTinPct: 100,
Tin: 100,
},
}
for _, seed := range seeds {
targetTable := day1Table
if seed.SnapshotTime == day2.Unix() {
targetTable = day2Table
}
if err := insertDailySummarySeedRow(ctx, dbConn, targetTable, seed); err != nil {
t.Fatalf("failed to insert daily summary seed row: %v", err)
}
}
if err := task.aggregateMonthlySummaryWithMode(ctx, monthStart, true, false); err != nil {
t.Fatalf("aggregateMonthlySummaryWithMode (legacy SQL fallback) failed: %v", err)
}
summaryTable, err := monthlySummaryTableName(monthStart)
if err != nil {
t.Fatalf("failed to build monthly summary table name: %v", err)
}
rows, err := loadMonthlySummaryRows(ctx, dbConn, summaryTable)
if err != nil {
t.Fatalf("failed to load monthly summary rows: %v", err)
}
if len(rows) != 1 {
t.Fatalf("unexpected monthly summary row count: got %d want %d", len(rows), 1)
}
assertSnapshotRegistryRow(t, ctx, dbConn, "monthly", summaryTable, monthStart.Unix(), int64(len(rows)))
assertSummaryCacheMatchesByVcenter(t, ctx, dbConn, summaryTable, "monthly", monthStart.Unix())
reportPath := filepath.Join(task.Settings.Values.Settings.ReportsDir, summaryTable+".xlsx")
if _, err := os.Stat(reportPath); err != nil {
t.Fatalf("expected monthly report file at %s: %v", reportPath, err)
}
}
@@ -219,6 +219,11 @@ func (c *CronTask) aggregateMonthlySummaryWithMode(ctx context.Context, targetMo
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", monthlyTable, targetMonth, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot", "error", err, "table", monthlyTable)
}
if refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, monthlyTable, "monthly", monthStart.Unix()); err != nil {
c.Logger.Warn("failed to refresh vcenter monthly aggregate totals cache", "error", err, "table", monthlyTable)
} else {
c.Logger.Debug("refreshed vcenter monthly aggregate totals cache", "table", monthlyTable, "rows", refreshed)
}
db.AnalyzeTableIfPostgres(ctx, dbConn, monthlyTable)
@@ -275,6 +280,11 @@ func (c *CronTask) aggregateMonthlySummarySQLCanonical(ctx context.Context, mont
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot (SQL canonical)", "error", err, "table", summaryTable)
}
if refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "monthly", monthStart.Unix()); err != nil {
c.Logger.Warn("failed to refresh vcenter monthly aggregate totals cache (SQL canonical)", "error", err, "table", summaryTable)
} else {
c.Logger.Debug("refreshed vcenter monthly aggregate totals cache", "table", summaryTable, "rows", refreshed)
}
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate monthly report (SQL canonical)", "error", err, "table", summaryTable)
return err
@@ -389,6 +399,11 @@ func (c *CronTask) aggregateMonthlySummaryGoHourly(ctx context.Context, monthSta
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot (Go hourly)", "error", err, "table", summaryTable)
}
if refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "monthly", monthStart.Unix()); err != nil {
c.Logger.Warn("failed to refresh vcenter monthly aggregate totals cache (Go hourly)", "error", err, "table", summaryTable)
} else {
c.Logger.Debug("refreshed vcenter monthly aggregate totals cache", "table", summaryTable, "rows", refreshed)
}
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate monthly report (Go hourly)", "error", err, "table", summaryTable)
return err
@@ -478,6 +493,11 @@ func (c *CronTask) aggregateMonthlySummaryGo(ctx context.Context, monthStart, mo
if err := report.RegisterSnapshot(ctx, c.Database, "monthly", summaryTable, monthStart, rowCount); err != nil {
c.Logger.Warn("failed to register monthly snapshot", "error", err, "table", summaryTable)
}
if refreshed, err := db.ReplaceVcenterAggregateTotalsFromSummary(ctx, dbConn, summaryTable, "monthly", monthStart.Unix()); err != nil {
c.Logger.Warn("failed to refresh vcenter monthly aggregate totals cache", "error", err, "table", summaryTable)
} else {
c.Logger.Debug("refreshed vcenter monthly aggregate totals cache", "table", summaryTable, "rows", refreshed)
}
if err := c.generateReportWithPolicy(ctx, summaryTable); err != nil {
c.Logger.Warn("failed to generate monthly report (Go)", "error", err, "table", summaryTable)
return err
@@ -0,0 +1,71 @@
# Phase Metrics Comparison and Gate Decisions
Date captured: 2026-04-20 (Australia/Sydney)
## Scope and method
- Baseline source: `phase0-baseline.md`.
- Post-change source: live local workspace state (`db.sqlite3`, `reports/`) and one-shot canonical benchmark run.
- Commands used:
- `sqlite3 -readonly db.sqlite3 "<query>"`
- `find reports -type f | wc -l`
- `go run . -settings settings.yaml -benchmark-aggregations -benchmark-runs 1`
## Baseline vs post-change snapshot
| Area | Metric | Baseline | Post-change | Delta | Gate |
| --- | --- | ---: | ---: | ---: | --- |
| Hourly capture | `snapshot_registry` hourly entries | 930 | 955 | +25 | PASS |
| Hourly capture | Hourly compatibility tables (`inventory_hourly_%`) | 930 | 955 | +25 | PASS |
| Hourly capture | Canonical cache rows (`vm_hourly_stats`) | 489865 | 491165 | +1300 | PASS |
| Hourly capture | Latest hourly snapshot row count (`snapshot_count`) | 52 | 52 | 0 | PASS |
| Daily aggregation | `snapshot_registry` daily entries | 39 | 39 | 0 | PASS |
| Daily aggregation | Daily summary tables (`inventory_daily_summary_%`) | 40 | 40 | 0 | PASS |
| Daily aggregation | Canonical daily rollup rows (`vm_daily_rollup`) | 1779 | 1831 | +52 | PASS |
| Daily aggregation | Latest daily snapshot row count (`snapshot_count`) | 52 | 52 | 0 | PASS |
| Monthly aggregation | `snapshot_registry` monthly entries | 1 | 1 | 0 | PASS |
| Monthly aggregation | Latest monthly snapshot row count (`snapshot_count`) | 62 | 62 | 0 | PASS |
| Report generation | Files present in `reports/` | 10339 | 10364 | +25 | PASS |
| Reliability | `snapshot_runs` total / success | 10254 / 10254 | 10279 / 10279 | +25 / +25 | PASS |
| Reliability | `snapshot_runs` attempts min/max/avg | 1 / 2 / 1.0001 | 1 / 2 / 1.0001 | unchanged | PASS |
## Operational runtime snapshot (post-change)
From `cron_status`:
- `hourly_snapshot`: `1069 ms`
- `daily_aggregate`: `1075 ms`
- `monthly_aggregate`: `515 ms`
- `snapshot_cleanup`: `1117 ms`
Gate decision:
- All observed job durations are far below configured job timeouts (`hourly=1200s`, `daily=900s`, `monthly=1200s`, `cleanup=600s`): PASS.
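The duration-vs-timeout gate above is a simple per-job comparison; a minimal sketch using the `cron_status` figures and configured timeouts quoted above (job names and values copied from this document, not from live output):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Observed job durations from cron_status vs configured job timeouts.
	// The gate passes only when every observed duration is below its timeout.
	checks := []struct {
		job      string
		observed time.Duration
		timeout  time.Duration
	}{
		{"hourly_snapshot", 1069 * time.Millisecond, 1200 * time.Second},
		{"daily_aggregate", 1075 * time.Millisecond, 900 * time.Second},
		{"monthly_aggregate", 515 * time.Millisecond, 1200 * time.Second},
		{"snapshot_cleanup", 1117 * time.Millisecond, 600 * time.Second},
	}
	pass := true
	for _, c := range checks {
		ok := c.observed < c.timeout
		pass = pass && ok
		fmt.Printf("%s: %v < %v -> %v\n", c.job, c.observed, c.timeout, ok)
	}
	if pass {
		fmt.Println("gate: PASS")
	} else {
		fmt.Println("gate: FAIL")
	}
}
```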
## Canonical aggregation benchmark snapshot (post-change)
Command:
- `go run . -settings settings.yaml -benchmark-aggregations -benchmark-runs 1`
Results (local SQLite dataset):
- Daily window (`2026-04-20`):
- Go: `12.676 ms` (`52` rows)
- SQL: `9.026667 ms` (`52` rows)
- Monthly window (`2026-04`):
- Go: `4.077125 ms` (`52` rows)
- SQL: `2.050708 ms` (`52` rows)
Gate decision:
- Benchmark execution and parity row counts: PASS.
- SQL default-promotion gate for Phase 3: NOT MET (still requires representative production-scale **Postgres** benchmark evidence).
## Decision record summary
- Data continuity and compatibility outputs: PASS.
- Canonical cache growth and aggregation continuity: PASS.
- Report output continuity: PASS.
- Reliability indicators (`snapshot_runs`): PASS.
- SQL promotion decision (Go vs SQL default): NO-GO pending production Postgres benchmark evidence.
@@ -304,31 +304,46 @@ The target architecture is:
### 3. Phase 3: Postgres-Ready Scale-Up
- [x] Validate/add canonical `vm_hourly_stats` indexes for snapshot time, vCenter+time, VM identity+time, and trace lookup.
- [x] Add PostgreSQL monthly partitioning for `vm_hourly_stats` behind migration controls.
- [x] Benchmark Go vs SQL on canonical Postgres tables using representative production-scale data.
- Production-scale Postgres benchmark runs completed on 2026-04-21 via one-shot canonical benchmark (`-benchmark-aggregations`, `driver=postgres`, with `runs_per_mode=1` and `runs_per_mode=3`).
- Run A (pre-tuning), daily window `2026-04-20T00:00:00Z` to `2026-04-21T00:00:00Z`: Go `4.000602432s` (`14881` rows) vs SQL `1h17m19.039092561s` (`14920` rows), with Go ~`1159.59x` faster.
- Run A (pre-tuning), monthly window `2026-04-01T00:00:00Z` to `2026-05-01T00:00:00Z`: Go `3.529410947s` (`15871` rows) vs SQL `3.313037973s` (`15873` rows), near parity with SQL slightly faster (~`0.216s`, `6.1%`).
- Run B (after PostgreSQL tuning), daily window `2026-04-21T00:00:00Z` to `2026-04-22T00:00:00Z`: Go `2.277889486s` (`14831` rows) vs SQL `1m31.273491543s` (`14839` rows), with Go still ~`40.07x` faster.
- Run B (after PostgreSQL tuning), monthly window `2026-04-01T00:00:00Z` to `2026-05-01T00:00:00Z`: Go `3.947474215s` (`15871` rows) vs SQL `2.758716002s` (`15873` rows), with SQL ~`1.43x` faster.
- Run C (after PostgreSQL tuning, `runs=3`), daily window `2026-04-21T00:00:00Z` to `2026-04-22T00:00:00Z`: Go avg `2.261369712s` (min `2.169537168s`, median `2.191474445s`, max `2.423097524s`, rows `14831`) vs SQL avg `1m31.738727387s` (min `1m29.960115863s`, median `1m32.068576507s`, max `1m33.187489791s`, rows `14839`), with Go ~`40.57x` faster by average.
- Run C (after PostgreSQL tuning, `runs=3`), monthly window `2026-04-01T00:00:00Z` to `2026-05-01T00:00:00Z`: Go avg `3.705308832s` (min `3.696553751s`, median `3.70776704s`, max `3.711605706s`, rows `15871`) vs SQL avg `3.065612298s` (min `2.873749798s`, median `3.022090149s`, max `3.300996948s`, rows `15873`), with SQL ~`1.21x` faster by average (~`17.26%` faster than Go).
- Tuning impact between Run A and Run B: daily SQL improved ~`50.83x`, daily Go improved ~`1.76x`, monthly SQL improved ~`1.20x`, and monthly Go regressed (~`0.89x` of prior speed).
- Decision remains unchanged: keep Go as scheduled default and treat SQL as fallback/backfill until SQL shows a clear, repeatable runtime win across canonical workloads, especially on daily windows (where Go remains consistently dominant across runs).
- [x] Keep Go as scheduled default unless SQL shows clear and repeatable runtime wins.
- [x] If SQL wins, roll out behind a controlled flag before any default switch.
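The speedup multipliers quoted in the Run A/B/C bullets are plain duration ratios; as a sanity check, a minimal Go sketch using the Run A (pre-tuning) figures copied from the bullets above:

```go
package main

import (
	"fmt"
	"time"
)

// mustParse converts a Go duration string (as quoted in the benchmark bullets)
// or panics, which is acceptable for a throwaway calculation like this one.
func mustParse(s string) time.Duration {
	d, err := time.ParseDuration(s)
	if err != nil {
		panic(err)
	}
	return d
}

func main() {
	// Run A daily window: matches the ~1159.59x figure quoted above.
	goDaily := mustParse("4.000602432s")
	sqlDaily := mustParse("1h17m19.039092561s")
	fmt.Printf("daily: Go ~%.2fx faster than SQL\n", sqlDaily.Seconds()/goDaily.Seconds())

	// Run A monthly window: near parity, SQL ahead by ~0.216s (~6.1%).
	goMonthly := mustParse("3.529410947s")
	sqlMonthly := mustParse("3.313037973s")
	fmt.Printf("monthly: SQL ~%.1f%% faster than Go\n",
		(goMonthly.Seconds()-sqlMonthly.Seconds())/goMonthly.Seconds()*100)
}
```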
### 4. Phase 4: Compatibility Reduction
- [x] Keep legacy outputs controlled by `snapshot_table_compat_mode`.
- Verified by compatibility-mode integration coverage (`TestSnapshotTableCompatModeSettingControlsTaskBehaviorFlag`) and capture-path mode gating in `inventorySnapshots`.
- [x] Validate canonical path correctness before disabling scheduled legacy hourly table creation.
- Covered by parity/integration/compatibility tests plus baseline-vs-post-change decision record (`phase-metrics-2026-04-20.md`).
- [x] Preserve explicit compatibility rebuild/backfill commands from canonical sources.
- Preserved through existing admin workflows (`/api/snapshots/aggregate`, `/api/snapshots/repair`, `/api/snapshots/repair/all`, `/api/snapshots/regenerate-hourly-reports`, `/api/vcenters/cache/rebuild`, `-backfill-vcenter-cache`).
- [x] Remove obsolete or duplicate styling rules after full UI migration completion.
- Removed unused selectors from shared UI stylesheet (`.web2-button-group*`, `.web2-list li`) in `dist/assets/css/web3.css`; router UI asset tests remain passing.
### 5. Validation and Quality Gates
- [x] Add golden-result tests for daily output parity (old vs new path).
- [x] Add golden-result tests for monthly output parity (old vs new path).
- [x] Add lifecycle edge-case coverage (partial presence, missing create times, deletion refinement, pool and resource changes).
- [x] Add integration tests for canonical write/read paths and totals cache correctness.
- [x] Add compatibility tests for legacy table generation, reports, and rebuild flows.
- [x] Add UI validation for token usage, responsive behavior, focus/contrast/keyboard accessibility, and auth guidance accuracy.
- Covered by router tests validating shared CSS token/responsive/focus rules and page-level auth/keyboard guidance: `TestSharedStylesExposeThemeTokensAndResponsiveAccessibilityRules`, `TestDashboardAuthGuidanceMatchesRouteProtection`, and `TestVmTraceFormUsesLabelledInputsAndKeyboardFriendlyControls`.
- [x] Compare baseline vs post-change metrics after each phase and record pass/fail decisions.
- Evidence and gate outcomes captured in `phase-metrics-2026-04-20.md` (baseline delta table + pass/fail decisions + benchmark snapshot).
### 6. Rollout and Documentation
- [x] Update operator docs for new settings and default behavior.
- [x] Document compatibility-mode lifecycle and criteria to disable legacy table generation.
- [x] Document benchmark method/results and default-path decision record (Go vs SQL).
- [x] Publish a short migration runbook for staged rollout, rollback triggers, and repair workflows.
- Completed in `README.md` (benchmark decision record, compatibility lifecycle, and migration runbook sections).
## Test Plan
@@ -4,6 +4,7 @@ import (
"context" "context"
"errors" "errors"
"net/http" "net/http"
"sort"
"strings" "strings"
"time" "time"
"vctp/internal/auth" "vctp/internal/auth"
@@ -15,6 +16,7 @@ import (
const (
authLoginFailureMessage = "invalid username or password"
authLoginRequestTimeout = 30 * time.Second
maxDebugLogListItems = 25
)
type ldapAuthenticator interface {
@@ -78,10 +80,23 @@ func (h *Handler) AuthLogin(w http.ResponseWriter, r *http.Request) {
writeJSONError(w, http.StatusBadRequest, "username and password are required")
return
}
audit.LogAuthEvent(h.Logger, r, "login", "observe",
"reason", "ldap_authentication_start",
"username", username,
"ldap_bind_address", cfg.LDAPBindAddress,
"ldap_base_dn", cfg.LDAPBaseDN,
"ldap_user_base_dn", cfg.LDAPUserBaseDN,
"ldap_group_requirements", limitStrings(cfg.LDAPGroups, maxDebugLogListItems),
"auth_group_role_mapping_keys", limitStrings(sortedStringMapKeys(cfg.AuthGroupRoleMappings), maxDebugLogListItems),
"ldap_insecure", cfg.LDAPInsecure,
"ldap_disable_validation", cfg.LDAPDisableValidation,
"ldap_trust_cert_configured", strings.TrimSpace(cfg.LDAPTrustCertFile) != "",
)
ldapAuth, err := newLDAPAuthenticator(auth.LDAPConfig{
BindAddress: cfg.LDAPBindAddress,
BaseDN: cfg.LDAPBaseDN,
UserBaseDN: cfg.LDAPUserBaseDN,
TrustCertFile: cfg.LDAPTrustCertFile,
DisableValidation: cfg.LDAPDisableValidation,
Insecure: cfg.LDAPInsecure,
@@ -96,26 +111,90 @@ func (h *Handler) AuthLogin(w http.ResponseWriter, r *http.Request) {
ctx, cancel := withRequestTimeout(r, authLoginRequestTimeout)
defer cancel()
ldapAuthStartedAt := time.Now()
identity, err := ldapAuth.AuthenticateAndFetchGroups(ctx, username, password)
ldapAuthDuration := time.Since(ldapAuthStartedAt)
if err != nil {
if errors.Is(err, auth.ErrLDAPInvalidCredentials) {
audit.LogAuthEvent(h.Logger, r, "login", "deny",
"reason", "invalid_credentials",
"username", username,
"ldap_bind_address", cfg.LDAPBindAddress,
"ldap_base_dn", cfg.LDAPBaseDN,
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"error", err,
)
writeJSONError(w, http.StatusUnauthorized, authLoginFailureMessage)
return
}
if errors.Is(err, context.DeadlineExceeded) || errors.Is(err, context.Canceled) {
audit.LogAuthEvent(h.Logger, r, "login", "deny",
"reason", "ldap_timeout",
"username", username,
"ldap_bind_address", cfg.LDAPBindAddress,
"ldap_base_dn", cfg.LDAPBaseDN,
"timeout_seconds", authLoginRequestTimeout.Seconds(),
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"error", err,
)
writeJSONError(w, http.StatusUnauthorized, authLoginFailureMessage)
return
}
audit.LogAuthEvent(h.Logger, r, "login", "deny",
"reason", "ldap_authentication_failed",
"username", username,
"ldap_bind_address", cfg.LDAPBindAddress,
"ldap_base_dn", cfg.LDAPBaseDN,
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"error", err,
)
writeJSONError(w, http.StatusUnauthorized, authLoginFailureMessage)
return
}
audit.LogAuthEvent(h.Logger, r, "login", "observe",
"reason", "ldap_authentication_succeeded",
"username", username,
"ldap_identity_username", identity.Username,
"ldap_user_dn", identity.UserDN,
"ldap_group_count", len(identity.Groups),
"ldap_groups", limitStrings(identity.Groups, maxDebugLogListItems),
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"ldap_bind_duration_ms", identity.BindDuration.Milliseconds(),
"ldap_user_lookup_duration_ms", identity.UserLookupDuration.Milliseconds(),
"ldap_group_lookup_duration_ms", identity.GroupMembershipLookupDuration.Milliseconds(),
"ldap_diagnostics", limitStrings(identity.Diagnostics, maxDebugLogListItems),
)
roles := auth.ResolveRoles(identity.Groups, cfg.AuthGroupRoleMappings)
hasRequiredGroup := auth.HasAnyGroup(identity.Groups, cfg.LDAPGroups)
audit.LogAuthEvent(h.Logger, r, "login", "observe",
"reason", "authorization_evaluation",
"username", username,
"has_required_group", hasRequiredGroup,
"required_groups", limitStrings(cfg.LDAPGroups, maxDebugLogListItems),
"user_groups", limitStrings(identity.Groups, maxDebugLogListItems),
"resolved_roles", roles,
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"ldap_bind_duration_ms", identity.BindDuration.Milliseconds(),
"ldap_user_lookup_duration_ms", identity.UserLookupDuration.Milliseconds(),
"ldap_group_lookup_duration_ms", identity.GroupMembershipLookupDuration.Milliseconds(),
"auth_group_role_mapping_keys", limitStrings(sortedStringMapKeys(cfg.AuthGroupRoleMappings), maxDebugLogListItems),
)
if !hasRequiredGroup || len(roles) == 0 {
audit.LogAuthEvent(h.Logger, r, "login", "deny",
"reason", "group_or_role_denied",
"username", username,
"group_count", len(identity.Groups),
"has_required_group", hasRequiredGroup,
"required_groups", limitStrings(cfg.LDAPGroups, maxDebugLogListItems),
"user_groups", limitStrings(identity.Groups, maxDebugLogListItems),
"resolved_roles", roles,
"ldap_auth_total_duration_ms", ldapAuthDuration.Milliseconds(),
"ldap_bind_duration_ms", identity.BindDuration.Milliseconds(),
"ldap_user_lookup_duration_ms", identity.UserLookupDuration.Milliseconds(),
"ldap_group_lookup_duration_ms", identity.GroupMembershipLookupDuration.Milliseconds(),
"ldap_diagnostics", limitStrings(identity.Diagnostics, maxDebugLogListItems),
)
writeJSONError(w, http.StatusUnauthorized, authLoginFailureMessage)
return
}
@@ -138,7 +217,7 @@ func (h *Handler) AuthLogin(w http.ResponseWriter, r *http.Request) {
if subject == "" { if subject == "" {
subject = username subject = username
} }
token, claims, err := jwtSvc.IssueToken(subject, roles, identity.Groups) token, claims, err := jwtSvc.IssueToken(subject, roles, nil)
if err != nil { if err != nil {
h.Logger.Error("failed to issue auth token", "username", username, "error", err) h.Logger.Error("failed to issue auth token", "username", username, "error", err)
audit.LogAuthEvent(h.Logger, r, "login", "error", "reason", "token_issue_failed", "username", username, "error", err) audit.LogAuthEvent(h.Logger, r, "login", "error", "reason", "token_issue_failed", "username", username, "error", err)
@@ -191,3 +270,49 @@ func (h *Handler) AuthMe(w http.ResponseWriter, r *http.Request) {
TokenID: claims.ID,
})
}
func sortedStringMapKeys(values map[string]string) []string {
if len(values) == 0 {
return nil
}
keys := make([]string, 0, len(values))
for key := range values {
key = strings.TrimSpace(key)
if key == "" {
continue
}
keys = append(keys, key)
}
if len(keys) == 0 {
return nil
}
sort.Strings(keys)
return keys
}
func limitStrings(values []string, maxItems int) []string {
if len(values) == 0 {
return nil
}
if maxItems <= 0 || len(values) <= maxItems {
out := make([]string, 0, len(values))
for _, value := range values {
value = strings.TrimSpace(value)
if value == "" {
continue
}
out = append(out, value)
}
return out
}
out := make([]string, 0, maxItems+1)
for _, value := range values[:maxItems] {
value = strings.TrimSpace(value)
if value == "" {
continue
}
out = append(out, value)
}
out = append(out, "...")
return out
}
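A standalone usage sketch of `limitStrings` as added above (the function body is reproduced verbatim so the example runs on its own; the group names are illustrative, not taken from any real directory):

```go
package main

import (
	"fmt"
	"strings"
)

// limitStrings (reproduced from the handler diff above) trims entries, drops
// blanks, and truncates the slice to maxItems, appending "..." as a marker so
// debug logs stay bounded even when a user belongs to many LDAP groups.
func limitStrings(values []string, maxItems int) []string {
	if len(values) == 0 {
		return nil
	}
	if maxItems <= 0 || len(values) <= maxItems {
		out := make([]string, 0, len(values))
		for _, value := range values {
			value = strings.TrimSpace(value)
			if value == "" {
				continue
			}
			out = append(out, value)
		}
		return out
	}
	out := make([]string, 0, maxItems+1)
	for _, value := range values[:maxItems] {
		value = strings.TrimSpace(value)
		if value == "" {
			continue
		}
		out = append(out, value)
	}
	out = append(out, "...")
	return out
}

func main() {
	// Five raw entries, one blank: with maxItems=3 only the first three raw
	// slots are kept (blank dropped), plus the "..." truncation marker.
	groups := []string{" CN=vctp-admins ", "CN=vctp-users", "", "CN=extra-1", "CN=extra-2"}
	fmt.Println(limitStrings(groups, 3)) // [CN=vctp-admins CN=vctp-users ...]
}
```

Note the truncation slices the raw input before trimming, so blank entries inside the first maxItems still consume a slot; the marker guarantees the logged list never exceeds maxItems+1 entries.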
@@ -0,0 +1,181 @@
package handler
import (
"context"
"encoding/json"
"fmt"
"io"
"log/slog"
"net/http"
"net/http/httptest"
"strconv"
"testing"
"time"
"vctp/db"
"vctp/db/queries"
"vctp/server/models"
"github.com/jmoiron/sqlx"
_ "modernc.org/sqlite"
)
type snapshotRepairTestDatabase struct {
dbConn *sqlx.DB
logger *slog.Logger
}
func (d *snapshotRepairTestDatabase) DB() *sqlx.DB { return d.dbConn }
func (d *snapshotRepairTestDatabase) Queries() db.Querier { return queries.New(d.dbConn.DB) }
func (d *snapshotRepairTestDatabase) Logger() *slog.Logger {
if d.logger != nil {
return d.logger
}
return slog.New(slog.NewTextHandler(io.Discard, nil))
}
func (d *snapshotRepairTestDatabase) Close() error { return d.dbConn.Close() }
func newSnapshotRepairTestDB(t *testing.T) *sqlx.DB {
t.Helper()
dbConn, err := sqlx.Open("sqlite", ":memory:")
if err != nil {
t.Fatalf("failed to open sqlite test db: %v", err)
}
t.Cleanup(func() {
_ = dbConn.Close()
})
return dbConn
}
func TestSnapshotRepairSuite_RebuildsRegistryTotalsAndLifecycle(t *testing.T) {
ctx := context.Background()
dbConn := newSnapshotRepairTestDB(t)
logger := newTestLogger()
h := &Handler{
Logger: logger,
Database: &snapshotRepairTestDatabase{dbConn: dbConn, logger: logger},
}
dayStart := time.Date(2026, time.March, 16, 0, 0, 0, 0, time.UTC)
hourlyTs := dayStart.Add(2 * time.Hour).Unix()
hourlyTable := fmt.Sprintf("inventory_hourly_%d", hourlyTs)
dailyTable := fmt.Sprintf("inventory_daily_summary_%s", dayStart.Format("20060102"))
monthlyTable := fmt.Sprintf("inventory_monthly_summary_%s", dayStart.Format("200601"))
if err := db.EnsureSnapshotTable(ctx, dbConn, hourlyTable); err != nil {
t.Fatalf("failed to ensure hourly table: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, dailyTable); err != nil {
t.Fatalf("failed to ensure daily summary table: %v", err)
}
if err := db.EnsureSummaryTable(ctx, dbConn, monthlyTable); err != nil {
t.Fatalf("failed to ensure monthly summary table: %v", err)
}
if _, err := dbConn.ExecContext(ctx, fmt.Sprintf(`
INSERT INTO %s (
"Name","Vcenter","VmId","VmUuid","CreationTime","DeletionTime","ResourcePool","Datacenter","Cluster","Folder",
"ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder","SnapshotTime"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`, hourlyTable),
"vm-a", "vc-a", "vm-a", "uuid-a", dayStart.Add(-24*time.Hour).Unix(), int64(0), "Tin", "dc-a", "cluster-a", "/prod",
100.0, int64(2), int64(8), "FALSE", "TRUE", "FALSE", hourlyTs,
); err != nil {
t.Fatalf("failed to seed hourly table: %v", err)
}
if _, err := dbConn.ExecContext(ctx, fmt.Sprintf(`
INSERT INTO %s (
"Name","Vcenter","VmId","VmUuid","CreationTime","DeletionTime","ResourcePool","Datacenter","Cluster","Folder",
"ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder","SnapshotTime","SamplesPresent",
"AvgVcpuCount","AvgRamGB","AvgProvisionedDisk","AvgIsPresent","PoolTinPct","PoolBronzePct","PoolSilverPct","PoolGoldPct","Tin","Bronze","Silver","Gold"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`, dailyTable),
"vm-a", "vc-a", "vm-a", "uuid-a", int64(0), int64(0), "Tin", "dc-a", "cluster-a", "/prod",
100.0, int64(2), int64(8), "FALSE", "TRUE", "FALSE", int64(0), int64(1),
2.0, 8.0, 100.0, 1.0, 100.0, 0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0,
); err != nil {
t.Fatalf("failed to seed daily summary table: %v", err)
}
if _, err := dbConn.ExecContext(ctx, fmt.Sprintf(`
INSERT INTO %s (
"Name","Vcenter","VmId","VmUuid","CreationTime","DeletionTime","ResourcePool","Datacenter","Cluster","Folder",
"ProvisionedDisk","VcpuCount","RamGB","IsTemplate","PoweredOn","SrmPlaceholder","SnapshotTime","SamplesPresent",
"AvgVcpuCount","AvgRamGB","AvgProvisionedDisk","AvgIsPresent","PoolTinPct","PoolBronzePct","PoolSilverPct","PoolGoldPct","Tin","Bronze","Silver","Gold"
) VALUES (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)
`, monthlyTable),
"vm-a", "vc-a", "vm-a", "uuid-a", int64(0), int64(0), "Tin", "dc-a", "cluster-a", "/prod",
100.0, int64(2), int64(8), "FALSE", "TRUE", "FALSE", dayStart.Unix(), int64(1),
2.0, 8.0, 100.0, 1.0, 100.0, 0.0, 0.0, 0.0, 100.0, 0.0, 0.0, 0.0,
); err != nil {
t.Fatalf("failed to seed monthly summary table: %v", err)
}
req := httptest.NewRequest(http.MethodPost, "/api/snapshots/repair/all", nil)
rr := httptest.NewRecorder()
h.SnapshotRepairSuite(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d body=%s", http.StatusOK, rr.Code, rr.Body.String())
}
var payload models.SnapshotRepairSuiteResponse
if err := json.Unmarshal(rr.Body.Bytes(), &payload); err != nil {
t.Fatalf("failed to decode response: %v", err)
}
if payload.Status != "OK" {
t.Fatalf("unexpected repair suite status: %q", payload.Status)
}
dailyRepaired, err := strconv.Atoi(payload.DailyRepaired)
if err != nil {
t.Fatalf("failed to parse daily_repaired: %v", err)
}
if dailyRepaired < 1 {
t.Fatalf("expected at least one daily table repaired, got %d", dailyRepaired)
}
monthlyRefined, err := strconv.Atoi(payload.MonthlyRefined)
if err != nil {
t.Fatalf("failed to parse monthly_refined: %v", err)
}
if monthlyRefined < 1 {
t.Fatalf("expected at least one monthly table refined, got %d", monthlyRefined)
}
monthlyFailed, err := strconv.Atoi(payload.MonthlyFailed)
if err != nil {
t.Fatalf("failed to parse monthly_failed: %v", err)
}
if monthlyFailed != 0 {
t.Fatalf("expected monthly_failed=0, got %d", monthlyFailed)
}
assertSnapshotRegistryTypeCount(t, ctx, dbConn, "hourly", 1)
assertSnapshotRegistryTypeCount(t, ctx, dbConn, "daily", 1)
assertSnapshotRegistryTypeCount(t, ctx, dbConn, "monthly", 1)
var totalsRows int
if err := dbConn.GetContext(ctx, &totalsRows, `SELECT COUNT(1) FROM vcenter_totals WHERE "Vcenter" = ?`, "vc-a"); err != nil {
t.Fatalf("failed to query vcenter_totals: %v", err)
}
if totalsRows < 1 {
t.Fatalf("expected vcenter_totals to be backfilled, got %d rows", totalsRows)
}
var dailySnapshotTime int64
if err := dbConn.GetContext(ctx, &dailySnapshotTime, fmt.Sprintf(`SELECT COALESCE("SnapshotTime",0) FROM %s WHERE "Vcenter" = ? AND "VmId" = ?`, dailyTable), "vc-a", "vm-a"); err != nil {
t.Fatalf("failed to query repaired daily snapshot time: %v", err)
}
if dailySnapshotTime == 0 {
t.Fatal("expected repaired daily summary SnapshotTime to be backfilled")
}
}
func assertSnapshotRegistryTypeCount(t *testing.T, ctx context.Context, dbConn *sqlx.DB, snapshotType string, want int) {
t.Helper()
var got int
if err := dbConn.GetContext(ctx, &got, `SELECT COUNT(1) FROM snapshot_registry WHERE snapshot_type = ?`, snapshotType); err != nil {
t.Fatalf("failed to query snapshot_registry for type %s: %v", snapshotType, err)
}
if got != want {
t.Fatalf("unexpected snapshot_registry count for %s: got %d want %d", snapshotType, got, want)
}
}
@@ -162,3 +162,105 @@ func TestSwaggerJSONDefaultsToHTTPWhenTLSDisabled(t *testing.T) {
t.Fatalf("unexpected schemes: got %v want %v", spec.Schemes, []string{"http"}) t.Fatalf("unexpected schemes: got %v want %v", spec.Schemes, []string{"http"})
} }
} }
func TestSharedStylesExposeThemeTokensAndResponsiveAccessibilityRules(t *testing.T) {
app := testRouter(t, testRouterSettings(t, false))
req := httptest.NewRequest(http.MethodGet, "/assets/css/web3.css", nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d", http.StatusOK, rr.Code)
}
css := rr.Body.String()
assertContainsAll(t, css, []string{
":root {",
"--theme_text_primary:",
"--theme_accent_blue:",
"--theme_focus_outline:",
".web2-shell-wide {",
".web2-page-title {",
"font-size: clamp(",
".web2-table-shell {",
"overflow-x: auto;",
".web2-input:focus-visible {",
"a:focus-visible,",
"@media (max-width: 900px)",
".web2-actions .web2-button {",
"min-width: 520px;",
"@media (min-width: 1500px)",
"@media (min-width: 780px)",
"@media (min-width: 1024px)",
})
}
func TestDashboardAuthGuidanceMatchesRouteProtection(t *testing.T) {
app := testRouter(t, testRouterSettings(t, false))
homeReq := httptest.NewRequest(http.MethodGet, "/", nil)
homeRR := httptest.NewRecorder()
app.ServeHTTP(homeRR, homeReq)
if homeRR.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d", http.StatusOK, homeRR.Code)
}
homeBody := homeRR.Body.String()
assertContainsAll(t, homeBody, []string{
"POST /api/auth/login",
"Authorization: Bearer &lt;token&gt;",
"viewer",
"admin",
"UI pages and <code class=\"web2-code\">/metrics</code> remain public.",
})
for _, path := range []string{"/swagger/", "/metrics", "/vm/trace"} {
t.Run("public "+path, func(t *testing.T) {
req := httptest.NewRequest(http.MethodGet, path, nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d for %s, got %d", http.StatusOK, path, rr.Code)
}
})
}
protectedReq := httptest.NewRequest(http.MethodGet, "/api/report/snapshot", nil)
protectedRR := httptest.NewRecorder()
app.ServeHTTP(protectedRR, protectedReq)
if protectedRR.Code != http.StatusUnauthorized {
t.Fatalf("expected status %d for protected route, got %d", http.StatusUnauthorized, protectedRR.Code)
}
}
func TestVmTraceFormUsesLabelledInputsAndKeyboardFriendlyControls(t *testing.T) {
app := testRouter(t, testRouterSettings(t, false))
req := httptest.NewRequest(http.MethodGet, "/vm/trace", nil)
rr := httptest.NewRecorder()
app.ServeHTTP(rr, req)
if rr.Code != http.StatusOK {
t.Fatalf("expected status %d, got %d", http.StatusOK, rr.Code)
}
body := rr.Body.String()
assertContainsAll(t, body, []string{
`<form method="get" action="/vm/trace" class="web2-form-grid">`,
`<label class="web2-label" for="vm_id">VM ID</label>`,
`<input class="web2-input" type="text" id="vm_id" name="vm_id"`,
`<label class="web2-label" for="vm_uuid">VM UUID</label>`,
`<input class="web2-input" type="text" id="vm_uuid" name="vm_uuid"`,
`<label class="web2-label" for="name">Name</label>`,
`<input class="web2-input" type="text" id="name" name="name"`,
`<button class="web3-button active" type="submit">Load VM Trace</button>`,
`<a class="web3-button" href="/vm/trace">Clear</a>`,
})
}
func assertContainsAll(t *testing.T, body string, snippets []string) {
t.Helper()
for _, snippet := range snippets {
if !strings.Contains(body, snippet) {
t.Fatalf("expected response body to contain %q", snippet)
}
}
}