Idea Summary
Propose a caching mechanism for Lists of Values (LOVs), configurable at the global (Shared LOV) level and/or at the Item level. The cache can operate with session, application, or user scope, and entries are indexed by a hash of the filter parameters / bind variables used by the LOV. Benefits include lower latency, reduced I/O and CPU on the server, and better scalability on pages where multiple Items depend on the same LOV.
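To make the keying idea concrete, here is a minimal PL/SQL sketch of a bind/filter-aware key. The function name lov_cache_key, its parameters, and the assumption that the caller passes already-normalized NAME=VALUE bind pairs are all hypothetical illustrations, not an existing APEX API.

```sql
-- Hypothetical sketch: one cache key per distinct (LOV, scope, bind values) combination.
create or replace function lov_cache_key (
    p_lov_id   in number,
    p_scope    in varchar2,        -- 'SESSION' | 'USER' | 'APPLICATION'
    p_scope_id in varchar2,        -- session id, user name, or application id
    p_binds    in apex_t_varchar2  -- normalized 'NAME=VALUE' pairs
) return varchar2
is
    l_payload varchar2(32767);
    l_key     varchar2(64);
begin
    -- Concatenate the key components so identical bind values always hash identically
    l_payload := p_lov_id || '|' || p_scope || '|' || p_scope_id || '|'
              || apex_string.join(p_binds, '|');
    -- SHA-256 of the payload becomes the cache key
    select rawtohex(standard_hash(l_payload, 'SHA256'))
      into l_key
      from dual;
    return l_key;
end lov_cache_key;
/
```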
Problem
- LOVs with repetitive queries (same user, same filters) are recalculated on every refresh or render.
- On pages with many dependent Items, the same LOV query executes multiple times per session or application.
- Lack of a declarative cache causes unnecessary DB load and slower response times.
Objectives
- Reduce redundant DB calls for LOV generation.
- Provide declarative configuration: cache per Session, per Application, or per User.
- Make cache bind/filter-aware (keys based on parameter values).
- Maintain consistency with controlled invalidation (TTL, events, data dependencies).
Scope
- Applies to Shared LOVs (SQL/REST) and Items consuming LOVs (Select List, Popup LOV, Shuttle, IG/IR column LOV).
- Compatible with Parameterized LOVs (global/page binds).
Telemetry
Expose an APEX dictionary view (e.g., LOV_CACHE_STATS) and a Dev Toolbar dashboard showing hit ratio and average response time for requests served from cache vs. DB/REST.
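As an illustration of what that telemetry could look like, the query below assumes a hypothetical LOV_CACHE_STATS view; neither the view nor its columns exist today.

```sql
-- Hypothetical view and columns, sketched for illustration only.
select lov_name,
       cache_scope,
       cache_hits,
       cache_misses,
       round(100 * cache_hits / nullif(cache_hits + cache_misses, 0), 1) as hit_ratio_pct,
       avg_db_time_ms,
       avg_cache_time_ms
  from lov_cache_stats
 where application_id = :APP_ID
 order by cache_misses desc;
```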
Use Case
- Repeated Select Lists on the same page: multiple regions use the same LOV with identical filters → Per Session avoids recalculation.
- Application-wide master filters (e.g., ORG_ID, LEDGER_ID) → Per Application with TTL=30m and bind-aware keys.
- User-specific LOVs (e.g., personalized catalog or entitlements) → Per User ensures results are cached uniquely per authenticated user.
- Cascading LOVs: a change in the parent value produces a new key hash (the parent value is part of the key); if the parent is unchanged, the cached result is served (see the sketch after this list).
- REST-backed LOVs: reduce latency to external services; Per Application or Per User with manual invalidation on catalog changes.
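For the cascading case, a hedged sketch of how the parent value could flow into the key, reusing the hypothetical lov_cache_key function from the Idea Summary; the LOV id and item names are illustrative.

```sql
declare
    l_key varchar2(64);
begin
    -- The parent value (P1_DEPTNO) is part of the key: a new department produces a new key,
    -- while re-selecting the same department hits the cached child LOV.
    l_key := lov_cache_key(
                 p_lov_id   => 4711,                                           -- illustrative LOV id
                 p_scope    => 'SESSION',
                 p_scope_id => v('APP_SESSION'),
                 p_binds    => apex_t_varchar2('P1_DEPTNO=' || v('P1_DEPTNO')) );
    -- on a miss for l_key, run the child LOV query and store the rows under this key
end;
/
```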
Preferred Solution (Optional)
Goal: Deliver fast, consistent, and secure LOVs by adding a declarative cache that is bind/filter-aware and scoped per Session, per User, or per Application, with automatic change detection and safe rollouts.
- Cache Modes: Off | Per Session | Per User | Per Application (default = Off for backward compatibility).
- Bind/Filter-Aware Keys: Cache key = (LOV_ID, SCOPE, APP_ID?/SESSION_ID?/USER_ID?, LOCALE?, PARENT_VALUE?, SEARCH_TEXT?, HASH(BINDS)) with normalized types/lengths.
- TTL & Size Controls: Optional TTL per mode and LRU caps per app/user to bound memory.
- Auto-Invalidate: declarative options such as object dependencies, CQN (Continuous Query Notification), MV-refresh hooks, trigger-raised events, and version tokens (SCN/timestamp/hash); for REST, respect ETag/Last-Modified (304 Not Modified). A lookup sketch combining the TTL and a version token follows this list.
- Configuration Hierarchy: Shared LOV defines defaults; Items may inherit or override cache mode, TTL, and key strategy.
- Security Isolation:
  - Per Session → isolated by session.
  - Per User → isolated by user ID.
  - Per Application → shared only for non-personalized LOVs (no user/session binds in the key).
- Builder UX: New Cache tab (Shared LOV & Item) with key preview, bind simulation, change-detection options, and manual flush.
- Telemetry: Built-in stats (cache_hits, misses, evictions, avg latency saved) and Dev Toolbar panel.
- Safe Rollout: Opt-in per LOV/Item; wizard to migrate popular LOVs and recommend modes based on usage.
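As referenced under Auto-Invalidate, the sketch below combines the TTL check with a version token derived from the LOV's driving table. Everything named here is hypothetical and shown only to illustrate the flow: the lov_cache_entries store and its columns, the departments table with a last_updated_on audit column, the P0_ORG_ID item, and the refresh_lov_cache procedure; lov_cache_key is the hypothetical function sketched in the Idea Summary.

```sql
declare
    l_cache_key     varchar2(64) := lov_cache_key(4711, 'APPLICATION', v('APP_ID'),
                                                  apex_t_varchar2('ORG_ID=' || v('P0_ORG_ID')));
    l_current_token varchar2(64);
    l_cached_token  varchar2(64);
    l_cached_at     timestamp;
begin
    -- Version token: hash of the most recent change marker on the driving table,
    -- so data changes flip the token and invalidate only the affected entries.
    select rawtohex(standard_hash(
               nvl(to_char(max(last_updated_on), 'YYYYMMDDHH24MISSFF'), '0'), 'SHA256'))
      into l_current_token
      from departments;                                      -- illustrative base table

    select version_token, cached_at
      into l_cached_token, l_cached_at
      from lov_cache_entries                                 -- hypothetical cache store
     where cache_key = l_cache_key;

    if l_cached_token <> l_current_token                              -- data changed
       or l_cached_at < systimestamp - interval '30' minute           -- TTL expired
    then
        refresh_lov_cache(l_cache_key, l_current_token);              -- hypothetical: re-run the LOV query
    end if;
exception
    when no_data_found then
        refresh_lov_cache(l_cache_key, l_current_token);              -- cold miss: populate the entry
end;
/
```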
Acceptance Criteria
- Enabling the cache on a Shared LOV yields reuse across Items/pages according to the selected scope(s).
- Cache keys vary with bind/filter changes; entries are never served outside their configured scope.
- Auto-invalidate refreshes only affected keys after underlying data changes.
- Telemetry shows improved hit ratio and reduced DB time without stale results.