Feature Request FR-4655
Product Area Page Components
Status OPEN

LOV Caching with Session/Application/User Scopes and Bind/Filter-Aware Keys

ld.diaz · Public · Sep 17 2025

Idea Summary
This idea proposes a caching mechanism for List of Values (LOV) queries, configurable at the global (Shared LOV) level and/or at the Item level. The cache can operate with session, application, or user scope, and entries are indexed by a hash of the filter parameters and bind variables used by the LOV. Benefits include lower latency, reduced I/O and CPU load on the server, and better scalability on pages where multiple Items depend on the same LOV.
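As a rough illustration of the bind-hash idea (the key layout and bind names below are hypothetical, not a proposed API), a key could be derived with Oracle's existing STANDARD_HASH function:

    -- Hypothetical sketch: derive a cache key from the LOV identity,
    -- the chosen scope, and the bind values the LOV query actually uses.
    -- STANDARD_HASH is an existing SQL function; all names are illustrative.
    select standard_hash(
             :LOV_ID || '|' || :SCOPE || '|' ||
             :BIND_ORG_ID || '|' || :BIND_LEDGER_ID,
             'SHA256') as lov_cache_key
      from dual;

Two requests with identical binds then map to the same key, while any bind change yields a new entry.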

Problem

  • LOVs with repetitive queries (same user, same filters) are recalculated on every refresh or render.
  • On pages with many dependent Items, the same LOV query executes multiple times per session or application.
  • The lack of a declarative caching option causes unnecessary database load and slower response times.

Objectives

  1. Reduce redundant DB calls for LOV generation.
  2. Provide declarative configuration: cache per Session, per Application, or per User.
  3. Make the cache bind/filter-aware (keys derived from parameter values).
  4. Maintain consistency with controlled invalidation (TTL, events, data dependencies).
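For objective 4, the simplest control is a TTL check against the entry's insertion time. The sketch below assumes a hypothetical cache table (lov_cache_entries) and is illustrative only:

    -- Hypothetical TTL check: serve an entry only while it is younger
    -- than the configured lifetime; table and column names are assumed.
    select cached_payload
      from lov_cache_entries
     where cache_key = :CACHE_KEY
       and cached_at > systimestamp - numtodsinterval(:TTL_MINUTES, 'MINUTE');

Event-driven and dependency-based invalidation would then delete or version the affected keys rather than waiting for expiry.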

Scope

  • Applies to Shared LOVs (SQL/REST) and Items consuming LOVs (Select List, Popup LOV, Shuttle, IG/IR column LOV).
  • Compatible with Parameterized LOVs (global/page binds).

Telemetry

Provide APEX dictionary views (e.g., LOV_CACHE_STATS) and a Dev Toolbar dashboard exposing the hit ratio and the average response time served from cache versus from the DB/REST source.
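A developer could then gauge effectiveness with a simple query; the columns below are assumptions about what such a view might expose, not an existing APEX dictionary:

    -- Illustrative query against the proposed LOV_CACHE_STATS view;
    -- all column names are assumptions.
    select lov_name,
           cache_hits,
           cache_misses,
           round(100 * cache_hits / nullif(cache_hits + cache_misses, 0), 1)
             as hit_ratio_pct,
           avg_source_ms - avg_cache_ms as avg_ms_saved
      from lov_cache_stats
     where application_id = :APP_ID
     order by hit_ratio_pct desc;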

Use Case

  1. Repeated Select Lists in the same page: multiple regions use the same LOV with identical filters → Per Session avoids recalculation.
  2. Application-wide master filters (e.g., ORG_ID, LEDGER_ID) → Per Application with TTL=30m and bind-aware keys.
  3. User-specific LOVs (e.g., personalized catalog or entitlements) → Per User ensures results are cached uniquely per authenticated user.
  4. Cascading LOVs: a parent value change produces a new hash (the parent is part of the key); if the parent is unchanged, the cached result is served.
  5. REST-backed LOVs: reduce latency to external services; Per Application or Per User with manual invalidation on catalog changes, or conditional revalidation as sketched below.
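For the REST case, a minimal sketch of conditional revalidation using the existing APEX_WEB_SERVICE package; the URL, bind names, and the caching logic around the call are assumptions:

    declare
      l_body clob;
    begin
      -- Send the ETag captured on the previous fetch; a 304 response
      -- means the cached LOV entries are still current.
      apex_web_service.g_request_headers(1).name  := 'If-None-Match';
      apex_web_service.g_request_headers(1).value := :CACHED_ETAG;  -- hypothetical bind

      l_body := apex_web_service.make_rest_request(
                  p_url         => 'https://example.com/catalog/lov',  -- placeholder
                  p_http_method => 'GET');

      if apex_web_service.g_status_code = 304 then
        null;  -- serve the cached rows unchanged
      else
        null;  -- re-parse l_body, refresh the cache, store the new ETag
      end if;
    end;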

Preferred Solution (Optional)
Goal: Deliver fast, consistent, and secure LOVs by adding a declarative cache that is bind/filter-aware and scoped per Session, per User, or per Application, with automatic change detection and safe rollouts.

  1. Cache Modes: Off | Per Session | Per User | Per Application (default = Off for backward compatibility).
  2. Bind/Filter-Aware Keys: Cache key = (LOV_ID, SCOPE, APP_ID?/SESSION_ID?/USER_ID?, LOCALE?, PARENT_VALUE?, SEARCH_TEXT?, HASH(BINDS)) with normalized types/lengths (see the sketch after this list).
  3. TTL & Size Controls: Optional TTL per mode and LRU caps per app/user to bound memory.
  4. Auto-Invalidate: Declarative options include object dependencies, CQN (Continuous Query Notification), MV-refresh hooks, trigger-raised events, and version tokens (SCN/timestamp/hash). For REST sources, honor ETag/Last-Modified (HTTP 304).
  5. Configuration Hierarchy: Shared LOV defines defaults; Items may inherit or override cache mode, TTL, and key strategy.
  6. Security Isolation:
    1. Per Session → isolated by session.
    2. Per User → isolated by user ID.
    3. Per Application → shared only for non-personalized LOVs (no user/session binds in the key).
  7. Builder UX: New Cache tab (Shared LOV & Item) with key preview, bind simulation, change-detection options, and manual flush.
  8. Telemetry: Built-in stats (cache_hits, misses, evictions, avg latency saved) and Dev Toolbar panel.
  9. Safe Rollout: Opt-in per LOV/Item; wizard to migrate popular LOVs and recommend modes based on usage.
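A minimal expansion of the key layout from point 2, assuming '|' separators, upper/trim normalization, and a pre-computed hash of the remaining binds (all illustrative choices, not a proposed specification):

    -- Illustrative key construction; order, separators, and normalization
    -- rules are assumptions. Only the scope-relevant identifier is included,
    -- so Per Application entries never embed user or session identity.
    select standard_hash(
             :LOV_ID
             || '|' || :SCOPE
             || '|' || case :SCOPE
                         when 'SESSION'     then :SESSION_ID
                         when 'USER'        then :USER_ID
                         when 'APPLICATION' then :APP_ID
                       end
             || '|' || :LOCALE
             || '|' || upper(trim(:PARENT_VALUE))  -- cascading parent bind
             || '|' || upper(trim(:SEARCH_TEXT))   -- Popup LOV search term
             || '|' || :BINDS_HASH,                -- pre-hashed remaining binds
             'SHA256') as cache_key
      from dual;

Normalizing before hashing keeps logically identical requests (e.g., trailing spaces or case-insensitive search terms) on the same key.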

Acceptance Criteria

  • Enabling cache on a Shared LOV yields reuse across Items/pages according to selected scope(s).
  • Cache keys vary with bind/filter changes; entries are never served across the wrong scope.
  • Auto-invalidate refreshes only affected keys after underlying data changes.
  • Telemetry shows improved hit ratio and reduced DB time without stale results.