
    @stephen-shopopop/cache


    node-cache

    A high-performance, strongly-typed caching library for Node.js, supporting in-memory (LRU, TTL), metadata, and persistent SQLite backends. Designed for reliability, flexibility, and modern TypeScript/ESM workflows.

    • ⚡️ Fast in-memory LRU and TTL caches
    • 🗃️ Persistent cache with SQLite backend
    • 🏷️ Metadata support for all entries
    • 📏 Size and entry count limits
    • 🧑‍💻 100% TypeScript, ESM & CJS compatible
    • 🧪 Simple, robust API for all Node.js projects
    npm i @stephen-shopopop/cache
    

    This library requires no special configuration for basic usage.

    • Node.js >= 20.17.0
    • Compatible with both ESM (import) and CommonJS (require)
    • TypeScript types included
    • SQLiteCacheStore requires a Node.js release newer than the 20.x line (it relies on the Node.js built-in SQLite support)
    // ESM
    import { LRUCache } from '@stephen-shopopop/cache';

    // CommonJS
    const { LRUCache } = require('@stephen-shopopop/cache');
    

    Full API documentation is available here: 📚 Generated Docs

    LRUCache is a fast in-memory Least Recently Used (LRU) cache. It removes the least recently used item when the maximum size is reached.

    • Constructor:

      new LRUCache<K, V>({ maxSize?: number })
      
    • Methods:

      • set(key, value): Add or update a value
      • get(key): Retrieve a value
      • delete(key): Remove a key
      • clear(): Clear the cache
      • has(key): Check if a key exists
      • size: Number of items
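
    A minimal usage sketch based on the methods above; the eviction shown assumes the documented behaviour of dropping the least recently used key once maxSize is reached:

    import { LRUCache } from '@stephen-shopopop/cache';

    // Cache holding at most 2 entries
    const cache = new LRUCache<string, number>({ maxSize: 2 });

    cache.set('a', 1);
    cache.set('b', 2);
    cache.get('a');    // touch 'a' so it becomes the most recently used
    cache.set('c', 3); // capacity exceeded: the least recently used key ('b') is evicted

    console.log(cache.has('b')); // false (evicted)
    console.log(cache.get('a')); // 1
    console.log(cache.size);     // 2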

    LRUCacheWithTTL is an LRU cache with automatic time-based expiration (TTL) of entries, combining LRU eviction and per-entry expiration.

    • Constructor:

      new LRUCacheWithTTL<K, V>({ maxSize?: number, ttl?: number, stayAlive?: boolean, cleanupInterval?: number })
      
    • Methods:

      • set(key, value, ttl?): Add a value with optional TTL
      • get(key): Retrieve a value (or undefined if expired)
      • delete(key): Remove a key
      • clear(): Clear the cache
      • has(key): Check if a key exists
      • size: Number of items
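
    A short sketch assuming the documented semantics, where a per-entry ttl overrides the default and expired keys read back as undefined:

    import { LRUCacheWithTTL } from '@stephen-shopopop/cache';

    // Default TTL of 1 minute, at most 500 entries
    const cache = new LRUCacheWithTTL<string, string>({ maxSize: 500, ttl: 60_000 });

    cache.set('session:42', 'alice');      // uses the default TTL
    cache.set('otp:42', '123456', 30_000); // per-entry TTL of 30 seconds

    console.log(cache.get('session:42'));  // 'alice' while the entry is fresh

    setTimeout(() => {
      console.log(cache.get('otp:42'));    // undefined once the 30s TTL has elapsed
    }, 31_000);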

    MemoryCacheStore is an in-memory cache with an LRU policy that supports a maximum total size, a maximum entry size, a maximum number of entries, and per-entry metadata.

    • Constructor:

      new MemoryCacheStore<K, Metadata>({ maxCount?: number, maxEntrySize?: number, maxSize?: number })
      
    • Methods:

      • set(key, value, metadata?): Add a value (string or Buffer) with metadata
      • get(key): Retrieve { value, metadata, size } or undefined
      • delete(key): Remove a key
      • clear(): Clear the cache
      • has(key): Check if a key exists
      • size: Number of items
      • byteSize: Total size in bytes
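
    A sketch using the { value, metadata, size } shape documented for get; the Meta type, keys, and limits are illustrative:

    import { MemoryCacheStore } from '@stephen-shopopop/cache';

    type Meta = { contentType: string; cachedAt: number };

    const store = new MemoryCacheStore<string, Meta>({
      maxCount: 1000,            // at most 1000 entries
      maxEntrySize: 64 * 1024,   // reject entries larger than 64 KiB
      maxSize: 10 * 1024 * 1024  // keep the total payload under 10 MiB
    });

    store.set('page:/home', Buffer.from('<html>...</html>'), {
      contentType: 'text/html',
      cachedAt: Date.now()
    });

    const hit = store.get('page:/home');
    if (hit) {
      console.log(hit.metadata?.contentType); // 'text/html'
      console.log(hit.size);                  // entry size in bytes
    }

    console.log(store.byteSize); // total size of all entries in bytes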

    SQLiteCacheStore is a persistent cache backed by SQLite, with support for metadata, TTL, and entry size and count limits.

    • Constructor:

      new SQLiteCacheStore<Metadata>({ filename?: string, maxEntrySize?: number, maxCount?: number, timeout?: number })
      
    • Methods:

      • set(key, value, metadata?, ttl?): Add a value (string or Buffer) with metadata and optional TTL
      • get(key): Retrieve { value, metadata } or undefined
      • delete(key): Remove a key
      • size: Number of items
      • close(): Close the database connection

    Note: SQLiteCacheStore methods may throw errors related to SQLite (connection, query, file access, etc.). It is the responsibility of the user to handle these errors (e.g., with try/catch) according to their application's needs. The library does not catch or wrap SQLite errors by design.
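
    A sketch of typical usage with explicit error handling, since SQLite errors propagate to the caller as described above; the filename and metadata shape are illustrative:

    import { SQLiteCacheStore } from '@stephen-shopopop/cache';

    type Meta = { source: string };

    const store = new SQLiteCacheStore<Meta>({ filename: 'cache.db', timeout: 5000 });

    try {
      store.set('user:1', JSON.stringify({ name: 'Alice' }), { source: 'api' }, 60_000);

      const hit = store.get('user:1');
      if (hit) {
        console.log(JSON.parse(hit.value.toString())); // { name: 'Alice' }
        console.log(hit.metadata);                     // { source: 'api' }
      }
    } catch (error) {
      // SQLite errors (I/O, busy database, constraint violations, ...) are not wrapped by the library
      console.error('cache operation failed', error);
    } finally {
      store.close(); // release the underlying database connection
    }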

    Configuration options:

    • maxSize: maximum number of items (LRUCache, LRUCacheWithTTL) or maximum total size in bytes (MemoryCacheStore)

    • maxCount: maximum number of entries (MemoryCacheStore, SQLiteCacheStore)

    • maxEntrySize: maximum size of a single entry in bytes (MemoryCacheStore, SQLiteCacheStore)

    • ttl: time to live in ms (LRUCacheWithTTL)

    • cleanupInterval: interval of the automatic cleanup timer in ms (LRUCacheWithTTL)

    • stayAlive: keep the cleanup timer active (LRUCacheWithTTL)

    • filename: SQLite database file name (SQLiteCacheStore)

    • timeout: SQLite operation timeout in ms (SQLiteCacheStore)

    Quick examples:

    import { LRUCache, LRUCacheWithTTL, MemoryCacheStore, SQLiteCacheStore } from '@stephen-shopopop/cache';

    const lru = new LRUCache({ maxSize: 100 });
    lru.set('a', 1);

    const lruTtl = new LRUCacheWithTTL({ maxSize: 100, ttl: 60000 });
    lruTtl.set('a', 1);

    const mem = new MemoryCacheStore({ maxCount: 10, maxEntrySize: 1024 });
    mem.set('a', 'value', { meta: 123 });

    const sqlite = new SQLiteCacheStore({ filename: 'cache.db', maxEntrySize: 1024 });
    sqlite.set('a', 'value', { meta: 123 }, 60000);
    const result = sqlite.get('a');
    LRUCache internal structure (doubly-linked list):

    [Most Recent]    [   ...   ]    [Least Recent]
      head  <->  node  <->  ...  <->  tail
        |                              |
        +--> {key, value}              +--> {key, value}

    Eviction: when maxSize is reached, the 'tail' node is removed (least recently used).
    Access: an accessed node is moved to 'head' (most recently used).
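
    For intuition only, here is a tiny Map-based sketch of the same policy. It is not the library's internal doubly-linked-list implementation, just an illustration of "move to head on access, drop the tail on overflow":

    // Conceptual LRU sketch (illustrative, not the library's internals).
    // A Map iterates in insertion order, so its first key is the least recently used.
    class TinyLRU<K, V> {
      #data = new Map<K, V>();
      constructor(private readonly maxSize: number) {}

      get(key: K): V | undefined {
        if (!this.#data.has(key)) return undefined;
        const value = this.#data.get(key) as V;
        this.#data.delete(key); // re-insert to mark the key as most recently used
        this.#data.set(key, value);
        return value;
      }

      set(key: K, value: V): void {
        if (this.#data.has(key)) this.#data.delete(key);
        this.#data.set(key, value);
        if (this.#data.size > this.maxSize) {
          // Evict the least recently used entry (the oldest insertion)
          const oldest = this.#data.keys().next().value as K;
          this.#data.delete(oldest);
        }
      }
    }
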
    +-----------------------------+
    |        MemoryCacheStore     |
    +-----------------------------+
    |  #data: LRUCache            |
    |  #maxCount                  |
    |  #maxEntrySize              |
    |  #maxSize                   |
    |  #size                      |
    +-----------------------------+
            |         |
            |         +---> [maxCount, maxEntrySize, maxSize] constraints
            |
            +---> LRUCache (internal):
                    head <-> node <-> ... <-> tail
                    (evicts least recently used)
    
    Each entry:
      {
        key: K,
        value: string | Buffer,
        metadata: object,
        size: number (bytes)
      }
    
    Eviction: when maxCount or maxSize is reached, oldest/oversized entries are removed.
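
    A small sketch of how these limits surface through the public API; it relies only on the documented maxCount behaviour (the exact eviction order is an internal detail):

    import { MemoryCacheStore } from '@stephen-shopopop/cache';

    // Keep at most 2 entries: adding a third evicts the oldest one
    const store = new MemoryCacheStore({ maxCount: 2 });

    store.set('a', 'first');
    store.set('b', 'second');
    store.set('c', 'third');     // triggers eviction

    console.log(store.size);     // 2
    console.log(store.byteSize); // total bytes of the remaining entries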
    
    +-----------------------------+
    |      SQLiteCacheStore       |
    +-----------------------------+
    |  #db: SQLite database       |
    |  #maxCount                  |
    |  #maxEntrySize              |
    |  #timeout                   |
    +-----------------------------+
            |
            +---> [SQLite file: cache.db]
                    |
                    +---> Table: cache_entries
                            +-------------------------------+
                            | key | value | metadata | ttl  |
                            +-------------------------------+
    
    Each entry:
      {
        key: string,
        value: string | Buffer,
        metadata: object,
        ttl: number (ms, optional)
      }
    
    Eviction: when maxCount or maxEntrySize is reached, or TTL expires, entries are deleted from the table.
    Persistence: all data is stored on disk in the SQLite file.
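
    Because entries live in the SQLite file, a value written before a restart can be read back after reopening the same file. A sketch, with the process boundary simulated by closing and reopening the store:

    import { SQLiteCacheStore } from '@stephen-shopopop/cache';

    // First "run": write and close
    const writer = new SQLiteCacheStore({ filename: 'cache.db' });
    writer.set('config:theme', 'dark');
    writer.close();

    // Later "run": reopen the same file and read the value back
    const reader = new SQLiteCacheStore({ filename: 'cache.db' });
    console.log(reader.get('config:theme')?.value.toString()); // 'dark'
    reader.close();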
    
    +-----------------------------+
    |      LRUCacheWithTTL        |
    +-----------------------------+
    |  #data: LRUCache            |
    |  #ttl                       |
    |  #cleanupInterval           |
    |  #timer                     |
    +-----------------------------+
            |
            +---> LRUCache (internal):
                    head <-> node <-> ... <-> tail
                    (evicts least recently used)
    
    Each entry:
      {
        key: K,
        value: V,
        expiresAt: number (timestamp, ms)
      }
    
    Expiration: entries are removed when their TTL expires (checked on access or by cleanup timer).
    Eviction: LRU policy applies when maxSize is reached.
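
    A sketch of the expiration-related options; the option names follow the constructor above, and the comments restate their documented descriptions:

    import { LRUCacheWithTTL } from '@stephen-shopopop/cache';

    const cache = new LRUCacheWithTTL<string, number>({
      maxSize: 1000,
      ttl: 5_000,             // entries expire 5 seconds after being set
      cleanupInterval: 1_000, // sweep expired entries every second
      stayAlive: true         // keep the cleanup timer active
    });

    cache.set('hits', 1);

    setTimeout(() => {
      // After the TTL, the entry is gone whether it was swept by the timer or checked lazily on access
      console.log(cache.has('hits')); // false
    }, 6_000);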
    
    Typical use cases:

    • API response caching: Reduce latency and external API calls by caching HTTP responses in memory or on disk.
    • Session storage: Store user sessions or tokens with TTL for automatic expiration.
    • File or image cache: Cache processed files, images, or buffers with size limits.
    • Metadata tagging: Attach custom metadata (timestamps, user info, tags) to each cache entry for advanced logic.
    • Persistent job queue: Use SQLiteCacheStore to persist jobs or tasks between server restarts.
    • Rate limiting: Track and limit user actions over time using TTL-based caches (see the sketch after this list).
    • Temporary feature flags: Store and expire feature flags or toggles dynamically.
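
    As an illustration of the rate-limiting use case, a hypothetical helper built on LRUCacheWithTTL; the window, limit, and key format are arbitrary, and the fixed window is deliberately naive:

    import { LRUCacheWithTTL } from '@stephen-shopopop/cache';

    // Count actions per user inside a 60-second window
    const counters = new LRUCacheWithTTL<string, number>({ maxSize: 10_000, ttl: 60_000 });

    function isAllowed(userId: string, limit = 100): boolean {
      const current = counters.get(userId) ?? 0;
      if (current >= limit) return false;
      counters.set(userId, current + 1); // note: re-setting restarts the 60s window in this naive sketch
      return true;
    }

    if (!isAllowed('user:42')) {
      // Reply with HTTP 429 Too Many Requests, drop the job, etc.
    }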

    Note: Results below are indicative and may vary depending on your hardware and Node.js version. Run npm run bench for up-to-date results on your machine.

    Store                     set (ops/s)   get (ops/s)   delete (ops/s)   complex workflow (ops/s)
    LRUCache                  1,082,000     1,870,000     1,060,000        629,000
    LRUCacheWithTTL             943,000     1,670,000       950,000        591,000
    MemoryCacheStore          1,059,000     1,870,000       177,600        292,000
    SQLiteCacheStore (mem)      110,000       430,000       137,000         50,700
    SQLiteCacheStore (file)      49,000        47,000       135,000         45,900

    Bench run on Apple M1, Node.js 24.7.0, npm run bench — complex workflow = set, get, update, delete, hit/miss, TTL, metadata.

    How are ops/s calculated? For each operation, the benchmark reports the average time per operation (e.g. 1.87 µs/iter). To get the number of operations per second (ops/s), we use:

    ops/s = 1 / (average time per operation in seconds)

    Example: if the bench reports 856.45 ns/iter, then:

    • 856.45 ns = 0.00000085645 seconds
    • ops/s = 1 / 0.00000085645 ≈ 1,168,000

    All values in the table are calculated this way and rounded for readability.
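
    The same conversion as a throwaway helper, mirroring the formula above:

    // Convert an average time per iteration (in nanoseconds) to operations per second
    const opsPerSecond = (nsPerIter: number): number => 1 / (nsPerIter * 1e-9);

    console.log(Math.round(opsPerSecond(856.45))); // 1167611 -> reported as ~1,168,000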

    Each backend has different performance characteristics and is suited for different use cases:

    Backend                   Typical use case                  Max ops/s (indicative)   Latency (typical)   Notes
    LRUCache                  Hot-path, ultra-fast in-memory    >1,000,000               <2 µs               No persistence, no TTL
    LRUCacheWithTTL           In-memory with expiration         >1,000,000               <2 µs               TTL adds slight overhead
    MemoryCacheStore          In-memory, metadata, size limit   ~1,000,000               <2 µs               Metadata, size/count limits
    SQLiteCacheStore (mem)    Fast, ephemeral persistence       ~100,000                 ~10 µs              Data lost on restart
    SQLiteCacheStore (file)   Durable persistence               ~50,000                  ~20–50 µs           Disk I/O, best for cold data

    Guidance:

    • Use LRUCache/LRUCacheWithTTL for ultra-low-latency, high-throughput scenarios (API cache, session, etc.).
    • Use MemoryCacheStore if you need metadata or strict size limits.
    • Use SQLiteCacheStore (memory) for fast, non-persistent cache across processes.
    • Use SQLiteCacheStore (file) for persistent cache, but expect higher latency due to disk I/O.

    Numbers are indicative, measured on Apple M1, Node.js 24.x. Always benchmark on your own hardware for production sizing.

    Why is SQLiteCacheStore slower than the in-memory caches? SQLite is a disk-based database: even with optimizations (WAL, in-memory temp store), disk I/O and serialization add latency compared to pure in-memory caches. For ultra-low-latency needs, use LRUCache or MemoryCacheStore.

    How can I instrument or monitor cache operations? You can instrument the library using Node.js diagnostics_channel. Future versions may provide built-in hooks; for now, wrap cache methods in your own code and publish events on cache operations.
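
    A sketch of that wrapping approach with Node.js diagnostics_channel; the channel name and event payload are made up for the example, and the library itself does not publish these events today:

    import diagnostics_channel from 'node:diagnostics_channel';
    import { LRUCache } from '@stephen-shopopop/cache';

    // Hypothetical channel name: pick your own namespace
    const channel = diagnostics_channel.channel('myapp:cache');

    const cache = new LRUCache<string, unknown>({ maxSize: 1000 });

    function instrumentedGet(key: string): unknown {
      const value = cache.get(key);
      if (channel.hasSubscribers) {
        channel.publish({ operation: 'get', key, hit: value !== undefined });
      }
      return value;
    }

    // Elsewhere (APM, logging, metrics, ...)
    diagnostics_channel.subscribe('myapp:cache', (message) => {
      console.log('cache event', message);
    });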

    Why does Node.js print a warning about SQLite? The warning comes from Node.js itself (v20+). SQLite support is stable for most use cases, but the API may change in future Node.js versions; follow the Node.js release notes for updates.

    How are SQLite errors handled? All errors from SQLite (connection, query, file access) are thrown as-is. Wrap your cache operations in try/catch and handle errors according to your application's needs.

    Can I use this library in serverless or stateless environments? Yes, but persistent caches (SQLiteCacheStore with a file) may not be suitable for ephemeral file systems. Use in-memory caches for stateless/serverless workloads.

    Want to contribute to this library? Thank you! Here’s what you need to know to get started:

    • Node.js >= 20.17.0
    • pnpm or npm (package manager)
    • TypeScript (strictly typed everywhere)
    git clone https://github.com/stephen-shopopop/node-cache.git
    cd node-cache
    pnpm install # or npm install
    • npm run build: build TypeScript (ESM + CJS via tsup)
    • npm run test: run all tests (node:test)
    • npm run lint: check lint (biome)
    • npm run format: format code
    • npm run check: type check
    • npm run bench: run benchmarks
    • npm run docs: generate documentation (TypeDoc)
    • src/library/: main source code (all cache classes)
    • src/index.ts: entry point
    • test/: all unit tests (node:test)
    • bench/: benchmarks (mitata)
    • docs/: generated documentation
    • Follow the style: semicolons, single quotes, arrow functions for callbacks
    • Avoid nested ternary operators
    • Always add tests for any new feature or bugfix (see example below)
    • Use clear, conventional commit messages (see Conventional Commits)
    • PRs and code reviews are welcome in French or English
    import test, { type TestContext } from 'node:test';
    import { LRUCache } from '../src/library/LRUCache.js';

    test('LRUCache basic set/get', (t: TestContext) => {
      // Arrange
      const cache = new LRUCache({ maxSize: 2 });

      // Act
      cache.set('a', 1);

      // Assert
      t.assert.strictEqual(cache.get('a'), 1);
    });
    1. Make sure all tests pass (npm run test)
    2. Check lint and formatting (npm run lint && npm run format)
    3. Check coverage (npm run coverage)
    4. Add/complete documentation if needed
    5. Clearly describe your contribution in the PR
    6. Use clear, conventional commit messages
    7. If your change impacts users, update the README and/or documentation
    • Releases are tagged and published manually by the maintainer. If you want to help with releases, open an issue or PR.