A high-performance, strongly-typed caching library for Node.js, supporting in-memory (LRU, TTL), metadata, and persistent SQLite backends. Designed for reliability, flexibility, and modern TypeScript/ESM workflows.
```bash
npm i @stephen-shopopop/cache
```
This library requires no special configuration for basic usage.
Works with both ESM (`import`) and CommonJS (`require`):

```typescript
// ESM
import { LRUCache } from '@stephen-shopopop/cache';

// CommonJS
const { LRUCache } = require('@stephen-shopopop/cache');
```
Full API documentation is available here: 📚 Generated Docs
### LRUCache

A fast in-memory Least Recently Used (LRU) cache. Removes the least recently used item when the maximum size is reached.

**Constructor:**

```typescript
new LRUCache<K, V>({ maxSize?: number })
```

**Methods:**

- `set(key, value)`: Add or update a value
- `get(key)`: Retrieve a value
- `delete(key)`: Remove a key
- `clear()`: Clear the cache
- `has(key)`: Check if a key exists
- `size`: Number of items

### LRUCacheWithTTL

An LRU cache with automatic expiration (TTL) for entries. Combines LRU eviction and time-based expiration.
**Constructor:**

```typescript
new LRUCacheWithTTL<K, V>({ maxSize?: number, ttl?: number, stayAlive?: boolean, cleanupInterval?: number })
```

**Methods:**

- `set(key, value, ttl?)`: Add a value with an optional TTL
- `get(key)`: Retrieve a value (or `undefined` if expired)
- `delete(key)`: Remove a key
- `clear()`: Clear the cache
- `has(key)`: Check if a key exists
- `size`: Number of items

### MemoryCacheStore

An in-memory cache with an LRU policy. Supports a maximum total size, maximum entry size, maximum number of entries, and associated metadata.
**Constructor:**

```typescript
new MemoryCacheStore<K, Metadata>({ maxCount?: number, maxEntrySize?: number, maxSize?: number })
```

**Methods:**

- `set(key, value, metadata?)`: Add a value (string or Buffer) with metadata
- `get(key)`: Retrieve `{ value, metadata, size }` or `undefined`
- `delete(key)`: Remove a key
- `clear()`: Clear the cache
- `has(key)`: Check if a key exists
- `size`: Number of items
- `byteSize`: Total size in bytes

### SQLiteCacheStore

A persistent cache using SQLite as backend. Supports metadata, TTL, and entry size and count limits.
**Constructor:**

```typescript
new SQLiteCacheStore<Metadata>({ filename?: string, maxEntrySize?: number, maxCount?: number, timeout?: number })
```

**Methods:**

- `set(key, value, metadata?, ttl?)`: Add a value (string or Buffer) with metadata and an optional TTL
- `get(key)`: Retrieve `{ value, metadata }` or `undefined`
- `delete(key)`: Remove a key
- `size`: Number of items
- `close()`: Close the database connection
Note: SQLiteCacheStore methods may throw errors related to SQLite (connection, query, file access, etc.). It is the responsibility of the user to handle these errors (e.g., with try/catch) according to their application's needs. The library does not catch or wrap SQLite errors by design.
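The handling pattern can be sketched as follows. The store below is a hypothetical stand-in whose `set()` throws the way SQLite would on a locked database; per the note above, the real `SQLiteCacheStore` propagates such errors unchanged.

```typescript
// Hypothetical stand-in for a SQLite-backed store: set() throws the way a
// locked database would. SQLiteCacheStore propagates such errors unchanged.
const store = {
  set(_key: string, _value: string): void {
    throw new Error('SQLITE_BUSY: database is locked'); // simulated failure
  },
  close(): void {}
};

let failed = false;
try {
  store.set('user:42', JSON.stringify({ name: 'Ada' }));
} catch (err) {
  failed = true; // decide here: retry, fall back to an in-memory cache, or rethrow
  console.error('cache write failed:', (err as Error).message);
} finally {
  store.close(); // always release the database handle
}
console.log('write failed:', failed);
```

The `finally` block ensures the database handle is released even when a write fails.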
- `maxSize`: max number of items (`LRUCache`, `LRUCacheWithTTL`); max total size in bytes (`MemoryCacheStore`)
- `maxCount`: max number of entries (`MemoryCacheStore`, `SQLiteCacheStore`)
- `maxEntrySize`: max size of a single entry (`MemoryCacheStore`, `SQLiteCacheStore`)
- `ttl`: time to live in ms (`LRUCacheWithTTL`)
- `cleanupInterval`: automatic cleanup interval (`LRUCacheWithTTL`)
- `stayAlive`: keep the cleanup timer active (`LRUCacheWithTTL`)
- `filename`: SQLite database file name (`SQLiteCacheStore`)
- `timeout`: SQLite operation timeout in ms (`SQLiteCacheStore`)
```typescript
import { LRUCache, LRUCacheWithTTL, MemoryCacheStore, SQLiteCacheStore } from '@stephen-shopopop/cache';

const lru = new LRUCache({ maxSize: 100 });
lru.set('a', 1);

const lruTtl = new LRUCacheWithTTL({ maxSize: 100, ttl: 60000 });
lruTtl.set('a', 1);

const mem = new MemoryCacheStore({ maxCount: 10, maxEntrySize: 1024 });
mem.set('a', 'value', { meta: 123 });

const sqlite = new SQLiteCacheStore({ filename: 'cache.db', maxEntrySize: 1024 });
sqlite.set('a', 'value', { meta: 123 }, 60000);
const result = sqlite.get('a');
```
```text
[Most Recent]                      [Least Recent]
head <-> node <-> ... <-> tail
  |                         |
  +---> {key,value}         +---> {key,value}
```

Eviction: when `maxSize` is reached, the tail node is removed (least recently used).
Access: an accessed node is moved to the head (most recently used).
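To illustrate this eviction order, here is a minimal, self-contained sketch that uses a `Map`'s insertion order instead of the doubly-linked list above. It is illustrative only, not the library's implementation.

```typescript
// Minimal LRU sketch: a Map's iteration order stands in for the linked list.
// The first key in iteration order plays the role of the "tail".
class MiniLRU<K, V> {
  #map = new Map<K, V>();
  #maxSize: number;

  constructor(maxSize: number) {
    this.#maxSize = maxSize;
  }

  set(key: K, value: V): void {
    if (this.#map.has(key)) this.#map.delete(key); // refresh position
    this.#map.set(key, value); // newest entries go to the end ("head")
    if (this.#map.size > this.#maxSize) {
      const oldest = this.#map.keys().next().value as K; // the "tail"
      this.#map.delete(oldest); // evict least recently used
    }
  }

  get(key: K): V | undefined {
    if (!this.#map.has(key)) return undefined;
    const value = this.#map.get(key)!;
    this.#map.delete(key); // move to "head" (most recently used)
    this.#map.set(key, value);
    return value;
  }

  has(key: K): boolean { return this.#map.has(key); }
  get size(): number { return this.#map.size; }
}

const lru = new MiniLRU<string, number>(2);
lru.set('a', 1);
lru.set('b', 2);
lru.get('a');    // 'a' is now most recently used
lru.set('c', 3); // capacity exceeded: evicts 'b' (least recently used)
console.log(lru.has('b'), lru.has('a'), lru.has('c')); // false true true
```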
```text
+-----------------------------+
|       MemoryCacheStore      |
+-----------------------------+
| #data: LRUCache             |
| #maxCount                   |
| #maxEntrySize               |
| #maxSize                    |
| #size                       |
+-----------------------------+
   |
   +---> [maxCount, maxEntrySize, maxSize] constraints
   |
   +---> LRUCache (internal):
           head <-> node <-> ... <-> tail
           (evicts least recently used)
```

Each entry:

```typescript
{
  key: K,
  value: string | Buffer,
  metadata: object,
  size: number // bytes
}
```

Eviction: when `maxCount` or `maxSize` is reached, oldest or oversized entries are removed.
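A sketch of the byte-size accounting behind this eviction. The limits and the evict-oldest-first order here are chosen for illustration; they are not the library's exact algorithm.

```typescript
// Byte-size accounting sketch: reject entries over maxEntrySize, and evict
// oldest entries until the total byte size fits under maxSize again.
const maxEntrySize = 8;  // illustrative limits, in bytes
const maxSize = 16;
const store = new Map<string, Buffer>();
let byteSize = 0;

function set(key: string, value: string | Buffer): boolean {
  const buf = Buffer.isBuffer(value) ? value : Buffer.from(value);
  if (buf.byteLength > maxEntrySize) return false; // oversized entry rejected
  if (store.has(key)) {
    byteSize -= store.get(key)!.byteLength; // replace: release old bytes
    store.delete(key);
  }
  store.set(key, buf);
  byteSize += buf.byteLength;
  while (byteSize > maxSize) { // evict oldest until under budget
    const [oldestKey, oldestBuf] = store.entries().next().value as [string, Buffer];
    store.delete(oldestKey);
    byteSize -= oldestBuf.byteLength;
  }
  return true;
}

set('a', 'aaaaaaaa');                    // 8 bytes, total 8
set('b', 'bbbbbbbb');                    // 8 bytes, total 16
set('c', 'cc');                          // total would be 18 → evicts 'a'
const accepted = set('big', 'x'.repeat(9)); // 9 bytes > maxEntrySize → false
console.log(store.has('a'), byteSize, accepted);
```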
```text
+-----------------------------+
|       SQLiteCacheStore      |
+-----------------------------+
| #db: SQLite database        |
| #maxCount                   |
| #maxEntrySize               |
| #timeout                    |
+-----------------------------+
   |
   +---> [SQLite file: cache.db]
   |
   +---> Table: cache_entries
         +-------------------------------+
         | key | value | metadata | ttl  |
         +-------------------------------+
```

Each entry:

```typescript
{
  key: string,
  value: string | Buffer,
  metadata: object,
  ttl: number // ms, optional
}
```

Eviction: entries are deleted from the table when `maxCount` or `maxEntrySize` is reached, or when their TTL expires.
Persistence: all data is stored on disk in the SQLite file.
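Conceptually, a row round-trip looks like the sketch below: metadata is serialized alongside the value, matching the `{ value, metadata }` shape that `get()` returns. The `StoredRow` shape and helper names are illustrative, not the library's actual schema.

```typescript
// Illustrative row round-trip: serialize metadata to JSON for storage,
// parse it back on read. Not the library's actual table schema.
type StoredRow = { value: Buffer; metadata: string; expiresAt: number | null };

function toRow(value: string | Buffer, metadata: object, ttlMs?: number): StoredRow {
  return {
    value: Buffer.isBuffer(value) ? value : Buffer.from(value),
    metadata: JSON.stringify(metadata),
    expiresAt: ttlMs === undefined ? null : Date.now() + ttlMs // TTL → deadline
  };
}

function fromRow(row: StoredRow): { value: string; metadata: object } {
  return { value: row.value.toString(), metadata: JSON.parse(row.metadata) };
}

const row = toRow('value', { meta: 123 }, 60_000);
console.log(fromRow(row)); // { value: 'value', metadata: { meta: 123 } }
```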
```text
+-----------------------------+
|       LRUCacheWithTTL       |
+-----------------------------+
| #data: LRUCache             |
| #ttl                        |
| #cleanupInterval            |
| #timer                      |
+-----------------------------+
   |
   +---> LRUCache (internal):
           head <-> node <-> ... <-> tail
           (evicts least recently used)
```

Each entry:

```typescript
{
  key: K,
  value: V,
  expiresAt: number // timestamp, ms
}
```

Expiration: entries are removed when their TTL expires (checked on access or by the cleanup timer).
Eviction: the LRU policy applies when `maxSize` is reached.
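The "checked on access" part can be sketched as lazy expiration: a read compares the entry's `expiresAt` deadline against the current time and deletes the entry if it has passed. This is a minimal sketch of the idea, not the library's implementation.

```typescript
// Lazy expiration sketch: an entry whose expiresAt deadline has passed is
// treated as missing and removed on access.
type Entry<V> = { value: V; expiresAt: number }; // expiresAt: epoch ms
const data = new Map<string, Entry<string>>();

function setWithTTL(key: string, value: string, ttlMs: number): void {
  data.set(key, { value, expiresAt: Date.now() + ttlMs });
}

function get(key: string, now = Date.now()): string | undefined {
  const entry = data.get(key);
  if (!entry) return undefined;
  if (entry.expiresAt <= now) {
    data.delete(key); // expired: remove on access
    return undefined;
  }
  return entry.value;
}

setWithTTL('a', 'fresh', 60_000);
setWithTTL('b', 'stale', -1); // deadline already in the past
console.log(get('a'), get('b'), data.has('b')); // fresh undefined false
```

A cleanup timer (the `cleanupInterval` option) complements this by also removing expired entries that are never read again.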
Note: Results below are indicative and may vary depending on your hardware and Node.js version. Run `npm run bench` for up-to-date results on your machine.
| Store | set (ops/s) | get (ops/s) | delete (ops/s) | complex workflow (ops/s) |
|---|---|---|---|---|
| LRUCache | 1,082,000 | 1,870,000 | 1,060,000 | 629,000 |
| LRUCacheWithTTL | 943,000 | 1,670,000 | 950,000 | 591,000 |
| MemoryCacheStore | 1,059,000 | 1,870,000 | 177,600 | 292,000 |
| SQLiteCacheStore (mem) | 110,000 | 430,000 | 137,000 | 50,700 |
| SQLiteCacheStore (file) | 49,000 | 47,000 | 135,000 | 45,900 |

Bench run on Apple M1, Node.js 24.7.0, via `npm run bench` (complex workflow = set, get, update, delete, hit/miss, TTL, metadata).
**How are ops/s calculated?**

For each operation, the benchmark reports the average time per operation (e.g. `1.87 µs/iter`). To get the number of operations per second (ops/s), we use:

`ops/s = 1 / (average time per operation in seconds)`

Example: if the bench reports `856.45 ns/iter`, then `ops/s = 1 / 856.45e-9 ≈ 1,167,600`. All values in the table are calculated this way and rounded for readability.
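The conversion is a one-liner, using the sample value from the example above:

```typescript
// Convert a reported per-iteration time (in nanoseconds) to ops/s.
const nsPerIter = 856.45;
const opsPerSec = 1 / (nsPerIter * 1e-9); // ns → s, then invert
console.log(Math.round(opsPerSec)); // 1167610, i.e. ≈ 1,167,600 ops/s
```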
Each backend has different performance characteristics and is suited for different use cases:
| Backend | Typical use case | Max ops/s (indicative) | Latency (typical) | Notes |
|---|---|---|---|---|
| LRUCache | Hot-path, ultra-fast in-memory | >1,000,000 | <2 µs | No persistence, no TTL |
| LRUCacheWithTTL | In-memory with expiration | >1,000,000 | <2 µs | TTL adds slight overhead |
| MemoryCacheStore | In-memory, metadata, size limit | ~1,000,000 | <2 µs | Metadata, size/count limits |
| SQLiteCacheStore (mem) | Fast, ephemeral persistence | ~100,000 | ~10 µs | Data lost on restart |
| SQLiteCacheStore (file) | Durable persistence | ~50,000 | ~20–50 µs | Disk I/O, best for cold data |
Guidance:

- Numbers are indicative, measured on Apple M1, Node.js 24.x. Always benchmark on your own hardware for production sizing.
- SQLite is a disk-based database. Even with optimizations (WAL, memory temp store), disk I/O and serialization add latency compared to pure in-memory caches. For ultra-low-latency needs, use `LRUCache` or `MemoryCacheStore`.
You can instrument the library using Node.js's `node:diagnostics_channel` module. Future versions may provide built-in hooks; for now, wrap cache methods or use `diagnostics_channel` in your own code to publish events on cache operations.
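One way to do that wrapping, as a self-contained sketch: the channel name `myapp:cache:set` is one we pick ourselves (the library defines no channels yet), and a plain `Map` stands in for a cache here, since the wrapper only assumes a `set(key, value)` method.

```typescript
import diagnostics_channel from 'node:diagnostics_channel';

// 'myapp:cache:set' is an application-chosen channel name, not one the
// library provides.
const channel = diagnostics_channel.channel('myapp:cache:set');
const seen: string[] = [];
diagnostics_channel.subscribe('myapp:cache:set', (message) => {
  seen.push((message as { key: string }).key);
});

// Wrap any cache-like object so every set() publishes an event first.
function instrumentSet<V>(cache: { set(key: string, value: V): unknown }): void {
  const original = cache.set.bind(cache);
  cache.set = (key: string, value: V) => {
    if (channel.hasSubscribers) channel.publish({ key }); // synchronous publish
    return original(key, value);
  };
}

const cache = new Map<string, number>(); // stand-in for LRUCache etc.
instrumentSet(cache);
cache.set('a', 1);
console.log(seen); // [ 'a' ]
```

The `hasSubscribers` guard keeps the overhead near zero when nothing is listening, which is the usual `diagnostics_channel` pattern.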
This warning comes from Node.js itself: the built-in `node:sqlite` module (Node.js 22+) is still marked experimental. SQLite support is stable for most use cases, but the API may change in future Node.js versions. Follow the Node.js release notes for updates.
All errors from SQLite (connection, query, file access) are thrown as-is. You should use try/catch around your cache operations and handle errors according to your application’s needs.
Yes, but persistent caches (SQLiteCacheStore with file) may not be suitable for ephemeral file systems. Use in-memory caches for stateless/serverless workloads.
Want to contribute to this library? Thank you! Here’s what you need to know to get started:
```bash
git clone https://github.com/stephen-shopopop/node-cache.git
cd node-cache
pnpm install # or npm install
```
Available scripts:

- `npm run build`: build TypeScript (ESM + CJS via tsup)
- `npm run test`: run all tests (node:test)
- `npm run lint`: check lint (biome)
- `npm run format`: format code
- `npm run check`: type check
- `npm run bench`: run benchmarks
- `npm run docs`: generate documentation (TypeDoc)

Project structure:

- `src/library/`: main source code (all cache classes)
- `src/index.ts`: entry point
- `test/`: all unit tests (node:test)
- `bench/`: benchmarks (mitata)
- `docs/`: generated documentation

Example test:

```typescript
import test, { type TestContext } from 'node:test';
import { LRUCache } from '../src/library/LRUCache.js';

test('LRUCache basic set/get', (t: TestContext) => {
  // Arrange
  const cache = new LRUCache({ maxSize: 2 });
  // Act
  cache.set('a', 1);
  // Assert
  t.assert.strictEqual(cache.get('a'), 1);
});
```
Before opening a pull request:

- Run the tests (`npm run test`)
- Lint and format (`npm run lint && npm run format`)
- Check coverage (`npm run coverage`)