CompassDB: Pioneering High-Performance Key-Value Store with Perfect Hash
Jin Jiang, Dongsheng He, Yu Hu, Dong Liu, Chenfan Xiao, Hongxiao Bi, Yusong Zhang, Chaoqu Jiang, Zhijun Fu
TL;DR
CompassDB tackles the bottlenecks of LSM-tree KV stores on modern SSDs by introducing a Two-tier Perfect Hash Table (TPH) for $O(1)$ lookups and a piece-file-based storage layout with a Hash Range compaction model to curb write amplification. Built atop RocksDB, it uses a CPHash index that blends CHD and PTHash with SIMD acceleration to map keys into compact in-memory structures, allowing all indexing metadata to fit in memory. The system achieves significantly lower write/read amplification and latency, demonstrated by 2.5×–4× throughput gains over RocksDB and 5×–17× over PebblesDB across six workloads, plus 50%–85% reductions in average and 99th-percentile latency. These results suggest that perfect-hash indexed KV stores with controlled segmentation and delta-based compaction can deliver robust industrial-grade performance while preserving compatibility with existing RocksDB-based applications.
Abstract
Modern mainstream persistent key-value storage engines are built on Log-Structured Merge-tree (LSM-tree) designs, which optimize read/write performance by favoring sequential disk I/O. However, the advent of SSDs, with their significant improvements in bandwidth and IOPS, shifts the bottleneck from I/O to CPU. The high compaction cost and large read/write amplification associated with LSM-trees have become critical bottlenecks. In this paper, we introduce CompassDB, which uses a Two-tier Perfect Hash Table (TPH) design to significantly decrease read/write amplification and compaction costs. CompassDB employs a perfect hash algorithm for its in-memory index, at an average index cost of about 6 bytes per key-value pair. This compact index reduces lookup time complexity from $O(\log N)$ to $O(1)$ and lowers overall memory cost. Consequently, it allows more key-value pairs to be held in memory for reads, or provides additional memory for the memtable for writes, yielding substantial improvements in both throughput and latency. Our evaluation with the YCSB benchmark tool shows that CompassDB increases throughput by 2.5× to 4× compared to RocksDB, and by 5× to 17× compared to PebblesDB, across six typical workloads. CompassDB also reduces average and 99th-percentile read/write latency by 50% to 85% relative to RocksDB.
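To make the $O(\log N)\!\to\!O(1)$ lookup claim concrete, the sketch below illustrates the displacement idea behind CHD-style perfect hashing, one of the two schemes CPHash blends. This is a simplified, hypothetical illustration, not CompassDB's actual CPHash: `MinimalPerfectHash`, `_h`, and the bucket-load factor are all assumptions made for the example. Keys are grouped into buckets; each bucket searches for a displacement seed `d` such that every key in it lands in a free slot. A lookup then costs two hash evaluations and one table probe, with no key comparisons against other entries, and the per-key index state is just the bucket's displacement value.

```python
# Simplified CHD-style minimal perfect hash (illustrative only, not CPHash).
import hashlib

def _h(seed, key, mod):
    # Seeded hash: blake2b keyed by the seed, reduced modulo `mod`.
    digest = hashlib.blake2b(key.encode(), key=seed.to_bytes(8, "little")).digest()
    return int.from_bytes(digest[:8], "little") % mod

class MinimalPerfectHash:
    def __init__(self, keys):
        n = len(keys)
        m = max(1, n // 4)            # number of buckets (load factor is arbitrary here)
        self.n, self.m = n, m
        buckets = [[] for _ in range(m)]
        for k in keys:
            buckets[_h(0, k, m)].append(k)
        self.disp = [0] * m           # one displacement seed per bucket
        slots = [None] * n
        # Place larger buckets first (classic CHD construction order).
        for bi in sorted(range(m), key=lambda i: -len(buckets[i])):
            bucket, d = buckets[bi], 1
            while True:
                pos = [_h(d, k, n) for k in bucket]
                # Accept d if positions are distinct and all slots are still free.
                if len(set(pos)) == len(pos) and all(slots[p] is None for p in pos):
                    break
                d += 1
            for k, p in zip(bucket, pos):
                slots[p] = k
            self.disp[bi] = d
        self.slots = slots

    def index(self, key):
        # O(1) lookup: bucket hash, displacement read, one table probe.
        return _h(self.disp[_h(0, key, self.m)], key, self.n)
```

Construction is the expensive part (searching for displacements); lookups touch only the small displacement array, which is why a few bytes per key of in-memory index suffice once the table is built.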
