CompassDB: Pioneering High-Performance Key-Value Store with Perfect Hash

Jin Jiang, Dongsheng He, Yu Hu, Dong Liu, Chenfan Xiao, Hongxiao Bi, Yusong Zhang, Chaoqu Jiang, Zhijun Fu

TL;DR

CompassDB tackles the bottlenecks of LSM-tree KV stores on modern SSDs by introducing a Two-tier Perfect Hash Table (TPH) for $O(1)$ lookups and a piece-file based storage layout with a Hash Range compaction model to curb write amplification. Built atop RocksDB, it uses a CPHash index that blends CHD and PTHash with SIMD acceleration to efficiently map keys into compact in-memory structures, enabling all indexing metadata to fit in memory. The system achieves significantly lower write/read amplification and latency, demonstrated by 2.5×–4× throughput gains over RocksDB and 5×–17× over PebblesDB across six workloads, along with 50%–85% reductions in average and 99th-percentile read/write latency relative to RocksDB. These results suggest that perfect-hash indexed KV stores with controlled segmentation and delta-based compaction can provide robust industrial-grade performance while preserving compatibility with existing RocksDB-based applications.
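
For intuition only, here is a minimal Python sketch of the two-tier lookup path the TPH design implies: a global table routes a key (plus a small signature) to a piece file, and the piece file's local table resolves the slot offset. The dictionaries below stand in for the CHD/PTHash-based perfect hash tables CompassDB actually uses, and names such as TwoTierTable and PieceFile are our own, not from the paper.

    # Minimal sketch of a two-tier lookup, assuming a global table that maps a
    # key to (signature, piece_id) and a per-piece local table that maps the
    # key to a slot offset inside that piece file. Plain dictionaries are used
    # here for illustration; CompassDB uses CHD/PTHash perfect hash tables,
    # which are addressed by a hash of the key, so the signature is what
    # filters out keys that are not in the set.

    import hashlib

    def _sig(key: bytes) -> int:
        """1-byte signature used to reject mismatches before touching a piece."""
        return hashlib.blake2b(key, digest_size=1).digest()[0]

    class PieceFile:
        """Stands in for an on-disk piece file: slots addressed by offset."""
        def __init__(self):
            self.slots = []            # list of (key, value) pairs
            self.local_table = {}      # key -> slot offset (perfect hash in CompassDB)

        def put(self, key: bytes, value: bytes) -> None:
            self.local_table[key] = len(self.slots)
            self.slots.append((key, value))

    class TwoTierTable:
        """Global table routes a key to a piece; the piece resolves the slot."""
        def __init__(self, num_pieces: int = 4):
            self.pieces = [PieceFile() for _ in range(num_pieces)]
            self.global_table = {}     # key -> (signature, piece_id)

        def put(self, key: bytes, value: bytes) -> None:
            piece_id = hash(key) % len(self.pieces)
            self.global_table[key] = (_sig(key), piece_id)
            self.pieces[piece_id].put(key, value)

        def get(self, key: bytes):
            entry = self.global_table.get(key)
            if entry is None:
                return None
            sig, piece_id = entry
            if sig != _sig(key):       # signature mismatch: key cannot be present
                return None
            piece = self.pieces[piece_id]
            offset = piece.local_table[key]          # local perfect hash -> offset
            slot_key, value = piece.slots[offset]    # one read from the piece file
            return value if slot_key == key else None

    if __name__ == "__main__":
        t = TwoTierTable()
        t.put(b"K5", b"V5")
        assert t.get(b"K5") == b"V5"
        assert t.get(b"K9") is None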

Abstract

Modern mainstream persistent key-value storage engines utilize Log-Structured Merge tree (LSM-tree) based designs, optimizing read/write performance by leveraging sequential disk I/O. However, the advent of SSDs, with their significant improvements in bandwidth and IOPS, shifts the bottleneck from I/O to CPU. The high compaction cost and large read/write amplification associated with LSM trees have become critical bottlenecks. In this paper, we introduce CompassDB, which utilizes a Two-tier Perfect Hash Table (TPH) design to significantly decrease read/write amplification and compaction costs. CompassDB utilizes a perfect hash algorithm for its in-memory index, resulting in an average index cost of about 6 bytes per key-value pair. This compact index reduces the lookup time complexity from $O(\log N)$ to $O(1)$ and decreases the overall cost. Consequently, it allows for the storage of more key-value pairs for reads or provides additional memory for the memtable for writes. This results in substantial improvements in both throughput and latency. Our evaluation using the YCSB benchmark tool shows that CompassDB increases throughput by 2.5x to 4x compared to RocksDB, and by 5x to 17x compared to PebblesDB across six typical workloads. Additionally, CompassDB significantly reduces average and 99th percentile read/write latency, achieving a 50% to 85% reduction in comparison to RocksDB.
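
To put the roughly 6 bytes of index per key-value pair in perspective, a back-of-the-envelope estimate (ours, not a figure from the paper): for $N = 10^9$ entries the index occupies about $10^9 \times 6\,\mathrm{B} \approx 5.6\,\mathrm{GiB}$, small enough to stay resident in DRAM on a typical server, which is what lets lookups avoid reading index blocks from disk.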

Paper Structure

This paper contains 30 sections, 5 equations, 13 figures, and 2 tables.

Figures (13)

  • Figure 1: The space of $N$ keys is mapped to an array of slots of length $N \cdot c$, where each key is perfectly hashed to a unique slot. The length of the slot array is greater than the number of keys.
  • Figure 2: Each TPH occupies the position that an SST holds in RocksDB. Each TPH is composed of multiple piece files, each containing its own local hash table; the TPH also includes a global hash table that points to the piece file where the actual key is located.
  • Figure 3: K5 is mapped to the position with index 4 in the global hash table (indices start from 0), and its signature matches sign5, so the search continues. The piece holding K5 is identified as piece file 2. Within piece file 2's local hash table, the slot position for K5 is recalculated, yielding offset-4. We then read the slot value from piece file 2 at offset-4; if the slot's key matches, we have found the final value.
  • Figure 4: The sorted-key location array. The latest piece file records the offsets of the sorted keys within the current TPH, and interval sampling is used to improve scan performance.
  • Figure 5: TPH-1 contains search keys in the range [0, 1024); it overlaps with only two TPHs in the next level (a toy sketch of this range layout follows the figure list).
  • ...and 8 more figures
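
As a rough illustration of why a range layout like the one in Figure 5 bounds compaction fan-out, the following Python sketch (our own toy hash-space size and per-level fan-out, not CompassDB's parameters) shows that a TPH covering one range overlaps exactly two TPHs in the next level when each level doubles the number of TPHs:

    # Sketch of a hash-range layout (our illustration of the idea behind
    # Figure 5, not CompassDB's actual parameters): the hash space is split
    # into contiguous ranges per level, each level doubles the number of TPHs,
    # so any TPH overlaps exactly two TPHs in the next level and compaction
    # only has to merge it with that bounded set of successors.

    HASH_SPACE = 4096  # toy hash space; the caption's example range is [0, 1024)

    def tph_ranges(level: int):
        """Return the [start, end) range covered by each TPH at a level."""
        count = 2 ** level                 # number of TPHs doubles per level
        width = HASH_SPACE // count
        return [(i * width, (i + 1) * width) for i in range(count)]

    def overlapping(rng, next_level_ranges):
        """TPHs in the next level whose range intersects the given range."""
        lo, hi = rng
        return [r for r in next_level_ranges if r[0] < hi and lo < r[1]]

    if __name__ == "__main__":
        level2 = tph_ranges(2)             # 4 TPHs, each covering 1024 values
        level3 = tph_ranges(3)             # 8 TPHs, each covering 512 values
        tph1 = level2[0]                   # covers [0, 1024), like TPH-1 in Figure 5
        print(tph1, "overlaps", overlapping(tph1, level3))
        # -> (0, 1024) overlaps [(0, 512), (512, 1024)]: exactly two next-level TPHs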