SetBERT: Enhancing Retrieval Performance for Boolean Logic and Set Operation Queries

Quan Mai, Susan Gauch, Douglas Adams

TL;DR

This work proposes an innovative use of inversed-contrastive loss, which focuses on identifying the negative sentence, to fine-tune BERT on a dataset generated by prompting GPT. It also demonstrates that, unlike for other BERT-based models, fine-tuning with triplet loss actually degrades performance on this task.

Abstract

We introduce SetBERT, a fine-tuned BERT-based model designed to enhance query embeddings for set operations and Boolean logic queries, such as Intersection (AND), Difference (NOT), and Union (OR). SetBERT significantly improves retrieval performance for logic-structured queries, an area where both traditional and neural retrieval methods typically underperform. We propose an innovative use of inversed-contrastive loss, focusing on identifying the negative sentence, and fine-tune BERT with a dataset generated by prompting GPT. Furthermore, we demonstrate that, unlike other BERT-based models, fine-tuning with triplet loss actually degrades performance for this specific task. Our experiments reveal that SetBERT-base not only significantly outperforms BERT-base (achieving up to a 63% improvement in Recall) but also achieves performance comparable to the much larger BERT-large model, despite being only one-third the size.
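The paper's exact formulation is not reproduced on this page, so the following is only a minimal PyTorch sketch of one plausible reading of the inversed-contrastive objective: where standard contrastive training classifies the positive among a set of candidates, the inverted variant trains the encoder to single out the negative sentence instead. The function name, temperature value, and tensor shapes are all assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def inversed_contrastive_loss(anchor, candidates, neg_index, temperature=0.05):
    # anchor:     (d,)   embedding of the gold/anchor sentence
    # candidates: (k, d) embeddings of the positive and negative sentences
    # neg_index:  int, position of the negative sentence within `candidates`
    sims = F.cosine_similarity(anchor.unsqueeze(0), candidates, dim=-1)  # (k,)
    # Standard contrastive training classifies which candidate is the
    # positive; here the objective is inverted so the model instead
    # learns to identify the negative (the least anchor-like sentence).
    logits = (-sims / temperature).unsqueeze(0)  # (1, k)
    target = torch.tensor([neg_index], device=logits.device)
    return F.cross_entropy(logits, target)
```

On a Figure 1-style sample, `candidates` would stack the embeddings of the positive and negative sentences, with `neg_index` marking the negative's position.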

Paper Structure

This paper contains 24 sections, 3 equations, 1 figure, and 2 tables.

Figures (1)

  • Figure 1: A generated sample for intersection (AND). The gold (anchor) sentence is highlighted in yellow, positive sentences in green, and negative sentences in red.
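To make the layout in Figure 1 concrete, here is a hedged sketch of how one generated intersection sample might be represented; the field names and example sentences below are hypothetical illustrations, not drawn from the paper's actual GPT-generated dataset.

```python
# Hypothetical structure of one generated AND sample, mirroring Figure 1:
# one gold/anchor sentence, several positives, and the negatives the
# inversed-contrastive objective must learn to identify.
sample = {
    "operation": "AND",
    "anchor": "papers about transformers and information retrieval",
    "positives": [
        "a study applying transformer encoders to document retrieval",
        "retrieval systems built on transformer-based rankers",
    ],
    "negatives": [
        "a study of transformer models for machine translation",  # fails "retrieval"
        "a survey of classic keyword-based retrieval methods",    # fails "transformers"
    ],
}
```

Under this reading, each negative satisfies only one of the two conjuncts, which is precisely what an intersection (AND) query must learn to exclude.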