Beyond Text-to-SQL for IoT Defense: A Comprehensive Framework for Querying and Classifying IoT Threats
Ryan Pavlich, Nima Ebadi, Richard Tarbell, Billy Linares, Adrian Tan, Rachael Humphreys, Jayanta Kumar Das, Rambod Ghandiparsi, Hannah Haley, Jerris George, Rocky Slavin, Kim-Kwang Raymond Choo, Glenn Dietrich, Anthony Rios
TL;DR
The paper addresses a gap in natural language interfaces to databases (NLIDB) by integrating data querying with inference over the returned results in an IoT context. It introduces the IoT-SQL dataset, combining 10,985 text-SQL pairs with 239,398 network-traffic rows from IoT-23 and smart-building sensors, and proposes a two-stage pipeline that jointly learns to generate SQL and classify the returned data as malicious or benign. Empirical results show that joint training substantially improves text-to-SQL generation (e.g., a base model matching a larger model when trained jointly) while exposing limitations of large LLMs such as GPT-3.5 in domain-specific reasoning. The work provides a new testbed combining tabular QA, temporal queries, and security analytics, with implications for practical IoT defense and the development of more capable domain-aware NLIDB systems.
Abstract
Recognizing the promise of natural language interfaces to databases, prior studies have emphasized the development of text-to-SQL systems. While substantial progress has been made in this field, existing research has concentrated on generating SQL statements from text queries. The broader challenge, however, lies in inferring new information from the returned data. Our research makes two major contributions to address this gap. First, we introduce a novel Internet-of-Things (IoT) text-to-SQL dataset comprising 10,985 text-SQL pairs and 239,398 rows of network traffic activity. The dataset contains query types underrepresented in prior text-to-SQL datasets, notably temporal queries. Our dataset is sourced from a smart building's IoT ecosystem, covering sensor readings and network traffic data. Second, our dataset enables two-stage processing, where the data (network traffic) returned by a generated SQL query can be classified as malicious or benign. Our results show that joint training to query and infer information about the data can improve overall text-to-SQL performance, nearly matching that of substantially larger models. We also show that current large language models (e.g., GPT-3.5) struggle to infer new information about returned data; our dataset thus provides a novel test bed for integrating complex domain-specific reasoning into LLMs.
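To make the two-stage design concrete, the following is a minimal, hypothetical sketch of such a pipeline: stage one maps a natural-language question to SQL (stubbed here, where the paper would use a trained text-to-SQL model), and stage two classifies the returned network-traffic rows as malicious or benign (a placeholder rule stands in for a learned classifier). The schema, function names, and thresholds are illustrative assumptions, not details from the paper.

```python
import sqlite3

def generate_sql(question: str) -> str:
    """Stage 1: text-to-SQL. A trained generation model would go here;
    this stub handles one canned question for demonstration."""
    if "last hour" in question:
        return ("SELECT src_ip, dst_port, bytes_sent FROM traffic "
                "WHERE ts >= 3600")
    raise ValueError("unsupported question")

def classify_rows(rows):
    """Stage 2: label each returned row. A learned classifier would go
    here; this placeholder flags high-volume flows to an unusual port."""
    return ["malicious" if port == 4444 and sent > 10_000 else "benign"
            for _src, port, sent in rows]

def pipeline(question: str, conn) -> list:
    """Run both stages: generate SQL, execute it, classify the results."""
    rows = conn.execute(generate_sql(question)).fetchall()
    return classify_rows(rows)

# Toy in-memory database standing in for the IoT network-traffic table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE traffic "
             "(ts INT, src_ip TEXT, dst_port INT, bytes_sent INT)")
conn.executemany("INSERT INTO traffic VALUES (?, ?, ?, ?)", [
    (3700, "10.0.0.5", 443, 1200),     # ordinary HTTPS flow
    (3800, "10.0.0.9", 4444, 50000),   # suspicious high-volume flow
])
```

Joint training, as described in the abstract, would share parameters or a training objective across both stages rather than keeping them as the independent functions shown here.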
