Cooking Up Risks: Benchmarking and Reducing Food Safety Risks in Large Language Models

Weidi Luo, Xiaofei Wen, Tenghao Huang, Hongyi Wang, Zhen Xiang, Chaowei Xiao, Kristina Gligorić, Muhao Chen

Abstract

Large language models (LLMs) are increasingly deployed for everyday tasks, including food preparation and health-related guidance. However, food safety remains a high-stakes domain where inaccurate or misleading information can cause severe real-world harm. Despite these risks, current LLMs and safety guardrails lack rigorous alignment tailored to domain-specific food hazards. To address this gap, we introduce FoodGuardBench, the first comprehensive benchmark comprising 3,339 queries grounded in FDA guidelines, designed to evaluate the safety and robustness of LLMs. By constructing a taxonomy of food safety principles and employing representative jailbreak attacks (e.g., AutoDAN and PAP), we systematically evaluate existing LLMs and guardrails. Our evaluation reveals three critical vulnerabilities. First, current LLMs exhibit sparse safety alignment in the food domain and easily succumb to a few canonical jailbreak strategies. Second, when compromised, LLMs frequently generate actionable yet harmful instructions, inadvertently empowering malicious actors and posing tangible risks. Third, existing LLM-based guardrails systematically overlook these domain-specific threats, failing to detect a substantial volume of malicious inputs. To mitigate these vulnerabilities, we introduce FoodGuard-4B, a specialized guardrail model fine-tuned on our datasets to safeguard LLMs in food-related domains.

Figures (7)

  • Figure 1: Consumer survey results indicate substantial openness to GenAI for food-related assistance, with meal planning and menu suggestions (47%), personalized nutrition and diet plans (45%), and grocery budgeting support (41%) among the most accepted use cases; only 15% of respondents selected "none of the above." Source: PwC Voice of the Consumer Survey 2025.
  • Figure 2: Data generation pipeline. To construct FoodGuardBench, we first derive seed safety principles from the FDA food safety taxonomy and regulations, such as contamination and temperature control. Next, we generate a broad spectrum of benign and harmful queries by injecting benign or malicious user intents into these seed principles. Finally, we apply similarity constraints coupled with manual review to ensure the quality and structural diversity of the final dataset. (A minimal code sketch of the similarity-filtering step follows this list.)
  • Figure 3: Comparison with existing benchmarks.
  • Figure 4: t-SNE visualization of the dataset distribution.
  • Figure 5: Results of human evaluation of LLM responses. The majority of models provide effective information in response to malicious queries.
  • ...and 2 more figures
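
The similarity-constraint step in the data pipeline (Figure 2) is described only in prose, so the following is a minimal sketch of how such a filter could work: embed every generated query, then greedily drop any query that sits too close to one already kept. The encoder (all-MiniLM-L6-v2), the 0.9 cosine threshold, and the dedup_queries helper are illustrative assumptions, not the authors' actual settings.

```python
# Hypothetical sketch of the similarity-constraint filter in the
# FoodGuardBench data pipeline (Figure 2). The encoder and the 0.9
# threshold are assumptions for illustration, not the paper's settings.
from sentence_transformers import SentenceTransformer
import numpy as np

def dedup_queries(queries, threshold=0.9):
    """Greedily keep queries whose cosine similarity to every
    already-kept query stays below `threshold`."""
    model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed encoder
    # Normalized embeddings make cosine similarity a plain dot product.
    emb = model.encode(queries, normalize_embeddings=True)
    kept_idx = []
    for i in range(len(queries)):
        if kept_idx and (emb[kept_idx] @ emb[i]).max() >= threshold:
            continue  # near-duplicate of a kept query; drop it
        kept_idx.append(i)
    return [queries[i] for i in kept_idx]

if __name__ == "__main__":
    seeds = [
        "How long can raw chicken sit out at room temperature?",
        "Is it safe to leave raw chicken out for a few hours?",
        "What internal temperature kills Salmonella in poultry?",
    ]
    # The second query should be filtered as a near-duplicate of the first.
    print(dedup_queries(seeds))
```

Greedy filtering like this is order-dependent, which is one reason the pipeline pairs the automatic similarity constraint with manual review to catch borderline cases.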