
Conversational Control with Ontologies for Large Language Models: A Lightweight Framework for Constrained Generation

Barbara Gendron, Gaël Guibon, Mathieu d'Aquin

Abstract

Conversational agents based on Large Language Models (LLMs) have recently emerged as powerful tools for human-computer interaction. Nevertheless, their black-box nature poses challenges for predictability and personalization, both of which can be addressed by controlled generation. This work proposes an end-to-end method to obtain modular and explainable control over LLM outputs through ontological definitions of aspects related to the conversation. Key aspects are modeled and used as constraints; we then further fine-tune the LLM to generate content accordingly. To validate our approach, we explore two tasks that tackle two key conversational aspects: the English proficiency level and the polarity profile of the content. Using a hybrid fine-tuning procedure on seven state-of-the-art, open-weight conversational LLMs, we show that our method consistently outperforms pre-trained baselines, even on smaller models. Beyond quantitative gains, the framework remains model-agnostic, lightweight, and interpretable, enabling reusable control strategies that can be extended to new domains and interaction goals. This approach enhances alignment with strategy instructions and demonstrates the effectiveness of ontology-driven control in conversational systems.

Paper Structure

This paper contains 26 sections, 1 equation, 5 figures, and 2 tables.

Figures (5)

  • Figure 1: The proposed approach applied to both use-cases. Proficiency-Level Control involves a two-step process: first, quantitative CEFR criteria are obtained from the decision tree output; then, the ontology can be built.
  • Figure 2: A description of the data sources used in both use-cases.
  • Figure 3: FKGL distribution across CEFR levels using Llama3-8B pre-trained (Raw) and fine-tuned (CLM).
  • Figure 4: Example of the Proficiency-Level Control strategy, annotated with detected and target levels.
  • Figure 5: Implementation of the Polarity Profile Control conversation strategy, annotated with detected and target profiles.