Automated Malware Family Classification using Weighted Hierarchical Ensembles of Large Language Models

Samita Bai, Hamed Jelodar, Tochukwu Emmanuel Nwankwo, Parisa Hamedi, Mohammad Meymani, Roozbeh Razavi-Far, Ali A. Ghorbani

Abstract

Malware family classification remains a challenging task in automated malware analysis, particularly in real-world settings characterized by obfuscation, packing, and rapidly evolving threats. Existing machine learning and deep learning approaches typically depend on labeled datasets, handcrafted features, supervised training, or dynamic analysis, which limits their scalability and effectiveness in open-world scenarios. This paper presents a zero-label malware family classification framework based on a weighted hierarchical ensemble of pretrained large language models (LLMs). Rather than relying on feature-level learning or model retraining, the proposed approach aggregates decision-level predictions from multiple LLMs with complementary reasoning strengths. Model outputs are weighted using empirically derived macro-F1 scores and organized hierarchically, first resolving coarse-grained malicious behavior before assigning fine-grained malware families. This structure enhances robustness, reduces individual model instability, and aligns with analyst-style reasoning.
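The decision-level aggregation described in the abstract can be sketched as a two-stage weighted vote: models first vote on the coarse-grained behavior using their macro-F1 scores as weights, and only models agreeing with the winning coarse label then vote on the fine-grained family. The model names, labels, and weights below are illustrative placeholders, not values from the paper.

```python
from collections import defaultdict

def weighted_hierarchical_vote(predictions, weights):
    """Aggregate per-model (coarse, fine) label predictions.

    predictions: {model_name: (coarse_label, fine_label)}
    weights:     {model_name: macro_f1_weight}
    """
    # Stage 1: weighted vote on the coarse-grained behavior.
    coarse_scores = defaultdict(float)
    for model, (coarse, _) in predictions.items():
        coarse_scores[coarse] += weights.get(model, 0.0)
    top_coarse = max(coarse_scores, key=coarse_scores.get)

    # Stage 2: fine-grained vote restricted to models whose
    # coarse prediction matched the stage-1 winner.
    fine_scores = defaultdict(float)
    for model, (coarse, fine) in predictions.items():
        if coarse == top_coarse:
            fine_scores[fine] += weights.get(model, 0.0)
    top_fine = max(fine_scores, key=fine_scores.get)
    return top_coarse, top_fine

# Hypothetical predictions and macro-F1 weights for illustration.
preds = {
    "model_a": ("trojan", "emotet"),
    "model_b": ("trojan", "trickbot"),
    "model_c": ("ransomware", "lockbit"),
}
w = {"model_a": 0.71, "model_b": 0.64, "model_c": 0.58}
print(weighted_hierarchical_vote(preds, w))  # → ('trojan', 'emotet')
```

Resolving the coarse level first means a single model's divergent family guess cannot override the consensus behavior class, which is the robustness property the abstract attributes to the hierarchical structure.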

Paper Structure

This paper contains 44 sections, 6 equations, 6 figures, 7 tables, 1 algorithm.

Figures (6)

  • Figure 1: A sample from SBAN dataset.
  • Figure 2: Zero-shot malware family classification prompt used to elicit decision-level predictions from pretrained large language models.
  • Figure 3: Proposed zero-shot LLM ensemble pipeline for malware family classification. A shared classification prompt is applied to multiple LLMs, followed by normalization, weighted hierarchical ensembling, and gold-based evaluation.
  • Figure 4: Accuracy comparison of individual LLMs and ensemble strategies on the 200-sample gold standard. The final weighted hierarchical ensemble achieves the highest overall accuracy.
  • Figure 5: Prompt sensitivity (ensemble output): Accuracy and Macro-F1 of FinalLabel across prompts P1--P5.
  • ...and 1 more figure