
A First Step Towards Even More Sparse Encodings of Probability Distributions

Florian Andreas Marwitz, Tanya Braun, Ralf Möller

Abstract

Real-world scenarios can be captured with lifted probability distributions. However, distributions are usually encoded in a table or list, requiring an exponential number of values. Hence, we propose a method for extracting first-order formulas from probability distributions that requires significantly fewer values: it reduces the number of distinct values in a distribution and then extracts, for each value, a logical formula to be further minimized. This reduction and minimization increase the sparsity of the encoding while also generalizing the given distribution. Our evaluation shows that sparsity can increase immensely by extracting a small set of short formulas while preserving core information.
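The pipeline described above — collapse a distribution to fewer distinct values, then measure how much information the coarser encoding loses — can be sketched minimally. The rounding-based `reduce_values` below is an illustrative assumption standing in for the paper's actual value-reduction step, while `hellinger` implements the standard Hellinger distance used to compare the original and mapped distributions.

```python
import math

def hellinger(p, q):
    """Hellinger distance between two discrete distributions over the
    same support: (1/sqrt(2)) * sqrt(sum_i (sqrt(p_i) - sqrt(q_i))^2)."""
    return math.sqrt(
        sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))
    ) / math.sqrt(2)

def reduce_values(dist, decimals=1):
    """Illustrative value reduction (not the paper's method): round each
    probability to a coarse bucket, then renormalize. Fewer distinct
    values means one formula can cover many table entries."""
    rounded = [round(v, decimals) for v in dist]
    total = sum(rounded)
    return [v / total for v in rounded]

original = [0.42, 0.38, 0.11, 0.09]
reduced = reduce_values(original, decimals=1)
print(len(set(original)), len(set(reduced)))   # 4 distinct values -> 2
print(hellinger(original, reduced))            # small distance: little loss
```

Here four distinct probabilities collapse to two, so a table of four entries could in principle be encoded by two formulas, at the cost of a small Hellinger distance from the original distribution.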

Paper Structure

This paper contains 10 sections, 2 equations, 1 figure, and 2 tables.

Figures (1)

  • Figure 1: Hellinger distances from the original to the noised and mapped models

Theorems & Definitions (3)

  • Definition 1: Parfactor model
  • Definition 2: Hellinger distance
  • Definition 3: Markov logic network (MLN)