The Stochastic Complexity of Spin Models
Academic Video Service
  • Release Date: 2021-11-03
  • Keywords: statistical inference, model complexity, minimum description length
Video Introduction

This video is adapted from the article 10.3390/e20100739.

Models can be simple for different reasons: because they yield a simple and computationally efficient interpretation of a generic dataset (e.g., in terms of pairwise dependencies), as in statistical learning, or because they capture the laws of a specific phenomenon, as in physics, leading to non-trivial falsifiable predictions. In information theory, the simplicity of a model is quantified by its stochastic complexity, which measures the number of bits needed to encode its parameters. To understand what simple models look like, the researchers study the stochastic complexity of spin models with interactions of arbitrary order. They show that bijections within the space of possible interactions preserve the stochastic complexity, which allows the space of all models to be partitioned into equivalence classes. They find that the simplicity of a model is determined not by the order of its interactions, but by their mutual arrangement. Models in which statistical dependencies are localized on non-overlapping groups of few variables are simple, affording predictions on independencies that are easy to falsify. By contrast, fully connected pairwise models, which are often used in statistical learning, turn out to be highly complex because of their extended set of interactions, and they are hard to falsify.
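The following is a minimal Python sketch of the equivalence-class idea, not taken from the paper: the toy models, the specific bijection, and all names are illustrative assumptions. It shows that a bijection of the spin configurations can map one set of interactions onto another while leaving the family of distributions, and hence the stochastic complexity, unchanged.

```python
import itertools

import numpy as np

n = 3  # number of spins in this toy example
configs = [np.array(c) for c in itertools.product([-1, 1], repeat=n)]


def op(s, mask):
    """Spin operator: product of the spins selected by the bitmask."""
    p = 1
    for i in range(len(s)):
        if (mask >> i) & 1:
            p *= s[i]
    return p


def distribution(model, couplings):
    """Boltzmann distribution P(s) proportional to exp(sum_mu g_mu * phi_mu(s))."""
    weights = np.array([
        np.exp(sum(g * op(s, m) for g, m in zip(couplings, model)))
        for s in configs
    ])
    return weights / weights.sum()


def gauge(s):
    """A bijection of {-1,+1}^3: (s0, s1, s2) -> (s0, s1, s0*s1*s2).

    It is an involution and maps spin operators onto spin operators.
    """
    return (s[0], s[1], s[0] * s[1] * s[2])


# Model A: a single third-order interaction s0*s1*s2 (mask 0b111).
# Model B: a single field on the third spin (mask 0b100).
# Under the bijection, op(gauge(s), 0b100) == op(s, 0b111) for every s,
# so the two models define the same set of distributions and therefore
# share the same stochastic complexity.
model_a, model_b = [0b111], [0b100]
g = [0.7]  # an arbitrary coupling value, just for the check

index = {tuple(s): k for k, s in enumerate(configs)}
p_a = distribution(model_a, g)
p_b = distribution(model_b, g)

# P_A(s) equals P_B(gauge(s)) for every configuration s.
assert np.allclose(p_a, [p_b[index[gauge(s)]] for s in configs])
print("Models A and B are related by a bijection: same distributions.")
```

In this toy example the single high-order interaction is exactly as simple as a single field, which mirrors the paper's point that complexity depends on how interactions are arranged rather than on their order.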
