Languaging Bayesian Models

by Joe Thorley

I just watched a thoroughly entertaining and thought-provoking talk by Richard McElreath.

At the end he makes suggestions for a new Bayesian language that avoids such frequentist-centric terms as data, parameter, likelihood, and even prior or posterior!

This got me thinking…

Fundamentally, there are just two types of things in Bayesian modelling.

Let us call them variables and relationships.

The variables themselves can be known or unknown, with unknown variables differing in how uncertain they are.

The relationships (among variables) can be stochastic or deterministic. In contrast to stochastic relationships, deterministic ones introduce no uncertainty.

A Bayesian model can thus be simply conceived of as an arrangement of relationships and variables that is used to compute the uncertainty in unknown variables.
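As a concrete sketch of this framework, consider a coin-flip model. The model and all the names below are my own illustrative assumptions, not anything from the talk:

```python
# Known variable: the observed flips (1 = heads).
flips = [1, 0, 1, 1, 0, 1, 1, 1]

# Unknown variable: p, the long-run probability of heads.
# Its uncertainty is what the model is used to compute.

# Stochastic relationship: each flip ~ Bernoulli(p).
def flip_probability(flip, p):
    """Probability of one known flip given a candidate value of the unknown p."""
    return p if flip == 1 else 1 - p

# Deterministic relationship: the odds are a function of p
# that introduces no additional uncertainty.
def odds(p):
    return p / (1 - p)
```

Everything in the model is one of the four kinds of things: a known variable, an unknown variable, a stochastic relationship, or a deterministic one.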

I think this covers the basics.

Clearly, there are the questions of how variables become known (observed vs. assumed), how uncertainty is quantified (probability distributions), how stochastic relationships introduce uncertainty, and how the computations are performed. However, it seems to me that these can all be tackled with reference to the basic framework.

To help grok the new terms, I have put together the following table, which indicates the approximate equivalents.

| Bayesian | Frequentist |
|----------|-------------|
| Known Observed Variable | Data |
| Variable that must be Assumed Known to compute Uncertainties of Unknown Variables | Hyper-Prior |
| Uncertainty | Probability Distribution |
| Unknown Variable | Parameter |
| Stochastic Relationship | Sampling Distribution |
| Uncertainty in Unknown Variable without Known Observed Variables | Prior |
| Uncertainty in Unknown Variable with Known Observed Variables | Posterior |
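To show how these terms line up in an actual computation, here is a sketch using grid approximation on a coin-flip model. The model, the grid, and the flat prior are all assumptions chosen for illustration:

```python
# Known observed variable ("data"): coin flips, 1 = heads.
flips = [1, 0, 1, 1, 0, 1, 1, 1]

# Unknown variable ("parameter"): p, the probability of heads,
# evaluated on a grid of candidate values.
grid = [i / 100 for i in range(1, 100)]

# Uncertainty in the unknown variable WITHOUT the known observed
# variables ("prior"): flat here, so every candidate is equally plausible.
prior = [1.0 for _ in grid]

# Stochastic relationship ("sampling distribution"):
# each flip ~ Bernoulli(p).
def sampling_prob(p):
    prob = 1.0
    for flip in flips:
        prob *= p if flip == 1 else 1 - p
    return prob

# Uncertainty in the unknown variable WITH the known observed
# variables ("posterior"), via Bayes' rule on the grid.
unnormalized = [pr * sampling_prob(p) for pr, p in zip(prior, grid)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

# The posterior concentrates near the observed proportion of heads (6/8).
best = grid[posterior.index(max(posterior))]
print(best)  # → 0.75
```

Nothing in the code requires the frequentist vocabulary: each object is just a known variable, an unknown variable, a relationship, or an uncertainty.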

Thoughts?