The beauty of Bayesian Learning

Understanding and visualizing Bayesian concepts


In this post, we will build intuition behind Bayes’ theorem via an interactive visualization and then observe certain properties of such a probabilistic formulation of Machine Learning. It turns out that most advanced approaches in fact run on top of these very simple foundational concepts, which help us model problems quite beautifully.

Probability theory is ubiquitous in Machine Learning and is perhaps the most important tool for future breakthroughs. By the end of this post, the hope is that you will have gained just enough insight into the Bayesian world-view.

This post has an accompanying Bayesian Learning Demo. See here!

Learning Theory

In itself, the idea of “learning” is not well defined. But, as with any physical system, we build a mathematical model of how “learning” should be quantified. As you might already imagine, there cannot possibly exist a deterministic view of “learning”, given the subjective nature of the matter. In light of this uncertainty, it is natural that we turn to probability theory to seek answers to our questions.

In learning theory, a “concept” is something that we wish to learn given some data. A “concept” is not concretely realizable; instead, we wish to learn it from a big collection of concepts called a “concept class”. For instance, imagine I provide you a list of numbers and ask you what “concept” these numbers represent. You’d make educated guesses from a certain list of concepts you had in mind, for instance even/odd numbers, multiples of 3, or more complicated ones like odd powers of 2. The possibilities are endless, which makes enumerating every concept practically impossible. Instead, we work around this complication and arrive at a mutually agreed upon set of “concepts” to form the “hypothesis” set. While the size of this set may be infinite as well, its nature ideally tends to be more approachable and computationally tractable.

For the more mathematically inclined, it is fruitful to view a hypothesis as just another kind of function which, when fed with data, provides meaningful inference. For instance, to distinguish cats and dogs, let us say you chose to build an SVM classifier. In essence, you have implicitly arrived at a choice of your hypothesis set - the set of all hyperplanes that separate the data under (relaxed) margin constraints.

Our hope is that this hypothesis set is a “close” enough representation of the true concept class so as to minimize the “generalization error” on unseen data. The true nature of “close” and “generalization error” is out of scope for this post. But to make things concrete, I would like to present a very simple yet powerful result in learning theory:

$$\underset{S \sim D^m}{P}\left[R(h) \leq \epsilon\right] \geq 1 - \delta$$

In plain words: with confidence at least $1-\delta$, the “true risk” $R(h)$ will be within the limits of a fixed positive constant $\epsilon$. Risk, in simple words, measures how inaccurate your learning was with respect to new unseen data.

The above equation is dense with knowledge and has various other forms under relaxed constraints. But, for the sake of brevity I will defer that discussion, perhaps for another day. The takeaway here is that we do have a concrete quantification of both the risk and the uncertainty in that risk.
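To make this concrete, one standard instantiation from the PAC literature (stated here without proof, for a finite hypothesis set and a learner that perfectly fits the training data) says the guarantee above holds whenever the sample size satisfies

$$m \geq \frac{1}{\epsilon}\left(\ln|\mathcal{H}| + \ln\frac{1}{\delta}\right)$$

so larger hypothesis sets and stricter confidence requirements both demand more data, which matches intuition.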

Bayesian Learning

In hindsight, Bayes rule (discussed later) fits quite naturally when modelling human cognition. To motivate the intuition, imagine how a child’s “concept” of classifying objects based on earlier observations gets strengthened every time the objects are observed.

I will put it out right away that, like all modelled physical systems, theory doesn’t always lead to tractable solutions, and only under reasonable assumptions do we solve the learning problem. Nevertheless, the Bayesian approach is theoretically well-motivated. More importantly, it works!

Let us take a look at the Bayes equation which I’m sure is nothing new. In machine learning, this factorization is a boon for the theorists and occasionally a bane for the practitioners.

$$P(A \mid B) = \frac{P(B \mid A)P(A)}{P(B)}$$

I will defer the explanation and instead invite you to play the number game.

The Number Game

The game is very simple. I will give you a set of numbers and a universe of concepts from which these numbers could be coming. Your job is to guess which concept these numbers belong to. We also have our “Bayesian” friend playing this game along with us.

Experimental Setup

For the sake of this game, I will limit all numbers between 1 and 100, and the universe of concepts consists of sets such as even and odd numbers, perfect squares, multiples of a given number, numbers ending in a given digit, and powers of a given number.

In total, you have just 32 concepts in this universe of numbers. I call this $\mathcal{H}$.

Experiment Trials

I give you $\mathcal{D} = [16]$. Well, you’d tell me that this could be anything: an even number, a perfect square, a number ending in 6, and so on. And you are right; our Bayesian friend would be just as confused as you are. This is quantified by the fact that when that friend tells us the predictions for each hypothesis, the uncertainty turns out to be quite high, given that many hypotheses carry comparably high probabilities.

In this trial, we will quantify the “likelihood” that a dataset comes from a given hypothesis as

$$P(\mathcal{D} \mid h) = \frac{1}{|h|}$$

We read this as the “likelihood” of the dataset given hypothesis $h$. It quantifies how probable it is that the dataset was generated under a given hypothesis.

To keep things simple, I’ve defined it as the probability of picking a number uniformly at random from the hypothesis’s set of numbers. For instance,

$$P(\mathcal{D} \mid h_{even}) = \frac{1}{50}$$

is the likelihood of picking a single number from the set of 50 even numbers between 1 and 100 (our universe of numbers defined earlier). Extending this to a dataset of $m$ even numbers, since each choice is independent of the others, we simply have

$$P(\mathcal{D}^m \mid h_{even}) = \left( \frac{1}{50} \right)^m$$

Let us compare this to, say, the likelihood of $\mathcal{D}$ being from a power-of-four set,

$$P(\mathcal{D} \mid h_{\text{power of 4}}) = \frac{1}{3}$$

This equation tells us that the likelihood of 16 being from the set of powers of 4 is considerably larger than it being from the set of even numbers. If we augment the dataset to $\mathcal{D}^\prime = [4,16]$, it leads to a likelihood choice between $\frac{1}{50^2}$ and $\frac{1}{3^2}$. Those odds escalated quickly!
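To make this “size principle” concrete, here is a minimal Python sketch (the hypothesis sets and function name are illustrative choices, not the demo’s actual code):

```python
# Minimal sketch of the size-principle likelihood: numbers are assumed
# to be drawn uniformly and independently from the hypothesis set.

def likelihood(data, h):
    """P(D | h) = (1 / |h|)^m if every number in D lies in h, else 0."""
    if any(x not in h for x in data):
        return 0.0
    return (1.0 / len(h)) ** len(data)

evens = set(range(2, 101, 2))   # the 50 even numbers between 1 and 100
powers_of_4 = {4, 16, 64}       # powers of 4 up to 100

print(likelihood([16], evens))           # 1/50   = 0.02
print(likelihood([16], powers_of_4))     # 1/3    ~ 0.333
print(likelihood([4, 16], evens))        # 1/50^2 = 0.0004
print(likelihood([4, 16], powers_of_4))  # 1/3^2  ~ 0.111
```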

This is in fact attributed to a principle known as Occam’s razor, which can be paraphrased as: all else being equal, a simpler (in this case, smaller) explanation is often the better one.

Coming back to our original data, we were confused among different competing hypotheses. How do we solve this? Prior knowledge is the key. Your prior experience with numbers would tell you that you’ve seen even numbers far more often than sets as specific as powers of 4. However, somebody else might have had the opposite experience so far and would hold different “priors”.

To accommodate those experiences, we use a product of the “likelihood” and the “prior” to form the “posterior”.

$$\text{posterior} \propto \text{likelihood} \times \text{prior}$$

In this context,

$$P(h \mid \mathcal{D}) \propto P(\mathcal{D} \mid h) \times P(h)$$

The probability that the concept is $h$ “given” that we have seen data $\mathcal{D}$ is proportional to the likelihood of the data conditioned on that hypothesis $h$, scaled by a prior factor. The proportionality is removed by a normalization constant comprising the “evidence”.

Let us say the prior for even numbers is $P(h_{even}) = 0.5$ and $P(h_{\text{power of 4}}) = 0.2$. Our posterior scores (before normalization) would now be reported as $P(h_{even} \mid \mathcal{D}) \propto 0.5 \times \frac{1}{50} = 0.01$ and $P(h_{\text{power of 4}} \mid \mathcal{D}) \propto 0.2 \times \frac{1}{3} \approx 0.067$. I guess I’d be better off choosing the power-of-4 set as my concept than the set of even numbers. It should be easy to see that changing the values of the prior would lead me to different results.
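As a quick check, here is a self-contained sketch repeating the likelihood above and adding the priors quoted in this paragraph (the dictionary keys are illustrative):

```python
# Combine the size-principle likelihood with the priors quoted above.
def likelihood(data, h):
    if any(x not in h for x in data):
        return 0.0
    return (1.0 / len(h)) ** len(data)

hypotheses = {"even": set(range(2, 101, 2)), "power of 4": {4, 16, 64}}
priors = {"even": 0.5, "power of 4": 0.2}

data = [16]
scores = {name: likelihood(data, h) * priors[name]
          for name, h in hypotheses.items()}
print(scores)  # {'even': 0.01, 'power of 4': 0.0667} -- the values in the text

# Dividing by the evidence (the sum of the scores) gives proper probabilities.
Z = sum(scores.values())
print({name: s / Z for name, s in scores.items()})  # even ~ 0.13, power of 4 ~ 0.87
```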

The subjective nature of the distribution of likelihoods and priors is in fact where we encode our underlying assumptions about the concept. To play with different data and probability configurations, head over to the Interactive Bayesian Learning Demo. Note how adding more data makes our “Bayesian” friend more confident about the concept just like you would be.

The Mathematical Model

Having set our intuition right, it is time to delve into the equations and realize some properties of the Bayesian approach. We will now complete our Bayes rule for this problem:

$$P(h \mid \mathcal{D}) = \frac{P(\mathcal{D} \mid h)P(h)}{\sum_{f \in \mathcal{H}} P(\mathcal{D} \mid f)P(f)}$$

where $P(\mathcal{D} \mid h) = \mathbb{I}(\mathcal{D} \in \mathcal{E}(h)) / |h|^m$. Here $\mathcal{E}(h)$ denotes the extension of $h$ (the set of numbers consistent with it), and $\mathbb{I}$ is an indicator function which is $0$ if there exists some number in $\mathcal{D}$ not satisfied by $h$ and $1$ otherwise. The remaining term is the same as discussed earlier. This comes from a fairly natural intuition: if a number in the dataset does not satisfy the hypothesis $h$, then the likelihood of the data coming from that hypothesis should be zero.

For the priors P(h)P(h), I’ve chosen the standard odd/even numbers to have a high prior, sets of multiples and ending numbers a moderate prior and the more “unnatural” sets in the end to have a very low prior. This again is a natural choice under the knowledge of the problem setup.

Note that the choice of these likelihoods and priors is what affects the solution, which is why I invite you to try out different custom/preset distributions on the demo page.
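Putting the pieces together, here is a minimal sketch of this posterior over a tiny illustrative universe (the real demo has 32 concepts; the four sets and their priors below are assumptions chosen for readability):

```python
# Full posterior P(h | D) with the indicator/size-principle likelihood.
universe = {
    "even":          (set(range(2, 101, 2)), 0.40),  # (extension, prior)
    "odd":           (set(range(1, 101, 2)), 0.40),
    "multiple of 4": (set(range(4, 101, 4)), 0.15),
    "power of 4":    ({4, 16, 64},           0.05),
}

def posterior(data):
    """P(h | D) = I(D in E(h)) / |h|^m * P(h), normalized by the evidence."""
    unnorm = {}
    for name, (ext, prior) in universe.items():
        lik = (1.0 / len(ext)) ** len(data) if all(x in ext for x in data) else 0.0
        unnorm[name] = lik * prior
    Z = sum(unnorm.values())  # the "evidence"
    return {name: p / Z for name, p in unnorm.items()}

print(posterior([16]))         # mass spread across even / multiple of 4 / power of 4
print(posterior([4, 16, 64]))  # mass concentrates sharply on "power of 4"
```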

It turns out that with a “sufficient” amount of data (the term has precise definitions which I defer to another discussion), the posterior tends to peak at one single hypothesis, called the “maximum a posteriori” or “MAP” estimate, and tends to push all other probabilities down to zero. This is compactly written in the literature using the Dirac measure, a binary-valued function parametrized over a set: in this case it gives 1 when $h = \hat{h}^\text{MAP}$ and 0 otherwise.

$$P(h \mid \mathcal{D}) \to \delta_{\hat{h}^\text{MAP}}(h)$$

The MAP estimate is written in a more familiar form as

$$\hat{h}^\text{MAP} = \underset{h \in \mathcal{H}}{\text{argmax}}\, P(\mathcal{D} \mid h)P(h)$$

It is more convenient to write this in terms of the log-likelihood:

$$\hat{h}^\text{MAP} = \underset{h \in \mathcal{H}}{\text{argmax}} \left[ \log P(\mathcal{D} \mid h) + \log P(h) \right]$$
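In code, the log-domain MAP computation might look like the following sketch, reusing the illustrative universe defined earlier (again, the names and priors are assumptions):

```python
from math import log

def map_estimate(data):
    """argmax_h [log P(D|h) + log P(h)], skipping zero-likelihood hypotheses."""
    scores = {}
    for name, (ext, prior) in universe.items():
        if all(x in ext for x in data):  # otherwise the log-likelihood is -inf
            scores[name] = len(data) * log(1.0 / len(ext)) + log(prior)
    return max(scores, key=scores.get)

print(map_estimate([16]))            # 'power of 4' -- the size principle wins
print(map_estimate([2, 8, 10, 44]))  # 'even' -- the only surviving hypothesis
```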

Given that the prior stays constant, with enough data the likelihood term starts dominating. The MAP estimate then starts converging towards the maximum likelihood estimate, or “MLE”.

$$\hat{h}^\text{MLE} = \underset{h \in \mathcal{H}}{\text{argmax}}\, \log P(\mathcal{D} \mid h)$$

The takeaway here is that data overwhelms the prior. In retrospect, this is quite intuitive. To observe it, in the experiment trial above put $\mathcal{D} = [4,8,16,32]$ and see how suddenly the MAP estimate peaks at just a single hypothesis. Theoretically, we claim that the “truth is recoverable in the limit of infinite data”. This in fact forms the foundation of most machine learning solutions.

Now finally, to make our predictions on a new test data point $\hat{x}$, we run it through the Posterior Predictive Distribution. In this problem we get

$$P(\hat{x} \mid \mathcal{D}) = \sum_{h \in \mathcal{H}} P(y=1 \mid \hat{x}, h)\, P(h \mid \mathcal{D})$$

which is just simple Bayes Model Averaging to predict whether the new number $\hat{x}$ belongs to the learned concept or not.
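Continuing the sketch, the model-averaged prediction weights each hypothesis’s vote by its posterior; here I take $P(y=1 \mid \hat{x}, h)$ to be simply the indicator that $\hat{x}$ lies in $h$’s extension, an assumption consistent with the likelihood used above:

```python
def predictive(x_hat, data):
    """P(x_hat in concept | D): posterior-weighted vote over hypotheses."""
    post = posterior(data)  # from the earlier sketch
    return sum(p for name, p in post.items() if x_hat in universe[name][0])

print(predictive(64, [16]))  # 1.0   -- 64 fits every hypothesis consistent with 16
print(predictive(10, [16]))  # ~0.26 -- only the "even" hypothesis votes for 10
print(predictive(7,  [16]))  # 0.0   -- 7 fits none of the surviving hypotheses
```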

Applications

Whatever we’ve learned so far stays at the core of any probabilistic model. But there are a handful of further concerns we still have not addressed; I note down a few without going into further detail yet.

Conclusion

Most data in practice tends to exhibit some inherent structure. Bayesian modelling is a fairly powerful approach to capture those relationships via a paradigm of approaches known as Probabilistic Graphical Models. We’ve barely scratched the surface here, but this opens up avenues to understand a variety of fairly popular methods like Hidden Markov Models (HMMs) and Conditional Random Fields (CRFs). The references are highly recommended.

As an interesting side note, Bayesians have a strong tendency to reason about anything and everything through the lens of Bayes’ theorem. It shouldn’t be surprising now that, within reasonable limits, nearly everything can be modelled this way, and it is an interesting line of thought to consider.

Acknowledgements

This piece has been inspired heavily by the clear explanations of Kevin Murphy 1. Tenenbaum’s descriptions of Bayesian learning 2 were also helpful.

Footnotes

  1. Murphy, K. (2012). Machine learning: A probabilistic perspective. Adaptive Computation and Machine Learning series. MIT Press.

  2. Tenenbaum, J. B. (1999). A Bayesian framework for concept learning. PhD thesis, Massachusetts Institute of Technology.