Sensitivity and Zero Priors

fieldnotes
2 min read · Jun 21, 2021

What do we mean by sensitivity?

Sensitivity is a measure of how much the choice of prior affects the resulting posterior distribution.

A general rule of thumb: when looking at Bayes' rule, the component (either likelihood or prior) with the more extreme LOWER value (the one closest to zero) will affect the posterior the most, since the two are multiplied together.
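We can see this multiplication effect with a tiny sketch (the numbers below are made up for illustration): because posterior mass is proportional to prior × likelihood, whichever factor sits closest to zero dominates the result.

```python
# Two hypotheses with equal prior belief, but very different likelihoods.
# Posterior is proportional to prior * likelihood, so the near-zero
# likelihood of hypothesis 1 drags its posterior toward zero.
priors = [0.5, 0.5]          # equal prior belief in both hypotheses
likelihoods = [0.9, 0.001]   # the data strongly favors hypothesis 0

unnormalized = [p * l for p, l in zip(priors, likelihoods)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

print(posterior)  # hypothesis 1's tiny likelihood forces a tiny posterior
```

Swap the roles (extreme prior, moderate likelihood) and the same thing happens in reverse: the component nearest zero wins.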

Now,

It is also true that the MORE data we have, the LESS effect the prior has.

In contrast, the LESS data we have, the GREATER the effect the prior has.
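A quick sketch of this, using a conjugate Beta-Binomial model (the priors and counts below are illustrative, not from the article): compare how far apart two very different priors leave the posterior mean, for a small dataset versus a large one.

```python
# Beta(a, b) prior + Binomial data (k successes in n trials)
# gives a Beta(a + k, b + n - k) posterior; its mean is (a + k) / (a + b + n).
def posterior_mean(a, b, successes, n):
    return (a + successes) / (a + b + n)

# Two quite different priors: a flat Beta(1, 1) vs a strongly skewed Beta(20, 2).
# Same 70% success rate observed in both datasets.
small_gap = abs(posterior_mean(1, 1, 7, 10) - posterior_mean(20, 2, 7, 10))
large_gap = abs(posterior_mean(1, 1, 700, 1000) - posterior_mean(20, 2, 700, 1000))

print(small_gap, large_gap)  # the gap shrinks as the data grows
```

With only 10 observations the two priors pull the posterior means noticeably apart; with 1000 observations they land almost on top of each other.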

---

How can we reason about this?

Let’s consider that the “data” or “current data” we refer to above can be replaced with the concept of likelihood.

Likelihoods represent the influence of the most up-to-date data of interest.

If we have MORE data, FEWER parameter values could plausibly have produced the individual entries of the dataset, aka, a more restricted, constricted likelihood.

Alternatively, if we have LESS data to go off of, MORE parameter values are consistent with the current dataset, so the likelihood becomes more broad and expanded.

Here are some illustrations that describe what we have covered so far:

So what makes a model LESS SENSITIVE or STRONG?

A STRONG model is LESS SENSITIVE to the choice of prior. Aka, there is MORE current data to inform the likelihood.

---

Zero Priors

Hang tight! Just one more thing before we wrap up priors…

A zero-valued prior will always affect the posterior.

  • A zero-valued prior across a parameter range ALWAYS results in a corresponding zero posterior probability.
  • Priors represent a sense of “subjectivity”, so by choosing a zero-valued prior, we are assuming that the event is entirely IMPOSSIBLE (this requires a certain degree of surety).
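The first bullet follows directly from the multiplication in Bayes’ rule, as this sketch shows (the grid, the excluded range, and the 2-successes-in-10-trials likelihood are all made up for illustration): no amount of data can resurrect a region the prior has zeroed out.

```python
# Grid approximation of a posterior where the prior is zero below 0.5,
# even though the data (2 successes in 10 trials) favors theta near 0.2.
grid = [i / 100 for i in range(101)]
prior = [0.0 if t < 0.5 else 1.0 for t in grid]       # rules out theta < 0.5
likelihood = [t**2 * (1 - t)**8 for t in grid]        # peaks at theta = 0.2

unnormalized = [p * l for p, l in zip(prior, likelihood)]
total = sum(unnormalized)
posterior = [u / total for u in unnormalized]

# The posterior is exactly zero wherever the prior was zero.
print(all(posterior[i] == 0.0 for i, t in enumerate(grid) if t < 0.5))
```

Zero times anything is zero, so the excluded region gets zero posterior mass no matter how strongly the likelihood favors it.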

Main Takeaway: If we want STRONG models, we shouldn’t necessarily rule out the use of highly informative or zero-valued, discontinuous priors, but rather we should aim to use priors that exert the least influence on the posterior. This can be achieved in a number of ways, particularly through either more data or a weaker prior.

Based on: “Priors,” A Student’s Guide to Bayesian Statistics, by Ben Lambert, SAGE, 2018.
