MLE – The Maximum Likelihood Estimate

Maximum likelihood estimation (MLE) is a general method for finding the model parameters under which the observed data are most probable; it therefore addresses a central problem in data science. Depending on the model, the math behind MLE can be very involved, but an intuitive way to think about it is through the following thought experiment…
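The idea can be made concrete with a minimal sketch (a hypothetical coin-flip example, not from the text): given 7 heads in 10 flips, the MLE of the heads probability is the value of p that maximizes the binomial likelihood.

```python
import math

# Hypothetical data: 7 heads in 10 flips.
# The MLE of p maximizes L(p) = C(10, 7) * p^7 * (1 - p)^3.
n, k = 10, 7

def likelihood(p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# A crude grid search over candidate values of p stands in for calculus here.
candidates = [i / 100 for i in range(1, 100)]
mle = max(candidates, key=likelihood)
print(mle)  # 0.7 -- matching the analytic MLE k/n
```

For the binomial model the grid search agrees with the closed-form answer k/n; for richer models the same "pick the parameters that make the data most probable" logic is carried out numerically.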

Does the p-value overestimate the strength of evidence?

Thom Baguley points to the standardized, or minimum, LR (p. 381) to answer this question. The minimum LR represents a worst-case scenario for the null in that it compares the likelihood of the null against that of the MLE of the observed data, i.e. the most likely (best-supported) hypothesis the data allow, and is defined as …
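A small sketch of the idea, under the assumption of a normal test statistic: for an observed z, the alternative best supported by the data is the one centered on z itself, and the ratio of the null's likelihood to that maximum works out to exp(−z²/2).

```python
import math

# Minimum likelihood ratio for a normal z-statistic (assumed model):
# L(H0: mean 0) / L(best alternative: mean = observed z) = exp(-z^2 / 2),
# since the normal densities share the same normalizing constant.
def min_lr(z):
    return math.exp(-z**2 / 2)

z = 1.96  # roughly the two-sided p = 0.05 cutoff
print(min_lr(z))  # ~0.146: even in the null's worst case, it remains
                  # about 1/7 as likely as the best-supported hypothesis
```

That the minimum LR at p = 0.05 is only about 0.15 (not 0.05) is the sense in which the p-value can overstate the strength of evidence against the null.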

Inference via likelihood

The likelihood school holds that likelihood ratios are the basic tool of inference. The likelihood is the probability of the observed data D, conditional on the hypothesis A being true. Given two hypotheses, A and B, it is meaningless to assess evidence except by comparing the evidence favoring hypothesis A over hypothesis B…
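A minimal sketch of such a comparison, using hypothetical binomial data (7 heads in 10 flips) and two simple hypotheses about the heads probability:

```python
import math

# Hypothetical data: 7 heads in 10 flips.
n, k = 10, 7

def binom_lik(p):
    # P(D | heads probability = p)
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Likelihood ratio of hypothesis A (p = 0.5) against hypothesis B (p = 0.7).
lr = binom_lik(0.5) / binom_lik(0.7)
print(lr)  # ~0.44: the data favor B over A by a factor of roughly 2.3
```

On this view the ratio itself is the evidence: neither likelihood means anything in isolation, only their comparison.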

Likelihood and Probability

The difference between probability and likelihood is central, among other things, to understanding MLE. Randy Gallistel has posted a succinct treatment of this topic: "The distinction between probability and likelihood is fundamentally important: Probability attaches to possible results; likelihood attaches to hypotheses [my emphasis]. Explaining this distinction is the purpose of this first column. Possible results…"
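Gallistel's distinction can be sketched with one binomial function read two ways (an assumed example, not his): fix the hypothesis and vary the results, and you get probabilities; fix the observed result and vary the hypothesis, and you get likelihoods.

```python
import math

# One function, P(k heads in n = 10 flips | heads probability p), read two ways.
def binom(k, p, n=10):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Probability: fix the hypothesis (p = 0.5), vary the possible results k.
# These form a genuine distribution -- they sum to 1 over all k.
probs = [binom(k, 0.5) for k in range(11)]
print(sum(probs))  # 1.0

# Likelihood: fix the observed result (k = 7), vary the hypothesis p.
# These need not sum or integrate to 1; likelihood is not a probability.
liks = [binom(7, p / 10) for p in range(1, 10)]
print(max(range(len(liks)), key=lambda i: liks[i]))  # peaks at p = 0.7, the MLE
```

The same curve that is a probability distribution over results becomes, after the data are in hand, a likelihood function over hypotheses, and its peak is exactly the maximum likelihood estimate.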