Let $X=(X_1,\ldots,X_n)$ be a random vector and let

$$\{f(\boldsymbol{x}\mid\theta):\theta\in\Theta\}$$

be a family of probability density functions of $X$, indexed by a parameter vector $\theta=(\theta_1,\ldots,\theta_k)$. Given an observed value $\boldsymbol{x}$ of $X$, the likelihood function is

$$L(\theta\mid\boldsymbol{x})=f(\boldsymbol{x}\mid\theta),$$

regarded as a function of $\theta$.

In other words, the likelihood function is functionally the same in form as a probability density function. However, the emphasis is changed from the $x_i$'s to the $\theta_j$'s. The pdf is a function of the $x_i$'s while holding the parameters $\theta_j$'s constant; $L$ is a function of the parameters $\theta_j$'s, while holding the $x_i$'s constant.
When there is no confusion, $L(\theta\mid\boldsymbol{x})$ is abbreviated to $L(\theta)$.
A parameter vector $\hat{\theta}\in\Theta$ such that $L(\hat{\theta}\mid\boldsymbol{x})\ge L(\theta\mid\boldsymbol{x})$ for all $\theta\in\Theta$ is called a maximum likelihood estimate, or MLE, of $\theta$.
Many of the density functions are exponential in nature; it is therefore easier to compute the MLE of a likelihood function $L$ by finding the maximum of the natural log of $L$, known as the log-likelihood function:

$$\ell(\theta\mid\boldsymbol{x})=\ln L(\theta\mid\boldsymbol{x}),$$

which has the same maximizers as $L$ due to the monotonicity of the log function.
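The monotonicity argument can be checked numerically. The following sketch (my own illustration, not from the original entry) evaluates a small Bernoulli likelihood $L(p)=p^3(1-p)^2$ and its logarithm over a grid and confirms that both are maximized at the same point:

```python
import math

# Illustrative sketch: for the Bernoulli likelihood L(p) = p^3 (1-p)^2
# (3 heads in 5 tosses), the p maximizing L also maximizes ln L,
# because ln is strictly increasing.
ps = [i / 1000 for i in range(1, 1000)]          # grid over (0, 1)
L = lambda p: p**3 * (1 - p) ** 2                # likelihood
ll = lambda p: 3 * math.log(p) + 2 * math.log(1 - p)  # log-likelihood

argmax_L = max(ps, key=L)
argmax_ll = max(ps, key=ll)
print(argmax_L, argmax_ll)  # both 0.6, the critical point 3/5
```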
A coin is tossed $n$ times and $m$ heads are observed. Assume that the probability of a head after one toss is $p$. What is the MLE of $p$?
Solution: Define the outcome of a toss to be $0$ if a tail is observed and $1$ if a head is observed. Next, let $X_i$ be the outcome of the $i$th toss. For any single toss, the density function is $p^x(1-p)^{1-x}$, where $x\in\{0,1\}$. Assume that the tosses are independent events; then the joint probability density is

$$f(\boldsymbol{x}\mid p)=\prod_{i=1}^{n}p^{x_i}(1-p)^{1-x_i}=p^{m}(1-p)^{n-m},$$

which is also the likelihood function $L(p)$. Therefore, the log-likelihood function has the form

$$\ell(p)=m\ln p+(n-m)\ln(1-p).$$
Using standard calculus (setting $d\ell/dp=0$ and solving), we get that the MLE of $p$ is

$$\hat{p}=\frac{m}{n}.$$
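The result $\hat{p}=m/n$ can be verified numerically. This sketch uses made-up numbers ($n=10$, $m=7$) and locates the peak of the log-likelihood on a grid:

```python
import math

# Numerical check of the coin-toss MLE: with n = 10 tosses and
# m = 7 heads, l(p) = m ln p + (n - m) ln(1 - p) should peak at
# p = m/n = 0.7.
n, m = 10, 7
loglik = lambda p: m * math.log(p) + (n - m) * math.log(1 - p)
grid = [i / 1000 for i in range(1, 1000)]
p_hat = max(grid, key=loglik)
print(p_hat)  # 0.7
```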
Suppose a sample of $n$ data points $x_1,\ldots,x_n$ is collected. Assume that the $x_i\sim N(\mu,\sigma^2)$ and the $x_i$'s are independent of each other. What is the MLE of the parameter vector $\theta=(\mu,\sigma^2)$?
Solution: The joint pdf of the $x_i$, and hence the likelihood function, is

$$L(\theta\mid\boldsymbol{x})=(2\pi\sigma^2)^{-n/2}\exp\Bigl(-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2\Bigr).$$

The log-likelihood function is

$$\ell(\theta\mid\boldsymbol{x})=-\frac{n}{2}\ln(2\pi\sigma^2)-\frac{1}{2\sigma^2}\sum_{i=1}^{n}(x_i-\mu)^2.$$

Setting $\partial\ell/\partial\mu=0$ and $\partial\ell/\partial\sigma^2=0$ and solving for $\mu$ and $\sigma^2$, we have

$$\hat{\mu}=\bar{x}=\frac{1}{n}\sum_{i=1}^{n}x_i,\qquad \hat{\sigma}^2=\frac{1}{n}\sum_{i=1}^{n}(x_i-\bar{x})^2.$$
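As a sanity check on these closed forms, the sketch below (with an arbitrary sample of my own choosing) computes $\hat{\mu}$ and $\hat{\sigma}^2$ and verifies that small perturbations of either parameter do not increase the log-likelihood:

```python
import math

# Arbitrary illustrative sample; the closed-form MLEs are the sample
# mean and the biased (divide-by-n) sample variance.
x = [2.1, 1.9, 3.0, 2.4, 2.6]
n = len(x)
mu_hat = sum(x) / n
sigma2_hat = sum((xi - mu_hat) ** 2 for xi in x) / n

def loglik(mu, s2):
    return (-n / 2 * math.log(2 * math.pi * s2)
            - sum((xi - mu) ** 2 for xi in x) / (2 * s2))

# At the MLE, the log-likelihood should not increase under small
# perturbations of (mu, sigma^2).
best = loglik(mu_hat, sigma2_hat)
for dmu in (-0.01, 0.01):
    for ds2 in (-0.01, 0.01):
        assert loglik(mu_hat + dmu, sigma2_hat + ds2) <= best
print(mu_hat, sigma2_hat)
```

Note that $\hat{\sigma}^2$ divides by $n$, not $n-1$: the MLE of the variance is the biased estimator.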
|Date of creation||2013-03-22 14:27:58|
|Last modified on||2013-03-22 14:27:58|
|Last modified by||CWoo (3771)|
|Defines||maximum likelihood estimate|