Likelihood estimation#

This notebook shows how to do a simple maximum likelihood (ML) estimation with estimagic. As an illustrative example, we implement a simple linear regression model. It is the same model we use in the method of moments notebook.

We proceed in 4 steps:

  1. Create a data generating process

  2. Set up a likelihood function

  3. Maximize the likelihood function

  4. Calculate standard errors, confidence intervals, and p-values

The user only needs to do steps 1 and 2. The rest is done by estimate_ml.

To be very clear: Estimagic is not a package to estimate linear models or other models that are implemented in Stata, statsmodels or anywhere else. Its purpose is to estimate parameters with custom likelihood or method of simulated moments functions. We just use a linear regression model as an example of a very simple likelihood function.

Model:#

\[ y = \beta_0 + \beta_1 x + \epsilon, \text{ where } \epsilon \sim N(0, \sigma^2)\]

We aim to estimate \(\beta_0, \beta_1, \sigma\).
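For reference, the log likelihood contribution of a single observation \((y_i, x_i)\) under this model, which the loglike function in step 2 evaluates, is

\[ \log f(y_i \mid x_i; \beta_0, \beta_1, \sigma) = -\frac{1}{2} \log(2 \pi \sigma^2) - \frac{(y_i - \beta_0 - \beta_1 x_i)^2}{2 \sigma^2}\]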

import estimagic as em
import numpy as np
import pandas as pd
from scipy.stats import norm

rng = np.random.default_rng(seed=0)

1. Create a data generating process#

def simulate_data(params, n_draws):
    """Simulate data from the linear model implied by params."""
    x = rng.normal(0, 1, size=n_draws)  # exogenous regressor
    e = rng.normal(0, params.loc["sd", "value"], size=n_draws)  # error term
    y = params.loc["intercept", "value"] + params.loc["slope", "value"] * x + e
    return pd.DataFrame({"y": y, "x": x})

true_params = pd.DataFrame(
    data=[[2, -np.inf], [-1, -np.inf], [1, 1e-10]],
    columns=["value", "lower_bound"],
    index=["intercept", "slope", "sd"],
)
true_params
           value   lower_bound
intercept      2          -inf
slope         -1          -inf
sd             1  1.000000e-10
data = simulate_data(true_params, n_draws=100)
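As a quick sanity check on the simulated data (a sketch, not part of the estimation workflow), a plain least-squares fit should roughly recover the intercept and slope:

# Least-squares baseline: coefficients should be close to (2, -1).
X = np.column_stack([np.ones(len(data)), data["x"]])
coef, *_ = np.linalg.lstsq(X, data["y"], rcond=None)
coef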

2. Define the loglike function#

def normal_loglike(params, data):
    norm_rv = norm(
        loc=params.loc["intercept", "value"] + params.loc["slope", "value"] * data["x"],
        scale=params.loc["sd", "value"],
    )
    contributions = norm_rv.logpdf(data["y"])
    return {"contributions": contributions, "value": contributions.sum()}

A few remarks before we move on:

  1. There are numerically better ways to calculate the likelihood; we chose this implementation for brevity and readability. A more direct variant is sketched after this list.

  2. The loglike function takes params and other arguments. You are completely flexible with respect to the number and names of the other arguments as long as the first argument is params.

  3. The loglike function returns a dictionary with the entries “contributions” and “value”. The “contributions” are the log likelihood evaluations for each observation in the dataset; the “value” is their sum. The “value” entry could be omitted; the “contributions” entry, however, is mandatory.
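To illustrate the first two remarks, here is a sketch of such a variant (the name normal_loglike_direct is ours, not estimagic API). It evaluates the normal log density in closed form instead of constructing a scipy frozen distribution, while keeping the same params-first signature:

def normal_loglike_direct(params, data):
    # Residuals of the linear model implied by params.
    resid = (
        data["y"]
        - params.loc["intercept", "value"]
        - params.loc["slope", "value"] * data["x"]
    )
    sd = params.loc["sd", "value"]
    # Closed-form normal log density, equivalent to norm.logpdf above.
    contributions = -0.5 * np.log(2 * np.pi * sd**2) - resid**2 / (2 * sd**2)
    return {"contributions": contributions, "value": contributions.sum()}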

3. Estimate the model#

start_params = true_params.assign(value=[100, 100, 100])

res = em.estimate_ml(
    loglike=normal_loglike,
    params=start_params,
    optimize_options={"algorithm": "scipy_lbfgsb"},
    loglike_kwargs={"data": data},
)
res.summary().round(3)
           value  standard_error  ci_lower  ci_upper  p_value  free stars
intercept  1.945           0.104     1.742     2.148      0.0  True   ***
slope     -0.945           0.113    -1.167    -0.723      0.0  True   ***
sd         0.954           0.079     0.799     1.109      0.0  True   ***

4. What’s in the results?#

LikelihoodResult objects provide attributes and methods to calculate standard errors, confidence intervals, and p-values. For all three, several methods are available. You can even calculate cluster robust standard errors.

A few examples are:

res.params
              value   lower_bound
intercept  1.944958          -inf
slope     -0.944920          -inf
sd         0.954225  1.000000e-10
res.cov(method="robust")
           intercept     slope        sd
intercept   0.008986  0.000426 -0.001904
slope       0.000426  0.007734  0.000303
sd         -0.001904  0.000303  0.003748
res.se()
              value   lower_bound
intercept  0.103759          -inf
slope      0.113341          -inf
sd         0.078959  1.000000e-10
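Judging from the summary columns above, confidence intervals and p-values should be available through methods of the same family as cov and se. A minimal sketch, assuming res.ci() and res.p_values() behave as annotated below (check the estimagic documentation for the exact signatures):

# Assumed API, by analogy with cov() and se() above.
lower, upper = res.ci()  # assumption: returns lower and upper interval bounds
pvals = res.p_values()   # assumption: returns a p-value per parameter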