
Hawkes process - complete log-likelihood

This is a quick post, mostly as a note to myself, on how to derive the log-likelihood (and complete data log-likelihood) for the Hawkes process. This model has been around for about 50 years, so there’s a rich literature, but it’s still beyond standard textbook treatment, and I’ve sometimes found parsing even its basic properties to be a frustrating mess.

Here’s my (much earlier) post on this model, my Github repo with some beginner-friendly code, a preprint I co-authored using a MAP-EM technique, and my master’s thesis with more detail.

Prelims

The Hawkes process is a doubly stochastic, self-exciting temporal point process where, given some marked history of events \(\tau=\{(t_i, u_i)\}\), where we interpret the event at \(t_i\) as occurring on “stream” or dimension \(u_i\), the conditional intensity for stream \(u\) is defined as

\[\lambda_u(t|\tau; \Theta) = \mu_u + \sum_{i:t_i<t} h_{uu_i}(t-t_i; \theta_{uu_i})\]

where \(\Theta=(\mu, \theta)\), \(\mu_u\) represents a background rate, and \(h_*(t)\) is the triggering kernel (also called the decay function, among other names), which captures the additive effect of prior events on the probability of an arrival at the current time \(t\).

A classic choice is of the form \(h_{uu_i}(t) = a_{uu_i}\omega e^{-\omega t}\) where \(a_{uu_i}\) represents the “influence” of stream \(u_i\) on \(u\), and the exponential captures the decaying effect of this influence over time, scaled by a global parameter \(\omega\). (But there are of course many, many other choices.)
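To make this concrete, here’s a minimal sketch in Python/NumPy of evaluating \(\lambda_u(t)\) under this exponential kernel (the names exp_kernel, intensity, mu, A, and omega are just illustrative, not tied to any particular library):

```python
import numpy as np

def exp_kernel(dt, a, omega):
    # h(t) = a * omega * exp(-omega * t), the exponential triggering kernel above
    return a * omega * np.exp(-omega * dt)

def intensity(t, u, events, mu, A, omega):
    # lambda_u(t): background rate plus the decayed influence of every prior event.
    # events is a list of (t_i, u_i) pairs; mu is a vector of background rates;
    # A[u, u_i] is the influence of stream u_i on stream u.
    lam = mu[u]
    for t_i, u_i in events:
        if t_i < t:
            lam += exp_kernel(t - t_i, A[u, u_i], omega)
    return lam
```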

We’re interested in learning the parameters \(\Theta\). First we’ll derive the (log) likelihood, and then introduce a latent variable \(Q\) representing the branching structure of this process (intuitively, each event is either a background event or the child of some earlier event) to derive the complete data (log) likelihood.

Log-likelihood (in general)

Consider a point process where the probability density of a single event is conditional on the event history, that is \(f^*(t)=f(t\vert\mathcal{H}_t)\). We also have the cumulative distribution function (CDF) \(F^*(t)=\int_{-\infty}^t f^*(s)ds\). Then the conditional intensity (aka hazard function) is

\[\lambda(t) = \frac{f^*(t)}{1-F^*(t)}\]
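As a quick sanity check on this definition (with \(c\) just a placeholder constant): if the waiting time after the last event \(t_n\) is exponential with rate \(c\), so that \(f^*(t)=c\,e^{-c(t-t_n)}\) and \(F^*(t)=1-e^{-c(t-t_n)}\), then \(\lambda(t)=c\) for all \(t\), which is the constant-hazard (homogeneous Poisson) case.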

Now we can show that

\[\begin{align*} f^*(t) &= \lambda(t)\text{exp}\left(-\int_{t_n}^t \lambda(s)ds \right) \\ F^*(t) &= 1 - \text{exp}\left(-\int_{t_n}^t \lambda(s) ds \right) \end{align*}\]

where \(t_n\) is the time of the last event before \(t\).

Proof: From before, we know

\[\lambda(t)dt = \frac{dF^*(t)}{1-F^*(t)}\]

and integrating, we have

\[\int_{t_n}^t \lambda(s)ds = \int_{t_n}^t \frac{dF^*(s)}{1-F^*(s)} = -\log (1-F^*(t))\]

where we’ve used \(F^*(t_n)=0\) (conditional on the history, no new event has occurred yet at \(t_n\)); this gives \(\text{exp}\left(-\int_{t_n}^t \lambda(s)ds\right)= 1- F^*(t)\), and the result follows. QED

Next, we can show that given a sequence \(\tau=(t_i)\) on \([0, T)\), the likelihood this sequence was produced by conditional intensity \(\lambda(t)\) is

\[p(\tau) = \left( \prod_i \lambda(t_i) \right) \text{exp}\left( -\int_0^T \lambda(s) ds\right)\]

Proof: The joint density of all events on the interval \([0, T)\) is the product of the densities of the events themselves (all the \(f^*(t_i)\)) and the probability that no further events occurred between the last event \(t_n\) and the end of the window \(T\):

\[\begin{align*} p(\tau) &= f^*(t_1)\cdots f^*(t_n)(1-F^*(T)) \\ &= \left( \prod_i f^*(t_i) \right) \frac{f^*(T)}{\lambda(T)} \\ &= \left( \prod_i \lambda(t_i)\text{exp}\left(-\int_{t_{i-1}}^{t_i} \lambda(s)ds \right)\right) \text{exp}\left(-\int_{t_n}^T \lambda(s)ds \right) \\ &= \left(\prod_i \lambda(t_i) \right)\text{exp}\left(-\int_0^T \lambda(s)ds\right) \end{align*}\]

QED. And finally note that the log-likelihood for this generic intensity \(\lambda(t)\) is

\[\log p(\tau) = \sum_i \log \lambda(t_i) - \int_0^T \lambda(s) ds\]
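For instance, with a constant intensity \(\lambda(t)\equiv c\) (a homogeneous Poisson process with \(n\) events on \([0, T)\); \(c\) is just a placeholder here), this reduces to the familiar

\[\log p(\tau) = n \log c - cT\]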

For Hawkes (specifically)

Extending this for Hawkes specifically is now straightforward. We have in the univariate case,

\[\begin{align*} \log p(\tau) &= \sum_i \log\left(\mu + \sum_{j: t_j < t_i} h(t_i - t_j)\right) - \int_0^T \lambda(s) ds \\ &= \sum_i \log\left(\mu + \sum_{j: t_j < t_i} h(t_i - t_j)\right) - T\mu - \sum_{j}\int_0^{T-t_j} h(s)ds \end{align*}\]

and in the multivariate case,

\[\log p(\tau) = \sum_i \log\left(\mu_{u_i} + \sum_{j:t_j<t_i} h_{u_i u_j}(t_i-t_j)\right) - T\sum_u \mu_u - \sum_u \sum_i H_{uu_i}(T-t_i)\]

where \(H_{uu_i}(t) = \int_0^t h_{uu_i}(s)ds\).
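Here’s a small sketch of the univariate version in Python/NumPy with the exponential kernel from earlier, using the closed form \(H(t) = a(1 - e^{-\omega t})\); again the function name and arguments are just illustrative:

```python
import numpy as np

def hawkes_loglik(times, mu, a, omega, T):
    # Univariate Hawkes log-likelihood with h(t) = a * omega * exp(-omega * t).
    # times: sorted 1-D array of event times on [0, T).
    loglik = 0.0
    for i, t_i in enumerate(times):
        # intensity at t_i: background plus the decayed effect of earlier events
        lam_i = mu + np.sum(a * omega * np.exp(-omega * (t_i - times[:i])))
        loglik += np.log(lam_i)
    # compensator: integral of lambda over [0, T), using H(t) = a * (1 - exp(-omega * t))
    loglik -= mu * T + np.sum(a * (1.0 - np.exp(-omega * (T - times))))
    return loglik
```

The multivariate version is the same idea with \(\mu_{u_i}\), \(a_{u_i u_j}\), and a compensator summed over streams.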

Deriving the complete data log-likelihood

Things get more interesting when we introduce the latent variable \(Q\) to represent the branching structure of this process. Let \(Q=[q_{ij}]\) represent the branching matrix such that \(q_{ij}=1\) if event \(j\) caused event \(i\), and 0 otherwise. We may then interpret \(q_{ii}=1\) to mean event \(i\) was a background event. Critically, we note each event must be either a background event or the child of exactly one earlier event, i.e. \(\sum_{j \leq i} q_{ij} = 1\) for all \(i\).

These indicator variables allow us to rewrite the log-likelihood like this (taking the univariate case first):

\[\log p(\tau, Q) = \sum_i \log\left(q_{ii}\mu + \sum_{j: t_j<t_i} q_{ij} h(t_i - t_j)\right) - \int_0^T \lambda(s) ds\]

There are several important insights here. First, the indicator variables are turning on/off the different sources of probability. Second, the final integral term does not involve any \(q_{ij}\) terms because it represents the probability that nothing happened.

Third, notice that for any event \(t_i\), that is for every term in the sum \(\sum_i\), we will only have a single indicator variable active, by definition. Either that event is a background event (\(q_{ii}=1\)) or exactly one of the preceding events caused it (\(q_{ij}=1\)). So we could take the indicator variables out of the log and split the log:

\[\log(q_{ii} \mu + \sum_j q_{ij} h) = q_{ii} \log \mu + \sum_j q_{ij} \log h\]

This is a remarkable step, and I don’t find it clearly explained in any paper, dissertation, or text. Maybe I’m just thick-headed.
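To spell it out: for a fixed \(i\), exactly one of the indicators \(q_{ii}, q_{i1}, q_{i2}, \dots\) equals 1 and the rest are 0. If \(q_{ii}=1\), both sides reduce to \(\log\mu\); if instead \(q_{ik}=1\) for some earlier event \(k\), both sides reduce to \(\log h(t_i - t_k)\). The zero indicators kill every other term inside the log on the left and every other summand on the right, so the two expressions agree case by case.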

Anyway, now that we have that, we can do things like take the expectation of this complete data log-likelihood. Define \(\mathbb{E}[Q] = P\), with \(\mathbb{E}[q_{ij}] = p_{ij}\), and now we can easily do:

\[\begin{align*} \mathbb{E}_Q\left[ \log p(\tau, Q) \right] &= \mathbb{E}_Q\left[ \sum_i \left( q_{ii} \log \mu + \sum_{j: t_j < t_i} q_{ij} \log h(t_i-t_j) \right) \right] - \int_0^T \lambda(s)ds \\ &= \sum_i \left( p_{ii} \log \mu + \sum_{j: t_j < t_i} p_{ij} \log h(t_i-t_j) \right) - \int_0^T \lambda(s)ds \end{align*}\]

Easy day. And similarly for the multivariate case, left as an exercise for the reader (ha!).
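To close the loop, here’s a tiny sketch (Python/NumPy again, univariate, exponential kernel) of evaluating this expected complete data log-likelihood given some matrix of responsibilities \(P=[p_{ij}]\), however those were obtained (e.g. from an EM E-step); the names are illustrative:

```python
import numpy as np

def expected_complete_loglik(times, P, mu, a, omega, T):
    # E_Q[log p(tau, Q)] for a univariate Hawkes process with
    # h(t) = a * omega * exp(-omega * t).
    # times: sorted 1-D array of event times on [0, T)
    # P:     matrix with P[i, i] = p_ii and P[i, j] = p_ij for j < i
    val = 0.0
    for i, t_i in enumerate(times):
        val += P[i, i] * np.log(mu)
        for j in range(i):
            # log h(t_i - t_j) for the exponential kernel
            val += P[i, j] * (np.log(a * omega) - omega * (t_i - times[j]))
    # the compensator term is the same as in the observed-data log-likelihood
    val -= mu * T + np.sum(a * (1.0 - np.exp(-omega * (T - times))))
    return val
```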