Aaron A King edited this page Jul 1, 2016 · 42 revisions

Tips, tricks, advice, and examples.

Keeping a database of parameter-space explorations

Likelihood surfaces for dynamic models can be very complex, and the computations needed to explore them can be expensive. Keeping a record of every parameter point visited, along with the likelihood computed at each point, is a good way to ensure that you continually improve your picture of the likelihood surface.

Doing this can be as simple as maintaining a CSV file with one column for each parameter, plus columns for the likelihood estimate and its standard error. It can be useful to supplement these with the name of the model and any other qualifying information.
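Such a database can be maintained with a few lines of base R. The following is a minimal sketch: the file name, column names, and parameter values below are illustrative choices, not conventions required by pomp.

```r
## Illustrative file name for the parameter database
db_file <- "ricker_params.csv"

## Append a batch of results (a data frame with one row per parameter point).
## The header is written only when the file is first created.
append_results <- function (results, file = db_file) {
  write.table(results, file = file, sep = ",",
              row.names = FALSE,
              col.names = !file.exists(file),
              append = file.exists(file))
}

## Record two (hypothetical) parameter points with their likelihood estimates
append_results(data.frame(r = 44.7, sigma = 0.3, phi = 10,
                          loglik = -139.2, loglik.se = 0.1,
                          model = "ricker"))
append_results(data.frame(r = 20.0, sigma = 0.3, phi = 10,
                          loglik = -145.8, loglik.se = 0.2,
                          model = "ricker"))

## Later, reload the whole database and sort by likelihood
db <- read.csv(db_file)
best <- db[order(-db$loglik), ]
```

Because each run appends rather than overwrites, the database accumulates across sessions, which is exactly what makes it useful for long-running explorations.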

Reproducibility on a multicore machine via bake, stew, and freeze

It is often the case that heavy pomp computations are best performed in parallel on a cluster or multi-core machine. This poses some challenges in trying to ensure reproducibility and avoiding repetition of expensive calculations. The bake, stew, and freeze functions provide some useful facilities in this regard.

For example:

require(pomp)
pompExample(ricker)

require(foreach)
require(doMC)
registerDoMC(5)

bake(file="pfilter1.rds",seed=607686730,kind="L'Ecuyer",{
  foreach (i=1:10, .combine=c, 
           .options.multicore=list(set.seed=TRUE)) %dopar% {
     pf <- pfilter(ricker,Np=1000)
     logLik(pf)
   }
}) -> ll

In the above, bake first checks whether the file pfilter1.rds exists. If it does, bake loads it (using readRDS) and stores the result in ll. If it does not, bake evaluates the expression enclosed in braces, stores the result in pfilter1.rds, and returns it. While the expression is being evaluated, the R session's pseudorandom number generator (RNG) is temporarily set to the state specified by the seed and kind arguments (see ?set.seed). In this case, the expression makes use of the foreach and doMC packages to run 10 particle-filtering operations in parallel on a multicore machine; we therefore use a parallel RNG ("L'Ecuyer").

The bake function stores or retrieves and returns a single R object. If one wants to produce multiple objects in a reproducible way, use stew. For example:

require(pomp)
pompExample(ricker)

require(foreach)
require(doMC)
registerDoMC(5)

stew(file="pfilter2.rda",seed=607686730,kind="L'Ecuyer",{
  te <- system.time(
  foreach (i=1:10, .combine=c, 
           .options.multicore=list(set.seed=TRUE)) %dopar% {
      pf <- pfilter(ricker,Np=1000)
      logLik(pf)
   } -> ll2
  )
})

In the above, stew again temporarily sets the RNG state before evaluating the expression. The objects te and ll2 are created during this evaluation; they are stored in pfilter2.rda and retrieved, rather than recomputed, if the snippet is run again.

Like bake and stew, the freeze function temporarily sets the state of the RNG and evaluates an arbitrary R expression, and finally returns the RNG state to its status quo ante. Unlike bake and stew, freeze neither stores nor retrieves results.
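The RNG bookkeeping that freeze performs can be sketched in base R. The function freeze_sketch below and the seed values are illustrative; pomp's actual freeze additionally handles the kind and normal.kind arguments.

```r
## A base-R sketch of what freeze does: save the RNG state, seed the RNG,
## evaluate the expression, then restore the prior state on exit.
freeze_sketch <- function (expr, seed) {
  if (!exists(".Random.seed", envir = .GlobalEnv)) runif(1)  # ensure RNG state exists
  saved <- get(".Random.seed", envir = .GlobalEnv)
  on.exit(assign(".Random.seed", saved, envir = .GlobalEnv))
  set.seed(seed)
  eval.parent(substitute(expr))
}

x1 <- freeze_sketch(rnorm(3), seed = 499586)
y  <- rnorm(3)   # drawn from the restored stream, as if the seeded draws never happened
x2 <- freeze_sketch(rnorm(3), seed = 499586)
identical(x1, x2)
```

The two seeded evaluations yield identical results, while draws outside the sketch continue from the original, untouched stream: the "status quo ante" behavior described above.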

How can I include a vector of variables in a Csnippet?

See this entry in the FAQ.

How to handle missing data

Missing data are a common complication. pomp can handle NAs in the data, but the measurement model probability density function, dmeasure, if it is needed, must be written so as to deal with NAs appropriately. For example, look at the following variant of the SIR model describing the influenza outbreak in a boarding school:

library(pomp)
library(magrittr)
library(ggplot2)
library(reshape2)

read.csv(text="
B,day
NA,0
1,1
6,2
26,3
73,4
NA,5
293,6
258,7
236,8
191,9
124,10
69,11
26,12
11,13
4,14
") -> dat

dat %>%
  na.omit() %>%
  ggplot(aes(x=day, y=B)) +
  geom_line() + geom_point()

The data are missing at days 0 and 5. We create a pomp object implementing a simple SIR process model and a binomial measurement model, as in the original example. The only difference is in the dmeasure:

sir_step <- Csnippet("
  double dN_SI = rbinom(S,1-exp(-Beta*I/N*dt));
  double dN_IR = rbinom(I,1-exp(-gamma*dt));
  S -= dN_SI;
  I += dN_SI - dN_IR;
  R += dN_IR;
  H += dN_IR;
")

sir_init <- Csnippet("
  S = N-1;
  I = 1;
  R = 0;
  H = 0;
")

rmeas <- Csnippet("B = rbinom(H,rho);")

dmeas <- Csnippet("
  if (ISNA(B)) {
    lik = (give_log) ? 0 : 1;
  } else {
    lik = dbinom(B,H,rho,give_log);
  }
")

dat %>%
  pomp(time="day",t0=0,
       rprocess=euler.sim(sir_step,delta.t=1/12),
       initializer=sir_init,
       rmeasure=rmeas,
       dmeasure=dmeas,
       zeronames="H",
       paramnames=c("Beta","gamma","N", "rho"),
       statenames=c("S","I","R","H")
       ) -> sir_na

Note that the dmeasure returns a likelihood of 1 (log likelihood 0) for the missing data. [What's the probability of not observing anything if you don't look?] When there is an observation, it returns a binomial likelihood as usual.

Our simulations now include simulated values for the missing observations,

sir_na %>%
  simulate(params=c(Beta=3,gamma=2,rho=0.9,N=2600),
           nsim=10,as.data.frame=TRUE,include.data=TRUE) %>%
  ggplot(aes(x=time,y=B,group=sim,color=sim=="data"))+
  geom_line()+
  guides(color=FALSE)+
  theme_bw()

and the particle filter handles the missing data correctly:

sir_na %>%
  pfilter(Np=1000,params=c(Beta=3,gamma=2,rho=0.9,N=2600)) %>%
  as.data.frame() %>%
  subset(select=-cond.loglik) %>%
  melt(id="time") %>%
  ggplot(aes(x=time,y=value,color=variable))+
  guides(color=FALSE)+
  geom_line()+
  facet_wrap(~variable,ncol=1,scales='free_y')+
  theme_bw()

In the above particle-filter computation, notice that the effective sample size (ESS) is at its maximum (the full particle count) at the times of the missing observations, as it should be: missing data contribute no information, so no particles are down-weighted there.

How to deal with accumulator variables and t0 much less than t[1]

As shown above (under "How to handle missing data"), an accumulator variable collects events between consecutive observation times in the time series provided. If the missing observation at day 5 were represented by omitting the row entirely, rather than flagging it as NA, the accumulator variable would collect events over two days, which usually does not correspond to what the time series reports. A similar issue arises when the simulation must start at a t0 much less than t[1]; it can be circumvented by introducing a missing value (NA) at t0, as shown above. The accumulator variable then holds a proper value for the first data point at t[1], as can be seen by plotting the subset of the simulations starting at t[1]:

sir_na %>%
  simulate(params=c(Beta=3,gamma=2,rho=0.9,N=2600),
           nsim=10,as.data.frame=TRUE) -> sims_na

sims_na %>%
  subset(time>0) %>%
  melt(id=c("sim","time")) %>%
  ggplot(aes(x=time,group=sim,color=sim,y=value))+
  geom_line()+
  facet_wrap(~variable,scales="free_y")
