This function fits the multinomial-Poisson mixture model, useful for data collected via survey methods such as removal or double observer sampling.
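
For orientation, a sketch of the standard multinomial-Poisson mixture formulation this function fits (stated from the general literature, not taken verbatim from this page): latent site-level abundance follows N_i ~ Poisson(lambda_i), with log(lambda_i) modeled by the abundance covariates, while the observed counts at site i follow a multinomial whose cell probabilities (e.g., removal or double-observer cell probabilities) are built from a detection probability p_i, with logit(p_i) modeled by the detection covariates.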

Usage

stan_multinomPois(
  formula,
  data,
  prior_intercept_state = normal(0, 5),
  prior_coef_state = normal(0, 2.5),
  prior_intercept_det = logistic(0, 1),
  prior_coef_det = logistic(0, 1),
  prior_sigma = gamma(1, 1),
  log_lik = TRUE,
  ...
)

Arguments

formula

Double right-hand side formula describing covariates of detection and abundance, in that order (see the sketch following the argument list)

data

An unmarkedFrameMPois object

prior_intercept_state

Prior distribution for the intercept of the state (abundance) model; see ?priors for options

prior_coef_state

Prior distribution for the regression coefficients of the state model

prior_intercept_det

Prior distribution for the intercept of the detection probability model

prior_coef_det

Prior distribution for the regression coefficients of the detection model

prior_sigma

Prior distribution on random effect standard deviations

log_lik

If TRUE, Stan will save pointwise log-likelihood values in the output. This can greatly increase the size of the model output. If FALSE, the values are calculated post hoc from the posteriors

...

Arguments passed to the call to stan, such as the number of chains (chains) or iterations (iter)
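
A minimal sketch of how the formula and prior arguments fit together. The data object umf and the covariate names obs_cov and site_cov are hypothetical placeholders, the prior constructors shown are those documented in ?priors, and defaults apply to anything not set explicitly.

# Detection covariates go on the first right-hand side, abundance (state)
# covariates on the second; umf, obs_cov, and site_cov are placeholders
fit <- stan_multinomPois(~obs_cov ~site_cov, data = umf,
                         prior_coef_state = normal(0, 1), # tighter prior on abundance slopes
                         prior_sigma = gamma(2, 2),       # prior on random-effect SDs
                         chains = 4, iter = 2000)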

Value

A ubmsFitMultinomPois object describing the model fit.
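
A brief sketch of inspecting the returned object; this assumes the standard ubmsFit methods (summary, predict, loo) apply here as they do to other ubms model fits, with fit as a placeholder name.

summary(fit, "state")   # posterior summaries for the abundance submodel
summary(fit, "det")     # posterior summaries for the detection submodel
predict(fit, "state")   # per-site predictions from the state submodel
loo(fit)                # approximate leave-one-out cross-validation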

See also

multinomPois, unmarkedFrameMPois

Examples

# \donttest{
data(ovendata)
ovenFrame <- unmarkedFrameMPois(ovendata.list$data,
                                siteCovs=ovendata.list$covariates,
                                type="removal")

oven_fit <- stan_multinomPois(~1~scale(ufc), ovenFrame, chains=3, iter=300)
#> 
#> SAMPLING FOR MODEL 'multinomPois' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 0.000132 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 1.32 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 1: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 1: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 1: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 1: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 1: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 1: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 1: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 1: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 1: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 1: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 1: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.153 seconds (Warm-up)
#> Chain 1:                0.122 seconds (Sampling)
#> Chain 1:                0.275 seconds (Total)
#> Chain 1: 
#> 
#> SAMPLING FOR MODEL 'multinomPois' NOW (CHAIN 2).
#> Chain 2: 
#> Chain 2: Gradient evaluation took 0.000133 seconds
#> Chain 2: 1000 transitions using 10 leapfrog steps per transition would take 1.33 seconds.
#> Chain 2: Adjust your expectations accordingly!
#> Chain 2: 
#> Chain 2: 
#> Chain 2: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 2: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 2: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 2: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 2: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 2: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 2: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 2: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 2: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 2: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 2: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 2: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 2: 
#> Chain 2:  Elapsed Time: 0.156 seconds (Warm-up)
#> Chain 2:                0.124 seconds (Sampling)
#> Chain 2:                0.28 seconds (Total)
#> Chain 2: 
#> 
#> SAMPLING FOR MODEL 'multinomPois' NOW (CHAIN 3).
#> Chain 3: 
#> Chain 3: Gradient evaluation took 0.000201 seconds
#> Chain 3: 1000 transitions using 10 leapfrog steps per transition would take 2.01 seconds.
#> Chain 3: Adjust your expectations accordingly!
#> Chain 3: 
#> Chain 3: 
#> Chain 3: Iteration:   1 / 300 [  0%]  (Warmup)
#> Chain 3: Iteration:  30 / 300 [ 10%]  (Warmup)
#> Chain 3: Iteration:  60 / 300 [ 20%]  (Warmup)
#> Chain 3: Iteration:  90 / 300 [ 30%]  (Warmup)
#> Chain 3: Iteration: 120 / 300 [ 40%]  (Warmup)
#> Chain 3: Iteration: 150 / 300 [ 50%]  (Warmup)
#> Chain 3: Iteration: 151 / 300 [ 50%]  (Sampling)
#> Chain 3: Iteration: 180 / 300 [ 60%]  (Sampling)
#> Chain 3: Iteration: 210 / 300 [ 70%]  (Sampling)
#> Chain 3: Iteration: 240 / 300 [ 80%]  (Sampling)
#> Chain 3: Iteration: 270 / 300 [ 90%]  (Sampling)
#> Chain 3: Iteration: 300 / 300 [100%]  (Sampling)
#> Chain 3: 
#> Chain 3:  Elapsed Time: 0.134 seconds (Warm-up)
#> Chain 3:                0.115 seconds (Sampling)
#> Chain 3:                0.249 seconds (Total)
#> Chain 3: 
#> Warning: Bulk Effective Samples Size (ESS) is too low, indicating posterior means and medians may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#bulk-ess
#> Warning: Tail Effective Samples Size (ESS) is too low, indicating posterior variances and tail quantiles may be unreliable.
#> Running the chains for more iterations may help. See
#> https://mc-stan.org/misc/warnings.html#tail-ess
# }
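
# A possible follow-up (not part of the original example output): posterior
# summaries for the two submodels, assuming the summary() method for ubmsFit
# objects that takes a submodel name ("state" or "det")
summary(oven_fit, "state")
summary(oven_fit, "det")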