idmtools_calibra.algorithms.imis module#

class idmtools_calibra.algorithms.imis.IMIS[source]#

Bases: NextPointAlgorithm

Incremental Mixture Importance Sampling (IMIS). Algorithm ported from R code: http://cran.r-project.org/web/packages/IMIS

Full description from Adrian Raftery and Le Bao (2009): http://www.stat.washington.edu/research/reports/2009/tr560.pdf

The basic idea of IMIS is that points with high importance weights are in areas where the target density is underrepresented by the importance sampling distribution. At each iteration, a multivariate normal distribution centered at the point with the highest importance weight is added to the current importance sampling distribution, which thus becomes a mixture of such functions and of the prior. In this way underrepresented parts of the parameter space are successively identified and are given representation, ending up with an iteratively constructed importance sampling distribution that covers the target distribution well.
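As a concrete illustration, here is a minimal, self-contained sketch of that loop in numpy/scipy terms. The prior_rvs, prior_pdf, and likelihood callables and all parameter names are illustrative assumptions, not this class's actual interface; the per-method sketches below fill in details this version simplifies:

    import numpy as np
    from scipy.stats import multivariate_normal

    def imis_sketch(prior_rvs, prior_pdf, likelihood, n0=1000, b=100, max_iter=20):
        # Initial stage: draw from the prior and weight by likelihood.
        samples = prior_rvs(n0)                      # shape (n0, d)
        lik = likelihood(samples)
        weights = lik / lik.sum()
        gaussians = []
        for k in range(1, max_iter + 1):
            # Center a Gaussian on the current maximum-weight point.
            center = samples[np.argmax(weights)]
            # Placeholder covariance; IMIS uses a weighted covariance of the
            # closest points (see update_gaussian below).
            cov = np.cov(samples, rowvar=False)
            h = multivariate_normal(mean=center, cov=cov)
            gaussians.append(h)
            samples = np.vstack([samples, h.rvs(b)])
            # Reweight against the mixture q (see update_iteration below).
            n_k = n0 + b * k
            q = (n0 / n_k) * prior_pdf(samples)
            for g in gaussians:
                q += (b / n_k) * g.pdf(samples)
            lik = likelihood(samples)
            w = lik * prior_pdf(samples) / q
            weights = w / w.sum()
        return samples, weights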

validate_parameters()[source]#

Ensure valid parameter ranges: ‘samples_per_iteration’ is used to select the N closest points to the maximum-weight sample, so it cannot exceed ‘n_initial_samples’. It is also used to estimate the weighted covariance, so it cannot be smaller than the dimensionality of the samples.
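A sketch of those two checks, with all names assumed for illustration:

    def validate_parameters_sketch(samples_per_iteration, n_initial_samples, n_dimensions):
        # Cannot select more nearest points than there are initial samples.
        if samples_per_iteration > n_initial_samples:
            raise ValueError("samples_per_iteration cannot exceed n_initial_samples")
        # A d-dimensional weighted covariance needs at least d points.
        if samples_per_iteration < n_dimensions:
            raise ValueError("samples_per_iteration cannot be smaller than the sample dimensionality")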

choose_initial_samples()[source]#
set_initial_samples()[source]#

Set the initial sample points for the algorithm. If the initial_samples parameter is an array, use those values as the initial samples. Otherwise, if the initial_samples parameter is a number, draw that many samples randomly from the prior distribution.
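A sketch of that branching, assuming a scipy-style prior object with an rvs method (names hypothetical):

    import numpy as np

    def set_initial_samples_sketch(initial_samples, prior):
        if isinstance(initial_samples, (list, np.ndarray)):
            return np.asarray(initial_samples)       # use the provided points as-is
        return prior.rvs(size=int(initial_samples))  # draw that many from the prior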

add_samples(samples, iteration)[source]#
get_next_samples_for_iteration(iteration)[source]#
choose_next_point_samples(iteration)[source]#
generate_variables_from_data()[source]#

Restore some properties from self.data.

get_samples_for_iteration(iteration)[source]#
update_iteration(iteration)[source]#

Initial Stage (iteration: k = 0)

  1. Sample N inputs \(\theta_1, \theta_2, ... , \theta_N\) from the prior distribution \(p(\theta)\).

  2. For each \(\theta_i\), calculate the likelihood \(L_i\), and form the importance weights: \(w^{(0)}_i = L_i / \sum_j L_j\).

Importance Sampling Stage (iteration: k > 0; samples_per_iteration: B)

  1. Calculate the likelihood of the new inputs and combine the new inputs with the previous ones. Form the importance weights: \(w^{(k)}_i = c \, L_i \, p(\theta_i) / q^{(k)}(\theta_i)\), where c is chosen so that the weights sum to 1 and \(q^{(k)}\) is the mixture sampling distribution: \(q^{(k)} = (N_0/N_k) \, p + (B/N_k) \sum_{s=1}^{k} H_s\), where \(H_s\) is the s-th multivariate normal distribution and \(N_k = N_0 + Bk\) is the total number of inputs up to iteration k.
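The weight update in step 1 might look like the following sketch, where gaussians holds the scipy.stats.multivariate_normal components fitted so far (all names are assumptions):

    import numpy as np

    def mixture_weights(samples, likelihoods, prior_pdf, gaussians, n0, b):
        # q^(k) = (N0/Nk) * p + (B/Nk) * sum_s H_s, with Nk = N0 + B*k.
        n_k = n0 + b * len(gaussians)
        q = (n0 / n_k) * prior_pdf(samples)
        for h in gaussians:
            q += (b / n_k) * h.pdf(samples)
        w = likelihoods * prior_pdf(samples) / q
        return w / w.sum()  # the constant c makes the weights sum to 1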

update_state(iteration)[source]#

Update the next-point algorithm state and select next samples.

update_gaussian()[source]#

Importance Sampling Stage (iteration: k > 0; samples_per_iteration: B)

  1. Choose the current maximum weight input as the center \(\theta^{(k)}\). Estimate \(\Sigma^{(k)}\) from the weighted covariance of the B inputs with the smallest Mahalanobis distances to \(\theta^{(k)}\), where the distances are calculated with respect to the covariance of the prior distribution and the weights are taken to be proportional to the average of the importance weights and \(1/N_k\).

  2. Sample ‘samples_per_iteration’ new inputs from a multivariate Gaussian distribution \(H_k\) centered at \(\theta^{(k)}\) with covariance matrix \(\Sigma^{(k)}\).
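A sketch of both steps, using a plain Mahalanobis distance and the cov.wt-style weighted covariance sketched under calculate_weighted_covariance below (all names are illustrative assumptions):

    import numpy as np
    from scipy.stats import multivariate_normal

    def update_gaussian_sketch(samples, weights, prior_cov, b):
        center = samples[np.argmax(weights)]         # maximum-weight input
        # Mahalanobis distances to the center w.r.t. the prior covariance.
        diff = samples - center
        d2 = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(prior_cov), diff)
        nearest = np.argsort(d2)[:b]                 # B closest inputs
        # Weights proportional to the average of importance weight and 1/Nk.
        w = (weights[nearest] + 1.0 / len(samples)) / 2.0
        w /= w.sum()
        x = samples[nearest]
        mean = w @ x
        cov = ((x - mean).T * w) @ (x - mean) / (1.0 - np.sum(w ** 2))
        h = multivariate_normal(mean=center, cov=cov)
        return h.rvs(b), h                           # B new inputs and H_k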

update_gaussian_center()[source]#

Choose the current maximum weight input as the center point for the next iteration of multivariate-normal sampling.

weighted_distances_from_center()[source]#

Calculate the covariance-weighted distances from the current maximum-weight sample. N.B. Using the normalized Euclidean distance instead of the Mahalanobis distance, since we are going to diagonalize anyway.
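A sketch of that normalized-Euclidean variant, scaling each dimension by the prior's standard deviation (names assumed):

    import numpy as np

    def weighted_distances_sketch(samples, center, prior_std):
        # Normalized Euclidean distance: scale each axis by the prior's
        # standard deviation instead of using the full Mahalanobis form.
        z = (samples - center) / prior_std
        return np.sqrt((z ** 2).sum(axis=1))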

update_gaussian_covariance(distances)[source]#

Calculate the covariance for the next iteration of multivariate-normal sampling from the “samples_per_iteration” closest samples.

get_param_names()[source]#
verify_valid_samples(next_samples)[source]#

Resample from next-point function until all samples have non-zero prior.
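A sketch of that resampling loop, assuming a next_point_fn that returns the requested number of candidates and a prior pdf callable (names hypothetical):

    def verify_valid_samples_sketch(next_samples, next_point_fn, prior_pdf):
        # Redraw any candidate that falls where the prior density is zero.
        while True:
            invalid = prior_pdf(next_samples) == 0
            if not invalid.any():
                return next_samples
            next_samples[invalid] = next_point_fn(int(invalid.sum()))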

next_point_fn()[source]#

IMIS next-point sampling from a multivariate normal centered on the maximum-weight sample.

update_gaussian_probabilities(iteration)[source]#

Calculate the probabilities of all sample points as estimated from the multivariate-normal probability density function centered on the maximum-weight sample, with covariance fitted from the most recent iteration.
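A sketch of that evaluation with scipy (the center and covariance arguments are assumptions):

    from scipy.stats import multivariate_normal

    def update_gaussian_probabilities_sketch(all_samples, center, cov):
        # Density of every sample under the latest fitted Gaussian H_k.
        return multivariate_normal(mean=center, cov=cov).pdf(all_samples)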

calculate_weighted_covariance(samples, weights, center)[source]#

A weighted covariance of sample points. N.B. The weights are normalized as in the R function “cov.wt”.
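A sketch matching R's cov.wt with method = "unbiased" (numpy names assumed):

    import numpy as np

    def calculate_weighted_covariance_sketch(samples, weights, center):
        w = weights / weights.sum()   # cov.wt normalizes the weights to sum to 1
        diff = samples - center       # deviations from the supplied center
        # Unbiased weighted covariance: sum_i w_i * outer(d_i, d_i) / (1 - sum w_i^2)
        return (diff.T * w) @ diff / (1.0 - np.sum(w ** 2))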

end_condition()[source]#

Stopping Criterion:

The algorithm ends when the importance sampling weights are reasonably uniform. Specifically, we end the algorithm when the expected fraction of unique points in the resample is at least (1 - 1/e) ≈ 0.632. This is the expected fraction when the importance sampling weights are all equal, which is the case when the importance sampling function is the same as the target distribution.
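The expected unique fraction has a closed form: when resampling n points with replacement under weights \(w_i\), point i appears at least once with probability \(1 - (1 - w_i)^n\). A sketch of the check (names assumed):

    import numpy as np

    def end_condition_sketch(weights, n_resample):
        # Expected number of unique points in a resample of size n_resample.
        expected_unique = np.sum(1.0 - (1.0 - weights) ** n_resample)
        return expected_unique / n_resample >= 1.0 - 1.0 / np.e  # ~0.632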

get_final_samples()[source]#
get_state()[source]#
set_state(state, iteration)[source]#
set_results_for_iteration(iteration, results)[source]#
cleanup()[source]#
restore(iteration_state)[source]#