idmtools_calibra.algorithms.separatrix_bhm module#

class idmtools_calibra.algorithms.separatrix_bhm.SeparatrixBHM(params, constrain_sample_fn=<function SeparatrixBHM.<lambda>>, implausibility_threshold=3, target_success_probability=0.7, num_past_iterations_to_include_in_metamodel=3, samples_per_iteration=32, samples_final_iteration=128, max_iterations=10, training_frac=0.8)[source]#

Bases: NextPointAlgorithm

Separatrix using Bayesian History Matching

The basic idea of Separatrix is that each simulation results in a success (+1) or a failure (-1), and the success probability varies as a function of the input parameters. We seek an isocline of the latent success probability function.
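A minimal construction sketch follows, assuming the list-of-dicts parameter schema (Name/Min/Max) used by other idmtools_calibra next-point algorithms; the exact params format and the parameter names shown are illustrative assumptions, and the keyword arguments are taken from the signature above.

    from idmtools_calibra.algorithms.separatrix_bhm import SeparatrixBHM

    # Hypothetical parameter ranges; the list-of-dicts schema is an assumption.
    params = [
        {"Name": "Base_Infectivity", "Min": 0.1, "Max": 2.0},
        {"Name": "Acquisition_Rate", "Min": 0.0, "Max": 1.0},
    ]

    next_point = SeparatrixBHM(
        params,
        implausibility_threshold=3,      # History Matching cutoff for ruling regions out
        target_success_probability=0.7,  # isocline of the latent success probability to locate
        samples_per_iteration=32,        # samples drawn each iteration
        samples_final_iteration=128,     # denser sampling of the residual non-implausible area
        max_iterations=10,
        training_frac=0.8,               # presumed fraction of samples used to train the metamodel
    )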

cleanup()[source]#
resolve_args(iteration)[source]#
add_samples(samples, iteration)[source]#
get_samples_for_iteration(iteration)[source]#
set_results_for_iteration(iteration, results)[source]#
set_gpc_vec(iteration, gpc)[source]#
choose_initial_samples()[source]#
choose_samples_via_history_matching(iteration)[source]#
end_condition()[source]#
get_final_samples()[source]#

Return some number of samples from the residual non-implausible area.

prep_for_dict(df)[source]#

Utility function that transforms a DataFrame into a dict, removing null values.
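One plausible reading of that transformation, sketched for illustration only (the method's actual return shape may differ):

    import numpy as np
    import pandas as pd

    df = pd.DataFrame({"Base_Infectivity": [0.5, 1.2],
                       "Acquisition_Rate": [np.nan, 0.3]})

    # Row-wise dicts with null (NaN/None) entries dropped.
    records = [
        {k: v for k, v in row.items() if pd.notnull(v)}
        for row in df.to_dict(orient="records")
    ]
    # records == [{'Base_Infectivity': 0.5},
    #             {'Base_Infectivity': 1.2, 'Acquisition_Rate': 0.3}]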

get_state()[source]#
set_state(state, iteration)[source]#
get_param_names()[source]#
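Taken together, these methods form the NextPointAlgorithm interface that a calibration manager drives iteration by iteration. The sketch below is hypothetical and only illustrates the call pattern suggested by the method names; run_trials_and_score is a placeholder for however success (+1) / failure (-1) outcomes are produced.

    iteration = 0
    while iteration < 10 and not next_point.end_condition():
        samples = next_point.get_samples_for_iteration(iteration)  # candidate points for this iteration
        results = run_trials_and_score(samples)                    # placeholder: +1 / -1 outcomes
        next_point.set_results_for_iteration(iteration, results)
        iteration += 1

    # Sample the remaining non-implausible region once iteration ends.
    final_samples = next_point.get_final_samples()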