Parameter sweeps and model iteration¶
In modeling, parameter sweeps are an important method for fine-tuning parameter values, exploring parameter space, and calibrating simulations to data. A parameter sweep is an iterative process in which simulations are run repeatedly using different values of the parameter(s) of choice. This process enables the modeler to determine a parameter’s “best” value (or range of values), or even where in parameter space the model produces desirable (or undesirable) behaviors.
When fitting models to data, it is likely that there will be numerous parameters that do not have a pre-determined value. Some parameters will have a range of values that are biologically plausible, or have been determined from previous experiments; however, selecting a particular numerical value to use in the model may not be feasible or realistic. Therefore, the best practice involves using a parameter sweep to narrow down the range of possible values or to provide a range of outcomes for those possible values.
idmtools provides an automated approach to parameter sweeps. With a few lines of code, it is possible to test the model over any range of parameter values, with any combination of parameters.
With a stochastic model (such as EMOD), it is especially important to utilize parameter sweeps, not only for calibration to data or parameter selection, but to fully explore the stochasticity in output. Single model runs may appear to provide good fits to data, but variation will arise and multiple runs are necessary to determine the appropriate range of parameter values necessary to achieve desired outcomes. Multiple iterations of a single set of parameter values should be run to determine trends in simulation output: a single simulation output could provide results that are due to random chance.
How to do parameter sweeps¶
With idmtools, you can do parameter sweeps with builders or without builders using a base task to set your simulation parameters.
The typical ‘output’ of idmtools is a config.json file for each created simulation, which contains the parameter values assigned: both the constant values and those being swept.
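For instance, with a JSONConfiguredPythonTask whose envelope is 'parameters' (as in the example later on this page), a simulation where the sweep assigned a=1 and b=11 would produce a config.json along these lines (a sketch; the exact layout depends on your task configuration):
{
    "parameters": {
        "a": 1,
        "b": 11
    }
}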
Using builders¶
In this release, to support parameter sweeps for models, we have the following builders to assist you (a sketch of their use follows this list):
- SimulationBuilder – you set your sweep parameters in your scripts, and it generates a config.json file with your sweeps for your experiment/simulations to use
- CSVExperimentBuilder – you can use a CSV file to do your parameter sweeps
- YamlSimulationBuilder – you can use a Yaml file to do your parameter sweeps
- ArmSimulationBuilder – for cross and pair parameters, which allows you to cross parameters, like you cross your arms
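As a minimal sketch of the builder approach with SimulationBuilder (the model path and the parameter names "a" and "b" are placeholders, and the task is the same JSONConfiguredPythonTask used in the example later on this page):
from functools import partial

from idmtools.builders import SimulationBuilder
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask


def set_param(simulation, param, value):
    # set a single parameter on the simulation's task; the returned
    # dict is used to tag the simulation with the swept value
    return simulation.task.set_parameter(param, value)


# base task wrapping a (placeholder) Python model
base_task = JSONConfiguredPythonTask(script_path="model.py", envelope='parameters')

builder = SimulationBuilder()
# sweep 'a' over 0..4 and 'b' over three values: 5 x 3 = 15 simulations
builder.add_sweep_definition(partial(set_param, param="a"), range(5))
builder.add_sweep_definition(partial(set_param, param="b"), [10, 20, 30])

# attach the builder to a set of templated simulations and build an experiment
ts = TemplatedSimulations(base_task=base_task)
ts.add_builder(builder)
experiment = Experiment.from_template(ts, name="simulation builder sweep")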
There are two types of sweeping: cross and pair. Cross sweeping produces every combination of the swept values (for example, 3 values crossed with 3 values yields 3 x 3 = 9 parameter sets), while pair sweeping matches values positionally (3 values paired with 3 values yields 3 pairs). For example, pairing the values a, b, c with d, e, f gives the pairs a & d, b & e, and c & f.
For cross sweeping, crossing the same values a, b, c with d, e, f gives the following combinations:
- a & d
- a & e
- a & f
- b & d
- b & e
- b & f
- c & d
- c & e
- c & f
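The following is a minimal sketch of both arm types using ArmSimulationBuilder (the model path and the parameter names "a" and "b" are again placeholders; for a pair arm, the value lists must be the same length):
from functools import partial

from idmtools.builders import ArmSimulationBuilder, ArmType, SweepArm
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask


def set_param(simulation, param, value):
    # set a single parameter and tag the simulation with the swept value
    return simulation.task.set_parameter(param, value)


base_task = JSONConfiguredPythonTask(script_path="model.py", envelope='parameters')

builder = ArmSimulationBuilder()

# cross arm: every combination of the two value lists, 3 x 3 = 9 simulations
cross_arm = SweepArm(type=ArmType.cross)
cross_arm.add_sweep_definition(partial(set_param, param="a"), [1, 2, 3])
cross_arm.add_sweep_definition(partial(set_param, param="b"), [4, 5, 6])
builder.add_arm(cross_arm)

# pair arm: values are matched positionally, producing 3 simulations:
# (1, 4), (2, 5), (3, 6)
pair_arm = SweepArm(type=ArmType.pair)
pair_arm.add_sweep_definition(partial(set_param, param="a"), [1, 2, 3])
pair_arm.add_sweep_definition(partial(set_param, param="b"), [4, 5, 6])
builder.add_arm(pair_arm)

ts = TemplatedSimulations(base_task=base_task)
ts.add_builder(builder)
experiment = Experiment.from_template(ts, name="arm builder sweep")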
For Python models, these sweeps are supported through JSONConfiguredPythonTask. In the future, we will support additional configured tasks for Python and R models.
Creating sweeps without builders¶
You can also create sweeps without using builders, as in the following example:
"""
This file demonstrates how to create param sweeps without builders.
we create base task including our common assets, e.g. our python model to run
we create 5 simulations and for each simulation, we set parameter 'a' = [0,4] and 'b' = a + 10 using this task
then we are adding this to task to our Experiment to run our simulations
"""
import copy
import os
import sys
from idmtools.assets import AssetCollection
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH
if __name__ == "__main__":
    # define our platform
    platform = Platform('COMPS2')

    # create experiment object and define some extra assets
    assets_path = os.path.join(COMMON_INPUT_PATH, "python", "Assets")
    e = Experiment(name=os.path.split(sys.argv[0])[1],
                   tags={"string_tag": "test", "number_tag": 123},
                   assets=AssetCollection.from_directory(assets_path))

    # define paths to the model and an extra assets folder containing more common assets
    model_path = os.path.join(COMMON_INPUT_PATH, "python", "model.py")

    # define our base task, including the common assets. We could also add these assets to the experiment above
    base_task = JSONConfiguredPythonTask(script_path=model_path, envelope='parameters')
    base_simulation = Simulation.from_task(base_task)

    # now build our simulations
    for i in range(5):
        # first copy the simulation
        sim = copy.deepcopy(base_simulation)
        # configure it
        sim.task.set_parameter("a", i)
        sim.task.set_parameter("b", i + 10)
        # and add it to the simulations
        e.simulations.append(sim)

    # run the experiment
    e.run(platform=platform)

    # wait on it
    # in most real scenarios, you probably do not want to wait, as this will wait until all simulations
    # associated with the experiment are done. We do it in our examples to show the feature and to enable
    # testing of the scripts
    e.wait()

    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)
Running parameter sweeps in specific models¶
The following pages provide information about running parameter sweeps in particular models, and include example scripts.