Welcome to idmtools

idmtools is a collection of Python scripts and utilities created to streamline user interactions with disease models. This framework provides the tools necessary to complete projects, starting from the creation of input files (if required), to calibration of the model to data, to commissioning and running simulations, through the analysis of results. Modelers can use idmtools to run models locally or send suites of simulations to an HPC or other computing resource. This framework is free, open source, and model agnostic: it can be used to interact with a variety of models, such as custom models written in R or Python, or IDM's own EMOD. Additional functionality for interacting with EMOD is provided in the emod_api and emodpy packages.

idmtools workflow

idmtools includes a variety of options for each step of the modeling process. Because of this, the tool suite was developed in a modular fashion, so that users can select the utilities they wish to use. To simplify the desired workflow, facilitate the modeling process, and make the model (and its results) reusable and sharable, idmtools allows the user to create assets. Assets can be added at any level of the process, from running a specific task, through creating a simulation, to creating an experiment. This allows the user to create inputs based on their specific needs: they can be transient, or sharable across multiple simulations, as in the sketch below.
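For example, here is a minimal sketch of creating an asset and an asset collection with the idmtools.assets classes; the filename and content are illustrative:

from idmtools.assets import Asset, AssetCollection

# an in-memory asset; the filename and content are illustrative
config_asset = Asset(filename="config.json", content='{"population": 1000}')
# a collection of assets that can be attached to tasks, simulations, or experiments
ac = AssetCollection([config_asset])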

The diagram below shows how idmtools and each of the related packages are used in an end-to-end workflow using EMOD as the disease transmission model.

[Diagram: end-to-end idmtools workflow using EMOD as the disease transmission model]

Exact workflows for using idmtools are user-dependent and can include any of the tasks described in the sections below.

Installation

You can install idmtools in two different ways. If you intend to use idmtools as IDM builds it, follow the instructions in Basic installation. However, if you intend to modify the idmtools source code to add new functionality, follow the instructions in Developer installation. Whichever installation method you choose, the prerequisites are the same.

Prerequisites

idmtools uses Docker to run idmtools within a container, keeping the idmtools environment securely isolated. You must also have Python 3.7 or 3.8 (64-bit) and Python virtual environments installed to isolate your idmtools installation in a separate Python environment. If you do not already have these installed, see the links below for instructions.

  • Windows 10 Pro or Enterprise

  • Python 3.7 or 3.8 (64-bit) (https://www.python.org/downloads/release)

  • Python virtual environments

    Python virtual environments enable you to isolate your Python environments from one another and give you the option to run multiple versions of Python on the same computer. When using a virtual environment, you can indicate the version of Python you want to use and the packages you want to install, which will remain separate from other Python environments. You may use virtualenv, which requires a separate installation, but venv is recommended and included with Python 3.7+.

  • Docker (https://docs.docker.com/)

    Docker is optional for the basic installation of idmtools; it is needed only for running simulations or analysis locally. It is required for the developer installation.

Basic installation

Follow the steps below if you will use idmtools to run and analyze simulations, but will not make source code changes.

  1. Open a command prompt and create a virtual environment in any directory you choose. The command below names the environment “idmtools”, but you may use any desired name:

    python -m venv idmtools
    
  2. Activate the virtual environment:

     • On Windows, enter the following:

       idmtools\Scripts\activate

     • On Linux, enter the following:

       source idmtools/bin/activate

  3. Install idmtools packages:

    pip install idmtools[idm] --index-url=https://packages.idmod.org/api/pypi/pypi-production/simple
    

    Note

    When reinstalling idmtools, you should use the --no-cache-dir and --force-reinstall options, such as: pip install idmtools[idm] --index-url=https://packages.idmod.org/api/pypi/pypi-production/simple --no-cache-dir --force-reinstall. Otherwise, you may see the error idmtools not found when attempting to open and run one of the example Python scripts.

  4. Verify installation by pulling up idmtools help:

    idmtools --help
    
  5. When you are finished, deactivate the virtual environment by entering the following at a command prompt:

    deactivate
    
Developer installation

For development environment setup and installation options see https://github.com/InstituteforDiseaseModeling/idmtools#development-environment-setup.

Configuration

The configuration of idmtools is set in the idmtools.ini file. This file is normally located in the project directory, but idmtools will search up through the directory hierarchy and, lastly, check ~/.idmtools.ini on Linux and %LOCALAPPDATA%\idmtools\idmtools.ini on Windows. You can also specify the path to the idmtools.ini file by setting the IDMTOOLS_CONFIG_FILE environment variable. An idmtools.ini file is recommended when using idmtools; if you want to generate one, see the documentation about the Configuration Wizard. Configuration values can also be set using environment variables. The variable name uses the format IDMTOOLS_SECTION_OPTION, except for common options, which use the format IDMTOOLS_OPTION.

If no configuration file is found, an error is displayed. To suppress this error, set IDMTOOLS_NO_CONFIG_WARNING=1.
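For example, here is a minimal sketch of the environment-variable forms described above, set from Python before idmtools is used; the option values are illustrative:

import os

# [LOGGING] section option "level" -> IDMTOOLS_LOGGING_LEVEL
os.environ["IDMTOOLS_LOGGING_LEVEL"] = "DEBUG"
# common option "max_threads" -> IDMTOOLS_MAX_THREADS
os.environ["IDMTOOLS_MAX_THREADS"] = "8"
# suppress the missing-configuration warning
os.environ["IDMTOOLS_NO_CONFIG_WARNING"] = "1"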

Global parameters

The idmtools.ini file includes some global parameters that drive features within idmtools. These primarily control features around workers and threads and are defined within the [COMMON] section of idmtools.ini. Most likely, you will not need to change these.

The following includes an example of the [COMMON] section of idmtools.ini with the default settings:

[COMMON]
max_threads = 16
sims_per_thread = 20
max_local_sims = 6
max_workers = 16
batch_size = 10
  • max_threads - Maximum number of threads for analysis and other multi-threaded activities.

  • sims_per_thread - How many simulations per thread during simulation creation.

  • max_local_sims - Maximum simulations to run locally.

  • max_workers - Maximum number of workers processing in parallel.

  • batch_size - Maximum batch size when retrieving simulations.

Logging overview

idmtools includes built-in logging, which is configured in the [LOGGING] section of the idmtools.ini file, and includes the following parameters: level, console, and filename. Default settings are shown in the following example:

[LOGGING]
level = INFO
console = off
filename = idmtools.log

Logging verbosity is controlled by the level parameter, set to one of the options listed below. The options are in descending order: the lower the item in the list, the more verbose the logging.

CRITICAL
ERROR
WARNING
INFO
DEBUG

Console logging is enabled by setting the console parameter to "on". The filename parameter can be set to something other than the default filename, "idmtools.log".

See Enabling/Disabling/Changing Log Level at Runtime for an example on enabling logging/changing levels at runtime.
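As a minimal sketch, because idmtools logging is built on Python's standard logging module, you can raise verbosity at runtime like the following; the logger name is an assumption, so see the reference above for the supported approach:

import logging

# adjust idmtools verbosity at runtime; the "idmtools" logger name is an assumption
logging.getLogger("idmtools").setLevel(logging.DEBUG)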

idmtools.ini wizard

You can use the config command to create a configuration block in your project’s idmtools.ini file.

$ idmtools config --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools config [OPTIONS] COMMAND [ARGS]...

  Contains commands related to the creation of idmtools.ini.

  With the config command, you can :  - Generate an idmtools.ini file in the
  current directory  - Add a configuration block

Options:
  --config_path FILE              Path to the idmtools.ini file
  --global-config / --no-global-config
                                  Allow generating config in the platform
                                  default global location
  --help                          Show this message and exit.

Commands:
  block  Command to create/replace a block in the selected idmtools.ini.

If you do not specify a config path, the command will use the idmtools.ini file in the current directory. To edit a different file, use the --config_path argument to specify its path, such as: idmtools config --config_path C:\my_project\idmtools.ini.

Use the block command to start the wizard that will guide you through the creation of a configuration block in the selected idmtools.ini, for example: idmtools config block.

Here is a demo of the command in action:

_images/config-wizard.svg

Below is an example configuration file:

# You can also override any configuration option using environment variables
# For any common variable, you can use
# IDMTOOLS_OPTION
#
# For any other section, you can use IDMTOOLS_SECTION_OPTION
[COMMON]
# Number of threads idmtools will use for analysis and other multi-threaded activities
max_threads = 16

# How many simulations per threads during simulation creation
sims_per_thread = 20

# Maximum number of LOCAL simulations run simultaneously
max_local_sims = 6

# Maximum number of workers processing in parallel
max_workers = 16

# What type of ids should idmtools use internally
# To see the available generators, run: idmtools info plugins id_generators
id_generator = uuid

# You can also set number of workers per CPU
# If you had 16 cpus and set to 2, 32 workers would be created
# workers_per_cpu = 2

# Maximum batch size to retrieve simulations
batch_size = 50

# You can disable progress bars by using the following options
# disable_progress_bar = true

# When using a development version of idmtools, you will get a log message about the version being development. You can disable using this item
# hide_dev_warning = true

# You can suppress the statement about the configuration used by using
# NO_PRINT_CONFIG_USED = true

# Toggles if platform blocks will be printed
# SHOW_PLATFORM_CONFIG = true

[COMPS]
type = COMPS
endpoint = https://comps.idmod.org
environment = Belegost
priority = Lowest
simulation_root = $COMPS_PATH(USER)\output
node_group = emod_abcd
num_retries = 0
num_cores = 1
max_workers = 16
batch_size = 10
exclusive = False

[COMPS2]
type = COMPS
endpoint = https://comps2.idmod.org
environment = Bayesian
priority = Lowest
simulation_root = $COMPS_PATH(USER)\output
node_group = emod_abcd
num_retries = 0
num_cores = 1
max_workers = 16
batch_size = 10
exclusive = False
# Minimum time in seconds between commissions when batching. Valid values are between 10 and 300
min_time_between_commissions = 10

[Logging]
# Options are in descending order. The lower the item in the list, the more verbose the logging will be
# CRITICAL, ERROR, SUCCESS, WARNING, NOTICE, INFO, VERBOSE, DEBUG
level = DEBUG
console = off
# Set this to an empty value or "-1" to disable file logging
filename = idmtools.log
# You can change the logging level for file only using the file level option
# file_level = DEBUG

# Toggle for colored logs. Generally you want this enabled
# use_colored_logs = on

# Toggle user output. Defaults to on. THIS SHOULD GENERALLY NOT BE USED
# USER_OUTPUT = on

# This is a test we used to validate loading local from section block
[Custom_Local]
type = Local

[SLURM]
type = COMPS
endpoint = https://comps2.idmod.org
environment = SlurmStage
priority = Highest
simulation_root = $COMPS_PATH(USER)\output
num_retries = 0
num_cores = 1
exclusive = False
max_workers = 16
batch_size = 10

Supported platforms

idmtools currently supports running on the following platforms:

COMPS: COmputational Modeling Platform Service (COMPS) is a high performance computing cluster used by employees and collaborators at IDM. To support running simulations and analysis on COMPS, idmtools includes the following modules: idmtools_platform_comps.

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

SLURM: You can also run simulations on the open-source SLURM platform for large and small Linux clusters. For more information, see SLURM.

If you need to use a different platform, you can also add a new platform to idmtools by creating a new platform plugin, as described in Create platform plugin.

You can use the idmtools.ini file to configure platform-specific settings, as the following example shows for COMPS:

[COMPS]
type = COMPS
endpoint = https://comps.idmod.org
environment = Belegost
priority = Lowest
simulation_root = $COMPS_PATH(USER)\output
node_group = emod_abcd
num_retries = 0
num_cores = 1
max_workers = 16
batch_size = 10
exclusive = False

As an alternative to the INI-based configuration, some platforms, such as COMPS, provide predefined configuration aliases. With these aliases, you can use a known environment without a configuration file. To see a list of aliases, use the CLI command idmtools info plugins platform-aliases.

Within your code you use the Platform class to specify which platform idmtools will use. For example, the following excerpt sets the platform to COMPS and overrides the priority and node_group settings:

platform = Platform('COMPS', priority='AboveNormal', node_group='emod_a')

You use the Platform class whether you’re building or running an experiment, or running analysis on output from simulations.

For additional information about configuring idmtools.ini, see Configuration.

COMPS

The COMPS platform allows use of the COMPS HPC. COMPS has multiple environments, most of which have predefined aliases that can be used to quickly select an environment. Here is a list of predefined environments:

  • BELEGOST

  • BAYESIAN

  • SLURMSTAGE

  • CALCULON

  • SLURM

  • SLURM2

  • BOXY

You can also see the list of aliases and configuration options using the CLI command idmtools info plugins platform-aliases:

$ idmtools info plugins platform-aliases
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
+---------------------------+-------------------------------------------------------------------------+
| Platform Plugin Aliases   | Configuration Options                                                   |
|---------------------------+-------------------------------------------------------------------------|
| SLURM_LOCAL               | {'mode': 'local', 'job_directory': '/home/docs'}                        |
| SLURM_BRIDGED             | {'mode': 'bridged', 'job_directory': '/home/docs'}                      |
| BELEGOST                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'Belegost'}      |
| CALCULON                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| IDMCLOUD                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'IDMcloud'}      |
| NDCLOUD                   | {'endpoint': 'https://comps.idmod.org', 'environment': 'NDcloud'}       |
| BMGF_IPMCLOUD             | {'endpoint': 'https://comps.idmod.org', 'environment': 'BMGF_IPMcloud'} |
| QSTART                    | {'endpoint': 'https://comps.idmod.org', 'environment': 'Qstart'}        |
| BAYESIAN                  | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Bayesian'}     |
| SLURMSTAGE                | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| CUMULUS                   | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Cumulus'}      |
| SLURM                     | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| SLURM2                    | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BOXY                      | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BELEGOST_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'Belegost'}      |
| CALCULON_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| IDMCLOUD_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'IDMcloud'}      |
| NDCLOUD_SSMT              | {'endpoint': 'https://comps.idmod.org', 'environment': 'NDcloud'}       |
| BMGF_IPMCLOUD_SSMT        | {'endpoint': 'https://comps.idmod.org', 'environment': 'BMGF_IPMcloud'} |
| QSTART_SSMT               | {'endpoint': 'https://comps.idmod.org', 'environment': 'Qstart'}        |
| BAYESIAN_SSMT             | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Bayesian'}     |
| SLURMSTAGE_SSMT           | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| CUMULUS_SSMT              | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Cumulus'}      |
| SLURM_SSMT                | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| SLURM2_SSMT               | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BOXY_SSMT                 | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
+---------------------------+-------------------------------------------------------------------------+
Utilities Unique to COMPS
Add to asset collection

idmtools allows you to add assets, such as input files and model libraries, to an asset collection on COMPS. This allows you to access and use these assets when running model simulations on the COMPS platform.

Add assets

There are two primary ways of adding assets (experiment and task):

Add assets to experiment

There are multiple ways of adding assets to an Experiment:

  • Add a directory to an experiment/workitem:

    experiment.assets.add_directory(assets_directory=os.path.join("inputs", "python_model_with_deps", "Assets"))
    
  • Add a list of Assets or an AssetCollection to an Experiment:

    ac = AssetCollection.from_directory(assets_directory=os.path.abspath(os.path.join(COMMON_INPUT_PATH, "assets", "collections")))
    experiment.add_assets(ac)
    
  • Add a file as an asset to an Experiment:

    experiment.add_asset(os.path.join("inputs", "scheduling", "commandline_model.py"))
    
Add assets via task (then add the task to an experiment/workitem)

There are multiple ways of adding assets via a task:

  • Add files to a task's common_assets or transient_assets:

    task.common_assets.add_asset(os.path.join(INPUT_PATH, os_type, "bin", "schema.json"))
    task.transient_assets.add_asset(os.path.join(INPUT_PATH, "campaign_template.json"))
    
  • Add a list of Assets or an AssetCollection to a task:

    task.common_assets.add_assets(AssetCollection.from_id_file("covasim.id"))
    
  • Add from directory:

    task.common_assets.add_directory(assets_directory=os.path.join(COMMON_INPUT_PATH, "python", "Assets"))
    
Add libraries

To add a specific library to an asset collection, first add the library package name to a requirements file, either lib_requirements_linux.txt or lib_requirements_wins.txt, and place the file in the root directory containing your model files. Then use the add_libs_utils.py script to add the library to the asset collection on COMPS.

Add to requirements file

The following shows the contents of an example requirements file:

dbfread~=2.0.7
PyCRS~=1.0.2
ete3~=3.1.1
czml~=0.3.3
pygeoif~=0.7
pyshp~=2.1.0
rasterio~=1.1.5
matplotlib~=3.3.4
pandas~=1.2.3
h5py~=2.10.0
Upload library to asset collection

After including the desired libraries in the requirements file, use the following Python script, add_libs_utils.py, to upload them to your asset collection:

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.python_requirements_ac.requirements_to_asset_collection import \
    RequirementsToAssetCollection

def main():
    #platform = Platform('COMPS2')
    platform = Platform('SLURM')

    env = platform.environment
    if env == 'Belegost' or env == 'Bayesian':  # COMPS or COMPS2
        pl = RequirementsToAssetCollection(platform, requirements_path='lib_requirements_wins.txt',
                                           local_wheels=['GDAL-3.1.2-cp36-cp36m-win_amd64.whl',
                                           'rasterio-1.1.5-cp36-cp36m-win_amd64.whl',
                                           'PyQt4-4.11.4-cp36-cp36m-win_amd64.whl'])
    else:  # SLURM env
        pl = RequirementsToAssetCollection(platform, requirements_path='lib_requirements_linux.txt',
                                           local_wheels=['GDAL-3.1.2-cp36-cp36m-manylinux1_x86_64.whl'])

    ac_id = pl.run(rerun=False)  # only change to True if you want to regenerate the same asset collection again
    print('ac_id: ', ac_id)
    with open(env + '_asset_collection.txt', 'w') as fn:
        fn.write(str(ac_id))

if __name__ == '__main__':
    main()
Assetize outputs Workitem

Assetizing outputs allows you to create an AssetCollection from the outputs of a previous Experiment, Simulation, workitem (GenericWorkItem, SSMTWorkItem, SingularityBuildWorkItem), or other asset collections. In addition, you can create assets from multiple items of these types; for example, three simulations and an asset collection, or an experiment and a workitem. AssetizeOutput is implemented as a workitem that depends on other items to complete before running.

Assetized outputs are available on COMPS in the asset collection for the associated workitem.

AssetizeOutput uses glob patterns to select or deselect files. See https://docs.python.org/3/library/glob.html for details on glob patterns. The default configuration assetizes all outputs (the "**" pattern) and excludes the ".log", "StdOut.txt", "StdErr.txt", and "WorkOrder.json" files.

You can see a list of files that will be assetized without assetizing them by using the dry_run parameter. The file list will be in the output of the workitem.
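Here is a hedged sketch of the workflow described above; the module path, field names, and the experiment ID are assumptions, so consult the AssetizeOutput class details for the exact API:

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput

platform = Platform('CALCULON')
# select CSV outputs of an existing experiment; the ID below is a placeholder
ao = AssetizeOutput(file_patterns=["output/*.csv"],
                    related_experiments=["<experiment-id>"],
                    dry_run=True)  # list matching files without creating assets
ao.run(wait_until_done=True, platform=platform)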

See the Cookbook for examples of assetizing outputs.

Also review the class details for AssetizeOutput.

You can also run this command from the CLI. For details, see COMPS CLI reference.

Errors

See COMPS Errors reference

DownloadWorkItem

DownloadWorkItem lets you download files using glob patterns, either from code or from the CLI. You can download files from one or many experiments, simulations, workitems, and asset collections.

Downloads use glob patterns to select or deselect files. See https://docs.python.org/3/library/glob.html for details on glob patterns. The default configuration downloads all outputs (the "**" pattern) and excludes the ".log", "StdOut.txt", "StdErr.txt", and "WorkOrder.json" files.

You can see a list of files that will be downloaded without downloading them by using the dry_run parameter. The file list will be in the output of the work item or printed on the CLI.
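Here is a hedged sketch; the module path, field names, and the experiment ID are assumptions, so consult the DownloadWorkItem class details for the exact API:

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.download.download import DownloadWorkItem

platform = Platform('CALCULON')
# download CSV outputs of an existing experiment; the ID below is a placeholder
dl = DownloadWorkItem(related_experiments=["<experiment-id>"],
                      file_patterns=["output/*.csv"],
                      output_path="downloaded_output")
dl.run(wait_until_done=True, platform=platform)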

Also review the class details DownloadWorkItem.

You can also run this command from the CLI. For details, see COMPS CLI reference.

Download errors

See COMPS Errors reference

COMPS errors

The following errors mostly occur in SSMT workitems that run on COMPS:

  • NoFileFound - This means the patterns you specified resulted in no files found. Review your patterns.

  • CrossEnvironmentFilterNotSupport - This occurs when you attempt to filter an item in a COMPS environment that does not match the workitem's environment. Use the same environment for your workitem as for your original item.

  • AtLeastOneItemToWatch - You cannot run assetize without linking at least one item.

  • DuplicateAsset - The resulting asset collection would have duplicate assets. See the error for a list of duplicate assets. This often occurs when filtering either experiments or multiple items. With experiments, this can be avoided by using simulation_prefix_format_str to place the assets into sub-folders, as in the sketch below. When processing multiple workitems with files that would overlap, you can use work_item_prefix_format_str. For other cases, such as combining two asset collections that contain an overlapping file, you may need to do multiple runs with exclude patterns.
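For example, here is a hedged sketch of using simulation_prefix_format_str to avoid collisions; the field names and format string are assumptions:

from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput

# nest each simulation's files under its ID so identical filenames do not collide
ao = AssetizeOutput(file_patterns=["**"],
                    related_experiments=["<experiment-id>"],
                    simulation_prefix_format_str="{simulation.id}")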

COMPS scheduling

idmtools supports job scheduling on the COMPS platform, including multiple scenarios depending upon the needs of your specific research. For example, you could schedule your simulations to run under a single process on the same node with a specified number of cores. For more information about this and other supported scenarios, see Scheduling Scenarios. To use the full scheduling capabilities included within COMPS you must add the workorder.json file as a transient asset. This is a one-time task to complete for your project. For more information about scheduling configuration, see Scheduling Configuration. Examples are provided that you can leverage to help get started and gain a better understanding. Scheduling Schemas enumerate the available options that may be included in workorder.json.

Scheduling scenarios

Choosing the correct scheduling scenario will depend upon your specific research needs and requirements. The following lists some of the common scenarios supported:

  • N cores, N processes - useful for single-threaded or MPI-enabled workloads, such as EMOD.

  • N cores, 1 node, 1 process - useful for models that spawn multiple worker threads (such as GenEpi) or that have large memory usage, where the number of cores is an indicator of memory usage.

  • 1 node, N processes - useful for models with high migration and interprocess communication. By running on the same node, MPI can use shared memory, as opposed to slower TCP sockets over multiple nodes. This may be useful for some scenarios using EMOD or other MPI-enabled workloads.

Scheduling configuration

By configuring a workorder.json file and adding it as a transient asset, you can take advantage of the full scheduling support provided with COMPS. Scheduling information included in the workorder.json file takes precedence over any scheduling information in the idmtools.ini file or scheduling parameters passed to Platform. The following examples show some of the options available in a workorder.json file.

Example workorder.json for HPC clusters:

{
  "Command": "python -c \"print('hello test')\"",
  "NodeGroupName": "idm_abcd",
  "NumCores": 1,
  "SingleNode": false,
  "Exclusive": false
}

Example workorder.json for SLURM clusters:

{
  "Command": "python3 Assets/model1.py",
  "NodeGroupName": "idm_abcd",
  "NumCores": 1,
  "NumProcesses": 1,
  "NumNodes": 1,
  "Environment": {
    "key1": "value1",
    "key2:": "value2",
    "PYTHONPATH": "$PYTHONPATH:$PWD/Assets:$PWD/Assets/site-packages",
    "PATH": "$PATH:$PWD/Assets:$PWD/Assets/site-packages"
  }
}

In addition to including a workorder.json file, you must also pass the scheduling=True parameter when running simulations, for example:

experiment.run(scheduling=True)
Add workorder.json as a transient asset

To include the workorder.json file as a transient asset you can either add an existing workorder.json using the add_work_order method or dynamically create one using the add_schedule_config method, both methods included in the Scheduling class.

Add existing workorder.json:

add_work_order(ts, file_path=os.path.join(COMMON_INPUT_PATH, "scheduling", "slurm", "WorkOrder.json"))

Dynamically create workorder.json:

add_schedule_config(ts, command="python -c \"print('hello test')\"", node_group_name='idm_abcd', num_cores=2,
                    NumProcesses=1, NumNodes=1,
                    Environment={"key1": "value1", "key2": "value2"})
Scheduling example

For additional information and specifics of using a workorder.json file within Python, you can begin with the following:

# In this example, we demonstrate how to use WorkOrder.json to create simulations on an MSHPC cluster.
# If WorkOrder.json is used correctly, simulations will be created based on the Command in WorkOrder.json;
# all commands from the task will be ignored.

import os
import sys
from functools import partial
from typing import Any, Dict

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_platform_comps.utils.scheduling import add_work_order

# first define our base task. please see the detailed explanation in examples/python_models/python_sim.py
# if we do not use WorkOrder.json, this task will create a simulation command run as "python Assets/model.py" in COMPS
# but for this example, we will use WorkOrder.json to override this command, so here the task's script can be anything
task = JSONConfiguredPythonTask(script_path=os.path.join("inputs", "python_model_with_deps", "Assets", "model.py"),
                                parameters=(dict(c=0)))

# now let's use this task to create a TemplatedSimulation builder. This will build new simulations from sweep builders
# we will define later. We can also use it to manipulate the base_task or the base_simulation
ts = TemplatedSimulations(base_task=task)

# We can define common metadata like tags across all the simulations using the base_simulation object
ts.base_simulation.tags['tag1'] = 1

# load the WorkOrder.json file from local disk to each simulation via the task. the actual command run in COMPS is contained in this file
add_work_order(ts, file_path=os.path.join("inputs", "scheduling", "hpc", "WorkOrder.json"))

# Since we have our templated simulation object now, let's define our sweeps
# To do that we need to use a builder
builder = SimulationBuilder()


# define a utility function that will update a single parameter at a
# time on the model and add that param/value pair as a tag on our simulation.
def param_update(simulation: Simulation, param: str, value: Any) -> Dict[str, Any]:
    """
    This function is called during sweeping allowing us to pass the generated sweep values to our Task Configuration

    We always receive a Simulation object. We know that simulations all have tasks and that for our particular set
    of simulations they will all include JSONConfiguredPythonTask. We configure the model with calls to set_parameter
    to update the config. In addition, we can return a dictionary of tags to add to the simulations, so we return
    the output of the 'set_parameter' call since it returns the param/value pair we set

    Args:
        simulation: Simulation we are configuring
        param: Param string passed to use
        value: Value to set param to

    Returns:

    """
    return simulation.task.set_parameter(param, value)


# now add the sweep to our builder
builder.add_sweep_definition(partial(param_update, param="a"), range(3))
builder.add_sweep_definition(partial(param_update, param="b"), [1, 2, 3])
ts.add_builder(builder)

# Now we can create our Experiment using our template builder
experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])
# Add our own custom tag to simulation
experiment.tags["tag1"] = 1
# And maybe some custom Experiment Level Assets
experiment.assets.add_directory(assets_directory=os.path.join("inputs", "python_model_with_deps", "Assets"))

with Platform('IDMCloud') as platform:
    # Call run() with 'scheduling=True' to run simulations with scheduling using WorkOrder.json(loaded above)
    # There are few ways to schedule computation resources in COMPS:
    #    1. add_work_order() method to add WorkOrder.json file to simulations as transient asset
    #    2. add_schedule_config() method can be used to add dynamic WorkOrder.json to simulations as transient asset
    #    3. add additional parameters to Platform creation with Platform(**kwargs) in kwargs
    #    4. idmtools.ini
    # the order of precedence is WorkOrder.json > Platform() > idmtools.ini
    # with the experiment.run method, you can also pass in other options like 'priority=Highest' here to override any
    # priority value either passed in from idmtools.ini or defined in Platform(**kwargs)
    experiment.run(True, scheduling=True, priority='Highest')
    # use system status as the exit code
    sys.exit(0 if experiment.succeeded else -1)

To see the list of platform aliases, such as BELEGOST and CALCULON, use the following CLI command: idmtools info plugins platform-aliases.

Scheduling schemas

The following schemas, for both HPC and SLURM clusters on COMPS, list the available options you are able to include within the workorder.json file.

HPC:

{
  "title": "MSHPC job WorkOrder Schema",
  "$schema": "http://json-schema.org/draft-04/schema",
  "type": "object",
  "required": [
    "Command"
  ],
  "properties": {
    "Command": {
      "type": "string",
      "minLength": 1,
      "description": "The command to run, including binary and all arguments"
    },
    "NodeGroupName": {
      "type": "string",
      "minLength": 1,
      "description": "The cluster node-group to commission the job to"
    },
    "NumCores": {
      "type": "integer",
      "minimum": 1,
      "description": "The number of cores to reserve"
    },
    "SingleNode": {
      "type": "boolean",
      "description": "A flag to limit all reserved cores to being on the same compute node"
    },
    "Exclusive": {
      "type": "boolean",
      "description": "A flag that controls whether nodes should be exclusively allocated to this job"
    }
  },
  "additionalProperties": false
}

SLURM:

{
  "title": "SLURM job WorkOrder Schema",
  "$schema": "http://json-schema.org/draft-04/schema",
  "type": "object",
  "required": [
    "Command"
  ],
  "properties": {
    "Command": {
      "type": "string",
      "minLength": 1,
      "description": "The command to run, including binary and all arguments"
    },
    "NodeGroupName": {
      "type": "string",
      "minLength": 1,
      "description": "The cluster node-group to commission to"
    },
    "NumCores": {
      "type": "integer",
      "minimum": 1,
      "description": "The number of cores to reserve"
    },
    "NumNodes": {
      "type": "integer",
      "minimum": 1,
      "description": "The number of nodes to schedule"
    },
    "NumProcesses": {
      "type": "integer",
      "minimum": 1,
      "description": "The number of processes to execute"
    },
    "EnableMpi": {
      "type": "boolean",
      "description": "A flag that controls whether to run the job with mpiexec (i.e. whether the job will use MPI)"
    },
    "Environment": {
      "type": "object",
      "description": "Environment variables to set in the job environment; these can be dynamically expanded (e.g. $PATH)",
      "additionalProperties": {
        "type": "string"
      }
    }
  },
  "additionalProperties": false
}
Singularity Build

The COMPS platform supports building singularity images remotely with SingularityBuildWorkItem.

The workitem supports a few different scenarios for creating Singularity images:

Building from definition files

You can build from a Singularity definition file. See Definition Document.

To build using a definition file, set the definition_file parameter to the path of a definition file. You can specify inputs to be consumed in the build by adding them to the assets or transient_assets fields. It is generally best to use assets, since they take advantage of caching. Remember that any file added to assets will need to be referenced as Assets/filename in your definition file.
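A minimal sketch of building from a definition file follows; the import path, file names, and run parameters are assumptions, so check the SingularityBuildWorkItem class details for the exact API:

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.singularity_build import SingularityBuildWorkItem

platform = Platform('CALCULON')
# build remotely from a local definition file (the path is a placeholder)
sbi = SingularityBuildWorkItem(definition_file="my_container.def")
# files added to assets are available as Assets/<filename> during the build
sbi.assets.add_asset("requirements.txt")
sbi.run(wait_until_done=True, platform=platform)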

Building from definition string

You can build from a definition provided as a string using the definition_content parameter. Be sure the is_template parameter is False.

Building from definition template

The SingularityBuildWorkItem can also build using Jinja templates. The template should produce a Singularity definition file when rendered.

To use a template, specify the template using definition_content and set is_template to True. You can also use template_args to define items to be passed to the template when rendered. In addition to those items, the current environment variables are accessible as env and the workitem is accessible as sbi.
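A hedged sketch of a templated build, where the template content and template_args are illustrative:

from idmtools_platform_comps.utils.singularity_build import SingularityBuildWorkItem

# a Jinja template that renders to a Singularity definition file
template = """Bootstrap: docker
From: python:3.9

%post
    pip install {{ packages | join(' ') }}
"""

sbi = SingularityBuildWorkItem(definition_content=template, is_template=True,
                               template_args=dict(packages=["numpy", "pandas"]))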

For details on Jinja’s syntax see https://jinja.palletsprojects.com/

SLURM

The SLURM platform allows use of the Simple Linux Utility for Resource Management (SLURM). "Slurm is an open source, fault-tolerant, and highly scalable cluster management and job scheduling system for large and small Linux clusters," as quoted from https://slurm.schedmd.com/overview.html. For high-level architecture information about SLURM, see https://slurm.schedmd.com/quickstart.html#arch. For architecture and included-packages information about idmtools and SLURM, see Architecture and packages reference.

Prerequisites
  • Linux client

  • SLURM cluster access and general understanding

  • Python 3.7 or 3.8 (64-bit) (https://www.python.org/downloads/release)

  • Python virtual environments

    Python virtual environments enable you to isolate your Python environments from one another and give you the option to run multiple versions of Python on the same computer. When using a virtual environment, you can indicate the version of Python you want to use and the packages you want to install, which will remain separate from other Python environments. You may use virtualenv, which requires a separate installation, but venv is recommended and included with Python 3.7+.

Configuration

The Slurm platform requires you to provide some configuration elements to define its operation.

You can define these parameters in your idmtools.ini file by adding a configuration block for Slurm.

idmtools.ini example:

[SLURM_LOCAL]
type = SLURM
job_directory = /home/userxyz/experiments
You can also do this directly from code by passing the minimum requirements.

Python script example:

Platform('SLURM_LOCAL', job_directory='/home/userxyz/experiments')
Configuration Options

  • job_directory (required) - This defines the location that idmtools will use to manage experiments on the SLURM cluster. The directory should be located somewhere that is mounted on all SLURM nodes at the same location. If you are unsure, ask your SLURM server administrator for guidance.

  • mode - Allows you to control the operational mode for idmtools. There are two modes currently supported: local and bridged. Bridged mode is required if you are running from within a Singularity container. See Operation Modes for details.

Operation Modes

The SLURM platform supports two modes of operation, Local and Bridged. Local is the default mode.

Bridged

Bridged mode allows you to utilize the emodpy/idmtools Singularity environment containers. This is accomplished through a script that manages the communication with Slurm from outside the container.

Bridged mode requires the idmtools-slurm-utils package, which you can install with pip.

To use bridged mode, before running your container you must run the bridge script outside the container:

idmtools-slurm-bridge

If you plan on using the same terminal, you may want to run the bridge in the background:

idmtools-slurm-bridge &

Once you have the bridge running, you can now run idmtools scripts from within Singularity containers. Ensure your platform is configured to use bridged mode:

singularity exec idmtools_1.6.8 bash
$ python my_script.py
Tips

When using the slurm bridge, keep the following tips in mind:

  1. When you background the process by running:

    idmtools-slurm-bridge &
    

    To bring it back to the foreground, you will need to run:

    fg
    

    See Foreground and Background Processes in Linux (https://www.linuxshelltips.com/foreground-and-background-process-in-linux/).

  2. You may need to load modules before executing the bridge. See the Modules documentation (https://curc.readthedocs.io/en/latest/compute/modules.html) for more details.

Local

Local operation is meant to be executed directly on a SLURM cluster node.

Recommendations
  • Simulation results and files should be backed up

Getting started

After you have installed idmtools (Basic installation or Developer installation) and met the above listed prerequisites, you can begin submitting and running jobs to your SLURM cluster with idmtools. First verify your SLURM platform is running. Then submit a job with the included example Python script.

Verify SLURM platform is running

Type the following at a terminal session to verify that SLURM platform is running:

sinfo -a

This will list your available partitions and their status. You should see output similar to the following:

PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
LocalQ*      up   infinite      1   idle localhost
Submit a job

Run the following included Python script to submit and run a job on your SLURM cluster:

/examples/native_slurm/python_sims.py

Note

Workitems and AssetCollections are not supported on the SLURM platform with idmtools. If you've used the COMPS platform with idmtools, you may have scripts using these objects. You would need to update these scripts to remove these objects in order to run them on the SLURM platform.

Configuration and options

idmtools supports SLURM options for configuring, submitting, and running jobs. Some of the sbatch options (https://slurm.schedmd.com/sbatch.html) are exposed as parameters when making calls to idmtools_platform_slurm.platform_operations.SlurmPlatform.
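The following is a hedged sketch of passing such options when constructing the platform; the parameter names (partition, time, account) are assumptions, so check the SlurmPlatform fields documented for your idmtools version:

from idmtools.core.platform_factory import Platform

# sbatch-style options forwarded to SLURM; the names below are assumptions
platform = Platform('SLURM_LOCAL',
                    job_directory='/home/userxyz/experiments',
                    partition='general',    # hypothetical partition name
                    time='01:00:00',
                    account='myaccount')    # hypothetical account name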

Cancel jobs

To cancel a submitted job (simulation/experiment/suite) on the SLURM cluster, you must use the scancel SLURM command from a terminal session connected to the SLURM cluster. idmtools submits jobs as job arrays; the job ID of the job array is used with scancel for cancelling jobs. For more information about scancel, see https://slurm.schedmd.com/scancel.html.

View jobs in queue

To view the job ID associated with the job array, use the squeue command:

squeue

For more information about squeue, see https://slurm.schedmd.com/squeue.html.

Cancel a specific job

To cancel a specific job from the job array, specify the job ID of the job array and the index number:

scancel job-id-number-and-index-number-here
Cancel all jobs

To cancel all jobs within the job array, specify only the job ID of the job array:

scancel job-id-number-only-here
Run Script As Slurm Job

This is a temporary workaround; users can follow these steps to run a Python script as a Slurm job.

In the future we may develop a utility that runs a Python script as a Slurm job automatically.

Steps

This guide uses the Northwestern University QUEST Slurm system as an example. In general, users may need to modify the steps based on their own Slurm environment.

Assume the user has a virtual environment created and activated.

1. Have the target script ready, say my_script.py. Suppose you have a folder structure like:

script_folder
   my_script.py
   ......

2. Within the script folder, create a batch file named sbatch.sh.

sbatch.sh has content like the following:

#!/bin/bash

#SBATCH --partition=b1139
#SBATCH --time=10:00:00
#SBATCH --account=b1139

#SBATCH --output=stdout.txt
#SBATCH --error=stderr.txt

# replace with your script file
python3 my_script.py
RESULT=$?

exit $RESULT

Note

Depending on the user's Slurm system, the above content may differ slightly.

3. Run the target script as a Slurm job.

Execute the following command from a console (in the virtual environment):

cd path_to_script_folder

then:

sbatch sbatch.sh

Note

Any output from my_script.py is stored in the file stdout.txt under the current folder. For example, if my_script.py kicks off another Slurm job, that job's Slurm ID can be found in stdout.txt.

Create platform plugin

You can add a new platform to idmtools by creating a new platform plugin, as described in the following sections:

Adding fields to the config CLI

If you are developing a new platform plugin, you will need to add some metadata to the Platform class's fields. All fields with a help key in their metadata will be picked up by the idmtools config block command line, allowing a user to set a value. help should contain the help text that will be displayed. A choices key can optionally be present to restrict the available choices.

For example, for the given platform:

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    field1: int = field(default=1, metadata={"help": "This is the first field."})
    internal_field: int = field(default=2)
    field2: str = field(default="a", metadata={"help": "This is the second field.", "choices": ["a", "b", "c"]})

The CLI wizard will pick up field1 and field2 and ask the user to provide values. The type of the field will be enforced and for field2, the user will have to select among the choices.

Modify fields metadata at runtime

Now, what happens if we want to change the help text, choices, or default value of a field based on a previously set field? For example, let's consider a platform where the user needs to specify an endpoint. The endpoint is used to retrieve a list of environments, and we want the user to select one of them.

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    endpoint: str = field(default="https://myapi.com", metadata={"help": "Enter the URL of the endpoint."})
    environment: str = field(default=None, metadata={"help": "Select an environment."})

The list of environments is dependent on the endpoint value. To achieve this, we need to provide a callback function to the metadata. This function will receive all the previously set user parameters, and will have the opportunity to modify the current field’s choices, default, and help parameters.

Let's create a function that queries the endpoint to get the list of environments and sets them as choices, selecting the first one as the default.

def environment_list(previous_settings: Dict, current_field: Field) -> Dict:
    """
    Allows the CLI to provide a list of available environments.
    Uses the previous_settings to get the endpoint to query for environments.
    Args:
        previous_settings: Previous settings set by the user in the CLI.
        current_field: Current field specs.

    Returns:
        Updates to the choices and default.
    """
    # Retrieve the endpoint set by the user
    # The key of the previous_settings is the name of the field we want the value of
    endpoint = previous_settings["endpoint"]

    # Query the API for environments
    client.connect(endpoint)
    environments = client.get_all_environments()

    # If the current field doesn't have a set default already, set one by using the first environment
    # If the field in the platform class has a default, consider it first
    if current_field.default not in environments:
        default_env = environments[0]
    else:
        default_env = current_field.default

    # Return a dictionary that will be applied to the current field
    # Setting the new choices and default at runtime
    return {"choices": environments, "default": default_env}

We can then use this function on the field, and the user will be prompted with the correct list of available environments.

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    endpoint: str = field(default="https://myapi.com", metadata={"help": "Enter the URL of the endpoint"})
    environment: str = field(default=None, metadata={"help": "Select an environment.", "callback": environment_list})
Fields validation

By default the CLI will provide validation on type. For example an int field, will only accept an integer value. To fine tune this validation, we can leverage the validation key of the metadata.

For example, if you want to create a field that has an integer value between 1 and 10, you can pass a validation function as shown:

def validate_number(value):
    if 1 <= value <= 10:
        return True, ''
    return False, "The value needs to be bewtween 1 and 10."

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    custom_validation: int = field(default=1, metadata={"help": "Enter a number between 1 and 10.", "validation":validate_number})

The validation function will receive the user input as value and is expected to return a bool representing the result of the validation (True if the value is correct, False if not) and a string to give an error message to the user.

We can leverage Python partials to make the validation function more generic and reuse it across multiple fields:

from functools import partial

def validate_range(value, min, max):
    if min <= value <= max:
        return True, ''
    return False, f"The value needs to be between {min} and {max}."

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    custom_validation: int = field(default=1, metadata={"help": "Enter a number between 1 and 10.", "validation":partial(validate_range, min=1, max=10)})
    custom_validation2: int = field(default=100, metadata={"help": "Enter a number between 100 and 500.", "validation":partial(validate_range, min=100, max=500)})

Create and run simulations

To create simulations with idmtools, create a Python file that imports the relevant packages, uses the classes and functions to meet your specific needs, and then run the script using python script_name.py.

For example, if you would like to create many simulations "on-the-fly" (such as parameter sweeps), then you should use the SimulationBuilder and TemplatedSimulations classes. On the other hand, if you would like to create multiple simulations beforehand, then you can use the Simulation class.

See the following examples for each of these scenarios:

SimulationBuilder example

"""
        This file demonstrates how to use SimulationBuilder to build sweep-based simulations.
        We then add the builder to TemplatedSimulations and create an Experiment from the template.

        Parameters for sweeping:
            |__ a = [0,1,2,3,4]

        Expect 5 sims with config parameters. Note: "b" is not a sweep parameter, but it depends on a's value:
            sim1: {a:0, b:2}
            sim2: {a:1, b:3}
            sim3: {a:2, b:4}
            sim4: {a:3, b:5}
            sim5: {a:4, b:6}
"""

import os
import sys
from functools import partial

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH


# define a custom sweep callback that sets b to a + 2
def param_update_ab(simulation, param, value):
    # Set B within
    if param == "a":
        simulation.task.set_parameter("b", value + 2)

    return simulation.task.set_parameter(param, value)


if __name__ == "__main__":
    # define what platform we want to use. Here we use a context manager, but if you prefer you can call Platform('CALCULON') directly
    with platform('CALCULON'):
        # define our base task
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"),
                                             parameters=dict(c='c-value'))

        # define our input csv sweep
        builder = SimulationBuilder()
        # Sweep parameter "a" and make "b" depends on "a"
        setAB = partial(param_update_ab, param="a")
        builder.add_sweep_definition(setAB, range(0, 5))

        # now define we want to create a series of simulations using the base task and the sweep
        ts = TemplatedSimulations.from_task(base_task, tags=dict(c='c-value'))
        ts.add_builder(builder)

        # define our experiment with its metadata
        experiment = Experiment.from_template(ts,
                                              name=os.path.split(sys.argv[0])[1],
                                              tags={"string_tag": "test", "number_tag": 123}
                                              )

        # run experiment
        experiment.run()
        # wait until done with longer interval
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.wait(refresh_interval=10)
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)

Simulation example

"""
        This file demonstrates how to create Simulation objects individually and add them to an Experiment.

        We create 5 simulations and, for each simulation, set parameter 'a' = 0 through 4 and 'b' = a + 10,
        then add each updated simulation to the experiment.
"""
import copy
import os
import sys

from idmtools.assets import AssetCollection
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

if __name__ == "__main__":

    # define our platform
    platform = Platform('Calculon')

    # create experiment  object and define some extra assets
    assets_path = os.path.join(COMMON_INPUT_PATH, "python", "Assets")
    e = Experiment(name=os.path.split(sys.argv[0])[1],
                   tags={"string_tag": "test", "number_tag": 123},
                   assets=AssetCollection.from_directory(assets_path))

    # define the path to the model; the extra assets folder containing more common assets was defined above
    model_path = os.path.join(COMMON_INPUT_PATH, "python", "model.py")

    # define our base task including the common assets. We could also add these assets to the experiment above
    base_task = JSONConfiguredPythonTask(script_path=model_path, envelope='parameters')

    base_simulation = Simulation.from_task(base_task)

    # now build our simulations
    for i in range(5):
        # first copy the simulation
        sim = copy.deepcopy(base_simulation)
        # For now, you have to reset the uid manually when copying. In the future, a copy
        # method should handle this for you
        sim._uid = None
        # configure it
        sim.task.set_parameter("a", i)
        sim.task.set_parameter("b", i + 10)
        # and add it to the simulations
        e.simulations.append(sim)

    # run the experiment
    e.run()
    # wait on it
    # in most real scenarios, you probably do not want to wait as this will wait until all simulations
    # associated with an experiment are done. We do it in our examples to show feature and to enable
    # testing of the scripts
    e.wait()
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

Many additional examples can be found in the /examples folder of the GitHub repository.

Create simulation tags

During the creation of simulations you can add tags (key:value pairs) that are included as metadata. The tags can be used for filtering and searching for simulations. idmtools includes multiple ways of adding tags to simulations:

(Preferred) Builder callbacks via add_sweep_definition

You can add tags to simulations by using builder callbacks while building experiments with the SimulationBuilder or Simulation classes and the add_sweep_definition method. This approach supports adding tags to a large set of simulations and gives you full control over the simulation/task object. In addition, built-in tag management is used when the callback returns the tags as a dictionary, as in the sketch below. For more information see the example in SimulationBuilder.
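A minimal callback sketch, assuming a JSONConfiguredPythonTask-style task whose set_parameter returns the param/value pair it set:

from typing import Any, Dict
from idmtools.entities.simulation import Simulation

def param_update(simulation: Simulation, param: str, value: Any) -> Dict[str, Any]:
    # set_parameter returns {param: value}, which idmtools applies as a simulation tag
    tags = simulation.task.set_parameter(param, value)
    # extra key:value pairs returned here also become simulation tags
    tags["source"] = "sweep"  # illustrative extra tag
    return tags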

Base task with TemplatedSimulations

You can add tags to all simulations via the base task used with the TemplatedSimulations class while building simulations, as in the sketch below. For more information see the example in TemplatedSimulations.
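A minimal sketch, mirroring the scheduling example earlier in this document; task is assumed to be an existing task object:

from idmtools.entities.templated_simulation import TemplatedSimulations

ts = TemplatedSimulations(base_task=task)
# tags set on the base simulation are applied to every simulation built from the template
ts.base_simulation.tags["run_group"] = "baseline"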

Specific simulation from TemplatedSimulations

If you need to add a tag to a specific simulation after building simulations from a task with TemplatedSimulations, you must convert the simulations to a list. For more information see the example in TemplatedSimulations.

Create EMOD simulations

To create simulations using EMOD you must use the emodpy package with idmtools. Included with emodpy is the EMODTask class, which inherits from the ITask abstract class and is used for the configuration and running of EMOD simulations and experiments.

[Diagram: EMOD simulation and experiment creation with emodpy and idmtools]

For more information about the architecture of job (simulation/experiment) creation and how EMOD leverages idmtools plugin architecture, see Architecture and packages reference.

The following Python excerpt shows an example of using the EMODTask class and from_default method to create a task object using default config, campaign, and demographics values from the EMODSir class, and to use the Eradication binary from a local directory:

task = EMODTask.from_default(default=EMODSir(), eradication_path=os.path.join(BIN_PATH, "Eradication"))

Another option, instead of using from_default, is to use the from_files method:

task = EMODTask.from_files(config_path=os.path.join(INPUT_PATH, "config.json"),
                           campaign_path=os.path.join(INPUT_PATH, "campaign.json"),
                           demographics_paths=os.path.join(INPUT_PATH, "demographics.json"),
                           eradication_path=eradication_path)

For complete examples of the above see the following Python scripts:

  • (from_default) emodpy.examples.create_sims_from_default_run_analyzer

  • (from_files) emodpy.examples.create_sims_eradication_from_github_url

Containers overview

You can use idmtools with containers, such as Singularity, and run them on platforms like COMPS. This makes it easier for other data scientists to use and rerun your work without having to reproduce your environment and utilities; you only need to share your container for others to run on their own HPC.

“A container is a software package that contains everything the software needs to run. This includes the executable program as well as system tools, libraries, and settings”, as quoted from techterms.com (https://techterms.com/definition/container). The conceptual components of containers are the same regardless of the specific container technology, such as Singularity and Docker.

For additional overview and conceptual information about containers, see the following:

Containers and science

Containers and science are great partners, primarily because containers enhance reproducibility in scientific computing. They also provide access to utilities beyond what is available by default within your HPC environment. For example, if you need the Julia programming language or any other utility not currently available in your HPC, you can create your own container with the desired utilities. This gives you control over the environment and tools in the container to be run on your HPC.

Understand Singularity

“Singularity is a free, cross-platform and open-source computer program that performs operating-system-level virtualization also known as containerization. One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing world.”, as quoted from https://en.wikipedia.org/wiki/Singularity_(software).

For additional overview and conceptual information about Singularity, see the following:

Services on COMPS with containers

When using idmtools and containers on COMPS you can use the following two options:

  • Container builder service

  • Running simulations using containers

Container Builder Service

The container builder service in idmtools allows you to create a Singularity Image File (SIF), the Singularity container file to run in your HPC environment. For more in-depth information about the container builder service, see Container builder service.

Running Simulations using Containers

Whether you’ve used the container builder service in idmtools to create a new SIF file or you’ve downloaded a pre-existing SIF file you can then use it for running simulations with COMPS. For more in-depth information about using containers with COMPS, see Using containers in COMPS.

Container builder service

The container builder service in idmtools uses the SingularityBuildWorkItem class, which takes as input a .def file (a Singularity container definition, the instructions or blueprint for building the .sif container file), builds the container, and writes the resulting asset collection ID to a file so that the built container is available as part of an asset collection on COMPS. You can then use the built container for running simulations on COMPS.

For more information about Singularity builds on COMPS, see Singularity Build.

Using containers in COMPS

You can use the Singularity container files (.sif) for running simulations on COMPS.

Run a job in COMPS with Singularity

idmtools includes examples to help you get up and running with Singularity on COMPS. First, you can run the create_ubuntu_sif.py script, located in examples/singularity/ubuntu-20-04/create_ubuntu_sif.py. This script creates an Ubuntu Singularity container based on the included definition file, ubuntu_20_04_base.def, and writes it to an asset collection on COMPS.

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.singularity_build import SingularityBuildWorkItem

if __name__ == '__main__':
    platform = Platform("CALCULON")
    sbi = SingularityBuildWorkItem(name="Create ubuntu sif with def file", definition_file="ubuntu_20_04_base.def", image_name="ubuntu.sif")
    sbi.tags = dict(ubuntu="20.04")
    sbi.run(wait_until_done=True, platform=platform)
    if sbi.succeeded:
        # Write the asset collection ID to a file for reuse by later scripts
        sbi.asset_collection.to_id_file("ubuntu.id")

Once you have the required Linux .sif container file, you can then add your modeling files. For example, create_covasim_sif.py, located in examples/singularity/covasim/create_covasim_sif.py, uses the pre-created ubuntu container and associated asset collection id to create a new .sif container file for running simulations using Covasim.

from pathlib import PurePath

from idmtools.assets import AssetCollection
from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.singularity_build import SingularityBuildWorkItem

if __name__ == '__main__':
    platform = Platform("CALCULON")
    sbi = SingularityBuildWorkItem(name="Create covasim sif with def file", definition_file="covasim_req.def", image_name="covasim_ubuntu.sif")
    # Try to load the ubuntu image from the id file written by create_ubuntu_sif.py
    pwd = PurePath(__file__).parent
    ub_base = pwd.joinpath("..", "ubuntu-20-04")
    fp = ub_base.joinpath("ubuntu.id")
    sbi.add_assets(AssetCollection.from_id_file(fp))
    sbi.tags = dict(covasim=None)
    sbi.run(wait_until_done=True, platform=platform)
    if sbi.succeeded:
        sbi.asset_collection.to_id_file("covasim.id")

As the following example script, run_covasim_sweep.py, shows, you can run simulations in a Singularity container on COMPS using the previously created .sif container file.

import os
import sys
from functools import partial
from idmtools.assets import AssetCollection
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities import CommandLine
from idmtools.entities.command_task import CommandTask
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations

def set_value(simulation, name, value):
    fix_value = round(value, 2) if isinstance(value, float) else value
    # add argument
    simulation.task.command.add_raw_argument(str(fix_value))
    # add tag with our value
    simulation.tags[name] = fix_value

if __name__ == "__main__":
    here = os.path.dirname(__file__)
    # Create a platform to run the workitem
    platform = Platform("CALCULON")
    # create commandline input for the task
    command = CommandLine("singularity exec ./Assets/covasim_ubuntu.sif python3 Assets/run_sim_sweep.py")
    task = CommandTask(command=command)
    ts = TemplatedSimulations(base_task=task)
    # Add our image
    task.common_assets.add_assets(AssetCollection.from_id_file("covasim.id"))
    sb = SimulationBuilder()
    # Add sweeps on 4 parameters. Total of 36 simulations (2 x 3 x 3 x 2)
    sb.add_sweep_definition(partial(set_value, name="pop_size"), [10000, 20000])
    sb.add_sweep_definition(partial(set_value, name="pop_infected"), [10, 100, 1000])
    sb.add_sweep_definition(partial(set_value, name="n_days"), [100, 110, 120])
    sb.add_sweep_definition(partial(set_value, name="rand_seed"), [1234, 4567])
    ts.add_builder(sb)

    experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])
    experiment.add_asset(os.path.join("inputs", "run_sim_sweep.py"))
    experiment.add_asset(os.path.join("inputs", "sim_to_inset.py"))
    experiment.run(wait_until_done=True)
    if experiment.succeeded:
        experiment.to_id_file("run_sim_sweep.id")

Parameter sweeps and model iteration

In modeling, parameter sweeps are an important method for fine-tuning parameter values, exploring parameter space, and calibrating simulations to data. A parameter sweep is an iterative process in which simulations are run repeatedly using different values of the parameter(s) of choice. This process enables the modeler to determine a parameter’s “best” value (or range of values), or even where in parameter space the model produces desirable (or non-desirable) behaviors.

When fitting models to data, it is likely that there will be numerous parameters that do not have a pre-determined value. Some parameters will have a range of values that are biologically plausible, or have been determined from previous experiments; however, selecting a particular numerical value to use in the model may not be feasible or realistic. Therefore, the best practice involves using a parameter sweep to narrow down the range of possible values or to provide a range of outcomes for those possible values.

idmtools provides an automated approach to parameter sweeps. With a few lines of code, it is possible to test the model over any range of parameter values with any combination of parameters.

With a stochastic model (such as EMOD), it is especially important to utilize parameter sweeps, not only for calibration to data or parameter selection, but also to fully explore the stochasticity in output. Single model runs may appear to provide good fits to data, but variation will arise, and multiple runs are necessary to determine the appropriate range of parameter values needed to achieve desired outcomes. Multiple iterations of a single set of parameter values should be run to determine trends in simulation output: a single simulation output could provide results that are due to random chance.

How to do parameter sweeps

With idmtools, you can do parameter sweeps with builders or without builders using a base task to set your simulation parameters.

The typical ‘output’ of idmtools is a config.json file for each created simulation, which contains the parameter values assigned: both the constant values and those being swept.

Using builders

In this release, to support parameter sweeps for models, we have the following builders to assist you:

  1. SimulationBuilder - you set your sweep parameters in your scripts and it generates a config.json file with your sweeps for your experiment/simulations to use

  2. CsvExperimentBuilder - you can use a CSV file to do your parameter sweeps

  3. YamlSimulationBuilder - you can use a Yaml file to do your parameter sweeps

  4. ArmSimulationBuilder - for cross and pair parameter sweeps, where parameters are grouped into arms before being combined

There are two types of sweeping: cross and pair. Cross generates every combination of the parameter values, so crossing two parameters with 3 values each yields 3 x 3 = 9 parameter sets. Pair matches values element-wise, so the same two parameters yield 3 + 3 = 3 pairs: given a, b, c and d, e, f, the pairs are a & d, b & e, and c & f.

For cross sweeping, say again you have parameter values a, b, c and d, e, f that you want to cross; you would have the following possible combinations: a & d, a & e, a & f, b & d, b & e, b & f, c & d, c & e, and c & f.
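
The following minimal sketch shows both arm types, assuming the SweepArm and ArmType helpers exported by idmtools.builders and a task that exposes set_parameter; the parameter names are illustrative:

from idmtools.builders import ArmSimulationBuilder, ArmType, SweepArm

def set_a(simulation, value):
    return simulation.task.set_parameter("a", value)

def set_d(simulation, value):
    return simulation.task.set_parameter("d", value)

builder = ArmSimulationBuilder()

# cross arm: every combination of the two value lists (3 x 3 = 9 simulations)
cross_arm = SweepArm(type=ArmType.cross)
cross_arm.add_sweep_definition(set_a, [1, 2, 3])
cross_arm.add_sweep_definition(set_d, [4, 5, 6])
builder.add_arm(cross_arm)

# pair arm: values are matched element-wise (3 pairs, so 3 simulations);
# the value lists must be the same length
pair_arm = SweepArm(type=ArmType.pair)
pair_arm.add_sweep_definition(set_a, [1, 2, 3])
pair_arm.add_sweep_definition(set_d, [4, 5, 6])
builder.add_arm(pair_arm)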

For Python models, we also support sweeps via JSONConfiguredPythonTask. In the future we will support additional configured tasks for Python and R models.

Add sweep definition

You can use the following two different methods for adding a sweep definition to a builder object:

  • add_sweep_definition

  • add_multiple_parameter_sweep_definition

Generally, add_sweep_definition is used; however, in scenarios where you need to add multiple parameters to the sweep definition, you use add_multiple_parameter_sweep_definition, as seen in idmtools.examples.python_model.multiple_parameter_sweeping.py. More specifically, add_multiple_parameter_sweep_definition is used for sweeping with the same definition callback that takes multiple parameters, where the parameters can be a list of arguments or a list of keyword arguments. The sweep function will do cross-product sweeps between the parameters.
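
For example, here is a minimal sketch of a multiple-parameter sweep, assuming a task that exposes set_parameter; the parameter names and value lists are illustrative:

from idmtools.builders import SimulationBuilder

def update_parameters(simulation, a, b):
    # set both parameters on the task and return them as tags
    simulation.task.set_parameter("a", a)
    simulation.task.set_parameter("b", b)
    return {"a": a, "b": b}

builder = SimulationBuilder()
# the callback receives one value per keyword argument; the builder
# generates the cross product of the value lists (3 x 2 = 6 simulations)
builder.add_multiple_parameter_sweep_definition(update_parameters, a=[0, 1, 2], b=[10, 20])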

Creating sweeps without builders

You can also create sweeps without using builders. Like this example:

"""
        This file demonstrates how to create param sweeps without builders.

        we create base task including our common assets, e.g. our python model to run
        we create 5 simulations and for each simulation, we set parameter 'a' = [0,4] and 'b' = a + 10 using this task
        then we are adding this to task to our Experiment to run our simulations
"""
import copy
import os
import sys

from idmtools.assets import AssetCollection
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

if __name__ == "__main__":

    # define our platform
    platform = Platform('COMPS2')

    # create experiment object and define some extra assets
    assets_path = os.path.join(COMMON_INPUT_PATH, "python", "Assets")
    e = Experiment(name=os.path.split(sys.argv[0])[1],
                   tags={"string_tag": "test", "number_tag": 123},
                   assets=AssetCollection.from_directory(assets_path))

    # define paths to the model and the extra assets folder containing more common assets
    model_path = os.path.join(COMMON_INPUT_PATH, "python", "model.py")

    # define our base task including the common assets. We could also add these assets to the experiment above
    base_task = JSONConfiguredPythonTask(script_path=model_path, envelope='parameters')
    base_simulation = Simulation.from_task(base_task)

    # now build our simulations
    for i in range(5):
        # first copy the simulation
        sim = copy.deepcopy(base_simulation)
        # configure it
        sim.task.set_parameter("a", i)
        sim.task.set_parameter("b", i + 10)
        # and add it to the simulations
        e.simulations.append(sim)

    # run the experiment
    e.run(platform=platform)
    # wait on it
    # in most real scenarios, you probably do not want to wait as this will wait until all simulations
    # associated with an experiment are done. We do it in our examples to show the feature and to enable
    # testing of the scripts
    e.wait()
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

Running parameter sweeps in specific models

The following pages provide information about running parameter sweeps in particular models, and include example scripts.

Running parameter sweeps with Python models

For Python modelers running parameter sweeps, we have multiple examples, as described in the following sections.

python_model.python_sim

idmtools.examples.python_model.python_sim.py

First, import some necessary system and idmtools packages.

  • TemplatedSimulations: A utility that builds simulations from a template.

  • SimulationBuilder: An interface to different types of sweeps. It is used in conjunction with TemplatedSimulations.

  • Platform: To specify the platform you want to run your experiment on.

  • JSONConfiguredPythonTask: We want to run a task executing a Python script. We will run a task in each simulation using this object. This particular task has a JSON config that is generated as well. There are other Python tasks with either different or no configuration formats.

import os
import sys
from functools import partial
from typing import Any, Dict

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

We have a Python model defined in “model.py”, which has 3 parameters, “a”, “b”, and “c”, and supports a JSON config from a file named “config.json”. We want to sweep the parameter “a” for the values 0-2 and “b” for the values 1-3, and keep “c” at the value 0.

To accomplish this, we are going to proceed in a few high-level steps. See https://bit.ly/37DHUf0 for workflow.

  1. Define our base task. This task is the common configuration across all our tasks. For us, that means some basic run info like script path as well as our parameter/value we don’t plan on sweeping, “c”.

  2. Then we will define our TemplatedSimulations object that will use our task to build a series of simulations.

  3. Then we will define a SimulationBuilder and define our sweeps. This will also involve writing some callback functions that update each task’s config with the sweep values.

  4. Then we will add our SimulationBuilder to our TemplatedSimulations object.

  5. We will then build our Experiment object using TemplatedSimulations as our simulations list.

  6. Lastly we will run the experiment on the platform.

First, let’s define our base task. Normally, you want to set any assets/configurations that apply across all the different simulations we are going to build for our experiment. Here we set “c” to 0 since we do not want to sweep it.

task = JSONConfiguredPythonTask(script_path=os.path.join("inputs", "python_model_with_deps", "Assets", "model.py"),
                                parameters=(dict(c=0)))

Now let’s use this task to create a TemplatedSimulations builder. This will build new simulations from sweep builders we will define later. We can also use it to manipulate base_task or base_simulation objects.

ts = TemplatedSimulations(base_task=task)

We can define common metadata like tags across all the simulations using base_simulation object.

ts.base_simulation.tags['tag1'] = 1

Since we have our templated simulation object now, let’s define our sweeps. To do that we need to use a builder:

builder = SimulationBuilder()

When adding sweep definitions, you generally need to provide two items.

See https://bit.ly/314j7xS for a diagram of how simulations are built using TemplatedSimulations and SimulationBuilder.

  1. A callback function that will be called for every value in the sweep. This function will receive a simulation object and a value. You then define how to use those within the simulation. Generally, you want to pass those to your task’s configuration interface. In this example, we are using JSONConfiguredPythonTask which has a set_parameter function that takes a simulation, a parameter name, and a value. To pass to this function, we will use either a class wrapper or function partials.

  2. A list/generator of values

Since our model uses a JSON config let’s define a utility function that will update a single parameter at a time on the model and add that param/value pair as a tag on our simulation.

def param_update(simulation: Simulation, param: str, value: Any) -> Dict[str, Any]:
    """
    This function is called during sweeping, allowing us to pass the generated sweep values to our task configuration.

    We always receive a Simulation object. We know that simulations all have tasks and that, for our particular set
    of simulations, they will all include :py:class:`~idmtools_models.python.json_python_task.JSONConfiguredPythonTask`. We configure the model with calls to set_parameter
    to update the config. In addition, we can return a dictionary of tags to add to the simulation, so we return
    the output of the 'set_parameter' call since it returns the param/value pair we set.

    Args:
        simulation: Simulation we are configuring
        param: Parameter name to set
        value: Value to set param to

    Returns:
        Dictionary containing the param/value pair that was set

    """
    return simulation.task.set_parameter(param, value)

Let’s sweep the parameter “a” for the values 0-2. Since our utility function requires a simulation, param, and value, but the sweep framework calls our function with only a simulation and a value, let’s use the partial function to define that we want the param value to always be “a” so we can perform our sweep.

setA = partial(param_update, param="a")

Now add the sweep to our builder:

builder.add_sweep_definition(setA, range(3))

# Example Python Experiment with JSON Configuration
# In this example, we will demonstrate how to run a Python experiment with JSON configuration.

# First, import some necessary system and idmtools packages.
# - TemplatedSimulations: A utility that builds simulations from a template
# - SimulationBuilder: An interface to different types of sweeps. It is used in conjunction with TemplatedSimulations
# - Platform: To specify the platform you want to run your experiment on
# - JSONConfiguredPythonTask: We want to run a task executing a Python script. We will run a task in each simulation
# using this object. This particular task has a JSON config that is generated as well. There are other Python tasks
# with either different or no configuration formats.
import os
import sys
from functools import partial
from typing import Any, Dict

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# We have a Python model defined in "model.py" which has 3 parameters: a, b, c and supports
# a JSON config from a file named "config.json". We want to sweep the parameter a for the values 0-2 and b for the
# values 1-3 and keep c at the value 0.
# To accomplish this, we are going to proceed in a few high-level steps. See https://bit.ly/37DHUf0 for workflow
# 1. Define our base task. This task is the common configuration across all our tasks. For us, that means some basic
#    run info like script path as well as our parameter/value we don't plan on sweeping, c
# 2. Then we will define our TemplatedSimulations object that will use our task to build a series of simulations
# 3. Then we will define a SimulationBuilder and define our sweeps. This will also involve writing some callback
#    functions that update each task's config with the sweep values
# 4. Then we will add our simulation builder to our TemplatedSimulations object.
# 5. We will then build our Experiment object using the TemplatedSimulations as our simulations list.
# 6. Lastly we will run the experiment on the platform

# first let's define our base task. Normally, you want to set any assets/configurations that apply across
# all the different simulations we are going to build for our experiment. Here we set c to 0 since we do not want to
# sweep it
task = JSONConfiguredPythonTask(script_path=os.path.join("inputs", "python", "Assets", "model.py"),
                                parameters=(dict(c=0)))

# now let's use this task to create a TemplatedSimulations builder. This will build new simulations from sweep builders
# we will define later. We can also use it to manipulate the base_task or the base_simulation
ts = TemplatedSimulations(base_task=task)
# We can define common metadata like tags across all the simulations using the base_simulation object
ts.base_simulation.tags['tag1'] = 1

# Since we have our templated simulation object now, let's define our sweeps
# To do that we need to use a builder
builder = SimulationBuilder()

# When adding sweep definitions, you generally need to provide two items
# See https://bit.ly/314j7xS for a diagram of how the simulations are built using TemplatedSimulations +
# SimulationBuilders
# 1. A callback function that will be called for every value in the sweep. This function will receive a Simulation
#    object and a value. You then define how to use those within the simulation. Generally, you want to pass those
#    to your task's configuration interface. In this example, we are using JSONConfiguredPythonTask which has a
#    set_parameter function that takes a Simulation, a parameter name, and a value. To pass to this function, we will
#    use either a class wrapper or function partials
# 2. A list/generator of values

# Since our model uses a JSON config let's define a utility function that will update a single parameter at a
# time on the model and add that param/value pair as a tag on our simulation.


def param_update(simulation: Simulation, param: str, value: Any) -> Dict[str, Any]:
    """
    This function is called during sweeping, allowing us to pass the generated sweep values to our task configuration.

    We always receive a Simulation object. We know that simulations all have tasks and that for our particular set
    of simulations they will all include JSONConfiguredPythonTask. We configure the model with calls to set_parameter
    to update the config. In addition, we can return a dictionary of tags to add to the simulation, so we return
    the output of the 'set_parameter' call since it returns the param/value pair we set.

    Args:
        simulation: Simulation we are configuring
        param: Parameter name to set
        value: Value to set param to

    Returns:
        Dictionary containing the param/value pair that was set

    """
    return simulation.task.set_parameter(param, value)


# Let's sweep the parameter 'a' for the values 0-2. Since our utility function requires a Simulation, param, and value,
# but the sweep framework calls our function with only a Simulation and a value, let's use the partial function to
# define that we want the param value to always be "a" so we can perform our sweep
setA = partial(param_update, param="a")
# now add the sweep to our builder
builder.add_sweep_definition(setA, range(3))


# An alternative to using partial is to define a class that stores the param and is callable later. Let's use that
# technique to perform a sweep on the values 1-3 on parameter b

# First define our class. The trick here is we overload __call__ so that, after we create an instance, calls to the
# instance will be relayed to the task in a fashion identical to the param_update function above. It is generally not
# best practice to define a class like this in the body of our main script so it is advised to place this in a library
# or at the very least the top of your file.
class setParam:
    def __init__(self, param):
        self.param = param

    def __call__(self, simulation, value):
        return simulation.task.set_parameter(self.param, value)


# Now add our sweep on a list
builder.add_sweep_definition(setParam("b"), [1, 2, 3])
ts.add_builder(builder)

# Now we can create our Experiment using our template builder
experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])
# Add our own custom tag to the experiment
experiment.tags["tag1"] = 1
# And maybe some custom Experiment-level assets
experiment.assets.add_directory(assets_directory=os.path.join("inputs", "python_model_with_deps", "Assets"))

# In order to run the experiment, we need to create a `Platform`
# The `Platform` defines where we want to run our simulation.
# You can easily switch platforms by changing the Platform to for example 'CALCULON'
with Platform('CALCULON'):

    # The last step is to call run() on the experiment to run the simulations.
    experiment.run(True)
    # use system status as the exit code
    sys.exit(0 if experiment.succeeded else -1)

python_model.python_SEIR_sim

idmtools.examples.python_model.python_SEIR_sim.py

Example Python experiment with JSON configuration. In this example, we will demonstrate how to run a Python experiment with JSON configuration. First, import some necessary system and idmtools packages:

import os
import sys
import json
from functools import partial
from typing import Any, Dict

from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.builders import SimulationBuilder
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from inputs.ye_seir_model.custom_csv_analyzer import NodeCSVAnalyzer, InfectiousnessCSVAnalyzer

Define some constant strings used in this example:

class ConfigParameters:
    Infectious_Period_Constant = "Infectious_Period_Constant"
    Base_Infectivity_Constant = "Base_Infectivity_Constant"
    Base_Infectivity_Distribution = "Base_Infectivity_Distribution"
    GAUSSIAN_DISTRIBUTION = "GAUSSIAN_DISTRIBUTION"
    Base_Infectivity_Gaussian_Mean = "Base_Infectivity_Gaussian_Mean"
    Base_Infectivity_Gaussian_Std_Dev = "Base_Infectivity_Gaussian_Std_Dev"

The script needs to be in a main block; otherwise, AnalyzeManager will have issues with multi-threading on Windows.

if __name__ == '__main__':

We have a Python model defined in “SEIR_model.py”, which takes several arguments, like “--duration” and “--outbreak_coverage”, and supports a JSON config from a file named “nd_template.json”. We want to sweep some arguments passed in to “SEIR_model.py” and some parameters in “nd_template.json”.

To accomplish this, we are going to proceed in a few high-level steps. See https://bit.ly/37DHUf0 for workflow

  1. Define our base task object. This task is the common configuration across all our tasks. For us, that means some basic run info like the script path, as well as the arguments/values and parameter/values we don’t plan on sweeping: “--duration” and most of the parameters inside “nd_template.json”.

  2. Then we will define our TemplatedSimulations object that will use our task to build a series of simulations.

  3. Then we will define a SimulationBuilder and define our sweeps. This will also involve writing some callback functions that update each task’s config or option with the sweep values.

  4. Then we will add our simulation builder to our TemplatedSimulations object.

  5. We will then build our Experiment object using TemplatedSimulations as our simulations list.

  6. We will run the experiment on the platform.

  7. Once the experiment succeeds, we run two CSV analyzers to analyze results from the Python model.

  1. First, let’s define our base task. Normally, you want to set any assets/configurations that apply across all the different simulations we are going to build for our experiment. Here we load the config from a template JSON file and rename the “config_file_name” (default value is config.json).

parameters = json.load(open(os.path.join("inputs", "ye_seir_model", "Assets", "templates", 'seir_configuration_template.json'), 'r'))
parameters[ConfigParameters.Base_Infectivity_Distribution] = ConfigParameters.GAUSSIAN_DISTRIBUTION
task = JSONConfiguredPythonTask(script_path=os.path.join("inputs", "ye_seir_model", "Assets", "SEIR_model.py"),
                                parameters=parameters,
                                config_file_name="seir_configuration_template.json")

We define the simulation duration argument, which we don’t want to sweep, as an option for the task.

task.command.add_option("--duration", 40)

  2. Now, let’s use this task to create a TemplatedSimulations builder. This will build new simulations from sweep builders we will define later. We can also use it to manipulate base_task or base_simulation objects.

ts = TemplatedSimulations(base_task=task)

We can define common metadata like tags across all the simulations using the base_simulation object.

ts.base_simulation.tags['simulation_name_tag'] = "SEIR_Model"

  3. Since we have our TemplatedSimulations object now, let’s define our sweeps.

To do that we need to use a builder:

builder = SimulationBuilder()

When adding sweep definitions, you generally need to provide two items.

See https://bit.ly/314j7xS for a diagram of how the simulations are built using TemplatedSimulations and SimulationBuilder.

3.1. A callback function that will be called for every value in the sweep. This function will receive a Simulation object and a value. You then define how to use those within the simulation. Generally, you want to pass those to your task’s configuration interface. In this example, we are using JSONConfiguredPythonTask which has a set_parameter function that takes a simulation, a parameter name, and a value. To pass to this function, we will use either a class wrapper or function partials.

3.2. A list/generator of values

Since our model uses a JSON config, let’s define a utility function that will update a single parameter at a time on the model and add that param/value pair as a tag on our simulation.

def param_update(simulation: Simulation, param: str, value: Any) -> Dict[str, Any]:
    """
    This function is called during sweeping, allowing us to pass the generated sweep values to our task configuration.

    We always receive a Simulation object. We know that simulations all have tasks and that, for our particular set
    of simulations, they will all include JSONConfiguredPythonTask. We configure the model with calls to set_parameter
    to update the config. In addition, we can return a dictionary of tags to add to the simulation, so we return
    the output of the 'set_parameter' call since it returns the param/value pair we set.

    Args:
        simulation: Simulation we are configuring
        param: Parameter name to set
        value: Value to set param to

    Returns:
        Dictionary containing the param/value pair that was set

    """
    return simulation.task.set_parameter(param, value)

Let’s sweep the parameter Base_Infectivity_Gaussian_Mean for the values 0.5 and 2. Since our utility function requires a simulation, param, and value, but the sweep framework calls our function with only a simulation and a value, let’s use the partial function to define that we want the param value to always be Base_Infectivity_Gaussian_Mean:

set_base_infectivity_gaussian_mean = partial(param_update, param=ConfigParameters.Base_Infectivity_Gaussian_Mean)

Now add the sweep to our builder:

builder.add_sweep_definition(set_base_infectivity_gaussian_mean, [0.5, 2])

An alternative to using partial is to define a class that stores the param and is callable later. Let’s use that technique to perform a sweep on the values 0.3 and 1 on the parameter Base_Infectivity_Gaussian_Std_Dev.

First define our class. The trick here is we overload __call__ so that, after we create an instance, calls to the instance will be relayed to the task in a fashion identical to the param_update function above. It is generally not best practice to define a class like this in the body of our main script, so it is advised to place this in a library or at the very least at the top of your file.

class setParam:
    def __init__(self, param):
        self.param = param

    def __call__(self, simulation, value):
        return simulation.task.set_parameter(self.param, value)

Now add our sweep on a list:

builder.add_sweep_definition(setParam(ConfigParameters.Base_Infectivity_Gaussian_Std_Dev), [0.3, 1])

Using the same methodologies, we can sweep on options/arguments that are passed to our Python model. You can uncomment the following code to enable it.

3.3 First method:

# def option_update(simulation: Simulation, option: str, value: Any) -> Dict[str, Any]:
#     simulation.task.command.add_option(option, value)
#     return {option: value}
# set_outbreak_coverage = partial(option_update, option="--outbreak_coverage")
# builder.add_sweep_definition(set_outbreak_coverage, [0.01, 0.1])
#
# # 3.4 second method:
# class setOption:
#     def __init__(self, option):
#         self.option = option
#
#     def __call__(self, simulation, value):
#         simulation.task.command.add_option(self.option, value)
#         return {self.option: value}
# builder.add_sweep_definition(setOption("--population"), [1000, 4000])

  4. Add our builder to the template simulations.

ts.add_builder(builder)

  5. Now we can create our experiment using our template builder.

experiment = Experiment(name=os.path.split(sys.argv[0])[1], simulations=ts)

Add our own custom tag to the experiment.

experiment.tags['experiment_name_tag'] = "SEIR_Model"

And maybe some custom experiment-level assets.

experiment.assets.add_directory(assets_directory=os.path.join("inputs", "ye_seir_model", "Assets"))

  6. In order to run the experiment, we need to create a Platform object.

Platform defines where we want to run our simulation.

You can easily switch platforms by changing Platform, for example to “Local”.

platform = Platform('COMPS2')

The last step is to call run_items() on the platform to run the simulations, and wait_till_done() to wait for them to finish.

platform.run_items(experiment)
platform.wait_till_done(experiment)

Check the experiment status; only move to the analyzer step if the experiment succeeded.

if not experiment.succeeded:
    print(f"Experiment {experiment.uid} failed.\n")
    sys.exit(-1)

  7. Now let’s look at the experiment results. Here are two outputs we want to analyze.

filenames = ['output/individual.csv']
filenames_2 = ['output/node.csv']

Initialize two analyzer classes with the paths of the output CSV files.

analyzers = [InfectiousnessCSVAnalyzer(filenames=filenames), NodeCSVAnalyzer(filenames=filenames_2)]

Specify the ID type, in this case an experiment on COMPS.

manager = AnalyzeManager(configuration={}, partial_analyze_ok=True, platform=platform,
                         ids=[(experiment.uid, ItemType.EXPERIMENT)],
                         analyzers=analyzers)

Now analyze:

manager.analyze()
sys.exit(0)

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

python_model.python_model_allee

idmtools.examples.python_model.python_model_allee.py

In this example, we will demonstrate how to run a Python experiment using an asset collection on COMPS.

First, import some necessary system and idmtools packages:

import os
import sys
from functools import partial

from idmtools.assets import AssetCollection
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_platform_comps.utils.python_requirements_ac.requirements_to_asset_collection import RequirementsToAssetCollection

In order to run the experiment, we need to create SimulationBuilder, Platform, Experiment, TemplatedSimulations, and JSONConfiguredPythonTask objects.

In addition, AssetCollection and RequirementsToAssetCollection objects are created for using an asset collection on COMPS. Platform defines where we want to run our simulation. You can easily switch platforms by changing Platform, for example to “Local”.

platform = Platform('COMPS2')

pl = RequirementsToAssetCollection(platform,
                                   requirements_path=os.path.join("inputs", "allee_python_model", "requirements.txt"))

ac_id = pl.run()
pandas_assets = AssetCollection.from_id(ac_id, platform=platform)

base_task = JSONConfiguredPythonTask(
    # specify the path to the script. This is most likely a scientific model
    script_path=os.path.join("inputs", "allee_python_model", "run_emod_sweep.py"),
    envelope='parameters',
    parameters=dict(
        fname="runNsim100.json",
        customGrid=1,
        nsims=100
    ),
    common_assets=pandas_assets
)

Update and set simulation configuration parameters.

def param_update(simulation, param, value):
    return simulation.task.set_parameter(param, 'sweepR04_a_' + str(value) + '.json')

setA = partial(param_update, param="infile")

Define our template:

ts = TemplatedSimulations(base_task=base_task)

Now that the template is created, we can create a builder and add our sweep to it:

builder = SimulationBuilder()
builder.add_sweep_definition(setA, range(7850, 7855))

Add sweep builder to template:

ts.add_builder(builder)

Create experiment:

e = Experiment.from_template(
    ts,
    name=os.path.split(sys.argv[0])[1],
    assets=AssetCollection.from_directory(os.path.join("inputs", "allee_python_model"))
)

platform.run_items(e)

Use system status as the exit code:

sys.exit(0 if e.succeeded else -1)

Running parameter sweeps with EMOD

When running parameter sweeps with EMOD, you use the EMODTask class for setting the sweep parameters and passing them to the SimulationBuilder class using the add_sweep_definition method.

In addition to the parameters for sweeping, you must also set the Run_Number parameter, which determines the seed for the random number generator. This is particularly important with EMOD in order to explore the stochastic nature of the model: if Run_Number is not changed, each simulation will produce the same output.

The following Python code excerpt shows an example:

# Create TemplatedSimulations with task
ts = TemplatedSimulations(base_task=task)

# Create SimulationBuilder
builder = SimulationBuilder()

# Add sweep parameter to builder
builder.add_sweep_definition(EMODTask.set_parameter_partial("Run_Number"), range(num_seeds))

# Add another sweep parameter to builder
builder.add_sweep_definition(EMODTask.set_parameter_partial("Base_Infectivity"), [0.6, 1.0, 1.5, 2.0])

# Add builder to templated simulations
ts.add_builder(builder)

You can run a parameter sweep using the above code excerpt by running the included example, emodpy.examples.create_sims_eradication_from_github_url.

Output data

The output produced by running simulations using idmtools depends on the configuration of the model itself. idmtools is itself agnostic to the output format when running simulations. However, the analysis framework expects simulation output in CSV, JSON, XLSX, or TXT to be automatically loaded to a Python object. All other formats are loaded as a raw binary stream. For more information, see Introduction to analyzers.

If you are running simulations on COMPS, the configuration of the “idmtools.ini” file determines where output files can be found. For more information, see the idmtools.ini wizard.

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

If you are running simulations or experiments locally, they are saved to your local computer at C:\Users\yourname\.local_data\workers for Windows and ~/.local_data/workers for Linux.

Additionally, when running locally using Docker, output can be found in your browser in the output directory appended after the experiment or simulation ID. For example, the output from an experiment with an ID of S07OASET could be found at http://localhost:5000/data/S07OASET. The output from an individual simulation (ID FCPRIV7H) within that experiment could be found at http://localhost:5000/data/S07OASET/FCPRIV7H.

The python_csv_output.py example below demonstrates how to produce output in CSV format for a simple parameter sweep.

# Example Python Experiment
# In this example, we will demonstrate how to run a python experiment.

# First, import some necessary system and idmtools packages.
# - TemplatedSimulations: To create simulations from a template
# - platform: To specify the platform you want to run your experiment on as a context object
# - JSONConfiguredPythonTask: We want to run an experiment executing a Python script that uses a JSON configuration file
import os
import sys

from idmtools.assets import AssetCollection
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# In order to run the experiment, we need to create a `Platform` context object.
# The `Platform` defines where we want to run our simulation.
# You can easily switch platforms by changing the Platform to for example 'CALCULON'
with platform('SlurmStage') as platform:
    # define our base task as a python model with json config
    base_task = JSONConfiguredPythonTask(
        script_path=os.path.join("inputs", "python", "Assets", "model.py"),
        # set the default parameters to 0
        parameters=(dict(c=0)),
        # add some experiment level assets
        common_assets=AssetCollection.from_directory(os.path.join("inputs", "python", "Assets"))
    )

    # create a templating object using the base task
    ts = TemplatedSimulations(base_task=base_task)
    # Define the parameters we are going to want to sweep
    builder = SimulationBuilder()
    # define two partial callbacks so we can use the built in sweep callback function on the model
    # Since we want to sweep per parameter, we need to define a partial for each parameter
    # The JSON model provides a utility function for this purpose
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("a"), range(3))
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("b"), [1, 2, 3])
    # add the builder to our template
    ts.add_builder(builder)

    # now build experiment
    e = Experiment.from_template(
        ts,
        name=os.path.split(sys.argv[0])[1],
        tags=dict(tag1=1))

    # now we can run the experiment
    e.run(wait_until_done=True)
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

Introduction to analyzers

The analyzers and examples in idmtools provide support for the MapReduce framework, where you can process large data sets in parallel, typically on a high-performance computing (HPC) cluster. The MapReduce framework includes two primary phases, Map and Reduce. Map takes input data, as key:value pairs, and creates an intermediate set of key:value pairs. Reduce takes the intermediate set of key:value pairs and transforms the data (typically reducing it) as output containing a final set of key:value pairs.

An example of this process with idmtools is to use the simulation output data as the input data (key:value pairs), filter and sort a subset of the data to focus on, and then combine and reduce the data to create the final output data.

_images/mapreduce-idmtools.png

The analyzers included with idmtools help facilitate this process. For example, if you would like to focus on specific data points from all simulations in one or more experiments then you can do this using analyzers with idmtools and plot the final output.

_images/alldata-plot.png

The analysis framework expects simulation output in CSV, JSON, XLSX, or TXT to be automatically loaded to a Python object. All other formats are loaded as a raw binary stream. The format indicated by the filename of the simulation output determines the data format loaded to the analyzers.

Output format      Object loaded to analyzer
JSON               A dictionary
CSV                A pandas DataFrame
XLSX               A pandas DataFrame
TXT                An rstring
All other files    A bytes object
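
To make the map and reduce phases concrete, the following is a minimal sketch of a custom analyzer; the output filename and the “population” key are illustrative assumptions, not part of idmtools:

from idmtools.entities.ianalyzer import IAnalyzer

class MeanPopulationAnalyzer(IAnalyzer):
    def __init__(self):
        # request a hypothetical JSON output file from each simulation
        super().__init__(filenames=["output/results.json"])

    def map(self, data, simulation):
        # data maps each requested filename to its parsed content;
        # a JSON file is loaded as a dictionary, per the table above
        return data["output/results.json"]["population"]

    def reduce(self, all_data):
        # all_data maps each simulation to the value returned by map()
        values = list(all_data.values())
        print(sum(values) / len(values))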

Example analyzers are included with idmtools to help you get started. For more information, see Example analyzers.

You can also create custom analyzers to meet your individual analysis needs. For more information, see Create an analyzer.

Integration with Server-Side Modeling Tools (SSMT) increases the performance of running analyzers. You may find this useful when running multiple analyzers across multiple experiments.

Example analyzers

You can use the following example analyzers as templates to get started using idmtools:

Each example analyzer is configured to run with existing simulation data and preconfigured options, such as using the COMPS platform and existing experiments. This allows you to easily run these example analyzers to demonstrate some of the tasks you may want to accomplish when analyzing simulation output data. You can then use and modify these examples for your specific needs.

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

For a description of each of these analyzers please see the following:

  • AddAnalyzer: Gets metadata from simulations, maps to key:value pairs, and returns a .txt output file.

  • CSVAnalyzer: Analyzes .csv output files from simulations and returns a .csv output file.

  • DownloadAnalyzer: Downloads simulation output files for analysis on local computer resources.

  • TagsAnalyzer: Analyzes tags from simulations and returns a .csv output file.

Each of the included example analyzers inherits from the built-in analyzers and the IAnalyzer abstract class:

_images/e334fcc0ff41ead1d29f4e79608fb02107fa2a3d6c4083d5e941851500513943.svg

For more information about the built-in analyzers, see Create an analyzer. There are also additional examples, such as forcing analyzers to use a specific working directory and performing partial analysis on only succeeded or failed simulations:

Force working directory

You can force analyzers to use a specific working directory other than the default, which is the directory from which the analyzer is run. For example, if you install idmtools to the \idmtools directory and then run one of the example analyzers from their default directory, \examples\analyzers, then the default working directory would be \idmtools\examples\analyzers.

To force a working directory, you use the force_manager_working_directory parameter from the AnalyzeManager class. The following Python code, using the DownloadAnalyzer as an example, illustrates different ways to use and configure the force_manager_working_directory parameter and how it interacts with the working_dir parameter:

from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.download_analyzer import DownloadAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

if __name__ == '__main__':
    platform = Platform('COMPS2')
    filenames = ['StdOut.txt']
    experiment_id = '11052582-83da-e911-a2be-f0921c167861'  # comps2 staging exp id

    # force_manager_working_directory = False (default value):
    # Analyzers will use their own specified working_dir if available. If not, the AnalyzeManager
    # specified working_dir will be used (default: '.').
    #
    # force_manager_working_directory = True
    # Analyzers will use the AnalyzeManager specified working_dir (default: '.')

    # Examples

    # This will use the default working_dir for both analyzers (the current run directory, '.')
    analyzers = [DownloadAnalyzer(filenames=filenames, output_path='DL1'),
                 DownloadAnalyzer(filenames=filenames, output_path='DL2')]
    manager = AnalyzeManager(platform=platform, ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers)
    manager.analyze()

    # This will use the manager-specified working_dir for both analyzers
    analyzers = [DownloadAnalyzer(filenames=filenames, output_path='DL1'),
                 DownloadAnalyzer(filenames=filenames, output_path='DL2')]
    manager = AnalyzeManager(platform=platform, ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers, working_dir='use_this_working_dir_for_both_analyzers')
    manager.analyze()

    # This will use the analyzer-specified working_dir for DL1 and the manager-specified dir for DL2
    analyzers = [DownloadAnalyzer(filenames=filenames, output_path='DL1', working_dir='DL1_working_dir'),
                 DownloadAnalyzer(filenames=filenames, output_path='DL2')]
    manager = AnalyzeManager(platform=platform, ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers, working_dir='use_this_working_dir_if_not_set_by_analyzer')
    manager.analyze()

    # This will use the manager-specified dir for both DL1 and DL2, even though DL1 tried to set its own
    analyzers = [DownloadAnalyzer(filenames=filenames, output_path='DL1', working_dir='DL1_working_dir'),
                 DownloadAnalyzer(filenames=filenames, output_path='DL2')]
    manager = AnalyzeManager(platform=platform, ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers, working_dir='use_this_working_dir_if_not_set_by_analyzer',
                             force_manager_working_directory=True)
    manager.analyze()

Partial analysis

You can use analyzers for a partial analysis of simulations. This allows you to analyze only the succeeded simulations when one or more simulations within an experiment have failed. In addition, you can analyze both succeeded and failed simulations.

Analysis on only succeeded simulations

For partial analysis only on the succeeded simulations, where one or more simulations may have failed, you set the partial_analyze_ok parameter from the AnalyzeManager class to True, as seen in the following Python code excerpt:

analyzers = [CSVAnalyzer(filenames=filenames)]
manager = AnalyzeManager(platform=self.platform, partial_analyze_ok=True,
                         ids=[(experiment_id, ItemType.EXPERIMENT)],
                         analyzers=analyzers)
manager.analyze()

Analysis on both succeeded and failed simulations

For analysis on both succeeded and failed simulations, you set the analyze_failed_items parameter from the AnalyzeManager class to True, as seen in the following Python code excerpt:

analyzers = [CSVAnalyzer(filenames=filenames)]
manager = AnalyzeManager(platform=self.platform, analyze_failed_items=True,
                         ids=[(experiment_id, ItemType.EXPERIMENT)],
                         analyzers=analyzers)
manager.analyze()

Create an analyzer

You can use the built-in analyzers included with idmtools to help with creating a new analyzer. The following lists some of these analyzers, all inheriting from the IAnalyzer abstract class:

_images/3caa92605e76e691b7a6ce66fb1098b0b2d64329eec5832360156bf2beca71c4.svg

For more information about these built-in analyzers, see:

To create an analyzer, methods from the IAnalyzer abstract class are used:

_images/e18efe0cceee8c0a96b4c2987681a8249c6dc84ffabb081fa791a88cd284a551.svg

All analyzers must also call the AnalyzeManager class for analysis management:

_images/34acb5c36fd6469ebe3220eac45a736b8fac2a2ac3e1c5b05b7cc29b6b1a0ff7.svg

The following Python code and comments, from the CSVAnalyzer class, are an example of how to create an analyzer for analysis of .csv output files from simulations:

import os
import pandas as pd
from idmtools.entities.ianalyzer import IAnalyzer


class CSVAnalyzer(IAnalyzer):
    # Arg options for analyzer init are uid, working_dir, parse (True to leverage the :class:`OutputParser`;
    # False to get the raw data in the :meth:`select_simulation_data`), and filenames
    # In this case, we want parse=True, and the filename(s) to analyze
    def __init__(self, filenames, parse=True):
        super().__init__(parse=parse, filenames=filenames)
        # Raise exception early if files are not csv files
        if not all(['csv' in os.path.splitext(f)[1].lower() for f in self.filenames]):
            raise Exception('Please ensure all filenames provided to CSVAnalyzer have a csv extension.')

    def initialize(self):
        if not os.path.exists(os.path.join(self.working_dir, "output_csv")):
            os.mkdir(os.path.join(self.working_dir, "output_csv"))

    # map is called for each simulation to get a data object (the parsed output files) and the simulation object
    def map(self, data, simulation):
        # If there are 1 to many csv files, concatenate csv data columns into one dataframe
        concatenated_df = pd.concat(list(data.values()), axis=0, ignore_index=True, sort=True)
        return concatenated_df

    # In reduce, we are printing the simulation and result data filtered in map
    def reduce(self, all_data):
        results = pd.concat(list(all_data.values()), axis=0,  # Combine a list of all the sims csv data column values
                            keys=[str(k.uid) for k in all_data.keys()],  # Add a hierarchical index with the keys option
                            names=['SimId'])  # Label the index keys you create with the names option
        results.index = results.index.droplevel(1)  # Remove default index

        # Make a directory labeled the exp id to write the csv results to
        # NOTE: If running twice with different filename, the output files will collide
        results.to_csv(os.path.join("output_csv", self.__class__.__name__ + '.csv'))

You can quickly see this analyzer in use by running the included CSVAnalyzer example class.

Using analyzers with SSMT

If you have access to COMPS, you can use idmtools to run analyzers on Server-Side Modeling Tools (SSMT). SSMT is integrated with COMPS, allowing you to leverage HPC compute power for running both the analyzers and any pre- or post-processing scripts that you may have previously run locally.

The PlatformAnalysis class is used for sending the needed information (such as analyzers, files, and experiment ids) as an SSMT work item to be run with SSMT and COMPS.

The following example, run_ssmt_analysis.py, shows how to use PlatformAnalysis for running analysis on SSMT:

from examples.ssmt.simple_analysis.analyzers.AdultVectorsAnalyzer import AdultVectorsAnalyzer
from examples.ssmt.simple_analysis.analyzers.PopulationAnalyzer import PopulationAnalyzer
from idmtools.core.platform_factory import Platform
from idmtools.analysis.platform_anaylsis import PlatformAnalysis

if __name__ == "__main__":
    platform = Platform('CALCULON')
    analysis = PlatformAnalysis(
        platform=platform, experiment_ids=["b3e4fceb-bb71-ed11-aa00-b88303911bc1"],
        analyzers=[PopulationAnalyzer, AdultVectorsAnalyzer], analyzers_args=[{'title': 'idm'}, {'name': 'global good'}],
        analysis_name="SSMT Analysis Simple 1",
        # You can pass any additional arguments needed to AnalyzerManager through the extra_args parameter
        extra_args=dict(max_workers=8)
    )

    analysis.analyze(check_status=True)
    wi = analysis.get_work_item()
    print(wi)

In this example, two analyzers are run on an existing experiment, with the output results saved to an output directory. After you run the example, you can see the results by taking the returned SSMTWorkItem id and searching for it under Work Items in COMPS.

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

Convert analyzers from DTK-Tools

Although the use of analyzers in DTK-Tools and idmtools is very similar, being aware of some of the differences may be helpful with the conversion process. For example, some of the class and method names are different, as seen in the following diagram:

_images/b25e2dbb3f5b2ffca31027aa09770fd7840b9f21331791e05425309b6034d809.svg

As the previous diagram illustrates, the DTK-Tools methods select_simulation_data() and finalize() have been renamed to map() and reduce() in idmtools; however, the parameters are the same:

select_simulation_data(self, data, simulation)
map(self, data, simulation)

finalize(self, all_data)
reduce(self, all_data)

For additional information about the IAnalyzer class and methods, see IAnalyzer.

In addition, you can compare an example of a .csv analyzer created in DTK-Tools with its idmtools conversion. Other than the class name and some method names changing, the core code is almost the same. The primary differences can be seen in the import statements and in the execution of the analysis within the if __name__ == '__main__': block of code.

DTK-Tools example analyzer

The following DTK-Tools example performs analysis on simulation output data in .csv files and returns the result data in a .csv file:

import os
import pandas as pd
from simtools.Analysis.BaseAnalyzers import BaseAnalyzer
from simtools.Analysis.AnalyzeManager import AnalyzeManager
from simtools.SetupParser import SetupParser


class CSVAnalyzer(BaseAnalyzer):

    def __init__(self, filenames, parse=True):
        super().__init__(parse=parse, filenames=filenames)
        if not all(['csv' in os.path.splitext(f)[1].lower() for f in self.filenames]):
            raise Exception('Please ensure all filenames provided to CSVAnalyzer have a csv extension.')

    def initialize(self):
        if not os.path.exists(os.path.join(self.working_dir, "output_csv")):
            os.mkdir(os.path.join(self.working_dir, "output_csv"))

    def select_simulation_data(self, data, simulation):
        concatenated_df = pd.concat(list(data.values()), axis=0, ignore_index=True, sort=True)
        return concatenated_df

    def finalize(self, all_data: dict) -> dict:

        results = pd.concat(list(all_data.values()), axis=0,
                            keys=[k.id for k in all_data.keys()],
                            names=['SimId'])
        results.index = results.index.droplevel(1)

        results.to_csv(os.path.join("output_csv", self.__class__.__name__ + '.csv'))


if __name__ == "__main__":

    SetupParser.init(selected_block='HPC', setup_file="simtools.ini")
    filenames = ['output/c.csv']
    analyzers = [CSVAnalyzer(filenames=filenames)]
    manager = AnalyzeManager('9311af40-1337-ea11-a2be-f0921c167861', analyzers=analyzers)
    manager.analyze()

DTK-Tools converted to idmtools

The following example, converted from DTK-Tools to idmtools, performs analysis on simulation output data in .csv files and returns the result data in a .csv file:

import os
import pandas as pd
from idmtools.entities import IAnalyzer
from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform


class CSVAnalyzer(IAnalyzer):

    def __init__(self, filenames, parse=True):
        super().__init__(parse=parse, filenames=filenames)
        if not all(['csv' in os.path.splitext(f)[1].lower() for f in self.filenames]):
            raise Exception('Please ensure all filenames provided to CSVAnalyzer have a csv extension.')

    def initialize(self):
        if not os.path.exists(os.path.join(self.working_dir, "output_csv")):
            os.mkdir(os.path.join(self.working_dir, "output_csv"))

    def map(self, data, simulation):
        concatenated_df = pd.concat(list(data.values()), axis=0, ignore_index=True, sort=True)
        return concatenated_df

    def reduce(self, all_data):

        results = pd.concat(list(all_data.values()), axis=0,
                            keys=[k.id for k in all_data.keys()],
                            names=['SimId'])
        results.index = results.index.droplevel(1)

        results.to_csv(os.path.join("output_csv", self.__class__.__name__ + '.csv'))


if __name__ == '__main__':

    platform = Platform('COMPS')
    filenames = ['output/c.csv']
    analyzers = [CSVAnalyzer(filenames=filenames)]
    experiment_id = '9311af40-1337-ea11-a2be-f0921c167861'
    manager = AnalyzeManager(configuration={}, partial_analyze_ok=True, platform=platform,
                             ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers)
    manager.analyze()

You can quickly see this analyzer in use by running the included CSVAnalyzer example class.

Plot data

You can use idmtools to plot the output results of the analysis of simulations and experiments. You must include a plotting library within your script; for example, a common Python plotting library is matplotlib (https://matplotlib.org/).

The following shows how to add matplotlib to a reduce method for plotting the output results of a population analyzer:

def reduce(self, all_data: dict) -> Any:
    output_dir = os.path.join(self.working_dir, "output")

    with open(os.path.join(output_dir, "population.json"), "w") as fp:
        json.dump({str(s.uid): v for s, v in all_data.items()}, fp)

    import matplotlib.pyplot as plt

    fig = plt.figure()
    ax = fig.add_subplot()

    for pop in list(all_data.values()):
        ax.plot(pop)
    ax.legend([str(s.uid) for s in all_data.keys()])
    fig.savefig(os.path.join(output_dir, "population.png"))

The reduce method uses the output from the map method, which extracts the Statistical Population channel from InsetChart.json, as the input for plotting:

filenames = ['output/InsetChart.json']

def map(self, data: Any, item: IItem) -> Any:
    return data[self.filenames[0]]["Channels"]["Statistical Population"]["Data"]

The final results are plotted and saved to the file, population.png:

_images/plots-insetchart.png

Architecture and packages reference

idmtools is built in Python and includes an architecture designed for ease of use, flexibility, and extensibility. You can quickly get up and running and see the capabilities of idmtools by using one of the many included example Python scripts demonstrating the functionality of the packages.

idmtools is built in a modular fashion, as seen in the diagrams below. The idmtools design includes multiple packages and APIs, providing both the flexibility to include only the packages necessary for your modeling needs and the extensibility to customize through the APIs.

Packages overview

_images/5c0f7ec3734a7336a2ae24163816744cfd3ac3975937b8edbebcba48d363573a.svg

Packages and APIs

The following diagrams help illustrate the primary packages and associated APIs available for modeling and development with idmtools:

Core and job orchestration
_images/9e90c3d94a2de75f5017d885f94378c5bd4e7355d1f8f1f889ed26c93f0269f3.svg
Local platform
_images/b6ac1510c52577b49d5ba72783356fd9f1b4afdcbe428066c5d917c53916b67f.svg
COMPS platform
_images/983db3e48bf6208512cff010989825736d08e99df36268b3a9dbdd547b72f83b.svg

Note

COMPS access is restricted to IDM employees. See additional documentation for using idmtools with other high-performance computing clusters.

SLURM platform
_images/2df77dcf8df0464ddf83d37f35956a7fb06e9785ed7e7de86de08f1b63002477.svg
Models reference
_images/1471295e28d358159121b66ecb100412ecacef7e14a85abdb1656c7de41d44d7.svg
API class specifications
_images/3a4c68ce075bf4f5f558c37f4e14d3b70c5df0ac83a74bddf1f4fd168ff3c7bd.svg
EMOD
_images/035e540468a441e8cb365997a1310a7cd611681dbf66cedb854c1b012a03e541.svg

EMOD support with idmtools is provided by the emodpy package, which leverages the idmtools plugin architecture.

API Documentation

idmtools
idmtools package

idmtools core package.

This init installs a system exception hook for idmtools. It also ensures the configuration is loaded.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools Subpackages
idmtools.analysis package

idmtools analyzer framework

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.analysis Submodules
idmtools.analysis.add_analyzer module

idmtools add analyzer.

More of an example.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.analysis.add_analyzer.AddAnalyzer(filenames=None, output_path='output')[source]

Bases: IAnalyzer

A simple base class to add analyzers.

Examples

# Example AddAnalyzer for EMOD Experiment
# In this example, we will demonstrate how to create an AddAnalyzer to analyze an experiment's output file

# First, import some necessary system and idmtools packages.
from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.add_analyzer import AddAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

if __name__ == '__main__':

    # Set the platform where you want to run your analysis
    with Platform('CALCULON') as platform:

        # Arg option for analyzer init are uid, working_dir, data in the method map (aka select_simulation_data),
        # and filenames
        # In this case, we want to provide a filename to analyze
        filenames = ['stdout.txt']
        # Initialize the analyser class with the name of file to save to and start the analysis
        analyzers = [AddAnalyzer(filenames=filenames)]

        # Set the experiment you want to analyze
        experiment_id = '6f305619-64b3-ea11-a2c6-c4346bcb1557'  # comps exp id

        # Specify the id Type, in this case an Experiment
        manager = AnalyzeManager(ids=[(experiment_id, ItemType.EXPERIMENT)], analyzers=analyzers)
        manager.analyze()
__init__(filenames=None, output_path='output')[source]

Initialize our analyzer.

Parameters:
  • filenames – Filename to fetch

  • output_path – Path to write output to

filter(item: IWorkflowItem | Simulation)[source]

Filter analyzers. Here we want all the items so just return true.

Parameters:

item – Item to filter

Returns:

True

initialize()[source]

Initialize our analyzer before running it.

We use this to create our output directory.

Returns:

None

map(data, item: IWorkflowItem | Simulation)[source]

Run this on each item and the files we retrieve.

Parameters:
  • data – Map of filenames -> content

  • item – Item we are mapping

Returns:

Values added up

reduce(data)[source]

Combine all the data we mapped.

Parameters:

data – Map of results in form Item -> map results

Returns:

Sum of all the results

idmtools.analysis.analyze_manager module

idmtools Analyzer manager.

AnalyzerManager is the “driver” of analysis. Analysis is mostly a map reduce operation.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.analysis.analyze_manager.pool_worker_initializer(func, analyzers, platform: IPlatform) NoReturn[source]

Initialize the pool worker, which allows the process pool to associate the analyzers, cache, and path mapping to the function executed to retrieve data.

Using an initializer improves performance.

Parameters:
  • func – The function that the pool will call.

  • analyzers – The list of all analyzers to run.

  • platform – The platform to communicate with to retrieve files from.

Returns:

None

class idmtools.analysis.analyze_manager.AnalyzeManager(platform: IPlatform = None, configuration: dict = None, ids: List[Tuple[str, ItemType]] = None, analyzers: List[IAnalyzer] = None, working_dir: str = None, partial_analyze_ok: bool = False, max_items: int | None = None, verbose: bool = True, force_manager_working_directory: bool = False, exclude_ids: List[str] = None, analyze_failed_items: bool = False, max_workers: int | None = None, executor_type: str = 'process')[source]

Bases: object

Analyzer Manager Class. This is the main driver of analysis.

ANALYZE_TIMEOUT = 28800
WAIT_TIME = 1.15
EXCEPTION_KEY = '__EXCEPTION__'
exception TimeOutException[source]

Bases: Exception

TimeOutException is raised when the analysis times out.

exception ItemsNotReady[source]

Bases: Exception

ItemsNotReady is raised when items to be analyzed are still running.

Notes

TODO - Add doc_link

__init__(platform: IPlatform = None, configuration: dict = None, ids: List[Tuple[str, ItemType]] = None, analyzers: List[IAnalyzer] = None, working_dir: str = None, partial_analyze_ok: bool = False, max_items: int | None = None, verbose: bool = True, force_manager_working_directory: bool = False, exclude_ids: List[str] = None, analyze_failed_items: bool = False, max_workers: int | None = None, executor_type: str = 'process')[source]

Initialize the AnalyzeManager.

Parameters:
  • platform (IPlatform) – Platform

  • configuration (dict, optional) – Initial Configuration. Defaults to None.

  • ids (List[Tuple[str, ItemType]], optional) – List of ids as (item_id, ItemType) tuples. Defaults to None.

  • analyzers (List[IAnalyzer], optional) – List of Analyzers. Defaults to None.

  • working_dir (str, optional) – The working directory. Defaults to os.getcwd().

  • partial_analyze_ok (bool, optional) – Whether partial analysis is ok. When this is True, Experiments in progress or Failed can be analyzed. Defaults to False.

  • max_items (int, optional) – Max Items to analyze. Useful when developing and testing an Analyzer. Defaults to None.

  • verbose (bool, optional) – Print extra information about analysis. Defaults to True.

  • force_manager_working_directory (bool, optional) – [description]. Defaults to False.

  • exclude_ids (List[str], optional) – [description]. Defaults to None.

  • analyze_failed_items (bool, optional) – Allows analyzing of failed items. Useful when you are trying to aggregate items that have failed. Defaults to False.

  • max_workers (int, optional) – Set the max workers. If not provided, falls back to the configuration item max_threads. If max_workers is not set in configuration, defaults to CPU count

  • executor_type (str) – Whether to use process or thread pooling. Process pooling is more efficient, but threading might be required in some environments.

add_item(item: IEntity) NoReturn[source]

Add an additional item for analysis.

Parameters:

item – The new item to add for analysis.

Returns:

None

add_analyzer(analyzer: IAnalyzer) NoReturn[source]

Add another analyzer to use on the items to be analyzed.

Parameters:

analyzer – An analyzer object (IAnalyzer).

Returns:

None

analyze() bool[source]

Process the provided items with the provided analyzers. This is the main driver method of AnalyzeManager.

Parameters:

kwargs – extra parameters

Returns:

True on success; False on failure/exception.
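
As a minimal sketch of these methods working together (the experiment id below is a placeholder), a manager can also be built up incrementally with add_item() and add_analyzer() before calling analyze():

from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.download_analyzer import DownloadAnalyzer
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment

if __name__ == '__main__':
    platform = Platform('CALCULON')

    # Start with an empty manager, then add items and analyzers one at a time
    manager = AnalyzeManager(platform=platform, partial_analyze_ok=True, max_workers=4)
    experiment = Experiment.from_id('00000000-0000-0000-0000-000000000000', platform=platform)
    manager.add_item(experiment)
    manager.add_analyzer(DownloadAnalyzer(filenames=['stdout.txt'], output_path='download'))

    # analyze() returns True on success; False on failure/exception
    success = manager.analyze()
    print(success)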

idmtools.analysis.csv_analyzer module

idmtools CSVAnalyzer.

Example of a csv analyzer to concatenate csv results into one csv from your experiment’s simulations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.analysis.csv_analyzer.CSVAnalyzer(filenames, output_path='output_csv')[source]

Bases: IAnalyzer

Provides an analyzer for CSV output.

Examples

Simple Example

This example covers the basic usage of the CSVAnalyzer

# Example CSVAnalyzer for any experiment
# In this example, we will demonstrate how to use a CSVAnalyzer to analyze csv files for experiments

# First, import some necessary system and idmtools packages.
from logging import getLogger

from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.csv_analyzer import CSVAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

if __name__ == '__main__':

    # Set the platform where you want to run your analysis
    # In this case we are running in CALCULON since the Work Item we are analyzing was run on COMPS
    logger = getLogger()
    with Platform('CALCULON') as platform:

        # Arg option for analyzer init are uid, working_dir, data in the method map (aka select_simulation_data),
        # and filenames
        # In this case, we want to provide a filename to analyze
        filenames = ['output/c.csv']
        # Initialize the analyser class with the path of the output csv file
        analyzers = [CSVAnalyzer(filenames=filenames, output_path="output_csv")]

        # Set the experiment id you want to analyze
        experiment_id = '31285dfc-4fe6-ee11-9f02-9440c9bee941'  # comps exp id simple sim and csv example

        # Specify the id Type, in this case an Experiment on COMPS
        manager = AnalyzeManager(partial_analyze_ok=True, ids=[(experiment_id, ItemType.EXPERIMENT)],
                                 analyzers=analyzers)
        manager.analyze()
Multiple CSVs

This example covers analyzing multiple CSVs

# Example CSVAnalyzer for any experiment with multiple csv outputs
# In this example, we will demonstrate how to use a CSVAnalyzer to analyze csv files for experiments

# First, import some necessary system and idmtools packages.
from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.csv_analyzer import CSVAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform


if __name__ == '__main__':

    # Set the platform where you want to run your analysis
    platform = Platform('CALCULON')

    # Arg option for analyzer init are uid, working_dir, data in the method map (aka select_simulation_data),
    # and filenames
    # In this case, we have multiple csv files to analyze
    filenames = ['output/a.csv', 'output/b.csv']
    # Initialize the analyser class with the path of the output csv file
    analyzers = [CSVAnalyzer(filenames=filenames, output_path="output_csv")]

    # Set the experiment id you want to analyze
    experiment_id = '31285dfc-4fe6-ee11-9f02-9440c9bee941'  # comps exp id

    # Specify the id Type, in this case an Experiment on COMPS
    manager = AnalyzeManager(partial_analyze_ok=True, ids=[(experiment_id, ItemType.EXPERIMENT)],
                             analyzers=analyzers)
    manager.analyze()
__init__(filenames, output_path='output_csv')[source]

Initialize our analyzer.

Parameters:
  • filenames – Filenames we want to pull

  • output_path – Output path to write the csv

initialize()[source]

Initialize on run. Create an output directory.

Returns:

None

map(data: Dict[str, Any], simulation: IWorkflowItem | Simulation) DataFrame[source]

Map each simulation/workitem data here.

The data is a mapping of files -> content (in this case, dataframes, since the csv files are parsed).

Parameters:
  • data – Data mapping of files -> content

  • simulation – Simulation/Workitem we are mapping

Returns:

Items joined together into a dataframe.

reduce(all_data: Dict[IWorkflowItem | Simulation, DataFrame])[source]

Reduce(combine) all the data from our mapping.

Parameters:

all_data – Mapping of our data in form Item(Simulation/Workitem) -> Mapped dataframe

Returns:

None

idmtools.analysis.download_analyzer module

idmtools Download analyzer.

Download Analyzer.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.analysis.download_analyzer.DownloadAnalyzer(filenames=None, output_path='output', **kwargs)[source]

Bases: IAnalyzer

A simple base class that will download the files specified in filenames without further treatment.

Can be used by creating a child class:

class InsetDownloader(DownloadAnalyzer):
    filenames = ['output/InsetChart.json']

Or by directly calling it:

analyzer = DownloadAnalyzer(filenames=['output/InsetChart.json'])

Examples

# Example DownloadAnalyzer for EMOD Experiment
# In this example, we will demonstrate how to create a DownloadAnalyzer to download simulation output files locally

# First, import some necessary system and idmtools packages.
from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.download_analyzer import DownloadAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

if __name__ == '__main__':

    # Set the platform where you want to run your analysis
    with Platform('CALCULON') as platform:

        # Arg option for analyzer init are uid, working_dir, data in the method map (aka select_simulation_data),
        # and filenames
        # In this case, we want to provide a filename to analyze
        filenames = ['stdout.txt']
        # Initialize the analyser class with the path of the output files to download
        analyzers = [DownloadAnalyzer(filenames=filenames, output_path='download')]

        # Set the experiment you want to analyze
        experiment_id = '31285dfc-4fe6-ee11-9f02-9440c9bee941'  # comps exp id

        # Specify the id Type, in this case an Experiment
        manager = AnalyzeManager(ids=[(experiment_id, ItemType.EXPERIMENT)],
                                 analyzers=analyzers)
        manager.analyze()
reduce(all_data: Dict[IWorkflowItem | Simulation, Any])[source]

Combine the map() data for a set of items into an aggregate result. In this case, for downloading, we just ignore it because there is no reduction.

Parameters:

all_data – Dictionary in form item->map result where item is Simulations or WorkItems

Returns:

None

__init__(filenames=None, output_path='output', **kwargs)[source]

Constructor of the analyzer.

initialize()[source]

Initialize our sim. In this case, we create our output directory.

Returns:

None

get_item_folder(item: IWorkflowItem | Simulation)[source]

Concatenate the specified top-level output folder with the item ID.

Parameters:

item – A simulation output parsing thread.

Returns:

The name of the folder to download this simulation’s output to.

map(data: Dict[str, Any], item: IWorkflowItem | Simulation)[source]

Provide a map of filenames->data for each item. We then download each of these files to our output folder.

Parameters:
  • data – Map filenames->data

  • item – Item we are mapping.

Returns:

None

idmtools.analysis.map_worker_entry module

We define our map entry items here for analysis framework.

Most of these functions are used either to initialize a thread or to handle exceptions while executing.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.analysis.map_worker_entry.map_item(item: IItem) Dict[str, Dict][source]

Initialize some worker-global values; a worker process entry point for analyzer item-mapping.

Parameters:

item – The item (often simulation) to process.

Returns:

Dict[str, Dict]

idmtools.analysis.platform_analysis_bootstrap module

This script is executed as the entrypoint in the docker SSMT worker.

Its role is to collect the experiment ids and analyzers and run the analysis.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.analysis.platform_anaylsis module

Platform Analysis is a wrapper to allow execution of analysis through SSMT vs Locally.

Running remotely has great advantages over local execution, with the biggest being more compute resources and less data transfer. Platform Analysis tries to make the process of running remotely similar to local execution.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.analysis.platform_anaylsis.PlatformAnalysis(platform: IPlatform, analyzers: List[Type[IAnalyzer]], experiment_ids: List[str] = [], simulation_ids: List[str] = [], work_item_ids: List[str] = [], analyzers_args=None, analysis_name: str = 'WorkItem Test', tags=None, additional_files: FileList | AssetCollection | List[str] | None = None, asset_collection_id=None, asset_files: FileList | AssetCollection | List[str] | None = None, wait_till_done: bool = True, idmtools_config: str | None = None, pre_run_func: Callable | None = None, wrapper_shell_script: str | None = None, verbose: bool = False, extra_args: Dict[str, Any] | None = None)[source]

Bases: object

PlatformAnalysis allows remote Analysis on the server.

__init__(platform: IPlatform, analyzers: List[Type[IAnalyzer]], experiment_ids: List[str] = [], simulation_ids: List[str] = [], work_item_ids: List[str] = [], analyzers_args=None, analysis_name: str = 'WorkItem Test', tags=None, additional_files: FileList | AssetCollection | List[str] | None = None, asset_collection_id=None, asset_files: FileList | AssetCollection | List[str] | None = None, wait_till_done: bool = True, idmtools_config: str | None = None, pre_run_func: Callable | None = None, wrapper_shell_script: str | None = None, verbose: bool = False, extra_args: Dict[str, Any] | None = None)[source]

Initialize our platform analysis.

Parameters:
  • platform – Platform

  • experiment_ids – Experiment ids

  • simulation_ids – Simulation ids

  • work_item_ids – WorkItem ids

  • analyzers – Analyzers to run

  • analyzers_args – Arguments for our analyzers

  • analysis_name – Analysis name

  • tags – Tags for the workitem

  • additional_files – Additional files for server analysis

  • asset_collection_id – Asset Collection to use

  • asset_files – Asset files to attach

  • wait_till_done – Wait until analysis is done

  • idmtools_config – Optional path to idmtools.ini to use on server. Mostly useful for development

  • pre_run_func – A function (with no arguments) to be executed before analysis starts on the remote server

  • wrapper_shell_script – Optional path to a wrapper shell script. This script should redirect all arguments to command passed to it. Mostly useful for development purposes

  • verbose – Enables verbose logging remotely

  • extra_args – Optional extra arguments to pass to AnalyzerManager on the server side. See __init__()

analyze(check_status=True)[source]

Analyze remotely.

Parameters:

check_status – Should we check status

Returns:

None

Notes

TODO: check_status is not being used

validate_args()[source]

Validate arguments for the platform analysis and analyzers.

Returns:

None

get_work_item()[source]

Get the work item being used to run the analysis job on the server.

Returns:

Workflow item

idmtools.analysis.tags_analyzer module

Example of a tags analyzer to get all the tags from your experiment simulations into one csv file.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.analysis.tags_analyzer.TagsAnalyzer(uid=None, working_dir=None, parse=True, output_path='output_tag')[source]

Bases: IAnalyzer

Provides an analyzer that writes simulation tags to CSV output.

Examples

# Example TagsAnalyzer for any experiment
# In this example, we will demonstrate how to use a TagsAnalyzer to put your sim tags in a csv file

# First, import some necessary system and idmtools packages.
from idmtools.analysis.analyze_manager import AnalyzeManager
from idmtools.analysis.tags_analyzer import TagsAnalyzer
from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

if __name__ == '__main__':

    # Set the platform where you want to run your analysis
    # In this case we are running in COMPS since the Work Item we are analyzing was run on COMPS
    with Platform('CALCULON') as platform:

        # Arg option for analyzer init are uid, working_dir, data in the method map (aka select_simulation_data),
        # and filenames
        # Initialize the analyser class which just requires an experiment id
        analyzers = [TagsAnalyzer(output_path="output_tag")]

        # Set the experiment id you want to analyze
        experiment_id = '31285dfc-4fe6-ee11-9f02-9440c9bee941'  # comps exp id with partial succeed sims

        # Specify the id Type, in this case an Experiment on COMPS
        manager = AnalyzeManager(partial_analyze_ok=True,
                                 ids=[(experiment_id, ItemType.EXPERIMENT)],
                                 analyzers=analyzers)
        manager.analyze()
__init__(uid=None, working_dir=None, parse=True, output_path='output_tag')[source]

Initialize our Tags Analyzer.

Parameters:
  • uid

  • working_dir

  • parse

  • output_path

See also

IAnalyzer.

initialize()[source]

Initialize the item before mapping data. Here we create a directory for the output.

Returns:

None

map(data: Dict[str, Any], simulation: IWorkflowItem | Simulation)[source]

Map our data for our Workitems/Simulations. In this case, we just extract the tags and build a dataframe from those.

Parameters:
  • data – List of files. This should be empty for us.

  • simulation – Item to extract

Returns:

Data frame with the tags built.

reduce(all_data: Dict[IWorkflowItem | Simulation, DataFrame])[source]

Reduce the dictionary of items->Tags dataframe to a single dataframe and write to a csv file.

Parameters:

all_data – Map of Item->Tags dataframe

Returns:

None

idmtools.assets package

idmtools assets package.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.assets Submodules
idmtools.assets.asset module

idmtools asset class definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.assets.asset.Asset(absolute_path: str | None = None, relative_path: str | None = <property object>, filename: str | None = <property object>, content: dataclasses.InitVar[typing.Any] = <property object>, persisted: bool = False, handler: ~typing.Callable = <class 'str'>, download_generator_hook: ~typing.Callable | None = None, checksum: dataclasses.InitVar[typing.Any] = <property object>)[source]

Bases: object

A class representing an asset. An asset can either be related to a physical asset present on the computer or directly specified by a filename and content.

absolute_path: str | None = None

The absolute path of the asset. Optional if filename and content are given.

persisted: bool = False

Persisted tracks if item has been saved

handler

Handler to api

alias of str

download_generator_hook: Callable = None

Hook to allow downloading from platform

property checksum

Checksum of asset. Only required for existing assets

property extension: str

Returns extension of asset.

Returns:

Extension

Notes

This does not preserve the case of the extension in the filename. Extensions will always be returned in lowercase.

property filename

Name of the file. Optional if absolute_path is given.

property relative_path

The relative path (compared to the simulation root folder).

property bytes

Bytes is the content as bytes.

Returns:

None

property length

Get length of item.

Returns:

Length of the content

property content

The content of the file. Optional if absolute_path is given.

deep_equals(other: Asset) bool[source]

Performs a deep comparison of assets, including contents.

Parameters:

other – Other asset to compare

Returns:

True if filename, relative path, and contents are equal, otherwise false

download_generator() Generator[bytearray, None, None][source]

A Download Generator that returns chunks of bytes from the file.

Returns:

Generator of bytearray

Raises:

ValueError - When there is not a download generator hook defined

Notes

TODO - Add a custom error with doclink.

download_stream() BytesIO[source]

Get a bytes IO stream of the asset.

Returns:

BytesIO of the Asset

__init__(absolute_path: str | None = None, relative_path: str | None = <property object>, filename: str | None = <property object>, content: dataclasses.InitVar[typing.Any] = <property object>, persisted: bool = False, handler: ~typing.Callable = <class 'str'>, download_generator_hook: ~typing.Callable | None = None, checksum: dataclasses.InitVar[typing.Any] = <property object>) None
download_to_path(dest: str, force: bool = False)[source]

Download an asset to path. This requires loadings the object through the platform.

Parameters:
  • dest – Path to write to. If it is a directory, the asset filename will be added to it

  • force – Force download even if file exists

Returns:

None

calculate_checksum() str[source]

Calculate checksum on asset. If previous checksum was calculated, that value will be returned.

Returns:

Checksum string

short_remote_path() str[source]

Returns the short remote path. This is the join of the relative path and filename.

Returns:

Remote Path + Filename

save_as(dest: str, force: bool = False)[source]

Download asset object to destination file.

Parameters:
  • dest – The file path

  • force – Force download

Returns:

None

save_md5_checksum()[source]

Save the md5 checksum of the asset to a file in the same directory as the asset.

Returns:

None
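
The following is a minimal sketch of the two ways to construct an Asset described above, one from a physical file (hypothetical path) and one directly from a filename and content:

from idmtools.assets import Asset

# From a physical file on disk (hypothetical path)
a1 = Asset(absolute_path='inputs/config.json')

# Directly from a filename and content; no file on disk is required
a2 = Asset(filename='notes.txt', content='hello idmtools')

print(a1.extension)             # extensions are always returned lowercase
print(a2.length)                # length of the content
print(a2.calculate_checksum())  # checksum string of the asset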

idmtools.assets.asset_collection module

idmtools assets collection package.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.assets.asset_collection.AssetCollection(assets: List[str] | List[TAsset] | AssetCollection | None = None, tags=None)[source]

Bases: IEntity

A class that represents a collection of assets.

__init__(assets: List[str] | List[TAsset] | AssetCollection | None = None, tags=None)[source]

A constructor.

Parameters:
  • assets – An optional list of assets to create the collection with.

  • tags (dict) – Tags associated with the asset collection.

item_type: ItemType = 'Asset Collection'

ItemType so platform knows how to handle item properly

assets: List[Asset] = None

Assets for collection

classmethod from_id(item_id: str, platform: IPlatform = None, as_copy: bool = False, **kwargs) AssetCollection[source]

Loads a AssetCollection from id.

Parameters:
  • item_id – Asset Collection ID

  • platform – Platform Object

  • as_copy – Should you load the object as a copy. When True, the contents of AC are copied, but not the id. Useful when editing ACs

  • **kwargs

Returns:

AssetCollection

classmethod from_directory(assets_directory: str, recursive: bool = True, flatten: bool = False, filters: List[Callable[[TAsset], bool] | Callable] | None = None, filters_mode: FilterMode = FilterMode.OR, relative_path: str | None = None) TAssetCollection[source]

Fill up an AssetCollection from the specified directory.

See assets_from_directory() for arguments.

Returns:

A created AssetCollection object.

static assets_from_directory(assets_directory: str | PathLike, recursive: bool = True, flatten: bool = False, filters: List[Callable[[TAsset], bool] | Callable] | None = None, filters_mode: FilterMode = FilterMode.OR, forced_relative_path: str | None = None, no_ignore: bool = False) List[Asset][source]

Create assets for files in a given directory.

Parameters:
  • assets_directory – The root directory of the assets.

  • recursive – True to recursively traverse the subdirectory.

  • flatten – Put all the files in root regardless of whether they were in a subdirectory or not.

  • filters – A list of filters to apply to the assets. The filters are functions taking an Asset as argument and returning true or false. True adds the asset to the collection; False filters it out. See asset_filters().

  • filters_mode – When given multiple filters, either OR or AND the results.

  • forced_relative_path – Prefix a relative path to the path created from the root directory.

  • no_ignore – Should we not ignore common directories (.git, .svn, etc.)? The full list is defined in IGNORE_DIRECTORIES.

Examples

For relative_path, given the folder structure root/a/1.txt and root/b.txt, and relative_path="test", this will return assets with the relative paths test/a/1.txt and test/b.txt.

Given the previous example, if flatten is also set to True, the relative paths will instead be /1.txt and /b.txt.

Returns:

A list of assets.
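
A minimal sketch of this method (the directory name and filter below are hypothetical): collect only .csv files from a folder, using a filter function that takes an Asset and returns a bool:

from idmtools.assets import AssetCollection


def csv_only(asset):
    # Filters take an Asset and return True to include it in the collection
    return asset.filename.lower().endswith('.csv')


# 'my_inputs' is a hypothetical directory
assets = AssetCollection.assets_from_directory('my_inputs', recursive=True, filters=[csv_only])
for a in assets:
    print(a.short_remote_path())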

copy() AssetCollection[source]

Copy our Asset Collection, removing ID and tags.

Returns:

New AssetCollection containing Assets from other AssetCollection

add_directory(assets_directory: str | PathLike, recursive: bool = True, flatten: bool = False, filters: List[Callable[[TAsset], bool] | Callable] | None = None, filters_mode: FilterMode = FilterMode.OR, relative_path: str | None = None, no_ignore: bool = False)[source]

Retrieve assets from the specified directory and add them to the collection.

See assets_from_directory() for arguments.

is_editable(error=False) bool[source]

Checks whether Item is editable.

Parameters:

error – Throw an error if the item is not editable

Returns:

True if editable, False otherwise.

add_asset(asset: Asset | str | PathLike, fail_on_duplicate: bool = True, fail_on_deep_comparison: bool = False, **kwargs)[source]

Add an asset to the collection.

Parameters:
  • asset – A string or an Asset object to add. If a string, the string will be used as the absolute_path and any kwargs will be passed to the Asset constructor

  • fail_on_duplicate – Raise a DuplicatedAssetError if an asset is duplicated. If not, simply replace it.

  • fail_on_deep_comparison – Fails only if deep comparison differs

  • **kwargs – Arguments to pass to Asset constructor when asset is a string

Raises:

DuplicatedAssetError - If fail_on_duplicate is true and the asset is already part of the collection

add_assets(assets: List[TAsset] | AssetCollection, fail_on_duplicate: bool = True, fail_on_deep_comparison: bool = False)[source]

Add assets to a collection.

Parameters:
  • assets – A list of assets, as either a list or a collection

  • fail_on_duplicate – Raise a DuplicatedAssetError if an asset is duplicated. If not, simply replace it.

  • fail_on_deep_comparison – Fail if relative path/file is same but contents differ

Returns:

None

add_or_replace_asset(asset: Asset | str | PathLike, fail_on_deep_comparison: bool = False)[source]

Add or replaces an asset in a collection.

Parameters:
  • asset – Asset to add or replace

  • fail_on_deep_comparison – Fail replace if contents differ

Returns:

None.

get_one(**kwargs)[source]

Get an asset out of the collection based on the filters passed.

Examples:

>>> a = AssetCollection()
>>> a.get_one(filename="filename.txt")
Parameters:

**kwargs – keyword argument representing the filters.

Returns:

None or Asset if found.

remove(**kwargs) NoReturn[source]

Remove an asset from the AssetCollection based on keywords attributes.

Parameters:

**kwargs – Filter for the asset to remove.

pop(**kwargs) Asset[source]

Get and delete an asset based on keywords.

Parameters:

**kwargs – Filter for the asset to pop.

extend(assets: List[Asset], fail_on_duplicate: bool = True) NoReturn[source]

Extend the collection with new assets.

Parameters:
  • assets – Which assets to add

  • fail_on_duplicate – Fail if duplicated asset is included.

clear()[source]

Clear the asset collection.

Returns:

None

set_all_persisted()[source]

Set all persisted.

Returns:

None

property count

Number of assets in collections.

Returns:

Total assets

property uid

Uid of Asset Collection.

Returns:

Asset Collection UID.

has_asset(absolute_path: str | None = None, filename: str | None = None, relative_path: str | None = None, checksum: str | None = None) bool[source]

Search for asset by absolute_path or by filename.

Parameters:
  • absolute_path – Absolute path of source file

  • filename – Destination filename

  • relative_path – Relative path of asset

  • checksum – Checksum of asset(optional)

Returns:

True if asset exists, False otherwise

find_index_of_asset(other: Asset, deep_compare: bool = False) int | None[source]

Finds the index of asset by path or filename.

Parameters:
  • other – Other asset

  • deep_compare – Should content as well as path be compared

Returns:

Index number if found. None if not found.

pre_creation(platform: IPlatform) None[source]

Pre-Creation hook for the asset collection.

Parameters:

platform – Platform object we are creating the asset collection on

Returns:

None

Notes

TODO - Make default tags optional

set_tags(tags: Dict[str, Any])[source]

Set the tags on the asset collection.

Parameters:

tags – Tags to set on the item

Returns:

None

add_tags(tags: Dict[str, Any])[source]

Add tags to the Asset collection.

Parameters:

tags – Tags to add

Returns:

None
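
The sketch below (with hypothetical file names) ties several of these methods together: build a collection, add and query assets, and tag the result:

from idmtools.assets import Asset, AssetCollection

ac = AssetCollection()
# Add an asset created from content, and one from a (hypothetical) path string
ac.add_asset(Asset(filename='params.txt', content='a=1'))
ac.add_asset('inputs/model.py')  # a string is used as the absolute_path

print(ac.count)  # number of assets in the collection

# Retrieve a single asset by keyword filter
found = ac.get_one(filename='params.txt')
print(found)

# Tag the collection before it is created on a platform
ac.set_tags({'project': 'demo'})
ac.add_tags({'owner': 'me'})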

idmtools.assets.content_handlers module

idmtools assets content handlers.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.assets.content_handlers.json_handler(content)[source]

Dump a json to a string.

Parameters:

content – Content to write

Returns:

String of content
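
A minimal sketch of how this handler might be used (an assumption based on the Asset handler parameter shown above): pass json_handler so that dict content is serialized to a JSON string:

from idmtools.assets import Asset
from idmtools.assets.content_handlers import json_handler

# The handler serializes the dict content to a JSON string when the asset content is rendered
config_asset = Asset(filename='config.json', content={'population': 1000}, handler=json_handler)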

idmtools.assets.errors module

idmtools assets errors.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

exception idmtools.assets.errors.DuplicatedAssetError(asset: TAsset)[source]

Bases: Exception

DuplicatedAssetError is raised when duplicate assets are added to a collection.

Notes

TODO: Add a doclink

__init__(asset: TAsset)[source]

Initialize our error.

Parameters:

asset – Asset that caused the error
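
A minimal sketch of when this error surfaces, following the add_asset() behavior described above (file names are hypothetical):

from idmtools.assets import Asset, AssetCollection
from idmtools.assets.errors import DuplicatedAssetError

ac = AssetCollection()
ac.add_asset(Asset(filename='data.txt', content='1,2,3'))
try:
    # Adding an asset with the same filename again raises when fail_on_duplicate=True (the default)
    ac.add_asset(Asset(filename='data.txt', content='4,5,6'))
except DuplicatedAssetError as e:
    print(e)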

idmtools.assets.file_list module

idmtools FileList classes.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.assets.file_list.FileList(root=None, files_in_root=None, recursive=False, ignore_missing=False, relative_path=None, max_depth=3)[source]

Bases: object

Special utility class to help handling user files.

__init__(root=None, files_in_root=None, recursive=False, ignore_missing=False, relative_path=None, max_depth=3)[source]

Represents a set of files that are specified RELATIVE to root.

For example, /a/b/c.json could be root='/a', files_in_root=['b/c.json'].

Parameters:
  • root – The directory all files_in_root are relative to.

  • files_in_root – The listed files.

add_asset_file(af)[source]

Method used to add asset file.

Parameters:

af – asset file to add

Returns: None

add_file(path, relative_path='')[source]

Method used to add a file.

Parameters:
  • path – file path

  • relative_path – file relative path

Returns: None

add_path(path, files_in_dir=None, relative_path=None, recursive=False)[source]

Add a path to the file list.

Parameters:
  • path – The path to add (needs to be a directory)

  • files_in_dir – If we want to only retrieve certain files in this path

  • relative_path – The relative path prefixed to each added file

  • recursive – Do we want to browse recursively

Returns: None

to_asset_collection() AssetCollection[source]

Convert a file list to an asset collection.

Returns:

AssetCollection version of filelist

static from_asset_collection(asset_collection: AssetCollection) FileList[source]

Create a FileList from a AssetCollection.

Parameters:

asset_collection – AssetCollection to convert.

Returns:

FileList version of AssetCollection
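
A minimal sketch tying these methods together (paths are hypothetical): build a FileList relative to a root directory, then convert it to an AssetCollection:

from idmtools.assets.file_list import FileList

# Files are specified relative to root, e.g. root='inputs' and files_in_root=['b/c.json']
fl = FileList(root='inputs', files_in_root=['b/c.json'])
fl.add_file('inputs/extra.txt')

# Convert the file list to an AssetCollection for use with experiments
ac = fl.to_asset_collection()
print(ac.count)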

idmtools.builders package

idmtools builders package.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.builders Submodules
idmtools.builders.arm_simulation_builder module

idmtools arm builder definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.builders.arm_simulation_builder.ArmSimulationBuilder[source]

Bases: object

Class that represents an experiment builder.

This particular sweep builder builds sweeps in “arms”. This is particularly useful in situations where you want to sweep parameters that have branches of parameters. For example, let’s say we have a model with the following parameters: population, susceptible, recovered, enable_births, and birth_rate.

The enable_births parameter controls an optional feature whose behavior is governed by the birth_rate parameter. If we want to sweep population and susceptible with enable_births off, but also sweep birth_rate when enable_births is on, we could do that like so:

###################################
# This example provides how you can check if your sweeps are working as expected
###################################
from functools import partial
from idmtools.builders import ArmSimulationBuilder, SweepArm, ArmType
from idmtools.entities.command_task import CommandTask
from idmtools.entities.templated_simulation import TemplatedSimulations
from tabulate import tabulate


def update_parameter(simulation, parameter, value):
    simulation.task.config[parameter] = value


base_task = CommandTask('example')
base_task.config = dict(enable_births=False)
builder = ArmSimulationBuilder()
# Define our first set of sweeps
arm = SweepArm(type=ArmType.cross)
arm.add_sweep_definition(partial(update_parameter, parameter='population'), [500, 1000])
arm.add_sweep_definition(partial(update_parameter, parameter='susceptible'), [0.5, 0.9])
builder.add_arm(arm)
# Now add the sweeps with the birth_rate as well
arm2 = SweepArm(type=ArmType.cross)
arm2.add_sweep_definition(partial(update_parameter, parameter='enable_births'), [True])
arm2.add_sweep_definition(partial(update_parameter, parameter='birth_rate'), [0.01, 0.1])
builder.add_arm(arm2)

sims = TemplatedSimulations(base_task=base_task)
sims.add_builder(builder)

print(tabulate([s.task.config for s in sims], headers="keys"))

This would result in the following output:

Arm Example Values

enable_births    population    susceptible    birth_rate
False            500           0.5
False            500           0.9
False            1000          0.5
False            1000          0.9
True             500           0.5            0.01
True             500           0.5            0.1
True             500           0.9            0.01
True             500           0.9            0.1
True             1000          0.5            0.01
True             1000          0.5            0.1
True             1000          0.9            0.01
True             1000          0.9            0.1

Examples

"""
        This file demonstrates how to use ArmExperimentBuilder in PythonExperiment's builder.
        We are then adding the builder to PythonExperiment.

        |__sweep arm1
            |_ a = 1
            |_ b = [2,3]
            |_ c = [4,5]
        |__ sweep arm2
            |_ a = [6,7]
            |_ b = 2
        Expect sims with parameters:
            sim1: {a:1, b:2, c:4}
            sim2: {a:1, b:2, c:5}
            sim3: {a:1, b:3, c:4}
            sim4: {a:1, b:3, c:5}
            sim5: {a:6, b:2}
            sim6: {a:7, b:2}
        Note:
            arm1 and arm2 are adding to total simulations
"""
import os
import sys
from functools import partial

from idmtools.builders import SweepArm, ArmType, ArmSimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

# define specific callbacks for a, b, and c
setA = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="a")
setB = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="b")
setC = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="c")


if __name__ == "__main__":
    with platform('CALCULON'):
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"))
        # define that we are going to create multiple simulations from this task
        ts = TemplatedSimulations(base_task=base_task)

        # define our first sweep Sweep Arm
        arm1 = SweepArm(type=ArmType.cross)
        builder = ArmSimulationBuilder()
        arm1.add_sweep_definition(setA, 1)
        arm1.add_sweep_definition(setB, [2, 3])
        arm1.add_sweep_definition(setC, [4, 5])
        builder.add_arm(arm1)

        # adding more simulations with sweeping
        arm2 = SweepArm(type=ArmType.cross)
        arm2.add_sweep_definition(setA, [6, 7])
        arm2.add_sweep_definition(setB, [2])
        builder.add_arm(arm2)

        # add our builders to our template
        ts.add_builder(builder)

        # create experiment from the template
        experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1],
                                              tags={"string_tag": "test", "number_tag": 123, "KeyOnly": None})
        # run the experiment
        experiment.run()
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.wait()
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)
__init__()[source]

Constructor.

property count
add_arm(arm)[source]

Add arm sweep definition.

Parameters:

arm – Arm to add

Returns:

None

idmtools.builders.csv_simulation_builder module

idmtools CsvExperimentBuilder definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.builders.csv_simulation_builder.CsvExperimentBuilder[source]

Bases: ArmSimulationBuilder

Class that represents an experiment builder.

Examples

"""
        This file demonstrates how to use CsvExperimentBuilder in PythonExperiment's builder,
        then adding the builder to PythonExperiment.

        We first load a csv file from a local directory which contains the parameters/values to sweep,
        then sweep parameters based on the csv file with CsvExperimentBuilder.
        The csv file already lists all possible combinations of parameters you want to sweep.

        Parameter names (header) and values in csv file
            a,b,c,d
            1,2,3,
            1,3,1,
            2,2,3,4
            2,2,2,5
            2,,3,6
        Expect sims with parameters:
            sim1: {a:1, b:2, c:3}
            sim2: {a:1, b:3, c:1}
            sim3: {a:2, b:2, c:3, d:4}
            sim4: {a:2, b:2, c:2, d:5}
            sim5: {a:2, c:3, d:6}  <-- no 'b'

        This builder can be used for tests or simple scenarios.
        For example, if you only want to test a list of parameter combinations, and do not care about anything else,
        you can list them in a csv file so you do not have to go through the traditional sweep method (i.e., ExperimentBuilder's).

"""

import os
import sys
from functools import partial

import numpy as np

from idmtools.builders import CsvExperimentBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

# define function partials to be used during sweeps
setA = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="a")
setB = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="b")
setC = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="c")
setD = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="d")

if __name__ == "__main__":
    # define what platform we want to use. Here we use a context manager but if you prefer you can
    # use objects such as Platform('COMPS') instead
    with platform('CALCULON'):
        # define our base task
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"),
                                             parameters=dict(c='c-value'))
        # define our input csv sweep
        base_path = os.path.abspath(os.path.join(COMMON_INPUT_PATH, "builder"))
        file_path = os.path.join(base_path, 'sweeps.csv')
        builder = CsvExperimentBuilder()
        func_map = {'a': setA, 'b': setB, 'c': setC, 'd': setD}
        type_map = {'a': np.int64, 'b': np.int64, 'c': np.int64, 'd': np.int64}
        builder.add_sweeps_from_file(file_path, func_map, type_map)

        # now define we want to create a series of simulations using the base task and the sweep
        ts = TemplatedSimulations.from_task(base_task)
        # optionally we could update the base simulation metadata here
        # ts.base_simulation.tags['example'] = 'yes'
        ts.add_builder(builder)

        # define our experiment with its metadata
        experiment = Experiment.from_template(ts,
                                              name=os.path.split(sys.argv[0])[1],
                                              tags={"string_tag": "test", "number_tag": 123}
                                              )

        # run the experiment and wait. By default run does not wait
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.run(wait_until_done=True)
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)
add_sweeps_from_file(file_path, func_map=None, type_map=None, sep=',')[source]

Create sweeps from a CSV file.

Parameters:
  • file_path – Path to file

  • func_map – Function map

  • type_map – Type map

  • sep – CSV separator

Returns:

None

idmtools.builders.simulation_builder module

idmtools SimulationBuilder definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.builders.simulation_builder.SimulationBuilder[source]

Bases: object

Class that represents an experiment builder.

Examples

import os
import sys

from idmtools.assets import AssetCollection
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

with platform('Calculon'):
    base_task = JSONConfiguredPythonTask(
        script_path=os.path.join(COMMON_INPUT_PATH, "compsplatform", "working_model.py"),
        # add common assets from existing collection
        common_assets=AssetCollection.from_id('41c1b14d-0a04-eb11-a2c7-c4346bcb1553', as_copy=True)
    )

    ts = TemplatedSimulations(base_task=base_task)
    # sweep parameter
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("min_x"), range(-2, 0))
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("max_x"), range(1, 3))
    ts.add_builder(builder)

    e = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])
    e.run(wait_until_done=True)
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

Add tags with builder callbacks:

def update_sim(sim, parameter, value):
    sim.task.set_parameter(parameter, value)
    # return the tags to set on the simulation
    return {'custom': 123, parameter: value}

builder = SimulationBuilder()
set_run_number = partial(update_sim, parameter="Run_Number")
builder.add_sweep_definition(set_run_number, range(0, 2))
# create experiment from builder
exp = Experiment.from_builder(builder, task, name=expname)
SIMULATION_ATTR = 'simulation'
__init__()[source]

Constructor.

property count
add_sweep_definition(function: Callable[[Simulation, Any], Dict[str, Any]] | partial, *args, **kwargs)[source]

Add a sweep definition callback that takes possible multiple parameters (None or many).

The sweep will be defined as a cross-product between the parameters passed.

Parameters:
  • function – The sweep function, which must include a simulation parameter (or whatever is specified in SIMULATION_ATTR).

  • args – List of arguments to be passed

  • kwargs – List of keyword arguments to be passed

Returns:

None. Updates the Sweeps

Examples

Examples of valid functions:

# This function takes one parameter
def myFunction(simulation, parameter_a):
    pass

# This function takes one parameter with default value
def myFunction(simulation, parameter_a=6):
    pass

# This function takes two parameters (parameters may have default values)
def myFunction(simulation, parameter_a, parameter_b=9):
    pass

# Function that takes three parameters (parameters may have default values)
def three_param_callback(simulation, parameter_a, parameter_b, parameter_c=10):
    pass

Calling Sweeps that take multiple parameters:

# This example references the above valid function example
sb = SimulationBuilder()

# Add a sweep on the myFunction that takes parameter(s).
# Here we sweep the values 1-4 on parameter_a and a,b on parameter_b
sb.add_sweep_definition(myFunction, range(1,5), ["a", "b"])

sb2 = SimulationBuilder()
# Example calling using a dictionary instead
sb2.add_sweep_definition(three_param_callback, dict(parameter_a=range(1, 5), parameter_b=["a", "b"], parameter_c=range(4, 5)))
# The following is equivalent
sb2.add_sweep_definition(three_param_callback, **dict(parameter_a=range(1, 5), parameter_b=["a", "b"], parameter_c=range(4, 5)))

sb3 = SimulationBuilder()
# If all parameters have default values, we can even simply do
sb3.add_sweep_definition(three_param_callback)

Remark: in general:

def my_callback(simulation, parameter_1, parameter_2, ..., parameter_n):
    pass

Calling Sweeps that take multiple parameters:

sb = SimulationBuilder()
sb.add_sweep_definition(my_callback, Iterable_1, Iterable_2, ..., Iterable_m)

# Note: the number of Iterable objects must match the number of parameters of my_callback that don't have default values

# Or use the key (parameter names)

sb = SimulationBuilder()
sb.add_sweep_definition(my_callback, parameter_1=Iterable_1, parameter_2=Iterable_2, ..., parameter_m=Iterable_m)
# The following is equivalent
sb.add_sweep_definition(my_callback, dict(parameter_1=Iterable_1, parameter_2=Iterable_2, ..., parameter_m=Iterable_m))
# and
sb.add_sweep_definition(my_callback, **dict(parameter_1=Iterable_1, parameter_2=Iterable_2, ..., parameter_m=Iterable_m))
case_args_tuple(function: Callable[[Simulation, Any], Dict[str, Any]] | partial, remaining_parameters, values)[source]
case_kwargs(function: Callable[[Simulation, Any], Dict[str, Any]] | partial, remaining_parameters, values)[source]
add_multiple_parameter_sweep_definition(function: Callable[[Simulation, Any], Dict[str, Any]] | partial, *args, **kwargs)[source]

Add a sweep definition callback that takes possibly multiple parameters (none or many).

The sweep will be defined as a cross-product between the parameters passed.

Parameters:
  • function – The sweep function, which must include a simulation parameter (or whatever is specified in SIMULATION_ATTR).

  • args – List of arguments to be passed

  • kwargs – Keyword arguments to be passed

Returns:

None. Updates the Sweeps

Examples

Refer to the comments in the add_sweep_definition function for examples.

idmtools.builders.sweep_arm module

idmtools SweepArm definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.builders.sweep_arm.ArmType(value)[source]

Bases: Enum

ArmTypes.

cross = 0
pair = 1
class idmtools.builders.sweep_arm.SweepArm(type=ArmType.cross, funcs: List[Tuple[Callable, Iterable]] | None = None)[source]

Bases: SimulationBuilder

Class that represents a section of simulation sweeping.

__init__(type=ArmType.cross, funcs: List[Tuple[Callable, Iterable]] | None = None)[source]

Constructor.

property count

Return the simulation count.

property functions
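A minimal sketch of combining arms with ArmSimulationBuilder (the add_arm call shown is an assumption based on the classes documented here):

from functools import partial

from idmtools.builders import ArmSimulationBuilder, ArmType, SweepArm
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# callbacks that each set a single parameter on a simulation
set_a = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="a")
set_b = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="b")

# a cross arm produces the cross-product of its sweeps: 2 x 3 = 6 simulations
arm = SweepArm(type=ArmType.cross)
arm.add_sweep_definition(set_a, range(2))
arm.add_sweep_definition(set_b, [1, 2, 3])

builder = ArmSimulationBuilder()
builder.add_arm(arm)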
idmtools.builders.yaml_simulation_builder module

idmtools YamlSimulationBuilder definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.builders.yaml_simulation_builder.DefaultParamFuncDict(default)[source]

Bases: dict

Enables a function that takes a single parameter and returns another function.

Notes

TODO Add Example and types

__init__(default)[source]

Initialize our DefaultParamFuncDict.

Parameters:

default – Default function to use
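A minimal sketch of the intent, assuming lookups of missing keys fall back to calling the default function with the key (this fallback behaviour is an assumption):

from idmtools.builders.yaml_simulation_builder import DefaultParamFuncDict

def default_func(param):
    # build a callback bound to the given parameter name
    def set_param(simulation, value):
        return {param: value}
    return set_param

funcs = DefaultParamFuncDict(default_func)
set_a = funcs["a"]  # assumed equivalent to default_func("a") when "a" is not a stored key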

class idmtools.builders.yaml_simulation_builder.YamlSimulationBuilder[source]

Bases: ArmSimulationBuilder

Class that represents an experiment builder.

Examples

"""
        This file demonstrates how to use YamlSimulationBuilder as PythonExperiment's builder,
        then add the builder to PythonExperiment.

        We first load a yaml file from a local directory, which contains the parameters/values to sweep,
        then sweep those parameters with YamlSimulationBuilder.
        Behind the scenes, we use an arm sweep: each group is treated as a SweepArm and then added to the builder.

        Parameters in yaml file
            group1:
                - a: 1
                - b: 2
                - c: [3, 4]
                - d: [5, 6]
            group2:
                - c: [3, 4]
                - d: [5, 6, 7]

        Expect sims with parameters:
            sim1: {a:1, b:2, c:3, d:5}
            sim2: {a:1, b:2, c:3, d:6}
            sim3: {a:1, b:2, c:4, d:5}
            sim4: {a:1, b:2, c:4, d:6}
            sim5: {c:3, d:5}
            sim6: {c:3, d:6}
            sim7: {c:3, d:7}
            sim8: {c:4, d:5}
            sim9: {c:4, d:6}
            sim10: {c:4, d:7}

        This builder is very similar to ArmExperimentBuilder, but more direct: you just list all the
        parameter combinations you care about in the yaml file and let the builder do the job.

"""
import os
import sys
from functools import partial

from idmtools.builders import YamlSimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

# define function partials to be used during sweeps
setA = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="a")
setB = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="b")
setC = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="c")
setD = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="d")

if __name__ == "__main__":
    # define what platform we want to use. Here we use a context manager but if you prefer you can
    # use objects such as Platform('CALCULON') instead
    with platform('CALCULON'):
        # define our base task
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"),
                                             parameters=dict(c='c-value'))
        # define our input csv sweep
        base_path = os.path.abspath(os.path.join(COMMON_INPUT_PATH, "builder"))
        file_path = os.path.join(base_path, 'sweeps.yaml')
        builder = YamlSimulationBuilder()
        # define a list of functions to map the specific yaml values
        func_map = {'a': setA, 'b': setB, 'c': setC, 'd': setD}
        builder.add_sweeps_from_file(file_path, func_map)
        # optionally, you can also pass a single function that is used for all parameters
        # The default behaviour of the builder is to assume that function is a partial
        # and to call it with one var (the parameter name) before building the sweep
        # builder.add_sweeps_from_file(file_path, JSONConfiguredPythonTask.set_parameter_partial)

        # now define we want to create a series of simulations using the base task and the sweep
        ts = TemplatedSimulations.from_task(base_task)
        # optionally we could update the base simulation metadata here
        # ts.base_simulation.tags['example'] = 'yes'
        ts.add_builder(builder)

        # define our experiment from our template and add some metadata to the experiment
        experiment = Experiment.from_template(ts,
                                              name=os.path.split(sys.argv[0])[1],
                                              tags={"string_tag": "test", "number_tag": 123}
                                              )

        # run the experiment and wait. By default run does not wait
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.run(wait_until_done=True)
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)
add_sweeps_from_file(file_path, func_map: Dict[str, Callable] | Callable[[Any], Dict] | None = None)[source]

Add sweeps from a file.

Parameters:
  • file_path – Path to file

  • func_map – Optional function map

Returns:

None

idmtools.config package

idmtools Configuration tools/Manager.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.config Submodules
idmtools.config.idm_config_parser module

idmtools IdmConfig parser, the main configuration engine for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.config.idm_config_parser.initialization(force=False)[source]

Initialization decorator for configuration methods.

Parameters:

force – Force initialization

Returns:

Wrapper function

class idmtools.config.idm_config_parser.IdmConfigParser(dir_path: str = '.', file_name: str = 'idmtools.ini')[source]

Bases: object

Class that parses an INI configuration file.
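Examples

A minimal sketch of typical usage (the section and option names here are illustrative assumptions):

from idmtools.config import IdmConfigParser

# ensure idmtools.ini has been located and parsed
IdmConfigParser.ensure_init(dir_path='.', file_name='idmtools.ini')
# read a value from a block of the ini file
batch_size = IdmConfigParser.get_option("COMMON", "batch_size")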

classmethod retrieve_dict_config_block(field_type, section) Dict[str, Any][source]

Retrieve dictionary config block.

Parameters:
  • field_type – Field type

  • section – Section to load

Returns:

Dictionary of the config block

classmethod retrieve_settings()[source]
static get_global_configuration_name() str[source]

Get Global Configuration Name.

Returns:

On Windows, this returns %LOCALDATA%\idmtools\idmtools.ini. On Mac and Linux, it returns /home/username/.idmtools.ini.

Raises:

ValueError on unsupported OSs

classmethod get_section(*args, **kwargs)[source]
classmethod get_option(*args, **kwargs)[source]
classmethod is_progress_bar_disabled() bool[source]

Are progress bars disabled.

Returns:

Whether progress bars are disabled

classmethod is_output_enabled() bool[source]

Is output enabled.

Returns:

Whether output is enabled

classmethod ensure_init(dir_path: str = '.', file_name: str = 'idmtools.ini', force: bool = False) None[source]

Verify that the INI file is loaded and a configparser instance is available.

Parameters:
  • dir_path – The directory to search for the INI configuration file.

  • file_name – The configuration file name to search for.

  • force – Force reload of everything

Returns:

None

Raises:

ValueError – If the config file is found but cannot be parsed

classmethod get_config_path(*args, **kwargs)[source]
classmethod display_config_path(*args, **kwargs)[source]
classmethod view_config_file(*args, **kwargs)[source]
classmethod display_config_block_details(block)[source]

Display the values of a config block.

Parameters:

block – Block to print

Returns:

None

classmethod has_section(*args, **kwargs)[source]
classmethod has_option()[source]
classmethod found_ini() bool[source]

Did we find the config?

Returns:

True if we did, False otherwise

classmethod clear_instance() None[source]

Uninitialize and clean the IdmConfigParser instance.

Returns:

None

idmtools.core package

Core area classes are defined in this package.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core Subpackages
idmtools.core.interfaces package

idmtools List of core abstract interfaces that other core objects derive from.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.interfaces Submodules
idmtools.core.interfaces.entity_container module

EntityContainer definition. EntityContainer provides an envelope for a parent to contain a list of sub-items.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.entity_container.EntityContainer(children: List[IEntity] = None)[source]

Bases: list

EntityContainer is a wrapper class used by Experiments and Suites to wrap their children.

It provides utilities to set status on entities.

__init__(children: List[IEntity] = None)[source]

Initialize the EntityContainer.

Parameters:

children – Children to initialize with

set_status(status: EntityStatus)[source]

Set status on all the children.

Parameters:

status – Status to set

Returns:

None

set_status_for_item(item_id, status: EntityStatus)[source]

Set status for specific sub-item.

Parameters:
  • item_id – Item id to set status for

  • status – Status to set

Returns:

None

Raises:

ValueError when the item_id is not in the children list

idmtools.core.interfaces.iassets_enabled module

IAssetsEnabled interface definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.iassets_enabled.IAssetsEnabled(assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>)[source]

Bases: object

Base class for objects containing an asset collection.

assets: AssetCollection
abstract gather_assets() NoReturn[source]

Function called at runtime to gather all assets in the collection.

add_assets(assets: List[TAsset] | AssetCollection | None = None, fail_on_duplicate: bool = True) NoReturn[source]

Add more assets to AssetCollection.

add_asset(asset: str | TAsset | None = None, fail_on_duplicate: bool = True) NoReturn[source]

Add an asset to our item.

Parameters:
  • asset – Asset to add. Asset can be a string in which case it is assumed to be a file path

  • fail_on_duplicate – Should we raise an exception if there is an existing file with the same information

Returns:

None

Raises:

DuplicatedAssetError in cases where fail_on_duplicate is true

__init__(assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>) None
idmtools.core.interfaces.ientity module

IEntity definition. IEntity is the base of most of our remote server entities like Experiment, Simulation, WorkItems, and Suites.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.ientity.IEntity(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None)[source]

Bases: IItem

Interface for all entities in the system.

platform_id: str = None

ID of the platform

parent_id: str = None

Parent id

status: EntityStatus = None

Status of item

tags: Dict[str, Any]

Tags for item

item_type: ItemType = None

Item type (Experiment, Suite, Asset, etc.)

update_tags(tags: dict | None = None) NoReturn[source]

Shortcut to update the tags with the given dictionary.

Parameters:

tags – New tags

post_creation(platform: IPlatform) None[source]

Post creation hook for object.

Returns:

None

classmethod from_id_file(filename: PathLike | str, platform: IPlatform = None, **kwargs) IEntity[source]

Load from a file that contains the id.

Parameters:
  • filename – Filename to load

  • platform – Platform object to load id from. This can be loaded from file if saved there.

  • **kwargs – Platform extra arguments

Returns:

Entity loaded from id file

Raises:

EnvironmentError if item type is None.

classmethod from_id(item_id: str, platform: IPlatform = None, **kwargs) IEntity[source]

Load an item from an id.

Parameters:
  • item_id – Id of item

  • platform – Platform. If not supplied, we check the current context

  • **kwargs – Optional platform args

Returns:

IEntity of object

property parent

Return parent object for item.

Returns:

Parent entity if set

property platform: IPlatform

Get the object’s platform object.

Returns:

Platform

get_platform_object(force: bool = False, platform: IPlatform = None, **kwargs)[source]

Get the platform representation of an object.

Parameters:
  • force – Force reload of platform object

  • platform – Allow passing platform object to fetch

  • **kwargs – Optional args used for specific platform behaviour

Returns:

Platform Object

property done

Returns whether an item is done.

For an item to be done, it should be in either failed or succeeded state.

Returns:

True if status is succeeded or failed

property succeeded

Returns if an item has succeeded.

Returns:

True if status is SUCCEEDED

property failed

Returns whether an item has failed.

Returns:

True if status is failed

static get_current_platform_or_error()[source]

Try to fetch the current platform from context. If no platform is set, error.

Returns:

Platform if set

Raises:

NoPlatformException if no platform is set on the current context

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None) None
to_id_file(filename: str | PathLike, save_platform: bool = False, platform_args: Dict | None = None)[source]

Write an id file.

Parameters:
  • filename – Filename to create

  • save_platform – Save platform to the file as well

  • platform_args – Platform Args

Returns:

None
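A minimal sketch of the round trip with from_id_file, given an existing experiment object (the file name is illustrative):

from idmtools.entities.experiment import Experiment

# after running: persist the experiment id, optionally with platform details
experiment.to_id_file("experiment.id", save_platform=True)

# in a later script: reload the same experiment from the id file
experiment = Experiment.from_id_file("experiment.id")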

idmtools.core.interfaces.iitem module

IItem is the base of all items that have ids such as AssetCollections, Experiments, etc.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.interfaces.iitem.get_id_generator()[source]

Retrieves the type of id generator specified in the .ini config as well as the corresponding plugin.

Returns:

id_gen – the id generation option specified in the .ini config (uuid, item_sequence, etc.)

plugin – the id generation plugin used to determine ids for items. See setup.py > entry_points > idmtools_hooks for the full names of plugin options.

class idmtools.core.interfaces.iitem.IItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>)[source]

Bases: object

IItem represents items that have identifiable ids.

In addition, IItem facilitates pre- and post-creation hooks through __pre_creation_hooks, __post_creation_hooks, add_pre_creation_hook, and add_post_creation_hook.

property uid

UID of the object.

If no id is set, uses the hash of the object.

Returns:

ID

property id

Alias for uid.

Returns:

UID of object

Notes

What is the relation to uid?

property metadata: Dict[str, Any]

Identify the metadata from the fields.

Returns:

Metadata dict

property pickle_ignore_fields

Get list of fields that will be ignored when pickling.

Returns:

Set of fields that are ignored when pickling the item

property metadata_fields

Get list of fields that have metadata.

Returns:

Set of fields that have metadata

display() str[source]

Display as string representation.

Returns:

String of item

pre_creation(platform: IPlatform) None[source]

Called before the actual creation of the entity.

Parameters:

platform – Platform item is being created on

Returns:

None

post_creation(platform: IPlatform) None[source]

Called after the actual creation of the entity.

Parameters:

platform – Platform item was created on

Returns:

None

add_pre_creation_hook(hook: Callable[[IItem, IPlatform], None])[source]

Adds a hook function to be called before an item is created.

Parameters:

hook – Hook function. This should have two arguments, the item and the platform

Returns:

None

add_post_creation_hook(hook: Callable[[IItem, IPlatform], None])[source]

Adds a hook function to be called after an item is created.

Parameters:

hook – Hook function. This should have two arguments, the item and the platform

Returns:

None
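A minimal sketch of attaching a creation hook, given an existing item such as an experiment (the hook body is illustrative):

# record the platform id of the item once it has been created remotely
def log_created(item, platform):
    print(f"created {item.uid} on {platform.__class__.__name__}")

experiment.add_post_creation_hook(log_created)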

post_setstate()[source]

Function called after restoring the state if additional initialization is required.

pre_getstate()[source]

Function called before pickling; returns default values for “pickle-ignore” fields.

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>) None
idmtools.core.interfaces.imetadata_operations module

Here we implement the Metadata operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.imetadata_operations.IMetadataOperations[source]

Bases: ABC

Operations to handle metadata for SlurmPlatform.

abstract get(item: IEntity) Dict[source]

Obtain item’s metadata.

Parameters:

item – idmtools entity (Suite, Experiment and Simulation, etc.)

Returns:

a key/value dict of metadata from the given item

abstract dump(item: IEntity) None[source]

Save item’s metadata to a file.

Parameters:

item – idmtools entity (Suite, Experiment and Simulation, etc.)

Returns:

None

abstract load(item: IEntity) Dict[source]

Obtain item’s metadata file.

Parameters:

item – idmtools entity (Suite, Experiment and Simulation, etc.)

Returns:

key/value dict of item’s metadata file

abstract update(item: IEntity) None[source]

Update item’s metadata file.

Parameters:

item – idmtools entity (Suite, Experiment and Simulation, etc.)

Returns:

None

abstract clear(item: IEntity) None[source]

Clear the item’s metadata file.

Parameters:

item – idmtools entity (Suite, Experiment and Simulation, etc.)

Returns:

None

abstract filter(item_type: ItemType, item_filter: Dict | None = None) List[source]

Obtain all items that match the given item_filter key/value pairs passed.

Parameters:
  • item_type – the type of items to search for matches (simulation, experiment, suite, etc)

  • item_filter – a dict of metadata key/value pairs for exact match searching

Returns:

a list of matching items
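A minimal sketch of a filter call, where ops stands in for a concrete IMetadataOperations implementation supplied by a platform and the tag name is illustrative:

from idmtools.core.enums import ItemType

# find all simulations whose metadata exactly matches the given key/value pairs
matches = ops.filter(ItemType.SIMULATION, item_filter={"Run_Number": 1})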

__init__() None
idmtools.core.interfaces.inamed_entity module

INamedEntity definition. INamedEntity Provides a class with a name like Experiments or Suites.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.inamed_entity.INamedEntity(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None, name: str = None)[source]

Bases: IEntity

INamedEntity extends the IEntity adding the name property.

name: str = None
__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None, name: str = None) None
idmtools.core.interfaces.irunnable_entity module

IRunnableEntity definition. IRunnableEntity defines items that can be run using platform.run().

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.interfaces.irunnable_entity.IRunnableEntity(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>)[source]

Bases: IEntity

IRunnableEntity items, like Experiments or WorkItems, can be run on platforms.

IRunnableEntity also adds pre- and post-run hooks to the IEntity class.

pre_run(platform: IPlatform) None[source]

Called before the item is run.

Parameters:

platform – Platform the item is being run on

Returns:

None

post_run(platform: IPlatform) None[source]

Called after the item has run.

Parameters:

platform – Platform the item was run on

Returns:

None

add_pre_run_hook(hook: Callable[[IRunnableEntity, IPlatform], None])[source]

Adds a hook function to be called before an item is run.

Parameters:

hook – Hook function. This should have two arguments, the item and the platform

Returns:

None

add_post_run_hook(hook: Callable[[IRunnableEntity, IPlatform], None])[source]

Adds a hook function to be called after an item has run.

Parameters:

hook – Hook function. This should have two arguments, the item and the platform

Returns:

None

run(wait_until_done: bool = False, platform: IPlatform = None, wait_on_done_progress: bool = True, **run_opts) NoReturn[source]

Runs an item.

Parameters:
  • wait_until_done – Whether we should wait on item to finish running as well. Defaults to False

  • platform – Platform object to use. If not specified, we first check object for platform object then the current context

  • wait_on_done_progress – Defaults to true

  • **run_opts – Options to pass to the platform

Returns:

None

wait(wait_on_done_progress: bool = True, timeout: int = None, refresh_interval=None, platform: IPlatform = None, **kwargs)[source]

Wait on an item to finish running.

Parameters:
  • wait_on_done_progress – Should we show progress as we wait?

  • timeout – Timeout to wait

  • refresh_interval – How often to refresh object

  • platform – Platform. If not specified, we try to determine this from context

Returns:

None
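A minimal sketch contrasting blocking and non-blocking use, given an existing experiment (the timeout unit is assumed to be seconds):

# blocking: commission and wait in one call
experiment.run(wait_until_done=True)

# non-blocking: commission now, wait later with a timeout
experiment.run()
experiment.wait(timeout=3600)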

after_done()[source]

Run after an item is done, after waiting. Currently we call the on-succeeded and on-failure plugins.

Returns:

None

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = None, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>) None
idmtools.core Submodules
idmtools.core.cache_enabled module

CacheEnabled definition. CacheEnabled enables diskcache wrapping on an item.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.cache_enabled.CacheEnabled[source]

Bases: object

Allows a class to leverage Diskcache and expose a cache property.

initialize_cache(shards: int | None = None, eviction_policy=None)[source]

Initialize cache.

Parameters:
  • shards (Optional[int], optional) – How many shards. It is best to set this when multi-processing. Defaults to None.

  • eviction_policy ([type], optional) – See Diskcache docs. Defaults to None.

cleanup_cache()[source]

Cleanup our cache.

Returns:

None

property cache: Cache | FanoutCache

Allows fetching of the cache and ensures it is initialized.

Returns:

Cache
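A minimal sketch of a class leveraging the mixin (the cache calls shown are standard diskcache operations):

from idmtools.core.cache_enabled import CacheEnabled

class ExpensiveLookups(CacheEnabled):
    def compute(self, key: int) -> int:
        # reuse a cached value when present; the cache property initializes itself on first use
        if key in self.cache:
            return self.cache[key]
        value = key * key  # stand-in for an expensive computation
        self.cache.set(key, value)
        return value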

idmtools.core.context module

Manages the idmtools context, mostly around the platform object.

This context allows us to easily fetch what platforms we are executing on and also supports nested, multi-platform operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.context.set_current_platform(platform: IPlatform)[source]

Set the current platform that is being used to execute scripts.

Parameters:

platform – Platform to set

Returns:

None

idmtools.core.context.remove_current_platform()[source]

Set CURRENT_PLATFORM to None and delete old platform object.

Returns:

None

idmtools.core.context.clear_context()[source]

Clear all platforms from context.

idmtools.core.context.get_current_platform() IPlatform[source]

Get current platform.
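A minimal sketch of the context helpers (the platform block name is illustrative):

from idmtools.core.context import clear_context, get_current_platform, set_current_platform
from idmtools.core.platform_factory import Platform

p = Platform('CALCULON')
set_current_platform(p)
assert get_current_platform() is p
clear_context()  # drop all platforms from the context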

idmtools.core.docker_task module

DockerTask provides a utility to run docker images.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.docker_task.DockerTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False)[source]

Bases: ITask

Provides a task to run or optionally build a docker container.

image_name: str = None
build: bool = False
build_path: str | None = None
Dockerfile: str | None = None
pull_before_build: bool = True
use_nvidia_run: bool = False
gather_common_assets() AssetCollection[source]

Gather common (experiment-level) assets from task.

Returns:

AssetCollection containing all the common assets

gather_transient_assets() AssetCollection[source]

Gather transient (simulation-level) assets from task.

Returns:

AssetCollection

build_image(spinner=None, **extra_build_args)[source]

Build our docker image.

Parameters:
  • spinner – Should we display a CLI spinner

  • **extra_build_args – Extra build arguments to pass to docker

Returns:

None

reload_from_simulation(simulation: Simulation)[source]

Method to reload task details from simulation object. Currently we do not do this for docker task.

Parameters:

simulation – Simulation to load data from

Returns:

None

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False) None
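A minimal sketch of constructing the task (the image and command are illustrative; build=True would build the image first):

from idmtools.core.docker_task import DockerTask

# run a command inside an existing docker image
task = DockerTask(image_name="python:3.9", command="python --version")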
class idmtools.core.docker_task.DockerTaskSpecification[source]

Bases: TaskSpecification

DockerTaskSpecification provides the task plugin to idmtools for DockerTask.

get(configuration: dict) DockerTask[source]

Get instance of DockerTask with configuration provided.

Parameters:

configuration – configuration for DockerTask

Returns:

DockerTask with configuration

get_description() str[source]

Get description of plugin.

Returns:

Plugin description

get_type() Type[DockerTask][source]

Get type of task provided by plugin.

Returns:

DockerTask

idmtools.core.enums module

Define our common enums to be used through idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.enums.EntityStatus(value)[source]

Bases: Enum

EntityStatus provides status values for Experiment/Simulations/WorkItems.

COMMISSIONING = 'commissioning'
CREATED = 'created'
RUNNING = 'running'
SUCCEEDED = 'succeeded'
FAILED = 'failed'
class idmtools.core.enums.FilterMode(value)[source]

Bases: Enum

Allows user to specify AND/OR for the filtering system.

AND = 0
OR = 1
class idmtools.core.enums.ItemType(value)[source]

Bases: Enum

ItemTypes supported by idmtools.

SUITE = 'Suite'
EXPERIMENT = 'Experiment'
SIMULATION = 'Simulation'
WORKFLOW_ITEM = 'WorkItem'
ASSETCOLLECTION = 'Asset Collection'
idmtools.core.exceptions module

Define idmtools common exceptions as well as the idmtools system exception handler.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

exception idmtools.core.exceptions.ExperimentNotFound(experiment_id: str, platform: TPlatform = None)[source]

Bases: Exception

Thrown when an experiment cannot be found on a platform.

__init__(experiment_id: str, platform: TPlatform = None)[source]

Initialize our ExperimentNotFound.

Parameters:
  • experiment_id – Experiment id to say wasn’t found

  • platform – Optional platform. Used in error message

exception idmtools.core.exceptions.UnknownItemException[source]

Bases: Exception

Thrown when an unknown item type is passed to idmtools.

This usually occurs within the platform operation area.

exception idmtools.core.exceptions.NoPlatformException[source]

Bases: Exception

Cannot find a platform matching the one requested by user.

exception idmtools.core.exceptions.TopLevelItem[source]

Bases: Exception

Thrown when a parent of a top-level item is requested by the platform.

exception idmtools.core.exceptions.UnsupportedPlatformType[source]

Bases: Exception

Occurs when an item is not supported by a platform but is requested.

exception idmtools.core.exceptions.NoTaskFound[source]

Bases: Exception

Thrown when a simulation has no task defined.

idmtools.core.exceptions.idmtools_error_handler(exctype, value: Exception, tb)[source]

Global exception handler. This will write our errors in a nice format as well as find document links if attached to the exception.

Parameters:
  • exctype – Type of exception

  • value – Value of the exception

  • tb – Traceback

Returns:

None

idmtools.core.experiment_factory module

Define ExperimentFactory.

This is used mostly internally. It does allow us to support specialized experiment types when needed.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.experiment_factory.ExperimentFactory[source]

Bases: object

ExperimentFactory allows creating experiments that could be derived through plugins.

DEFAULT_KEY = 'idmtools.entities.experiment.Experiment'
__init__()[source]

Initialize our factory.

On initialization, we load our plugins and build a map of ids for experiments.

create(key, fallback=None, **kwargs) Experiment[source]

Create an experiment of type key.

Parameters:
  • key – Experiment Type

  • fallback – Fallback type. If none, uses DEFAULT_KEY

  • **kwargs – Options to pass to the experiment object

Returns:

Experiment object that was created

idmtools.core.id_file module

Utility method for writing and reading id files.

ID files allow us to reload entities like Experiments, Simulations, AssetCollections, etc. from a platform through files. This enables workflows to chain steps together, and lets remote outputs be self-documented in the local project directory.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.id_file.read_id_file(filename: str | PathLike)[source]

Reads an id from an id file.

An id file is in the format of

<id>::<item_type>::<config block>::<extra args>

Parameters:

filename – File to read the id from

Returns:

The id, item type, config block, and extra args parsed from the file

idmtools.core.id_file.write_id_file(filename: str | PathLike, item: IEntity, save_platform: bool = False, platform_args: Dict = None)[source]

Write an item as an id file.

Parameters:
  • filename – Filename to write file to

  • item – Item to write out

  • save_platform – When true, writes platform details to the file

  • platform_args – Platform arguments to write out

Returns:

None
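A minimal sketch pairing the two helpers, given an existing experiment (the assumption here is that read_id_file yields the four parts of the format above):

from idmtools.core.id_file import read_id_file, write_id_file

write_id_file("experiment.id", experiment, save_platform=True)
# read it back; assumed to yield the id, item type, config block, and extra args
item_id, item_type, config_block, extra_args = read_id_file("experiment.id")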

idmtools.core.logging module

idmtools logging module.

We configure our logging here, manage multi-process logging and alternate logging levels, and provide additional utilities to manage logging.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.logging.IdmToolsLoggingConfig(level: str | int = 30, filename: str | None = 'idmtools.log', console: bool = False, file_level: str | int = 'DEBUG', force: bool = False, file_log_format_str: str | None = None, user_log_format_str: str = '%(message)s', use_colored_logs: bool = True, user_output: bool = True, enable_file_logging: str | bool = True)[source]

Bases: object

Defines the config options available for idmtools logs.

level: str | int = 30

Console level

filename: str | None = 'idmtools.log'

Filename for idmtools logs

console: bool = False

Toggle to enable/disable console logging

file_level: str | int = 'DEBUG'

File log level

force: bool = False

Should we force reload

file_log_format_str: str = None

File format string. See https://docs.python.org/3/library/logging.html#logrecord-attributes for format vars

user_log_format_str: str = '%(message)s'

Logging format. See https://docs.python.org/3/library/logging.html#logrecord-attributes for format vars

use_colored_logs: bool = True

Toggle to enable/disable coloredlogs

user_output: bool = True

Toggle user output. This should only be used in certain situations, like CLIs that output JSON

enable_file_logging: str | bool = True

Toggle to enable file logging

__init__(level: str | int = 30, filename: str | None = 'idmtools.log', console: bool = False, file_level: str | int = 'DEBUG', force: bool = False, file_log_format_str: str | None = None, user_log_format_str: str = '%(message)s', use_colored_logs: bool = True, user_output: bool = True, enable_file_logging: str | bool = True) None
class idmtools.core.logging.MultiProcessSafeRotatingFileHandler(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False)[source]

Bases: RotatingFileHandler

Multi-process safe logger.

__init__(filename, mode='a', maxBytes=0, backupCount=0, encoding=None, delay=False)[source]

See RotatingFileHandler for full details on arguments.

Parameters:
  • filename – Filename to use

  • mode – Mode

  • maxBytes – Max bytes

  • backupCount – Total backups

  • encoding – Encoding

  • delay – Delay

handle(record: LogRecord) None[source]

Thread safe logger.

Parameters:

record – Record to handle

Returns:

None

doRollover() None[source]

Perform rollover safely.

We loop and try to move the log file. If we encounter an issue, we retry up to three times. If we still fail after three retries, we try a new file name with the process id appended.

Returns:

None

class idmtools.core.logging.PrintHandler(level=0)[source]

Bases: Handler

A simple print handler. Used in cases where logging fails.

handle(record: LogRecord) None[source]

Simple log handler that prints to stdout.

Parameters:

record – Record to print

Returns:

None

idmtools.core.logging.setup_logging(logging_config: IdmToolsLoggingConfig) None[source]

Set up logging.

Parameters:

logging_config – IdmToolsLoggingConfig that defines our config

Returns:

Returns None
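A minimal sketch of reconfiguring logging at runtime, using only options documented above:

from idmtools.core.logging import IdmToolsLoggingConfig, setup_logging

# turn on console output at DEBUG level while keeping file logging
setup_logging(IdmToolsLoggingConfig(level='DEBUG', console=True, filename='idmtools.log'))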

idmtools.core.logging.setup_handlers(logging_config: IdmToolsLoggingConfig)[source]

Set up handlers for the global and user loggers.

Parameters:

logging_config – Logging config

Returns:

FileHandler or None

idmtools.core.logging.setup_user_logger(logging_config: IdmToolsLoggingConfig)[source]

Set up the user logger. This logger is meant for user output only.

Parameters:

logging_config – Logging config object.

Returns:

None

idmtools.core.logging.setup_user_print_logger()[source]

Set up a print-based logger for user messages.

Returns:

None

idmtools.core.logging.set_file_logging(logging_config: IdmToolsLoggingConfig, formatter: Formatter)[source]

Set File Logging.

Parameters:
  • logging_config – Logging config object.

  • formatter – Formatter obj

Returns:

Return File handler

idmtools.core.logging.create_file_handler(file_level, formatter: Formatter, filename: str)[source]

Create a MultiProcessSafeRotatingFileHandler for idmtools.log.

Parameters:
  • file_level – Level to log to file

  • formatter – Formatter to set on the handler

  • filename – Filename to use

Returns:

SafeRotatingFileHandler with properties provided

idmtools.core.logging.reset_logging_handlers()[source]

Reset all the logging handlers by removing the root handler.

Returns:

None

idmtools.core.logging.exclude_logging_classes(items_to_exclude=None)[source]

Exclude items from our logger by setting level to warning.

Parameters:

items_to_exclude – Items to exclude

Returns:

None

idmtools.core.platform_factory module

Manages the creation of our platforms.

The Platform allows us to look up a platform via its plugin name, such as “COMPS”, or via configuration aliases defined in platform plugins, such as CALCULON.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.platform_factory.platform(*args, **kwds)[source]

Utility function to create platform.

Parameters:
  • *args – Arguments to pass to platform

  • **kwds – Keyword args to pass to platform

Returns:

Platform created.

class idmtools.core.platform_factory.Platform(block, missing_ok: bool | None = None, **kwargs)[source]

Bases: object

Platform Factory.

idmtools.core.system_information module

Utility functions/classes to fetch info that is useful for troubleshooting user issues.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.core.system_information.get_data_directory() str[source]

Get our default data directory for a user.

Returns:

Default data directory for a user.

idmtools.core.system_information.get_filtered_environment_vars(exclude=None) Dict[str, str][source]

Get environment vars excluding a specific set.

Parameters:

exclude – If not provided, we default to using [‘LS_COLORS’, ‘XDG_CONFIG_DIRS’, ‘PS1’, ‘XDG_DATA_DIRS’]

Returns:

Environment vars filtered for items specified

class idmtools.core.system_information.SystemInformation(data_directory: str | None = '/home/docs/.local_data', user: str | None = 'docs', python_version: str = '3.9.18', python_build: str = ('main', 'Feb  1 2024 17:16:01'), python_packages: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, os_name: str = 'Linux', hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo', system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023', system_architecture: str = 'x86_64', system_processor: str = 'x86_64', system_architecture_details: str = ('64bit', 'ELF'), default_docket_socket_path: str = '/var/run/docker.sock', cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs', user_group_str: str = '1000:1000', version: str | None = None)[source]

Bases: object

Utility class to provide details useful in troubleshooting issues.

data_directory: str | None = '/home/docs/.local_data'
user: str | None = 'docs'
python_version: str = '3.9.18'
python_build: str = ('main', 'Feb  1 2024 17:16:01')
python_implementation = 'CPython'
python_packages: List[str]
environment_variables: Dict[str, str]
os_name: str = 'Linux'
hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo'
system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023'
system_architecture: str = 'x86_64'
system_processor: str = 'x86_64'
system_architecture_details: str = ('64bit', 'ELF')
default_docket_socket_path: str = '/var/run/docker.sock'
cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs'
user_group_str: str = '1000:1000'
version: str = None
__init__(data_directory: str | None = '/home/docs/.local_data', user: str | None = 'docs', python_version: str = '3.9.18', python_build: str = ('main', 'Feb  1 2024 17:16:01'), python_packages: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, os_name: str = 'Linux', hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo', system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023', system_architecture: str = 'x86_64', system_processor: str = 'x86_64', system_architecture_details: str = ('64bit', 'ELF'), default_docket_socket_path: str = '/var/run/docker.sock', cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs', user_group_str: str = '1000:1000', version: str | None = None) None
class idmtools.core.system_information.LinuxSystemInformation(data_directory: str | None = '/home/docs/.local_data', user: str | None = 'docs', python_version: str = '3.9.18', python_build: str = ('main', 'Feb  1 2024 17:16:01'), python_packages: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, os_name: str = 'Linux', hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo', system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023', system_architecture: str = 'x86_64', system_processor: str = 'x86_64', system_architecture_details: str = ('64bit', 'ELF'), default_docket_socket_path: str = '/var/run/docker.sock', cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs', user_group_str: str = <factory>, version: str | None = None)[source]

Bases: SystemInformation

LinuxSystemInformation adds linux specific properties.

__init__(data_directory: str | None = '/home/docs/.local_data', user: str | None = 'docs', python_version: str = '3.9.18', python_build: str = ('main', 'Feb  1 2024 17:16:01'), python_packages: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, os_name: str = 'Linux', hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo', system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023', system_architecture: str = 'x86_64', system_processor: str = 'x86_64', system_architecture_details: str = ('64bit', 'ELF'), default_docket_socket_path: str = '/var/run/docker.sock', cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs', user_group_str: str = <factory>, version: str | None = None) None
class idmtools.core.system_information.WindowsSystemInformation(data_directory: str | None = '/home/docs/.local_data', user: str | None = 'docs', python_version: str = '3.9.18', python_build: str = ('main', 'Feb  1 2024 17:16:01'), python_packages: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, os_name: str = 'Linux', hostname: str = 'build-2132681-project-5702-institute-for-disease-modeling-idmtoo', system_version: str = '#29~22.04.1-Ubuntu SMP Tue Jun 20 19:12:11 UTC 2023', system_architecture: str = 'x86_64', system_processor: str = 'x86_64', system_architecture_details: str = ('64bit', 'ELF'), default_docket_socket_path: str = '/var/run/docker.sock', cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs', user_group_str: str = '1000:1000', version: str | None = None)[source]

Bases: SystemInformation

WindowsSystemInformation adds windows specific properties.

default_docket_socket_path: str = '//var/run/docker.sock'
idmtools.core.system_information.get_system_information() SystemInformation[source]

Fetch the system-appropriate information inspection object.

Returns:

SystemInformation with platform-specific implementation.

idmtools.core.task_factory module

Define our task factory. This is crucial for building tasks when fetching from the server.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.core.task_factory.DynamicTaskSpecification(task_type: Type[ITask], description: str = '')[source]

Bases: TaskSpecification

This class allows users to quickly define a spec for special tasks.

__init__(task_type: Type[ITask], description: str = '')[source]

Initialize our specification.

Parameters:
  • task_type – Task type to register

  • description – Description to register with task

get(configuration: dict) ITask[source]

Get an instance of our task using configuration.

Parameters:

configuration – Configuration keyword args.

Returns:

Task with configuration specified

get_description() str[source]

Get description of our plugin.

Returns:

Returns the user-defined plugin description.

get_type() Type[ITask][source]

Get our task type.

Returns:

Returns our task type

class idmtools.core.task_factory.TaskFactory[source]

Bases: object

TaskFactory allows creation of tasks that are derived from plugins.

DEFAULT_KEY = 'idmtools.entities.command_task.CommandTask'
__init__()[source]

Initialize our Factory.

register(spec: TaskSpecification) NoReturn[source]

Register a TaskSpecification dynamically.

Parameters:

spec – Specification to register

Returns:

None

register_task(task: Type[ITask]) NoReturn[source]

Dynamically register a class using the DynamicTaskSpecification.

Parameters:

task – Task to register

Returns:

None

create(key, fallback=None, **kwargs) ITask[source]

Create a task of type key.

Parameters:
  • key – Type of task to create

  • fallback – Fallback task type. Default to DEFAULT_KEY if not provided

  • **kwargs – Optional arguments to pass to the task

Returns:

Task with options specified
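A minimal sketch of registering a custom task so it can be rebuilt when fetched from the server (MyTask is hypothetical, and the dotted key passed to create is an assumption based on DEFAULT_KEY):

from idmtools.core.task_factory import TaskFactory
from idmtools.entities.command_task import CommandTask

class MyTask(CommandTask):
    """Hypothetical task type."""

factory = TaskFactory()
factory.register_task(MyTask)
# later, rebuild the task by key, passing options through to its constructor
task = factory.create('__main__.MyTask', command='echo hello')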

idmtools.entities package

Entities are the core classes used to build experiments, or used as abstract bases for those classes.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.entities Subpackages
idmtools.entities.iplatform_ops package

Defines all the platform operation interfaces.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.entities.iplatform_ops Submodules
idmtools.entities.iplatform_ops.iplatform_asset_collection_operations module

IPlatformAssetCollectionOperations defines asset collection operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_ops.iplatform_asset_collection_operations.IPlatformAssetCollectionOperations(platform: IPlatform, platform_type: Type)[source]

Bases: CacheEnabled, ABC

IPlatformAssetCollectionOperations defines asset collection operations interface.

platform: IPlatform
platform_type: Type
pre_create(asset_collection: AssetCollection, **kwargs) NoReturn[source]

Run the platform/AssetCollection pre-creation events.

Parameters:
  • asset_collection – AssetCollection to run pre-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

post_create(asset_collection: AssetCollection, **kwargs) NoReturn[source]

Run the platform/AssetCollection post creation events.

Parameters:
  • asset_collection – AssetCollection to run post-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

create(asset_collection: AssetCollection, do_pre: bool = True, do_post: bool = True, **kwargs) Any[source]

Creates an AssetCollection from an IDMTools AssetCollection object.

Also performs pre-creation and post-creation locally and on platform.

Parameters:
  • asset_collection – AssetCollection to create

  • do_pre – Perform Pre creation events for item

  • do_post – Perform Post creation events for item

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

abstract platform_create(asset_collection: AssetCollection, **kwargs) Any[source]

Creates an asset collection on the platform from an IDMTools AssetCollection object.

Parameters:
  • asset_collection – AssetCollection to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

batch_create(asset_collections: List[AssetCollection], display_progress: bool = True, **kwargs) List[AssetCollection][source]

Provides a method to batch create asset collection items.

Parameters:
  • asset_collections – List of asset collection items to create

  • display_progress – Show progress bar

  • **kwargs

Returns:

List of tuples containing the create object and id of item that was created

abstract get(asset_collection_id: str, **kwargs) Any[source]

Returns the platform representation of an AssetCollection.

Parameters:
  • asset_collection_id – Item id of AssetCollection

  • **kwargs

Returns:

Platform Representation of an AssetCollection

to_entity(asset_collection: Any, **kwargs) AssetCollection[source]

Converts the platform representation of AssetCollection to idmtools representation.

Parameters:

asset_collection – Platform AssetCollection object

Returns:

IDMTools AssetCollection object

__init__(platform: IPlatform, platform_type: Type) None
idmtools.entities.iplatform_ops.iplatform_experiment_operations module

IPlatformExperimentOperations defines experiment item operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_ops.iplatform_experiment_operations.IPlatformExperimentOperations(platform: IPlatform, platform_type: Type)[source]

Bases: ABC

IPlatformExperimentOperations defines the experiment item operations interface.

platform: IPlatform
platform_type: Type
abstract get(experiment_id: str, **kwargs) Any[source]

Returns the platform representation of an Experiment.

Parameters:
  • experiment_id – Item id of Experiments

  • **kwargs

Returns:

Platform Representation of an experiment

pre_create(experiment: Experiment, **kwargs) NoReturn[source]

Run the platform/experiment pre-creation events.

Parameters:
  • experiment – Experiment to run pre-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

post_create(experiment: Experiment, **kwargs) NoReturn[source]

Run the platform/experiment post creation events.

Parameters:
  • experiment – Experiment to run post-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

create(experiment: Experiment, do_pre: bool = True, do_post: bool = True, **kwargs) Experiment[source]

Creates an experiment from an IDMTools experiment object.

Also performs local/platform pre and post creation events.

Parameters:
  • experiment – Experiment to create

  • do_pre – Perform Pre creation events for item

  • do_post – Perform Post creation events for item

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

abstract platform_create(experiment: Experiment, **kwargs) Any[source]

Creates an experiment from an IDMTools experiment object.

Parameters:
  • experiment – Experiment to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

batch_create(experiments: List[Experiment], display_progress: bool = True, **kwargs) List[Tuple[Experiment]][source]

Provides a method to batch create experiments.

Parameters:
  • experiments – List of experiments to create

  • display_progress – Show progress bar

  • **kwargs – Keyword arguments to pass to the batch

Returns:

List of tuples containing the create object and id of item that was created

abstract get_children(experiment: Any, **kwargs) List[Any][source]

Returns the children of an experiment object.

Parameters:
  • experiment – Experiment object

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Children of experiment object

abstract get_parent(experiment: Any, **kwargs) Any[source]

Returns the parent of an item. If the platform doesn’t support parents, a TopLevelItem error should be raised.

Parameters:
  • experiment – Experiment to get parent from

  • **kwargs

Returns:

Parent of the experiment (a Suite)

Raises:

TopLevelItem

to_entity(experiment: Any, **kwargs) Experiment[source]

Converts the platform representation of experiment to idmtools representation.

Parameters:

experiment – Platform experiment object

Returns:

IDMTools experiment object

pre_run_item(experiment: Experiment, **kwargs)[source]

Triggered right before commissioning the experiment on the platform.

This ensures that the item is created and that its children (simulations) have also been created.

Parameters:

experiment – Experiment to commission

Returns:

None

Raises:

ValueError - If there are no simulations

post_run_item(experiment: Experiment, **kwargs)[source]

Triggered right after commissioning the experiment on the platform.

Parameters:

experiment – Experiment just commissioned

Returns:

None

run_item(experiment: Experiment, **kwargs)[source]

Called during commissioning of an item. This should create the remote resource.

Parameters:
  • experiment – Experiment

  • **kwargs – Keyword arguments to pass to pre_run_item, platform_run_item, post_run_item

Returns:

None

abstract platform_run_item(experiment: Experiment, **kwargs)[source]

Called during commissioning of an item. This should perform what is needed to commission job on platform.

Parameters:

experiment – Experiment to commission on the platform

Returns:

None

abstract send_assets(experiment: Any, **kwargs)[source]

Transfer Experiment assets to the platform.

Parameters:

experiment – Experiment to send assets for

Returns:

None

abstract refresh_status(experiment: Experiment, **kwargs)[source]

Refresh status for experiment object.

This should update the object directly. For experiments it is best if all simulation states are updated as well.

Parameters:

experiment – Experiment to get status for

Returns:

None

get_assets(experiment: Experiment, files: List[str], **kwargs) Dict[str, Dict[str, bytearray]][source]

Get files from experiment.

Parameters:
  • experiment – Experiment to get files from

  • files – List files

  • **kwargs

Returns:

Dict keyed by simulation id, with the contents of the files matching the specified list

list_assets(experiment: Experiment, children: bool = False, **kwargs) List[Asset][source]

List available assets for an experiment.

Parameters:
  • experiment – Experiment to list files for

  • children – Should we load assets from children as well?

Returns:

List of Assets

platform_list_asset(experiment: Experiment, **kwargs) List[Asset][source]

List the assets on an experiment.

Parameters:
  • experiment – Experiment to list assets for.

  • **kwargs – Extra Arguments

Returns:

List of Assets

platform_modify_experiment(experiment: Experiment, regather_common_assets: bool = False, **kwargs) Experiment[source]

API hook that allows platforms to detect and modify experiments that were already created.

Parameters:
  • experiment – Experiment to check and modify

  • regather_common_assets – When modifying, should we gather assets from the template/simulations. When using this feature, ensure that the previous simulations have finished provisioning; failure to do so can lead to unexpected behaviour

Returns:

Experiment updated

create_sim_directory_map(experiment_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

experiment_id – Experiment id

Returns:

Dict

platform_delete(experiment_id: str) None[source]

Delete platform experiment.

Parameters:

experiment_id – Experiment id

Returns:

None

platform_cancel(experiment_id: str) None[source]

Cancel platform experiment.

Parameters:

experiment_id – Experiment id

Returns:

None

__init__(platform: IPlatform, platform_type: Type) None
idmtools.entities.iplatform_ops.iplatform_simulation_operations module

IPlatformSimulationOperations defines simulation item operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_ops.iplatform_simulation_operations.IPlatformSimulationOperations(platform: IPlatform, platform_type: Type)[source]

Bases: CacheEnabled, ABC

IPlatformSimulationOperations defines simulation item operations interface.

platform: IPlatform
platform_type: Type
abstract get(simulation_id: str, **kwargs) Any[source]

Returns the platform representation of a Simulation.

Parameters:
  • simulation_id – Item id of the simulation

  • **kwargs

Returns:

Platform Representation of a simulation

pre_create(simulation: Simulation, **kwargs) NoReturn[source]

Run the platform/simulation pre-creation events.

Parameters:
  • simulation – Simulation to run pre-creation events for

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

post_create(simulation: Simulation, **kwargs) NoReturn[source]

Run the platform/simulation post creation events.

Parameters:
  • simulation – simulation to run post-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

create(simulation: Simulation, do_pre: bool = True, do_post: bool = True, **kwargs) Any[source]

Creates a simulation from an IDMTools simulation object.

Also performs pre-creation and post-creation locally and on platform.

Parameters:
  • simulation – Simulation to create

  • do_pre – Perform Pre creation events for item

  • do_post – Perform Post creation events for item

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

abstract platform_create(simulation: Simulation, **kwargs) Any[source]

Creates a simulation on the platform from an IDMTools Simulation object.

Parameters:
  • simulation – Simulation to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

batch_create(sims: List[Simulation], display_progress: bool = True, **kwargs) List[Simulation][source]

Provides a method to batch create simulations.

Parameters:
  • sims – List of simulations to create

  • display_progress – Show progress bar

  • **kwargs

Returns:

List of tuples containing the created object and the id of the item that was created

abstract get_parent(simulation: Any, **kwargs) Any[source]

Returns the parent of an item. If the platform doesn’t support parents, a TopLevelItem error should be raised.

Parameters:
  • simulation – Simulation to get parent from

  • **kwargs

Returns:

Parent of simulation

Raises:

TopLevelItem

to_entity(simulation: Any, load_task: bool = False, parent: Experiment | None = None, **kwargs) Simulation[source]

Converts the platform representation of simulation to idmtools representation.

Parameters:
  • simulation – Platform simulation object

  • load_task – Load the task object as well. This can take much longer and provides more data from the platform

  • parent – Optional parent object

Returns:

IDMTools simulation object

pre_run_item(simulation: Simulation, **kwargs)[source]

Triggered right before commissioning the simulation on the platform.

This ensures that the item has been created.

Parameters:

simulation – Simulation to commission

Returns:

None

post_run_item(simulation: Simulation, **kwargs)[source]

Triggered right after commissioning the simulation on the platform.

Parameters:

simulation – Simulation just commissioned

Returns:

None

run_item(simulation: Simulation, **kwargs)[source]

Called during commissioning of an item. This should create the remote resource.

Parameters:

simulation – Simulation to run

Returns:

None

abstract platform_run_item(simulation: Simulation, **kwargs)[source]

Called during commissioning of an item. This should create the remote resource but not upload assets.

Parameters:

simulation – Simulation to run

Returns:

None

abstract send_assets(simulation: Any, **kwargs)[source]

Send simulation assets to the server.

Parameters:
  • simulation – Simulation to upload assets for

  • **kwargs – Keyword arguments for the op

Returns:

None

abstract refresh_status(simulation: Simulation, **kwargs)[source]

Refresh status for simulation object.

Parameters:

simulation – Simulation to get status for

Returns:

None

abstract get_assets(simulation: Simulation, files: List[str], **kwargs) Dict[str, bytearray][source]

Get files from simulation.

Parameters:
  • simulation – Simulation to fetch files from

  • files – Files to get

  • **kwargs

Returns:

Dictionary containing filename and content

abstract list_assets(simulation: Simulation, **kwargs) List[Asset][source]

List available assets for a simulation.

Parameters:

simulation – Simulation to list assets for

Returns:

List of Assets

create_sim_directory_map(simulation_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

simulation_id – Simulation id

Returns:

Dict

platform_delete(simulation_id: str) None[source]

Delete platform simulation.

Parameters:

simulation_id – Simulation id

Returns:

None

platform_cancel(simulation_id: str) None[source]

Cancel platform simulation.

Parameters:

simulation_id – Simulation id

Returns:

None

__init__(platform: IPlatform, platform_type: Type) None
idmtools.entities.iplatform_ops.iplatform_suite_operations module

IPlatformSuiteOperations defines suite item operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_ops.iplatform_suite_operations.IPlatformSuiteOperations(platform: IPlatform, platform_type: Type)[source]

Bases: ABC

IPlatformSuiteOperations defines suite item operations interface.

platform: IPlatform
platform_type: Type
abstract get(suite_id: str, **kwargs) Any[source]

Returns the platform representation of a Suite.

Parameters:
  • suite_id – Item id of the suite

  • **kwargs

Returns:

Platform Representation of a suite

batch_create(suites: List[Suite], display_progress: bool = True, **kwargs) List[Tuple[Any, str]][source]

Provides a method to batch create suites.

Parameters:
  • display_progress – Display progress bar

  • suites – List of suites to create

  • **kwargs

Returns:

List of tuples containing the created object and the id of the item that was created

pre_create(suite: Suite, **kwargs) NoReturn[source]

Run the platform/suite pre-creation events.

Parameters:
  • suite – Suite to run pre-creation events for

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

post_create(suite: Suite, **kwargs) NoReturn[source]

Run the platform/suite post creation events.

Parameters:
  • suite – Suite to run post-creation events for

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

create(suite: Suite, do_pre: bool = True, do_post: bool = True, **kwargs) Tuple[Any, str][source]

Creates a suite from an IDMTools suite object.

Also performs pre-creation and post-creation locally and on platform.

Parameters:
  • suite – Suite to create

  • do_pre – Perform Pre creation events for item

  • do_post – Perform Post creation events for item

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

abstract platform_create(suite: Suite, **kwargs) Tuple[Any, str][source]

Creates a suite from an IDMTools suite object.

Parameters:
  • suite – Suite to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

pre_run_item(suite: Suite, **kwargs)[source]

Triggered right before commissioning the suite on the platform.

This ensures that the item is created and that its children (experiments) have also been created.

Parameters:

suite – Suite to commission

Returns:

None

post_run_item(suite: Suite, **kwargs)[source]

Triggered right after commissioning the suite on the platform.

Parameters:

suite – Suite just commissioned

Returns:

None

run_item(suite: Suite, **kwargs)[source]

Called during commissioning of an item. This should create the remote resource.

Parameters:

suite – suite to run

Returns:

None

platform_run_item(suite: Suite, **kwargs)[source]

Called during commissioning of an item. This should perform what is needed to commission job on platform.

Parameters:

suite – Suite to commission on the platform

Returns:

None

abstract get_parent(suite: Any, **kwargs) Any[source]

Returns the parent of an item. If the platform doesn’t support parents, a TopLevelItem error should be raised.

Parameters:
  • suite – Suite to get parent of

  • **kwargs

Returns:

Parent of suite

Raises:

TopLevelItem

abstract get_children(suite: Any, **kwargs) List[Any][source]

Returns the children of a suite object.

Parameters:
  • suite – Suite object

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Children of suite object

to_entity(suite: Any, **kwargs) Suite[source]

Converts the platform representation of suite to idmtools representation.

Parameters:

suite – Platform suite object

Returns:

IDMTools suite object

abstract refresh_status(experiment: Suite, **kwargs)[source]

Refresh status of suite.

Parameters:

experiment – Suite to refresh the status of

Returns:

None

get_assets(suite: Suite, files: List[str], **kwargs) Dict[str, Dict[str, Dict[str, bytearray]]][source]

Fetch assets for suite.

Parameters:
  • suite – suite to get assets for

  • files – Files to load

  • **kwargs

Returns:

Nested dictionaries in the structure { experiment_id: { simulation_id: { filename: content } } }

create_sim_directory_map(suite_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

suite_id – Suite id

Returns:

Dict

platform_delete(suite_id: str) None[source]

Delete platform suite.

Parameters:

suite_id – Suite id

Returns:

None

platform_cancel(suite_id: str) None[source]

Cancel platform suite.

Parameters:

suite_id – Suite id

Returns:

None

__init__(platform: IPlatform, platform_type: Type) None
idmtools.entities.iplatform_ops.iplatform_workflowitem_operations module

IPlatformWorkflowItemOperations defines workflow item operations interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_ops.iplatform_workflowitem_operations.IPlatformWorkflowItemOperations(platform: IPlatform, platform_type: Type)[source]

Bases: CacheEnabled, ABC

IPlatformWorkflowItemOperations defines workflow item operations interface.

platform: IPlatform
platform_type: Type
abstract get(workflow_item_id: str, **kwargs) Any[source]

Returns the platform representation of a WorkflowItem.

Parameters:
  • workflow_item_id – Item id of the workflow item

  • **kwargs

Returns:

Platform Representation of a workflow_item

batch_create(workflow_items: List[IWorkflowItem], display_progress: bool = True, **kwargs) List[Any][source]

Provides a method to batch create workflow items.

Parameters:
  • workflow_items – List of workflow items to create

  • display_progress – Whether to display progress bar

  • **kwargs

Returns:

List of tuples containing the created object and the id of the item that was created.

pre_create(workflow_item: IWorkflowItem, **kwargs) NoReturn[source]

Run the platform/workflow item pre-creation events.

Parameters:
  • workflow_item – IWorkflowItem to run pre-creation events for

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

post_create(workflow_item: IWorkflowItem, **kwargs) NoReturn[source]

Run the platform/workflow item post creation events.

Parameters:
  • workflow_item – IWorkflowItem to run post-creation events

  • **kwargs – Optional arguments mainly for extensibility

Returns:

NoReturn

create(workflow_item: IWorkflowItem, do_pre: bool = True, do_post: bool = True, **kwargs) Any[source]

Creates a workflow item from an IDMTools IWorkflowItem object.

Also performs pre-creation and post-creation locally and on platform.

Parameters:
  • workflow_item – Workflow item to create

  • do_pre – Perform Pre creation events for item

  • do_post – Perform Post creation events for item

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the UUID of said item

abstract platform_create(workflow_item: IWorkflowItem, **kwargs) Tuple[Any, str][source]

Creates a workflow_item from an IDMTools workflow_item object.

Parameters:
  • workflow_item – WorkflowItem to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the id of said item

pre_run_item(workflow_item: IWorkflowItem, **kwargs)[source]

Triggered right before commissioning the workflow item on the platform.

This ensures that the item has been created.

Parameters:

workflow_item – Workflow item to commission

Returns:

None

post_run_item(workflow_item: IWorkflowItem, **kwargs)[source]

Triggered right after commissioning the workflow item on the platform.

Parameters:

workflow_item – Workflow item just commissioned

Returns:

None

run_item(workflow_item: IWorkflowItem, **kwargs)[source]

Called during commissioning of an item. This should create the remote resource.

Parameters:

workflow_item – Workflow item to run

Returns:

None

abstract platform_run_item(workflow_item: IWorkflowItem, **kwargs)[source]

Called during commissioning of an item. This should perform what is needed to commission job on platform.

Parameters:

workflow_item – Workflow item to commission on the platform

Returns:

None

abstract get_parent(workflow_item: Any, **kwargs) Any[source]

Returns the parent of an item. If the platform doesn’t support parents, a TopLevelItem error should be raised.

Parameters:
  • workflow_item – Workflow item to get parent of

  • **kwargs

Returns:

Parent of the workflow item

Raises:

TopLevelItem

abstract get_children(workflow_item: Any, **kwargs) List[Any][source]

Returns the children of a workflow_item object.

Parameters:
  • workflow_item – WorkflowItem object

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Children of workflow_item object

to_entity(workflow_item: Any, **kwargs) IWorkflowItem[source]

Converts the platform representation of workflow_item to idmtools representation.

Parameters:

workflow_item – Platform workflow_item object

Returns:

IDMTools workflow_item object

abstract refresh_status(workflow_item: IWorkflowItem, **kwargs)[source]

Refresh status for workflow item.

Parameters:

workflow_item – Item to refresh status for

Returns:

None

abstract send_assets(workflow_item: Any, **kwargs)[source]

Send assets for workflow item to platform.

Parameters:

workflow_item – Item to send assets for

Returns:

None

abstract get_assets(workflow_item: IWorkflowItem, files: List[str], **kwargs) Dict[str, bytearray][source]

Load assets for workflow item.

Parameters:
  • workflow_item – Item

  • files – List of files to load

  • **kwargs

Returns:

Dictionary with filename as key and value as binary content

abstract list_assets(workflow_item: IWorkflowItem, **kwargs) List[Asset][source]

List available assets for a workflow item.

Parameters:

workflow_item – workflow item to list files for

Returns:

List of Assets

__init__(platform: IPlatform, platform_type: Type) None
idmtools.entities.iplatform_ops.utils module

Utils for platform operations.

Here we have mostly utilities to handle batch operations which tend to overlap across different item types.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.entities.iplatform_ops.utils.batch_items(items: Iterable | Generator, batch_size=16)[source]

Batch items.

Parameters:
  • items – Items to batch

  • batch_size – Size of the batch

Returns:

Generator

Raises:

StopIteration

idmtools.entities.iplatform_ops.utils.item_batch_worker_thread(create_func: Callable, items: List, **kwargs) List[source]

Default batch worker thread function. It just calls create on each item.

Parameters:
  • create_func – Create function for item

  • items – Items to create

Returns:

List of items created

idmtools.entities.iplatform_ops.utils.batch_create_items(items: Iterable | Generator, batch_worker_thread_func: Callable[[List], List] | None = None, create_func: Callable[[...], Any] | None = None, display_progress: bool = True, progress_description: str = 'Commissioning items', unit: str | None = None, **kwargs)[source]

Batch create items. You must specify either batch_worker_thread_func or create_func.

Parameters:
  • items – Items to create

  • batch_worker_thread_func – Optional Function to execute. Should take a list and return a list

  • create_func – Optional Create function

  • display_progress – Enable progress bar

  • progress_description – Description to show in progress bar

  • unit – Unit for progress bar

  • **kwargs

Returns:

Batched creation results
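
As a rough usage sketch (the numeric items and the create_one callback below are hypothetical stand-ins, not part of the API), batch_create_items can be driven with just a create_func:

    from idmtools.entities.iplatform_ops.utils import batch_create_items

    def create_one(item, **kwargs):
        # Hypothetical per-item creation callback; a real platform
        # operations class would call its creation API here.
        return item, str(item)

    # Batch-create 100 stand-in items with a progress bar
    results = batch_create_items(
        range(100),
        create_func=create_one,
        progress_description="Creating items",
    )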

idmtools.entities.iplatform_ops.utils.show_progress_of_batch(progress_bar: tqdm, futures: List[Future]) List[source]

Show progress bar for batch.

Parameters:
  • progress_bar – Progress bar

  • futures – List of futures that are still running/queued

Returns:

Returns results

idmtools.entities Submodules
idmtools.entities.command_line module

Defines the CommandLine class that represents our remote command line to be executed.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.command_line.CommandLine(executable=None, *args, is_windows: bool = False, raw_args: List[Any] | None = None, **kwargs)[source]

Bases: object

A class to construct command-line strings from executable, options, and params.

__init__(executable=None, *args, is_windows: bool = False, raw_args: List[Any] | None = None, **kwargs)[source]

Initialize CommandLine.

Parameters:
  • executable – Executable

  • *args – Additional Arguments

  • is_windows – is the command for windows

  • raw_args – Any raw arguments

  • **kwargs – Keyword arguments

is_windows: bool = False

Is this a command line for a windows system

property executable: str

Return executable as string.

Returns:

Executable

add_argument(arg)[source]

Add argument.

Parameters:

arg – Argument string

Returns:

None

add_raw_argument(arg)[source]

Add an argument that won’t be quoted on format.

Parameters:

arg – Argument to add without quoting

Returns:

None

add_option(option, value)[source]

Add a command-line option.

Parameters:
  • option – Option to add

  • value – Value of option

Returns:

None

property options

Options as a string.

Returns:

Options string

property arguments

The CommandLine arguments.

Returns:

Arguments as string

property raw_arguments

Raw arguments(arguments not to be parsed).

Returns:

Raw arguments as a string

property cmd

Converts command to string.

Returns:

Command as string

static from_string(command: str, as_raw_args: bool = False) CommandLine[source]

Creates a command line object from string.

Parameters:
  • command – Command

  • as_raw_args – When set to true, arguments will preserve the quoting provided

Returns:

CommandLine object from string
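
A short sketch of typical CommandLine usage, relying only on the methods documented above (the executable and file names are illustrative):

    from idmtools.entities.command_line import CommandLine

    # Build a command incrementally
    cl = CommandLine("python", "model.py")
    cl.add_option("--input", "config.json")  # option/value pair
    cl.add_argument("run")                   # positional argument, quoted on format
    print(cl.cmd)                            # render the full command as a string

    # Or parse an existing command string
    cl2 = CommandLine.from_string("python model.py --input config.json")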

idmtools.entities.command_task module

Command Task is the simplest task. It defines a simple task object with a command line.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.command_task.CommandTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, gather_common_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>, gather_transient_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>)[source]

Bases: ITask

CommandTask is the simplest task.

A CommandTask is basically a command line and assets.
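
A minimal sketch of constructing a CommandTask (the command string is illustrative, and PlatformRequirements.PYTHON is assumed to be a member of the enum):

    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.platform_requirements import PlatformRequirements

    # A CommandTask is just a command line plus assets
    task = CommandTask(command="python model.py --years 5")
    # Declare that the target platform must support Python (assumed enum member)
    task.add_platform_requirement(PlatformRequirements.PYTHON)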

gather_common_asset_hooks: List[Callable[[ITask], AssetCollection]]

Hooks to gather common assets

gather_transient_asset_hooks: List[Callable[[ITask], AssetCollection]]

Hooks to gather transient assets

gather_common_assets() AssetCollection[source]

Gather common(experiment-level) assets for task.

Returns:

AssetCollection containing common assets

gather_transient_assets() AssetCollection[source]

Gather transient (simulation-level) assets for task.

Returns:

AssetCollection containing transient assets

reload_from_simulation(simulation: Simulation)[source]

Reload task from a simulation.

Parameters:

simulation – Simulation to load

Returns:

None

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Pre-creation hook for the command task.

The default implementation sets the is_windows flag on the command line based on the platform.

Parameters:
  • parent – Parent of task

  • platform – Platform the task is being pre-created on

Returns:

None

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, gather_common_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>, gather_transient_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>) None
class idmtools.entities.command_task.CommandTaskSpecification[source]

Bases: TaskSpecification

CommandTaskSpecification is the plugin definition for CommandTask.

get(configuration: dict) CommandTask[source]

Get instance of CommandTask with configuration.

Parameters:

configuration – configuration for CommandTask

Returns:

CommandTask with configuration

get_description() str[source]

Get description of plugin.

Returns:

Plugin description

get_example_urls() List[str][source]

Get example urls related to CommandTask.

Returns:

List of urls that have examples related to CommandTask

get_type() Type[CommandTask][source]

Get task type provided by plugin.

Returns:

CommandTask

get_version() str[source]

Get version of command task plugin.

Returns:

Version of plugin

idmtools.entities.experiment module

Our Experiment class definition.

Experiments can be thought of as a metadata object analogous to a folder on a filesystem. An experiment is a container that holds one or more simulations. Before creation, experiment.simulations can be either a list of simulations or a TemplatedSimulations object. TemplatedSimulations are useful for building large numbers of similar simulations, such as sweeps.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.experiment.Experiment(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, suite_id: str = None, task_type: str = 'idmtools.entities.command_task.CommandTask', platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, simulations: dataclasses.InitVar[typing.Union[idmtools.core.interfaces.entity_container.EntityContainer, typing.Generator[ForwardRef('Simulation'), NoneType, NoneType], idmtools.entities.templated_simulation.TemplatedSimulations, typing.Iterator[ForwardRef('Simulation')]]] = <property object>, _Experiment__simulations: ~idmtools.core.interfaces.entity_container.EntityContainer | ~typing.Generator[Simulation, None, None] | ~idmtools.entities.templated_simulation.TemplatedSimulations | ~typing.Iterator[Simulation] = <factory>, gather_common_assets_from_task: bool = None, disable_default_pre_create: bool = False)[source]

Bases: IAssetsEnabled, INamedEntity, IRunnableEntity

Class that represents a generic experiment.

This class needs to be implemented for each model type with specifics.

Parameters:
  • name – The experiment name.

  • assets – The asset collection for assets global to this experiment.

suite_id: str = None

Suite ID

item_type: ItemType = 'Experiment'

Item Type (always an experiment)

task_type: str = 'idmtools.entities.command_task.CommandTask'

Task Type(defaults to command)

platform_requirements: Set[PlatformRequirements]

List of Requirements for the task that a platform must meet to be able to run

frozen: bool = False

Is the Experiment Frozen

gather_common_assets_from_task: bool = None

Determines if we should gather assets from the first task. Only use when not using TemplatedSimulations

disable_default_pre_create: bool = False

Determines if the default pre-creation logic should be disabled

post_creation(platform: IPlatform) None[source]

Post creation of experiments.

Parameters:

platform – Platform the experiment was created on

Returns:

None

property status

Get the status of the experiment. Experiment status is based on its simulations.

The first rule to be true is used. The rules are:

  • If simulations is a TemplatedSimulations, we assume status is None if _platform_object is not set.

  • If simulations is a TemplatedSimulations, we assume status is CREATED if _platform_object is set.

  • If simulations length is 0, or all simulations have a status of None, experiment status is None.

  • If any simulation has a running status, the experiment is considered running.

  • If any simulation has a created status and any other simulation has a FAILED or SUCCEEDED status, the experiment is considered running.

  • If any simulation has a None status and any other simulation has a FAILED or SUCCEEDED status, the experiment is considered CREATED.

  • If any simulation has a status of FAILED, the experiment is considered FAILED.

  • If any simulation has a status of SUCCEEDED, the experiment is considered SUCCEEDED.

  • If any simulation has a status of CREATED, the experiment is considered CREATED.

Returns:

Status

property suite

Suite the experiment belongs to.

Returns:

Suite

property parent

Return parent object for item.

Returns:

Parent entity if set

display()[source]

Display the experiment.

Returns:

None

pre_creation(platform: IPlatform, gather_assets=True) None[source]

Experiment pre_creation callback.

Parameters:
  • platform – Platform experiment is being created on

  • gather_assets – Determines if an experiment will try to gather the common assets or defer. In most cases you want this enabled, but when modifying existing experiments you may want to disable it if there are new assets and the platform has performance hits to determine those assets

Returns:

None

Raises:

ValueError - If simulations length is 0

property done

Return if an experiment has finished executing.

Returns:

True if all simulations have run, False otherwise

property succeeded: bool

Return if an experiment has succeeded. An experiment is succeeded when all simulations have succeeded.

Returns:

True if all simulations have succeeded, False otherwise

property any_failed: bool

Return if an experiment has any simulation in failed state.

Returns:

True if any simulation has failed, False otherwise

property simulations: ExperimentParentIterator

Simulations in this experiment

property simulation_count: int

Return the total simulations.

Returns:

Length of simulations

refresh_simulations() NoReturn[source]

Refresh the simulations from the platform.

Returns:

None

refresh_simulations_status()[source]

Refresh the simulation status.

Returns:

None

pre_getstate()[source]

Return default values for pickle_ignore_fields().

Call before pickling.

gather_assets() AssetCollection[source]

Gather all our assets for our experiment.

Returns:

Assets

classmethod from_task(task, name: str | None = None, tags: Dict[str, Any] | None = None, assets: AssetCollection | None = None, gather_common_assets_from_task: bool = True) Experiment[source]

Creates an Experiment with one Simulation from a task.

Parameters:
  • task – Task to use

  • assets – Asset collection to use for common assets. Defaults to gathering assets from the task

  • name – Name of experiment

  • tags – Tags for the items

  • gather_common_assets_from_task – Whether we should attempt to gather assets from the Task object for the experiment. With large numbers of tasks this can be expensive, as we loop through all of them

Returns:

Experiment with a single simulation created from the task
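
A minimal sketch, assuming a simple command task (the name and tags are illustrative):

    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.experiment import Experiment

    task = CommandTask(command="python model.py")
    experiment = Experiment.from_task(task, name="single_sim_example", tags={"demo": 1})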
classmethod from_builder(builders: SimulationBuilder | List[SimulationBuilder], base_task: ITask, name: str | None = None, assets: AssetCollection | None = None, tags: Dict[str, Any] | None = None) Experiment[source]

Creates an experiment from a SimulationBuilder object (or list of builders).

Parameters:
  • builders – List of builders to create the experiment from

  • base_task – Base task to use as template

  • name – Experiment name

  • assets – Experiment level assets

  • tags – Experiment tags

Returns:

Experiment object from the builders
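
A hedged sweep sketch: set_seed below is a hypothetical sweep callback; in idmtools, sweep callbacks receive the simulation and the swept value and return a dict of tags:

    from idmtools.builders import SimulationBuilder
    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.experiment import Experiment

    def set_seed(simulation, value):
        # Hypothetical callback: real callbacks usually also update
        # simulation.task parameters; the returned dict becomes tags.
        return {"seed": value}

    builder = SimulationBuilder()
    builder.add_sweep_definition(set_seed, range(5))
    experiment = Experiment.from_builder(
        builder,
        base_task=CommandTask(command="python model.py"),
        name="sweep_example",
    )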

classmethod from_template(template: TemplatedSimulations, name: str | None = None, assets: AssetCollection | None = None, tags: Dict[str, Any] | None = None) Experiment[source]

Creates an Experiment from a TemplatedSimulations object.

Parameters:
  • template – TemplatedSimulations object

  • name – Experiment name

  • assets – Experiment level assets

  • tags – Tags

Returns:

Experiment object from the TemplatedSimulations object
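
A sketch of the TemplatedSimulations route, assuming the add_builder method on TemplatedSimulations:

    from idmtools.builders import SimulationBuilder
    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.experiment import Experiment
    from idmtools.entities.templated_simulation import TemplatedSimulations

    template = TemplatedSimulations(base_task=CommandTask(command="python model.py"))
    builder = SimulationBuilder()
    builder.add_sweep_definition(lambda simulation, value: {"run": value}, range(3))
    template.add_builder(builder)  # assumed TemplatedSimulations API
    experiment = Experiment.from_template(template, name="template_example")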

list_static_assets(children: bool = False, platform: IPlatform = None, **kwargs) List[Asset][source]

List assets that have been uploaded to a server already.

Parameters:
  • children – When set to true, simulation assets will be loaded as well

  • platform – Optional platform to load assets list from

  • **kwargs

Returns:

List of assets

run(wait_until_done: bool = False, platform: IPlatform = None, regather_common_assets: bool = None, wait_on_done_progress: bool = True, **run_opts) NoReturn[source]

Runs an experiment on a platform.

Parameters:
  • wait_until_done – Whether we should wait on experiment to finish running as well. Defaults to False

  • platform – Platform object to use. If not specified, we first check the object for a platform, then the current context

  • regather_common_assets – Triggers gathering of assets for existing experiments. If not provided, we use the platform’s default behaviour. See the platform details for the performance implications. For most platforms it should be fine, but for others it could decrease performance when assets are not changing. When using this feature, ensure that the previous simulations have finished provisioning; failure to do so can lead to unexpected behaviour

  • wait_on_done_progress – Should experiment status be shown when waiting

  • **run_opts – Options to pass to the platform

Returns:

None
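
A minimal end-to-end sketch; "MY_PLATFORM" is a hypothetical configuration block name (for example, from an idmtools.ini file):

    from idmtools.core.platform_factory import Platform
    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.experiment import Experiment

    # Creating the platform sets the current context used by run()
    platform = Platform("MY_PLATFORM")  # hypothetical config block
    experiment = Experiment.from_task(CommandTask(command="python model.py"))
    experiment.run(wait_until_done=True)
    print(experiment.succeeded)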

to_dict()[source]

Convert experiment to dictionary.

Returns:

Dictionary of experiment.

classmethod from_id(item_id: str, platform: IPlatform = None, copy_assets: bool = False, **kwargs) Experiment[source]

Helper function to provide better intellisense to end users.

Parameters:
  • item_id – Item id to load

  • platform – Optional platform. Falls back to the current context

  • copy_assets – Allow copying assets on load. Makes modifying experiments easier when new assets are involved.

  • **kwargs – Optional arguments to be passed on to the platform

Returns:

Experiment loaded with ID
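
A brief sketch (the id is a placeholder; the platform falls back to the current context):

    from idmtools.entities.experiment import Experiment

    experiment = Experiment.from_id("<experiment-id>")  # placeholder id
    print(experiment.name, experiment.simulation_count)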

print(verbose: bool = False)[source]

Print summary of experiment.

Parameters:

verbose – Verbose printing

Returns:

None

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, suite_id: str = None, task_type: str = 'idmtools.entities.command_task.CommandTask', platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, simulations: dataclasses.InitVar[typing.Union[idmtools.core.interfaces.entity_container.EntityContainer, typing.Generator[ForwardRef('Simulation'), NoneType, NoneType], idmtools.entities.templated_simulation.TemplatedSimulations, typing.Iterator[ForwardRef('Simulation')]]] = <property object>, _Experiment__simulations: ~idmtools.core.interfaces.entity_container.EntityContainer | ~typing.Generator[Simulation, None, None] | ~idmtools.entities.templated_simulation.TemplatedSimulations | ~typing.Iterator[Simulation] = <factory>, gather_common_assets_from_task: bool = None, disable_default_pre_create: bool = False) None
class idmtools.entities.experiment.ExperimentSpecification[source]

Bases: ExperimentPluginSpecification

ExperimentSpecification is the spec for Experiment plugins.

get_description() str[source]

Description of our plugin.

Returns:

Description

get(configuration: dict) Experiment[source]

Get experiment with configuration.

get_type() Type[Experiment][source]

Return the experiment type.

Returns:

Experiment type.

idmtools.entities.generic_workitem module

The GenericWorkItem is used when fetching workitems from a server.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.generic_workitem.GenericWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, command: dataclasses.InitVar[str] = None)[source]

Bases: IWorkflowItem

Idm GenericWorkItem.

command: dataclasses.InitVar[str] = None
__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, command: dataclasses.InitVar[str] = None) None
idmtools.entities.ianalyzer module

Defines our IAnalyzer interface used as base of all other analyzers.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.ianalyzer.IAnalyzer(uid=None, working_dir: str | None = None, parse: bool = True, filenames: List[str] | None = None)[source]

Bases: object

An abstract base class carrying the lowest level analyzer interfaces called by ExperimentManager.

abstract __init__(uid=None, working_dir: str | None = None, parse: bool = True, filenames: List[str] | None = None)[source]

A constructor.

Parameters:
  • uid – The unique id identifying this analyzer.

  • working_dir – A working directory to place files.

  • parse – True to leverage the OutputParser; False to get the raw data in the select_simulation_data().

  • filenames – The files for the analyzer to download.

property filenames

Returns user filenames.

Returns:

filenames

initialize() NoReturn[source]

Call once after the analyzer has been added to the AnalyzeManager.

Add everything depending on the working directory or unique ID here instead of in __init__.

per_group(items: IItemList) NoReturn[source]

Call once before running the apply on the items.

Parameters:

items – Objects with attributes of type ItemId. IDs of one or more higher-level hierarchical objects can be obtained from these IDs in order to perform tasks with them.

Returns:

None

filter(item: IWorkflowItem | Simulation) bool[source]

Decide whether the analyzer should process a simulation/work item.

Parameters:

item – An IItem to be considered for processing with this analyzer.

Returns:

A Boolean indicating whether simulation/work item should be analyzed by this analyzer.

abstract map(data: Dict[str, Any], item: IWorkflowItem | Simulation) Any[source]

In parallel for each simulation/work item, consume raw data from filenames and emit selected data.

Parameters:
  • data – A dictionary associating filename with content for simulation data.

  • item – IItem object that the passed data is associated with.

Returns:

Selected data for the given simulation/work item.

abstract reduce(all_data: Dict[IWorkflowItem | Simulation, Any]) Any[source]

Combine the map() data for a set of items into an aggregate result.

Parameters:

all_data – A dictionary with entries for the item ID and selected data.

destroy() NoReturn[source]

Call after the analysis is done.

class idmtools.entities.ianalyzer.BaseAnalyzer(uid=None, working_dir: str | None = None, parse: bool = True, filenames: List[str] | None = None)[source]

Bases: IAnalyzer

BaseAnalyzer to allow using previously used dtk-tools analyzers within idmtools.

__init__(uid=None, working_dir: str | None = None, parse: bool = True, filenames: List[str] | None = None)[source]

Constructor for Base Analyzer.

Parameters:
  • uid – The unique id identifying this analyzer.

  • working_dir – A working directory to place files.

  • parse – True to leverage the OutputParser; False to get the raw data in the select_simulation_data().

  • filenames – The files for the analyzer to download.
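
A minimal analyzer sketch built on BaseAnalyzer; output.txt is a hypothetical output file, and the driver usage assumes the idmtools.analysis.analyze_manager module:

    from idmtools.analysis.analyze_manager import AnalyzeManager
    from idmtools.core import ItemType
    from idmtools.entities.ianalyzer import BaseAnalyzer

    class LineCountAnalyzer(BaseAnalyzer):
        """Counts lines of a hypothetical output file per simulation."""

        def __init__(self):
            super().__init__(filenames=["output.txt"], parse=False)

        def map(self, data, item):
            # data maps filename -> raw content for this item (parse=False)
            return len(data["output.txt"].splitlines())

        def reduce(self, all_data):
            # all_data maps item -> the value emitted by map()
            return sum(all_data.values())

    am = AnalyzeManager(
        ids=[("<experiment-id>", ItemType.EXPERIMENT)],  # placeholder id
        analyzers=[LineCountAnalyzer()],
    )
    am.analyze()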

idmtools.entities.iplatform module

Here we define the Platform interface.

IPlatform is responsible for all the communication to our platform and translation from idmtools objects to platform specific objects and vice versa.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform.IPlatform(*args, **kwargs)[source]

Bases: IItem, CacheEnabled

Interface defining a platform.

A platform needs to implement basic operations such as the following (see the usage sketch after this list):

  • Creating experiment

  • Creating simulation

  • Commissioning

  • File handling
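
A short sketch of typical platform usage ("MY_PLATFORM" is a hypothetical configuration block; the id is a placeholder):

    from idmtools.core import ItemType
    from idmtools.core.platform_factory import Platform

    platform = Platform("MY_PLATFORM")  # hypothetical config block
    experiment = platform.get_item("<experiment-id>", item_type=ItemType.EXPERIMENT)
    platform.refresh_status(experiment)  # update the item's status in place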

platform_type_map: Dict[Type, ItemType] = None

Maps the platform types to idmtools types

supported_types: Set[ItemType]
validate_inputs_types() NoReturn[source]

Validate user inputs and case attributes with the correct data types.

Returns:

None

get_item(item_id: str, item_type: ItemType | None = None, force: bool = False, raw: bool = False, **kwargs) Experiment | Suite | Simulation | IWorkflowItem | AssetCollection | None[source]

Retrieve an object from the platform.

This function is cached; force allows you to force the refresh of the cache. If no object_type is passed, the function will try all the types (experiment, suite, simulation).

Parameters:
  • item_id – The ID of the object to retrieve.

  • item_type – The type of the object to be retrieved.

  • force – If True, force the object fetching from the platform.

  • raw – Return either an idmtools object or a platform object.

Returns:

The object found on the platform or None.

get_children(item_id: str, item_type: ItemType, force: bool = False, raw: bool = False, item: Any | None = None, **kwargs) Any[source]

Retrieve the children of a given object.

Parameters:
  • item_id – The ID of the object for which we want the children.

  • force – If True, force the object fetching from the platform.

  • raw – Return either an idmtools object or a platform object.

  • item_type – Pass the type of the object for quicker retrieval.

  • item – optional platform or idm item to use instead of loading

Returns:

The children of the object or None.

get_children_by_object(parent: IEntity) List[IEntity][source]

Returns a list of children for an entity.

Parameters:

parent – Parent object

Returns:

List of children

get_parent_by_object(child: IEntity) IEntity[source]

Parent of object.

Parameters:

child – Child object to find parent for

Returns:

Returns parent object

get_parent(item_id: str, item_type: ItemType | None = None, force: bool = False, raw: bool = False, **kwargs)[source]

Retrieve the parent of a given object.

Parameters:
  • item_id – The ID of the object for which we want the parent.

  • force – If True, force the object fetching from the platform.

  • raw – Return either an idmtools object or a platform object.

  • item_type – Pass the type of the object for quicker retrieval.

Returns:

The parent of the object or None.

get_cache_key(force, item_id, item_type, kwargs, raw, prefix='p')[source]

Get cache key for an item.

Parameters:
  • force – Should we force the load

  • item_id – Item id

  • item_type – Item type

  • kwargs – Kwargs

  • raw – Should we use raw storage?

  • prefix – Prefix for the item

Returns:

Cache Key

create_items(items: List[IEntity] | IEntity, **kwargs) List[IEntity][source]

Create items (simulations, experiments, or suites) on the platform.

The function will batch the items based on type and call the self._create_batch for creation.

Parameters:
  • items – The list of items to create.

  • kwargs – Extra arguments

Returns:

List of item IDs created.

run_items(items: IEntity | List[IEntity], **kwargs)[source]

Run items on the platform.

Parameters:

items – Items to run

Returns:

None

validate_item_for_analysis(item: object, analyze_failed_items=False)[source]

Check if item is valid for analysis.

Parameters:
  • item – Item to check

  • analyze_failed_items – Whether failed items should be considered valid for analysis

Returns: bool

flatten_item(item: object, **kwargs) List[object][source]

Flatten an item: resolve the children until getting to the leaves.

For example, for an experiment, this will return all the simulations. For a suite, it will return all the simulations contained in the suite’s experiments.

Parameters:
  • item – Which item to flatten

  • kwargs – extra parameters

Returns:

List of leaves

refresh_status(item: IEntity) NoReturn[source]

Populate the platform item and specified item with its status.

Parameters:

item – The item to check status for.

get_files(item: IEntity, files: Set[str] | List[str], output: str | None = None, **kwargs) Dict[str, Dict[str, bytearray]] | Dict[str, bytearray][source]

Get files for a platform entity.

Parameters:
  • item – Item to fetch files for

  • files – List of file names to get

  • output – Directory to save files to

  • kwargs – Platform arguments

Returns:

For simulations, this returns a dictionary with filenames as keys and the binary file data as values.

For experiments, this returns a dictionary keyed by simulation id, with each value being the per-simulation dictionary described above
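
Continuing the platform sketch above, and assuming the simulations wrote a stdout.txt output file:

    # Returns {sim_id: {filename: bytearray}} for an experiment
    files = platform.get_files(experiment, files=["stdout.txt"])
    for sim_id, contents in files.items():
        print(sim_id, len(contents["stdout.txt"]))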

get_files_by_id(item_id: str, item_type: ItemType, files: Set[str] | List[str], output: str | None = None) Dict[str, Dict[str, bytearray]] | Dict[str, bytearray][source]

Get files by item id (str).

Parameters:
  • item_id – Item id (for example, a COMPS Simulation id or WorkItem id)

  • item_type – Item Type

  • files – List of files to retrieve

  • output – Directory to save files to

Returns: dict with key/value: file_name/file_content

are_requirements_met(requirements: PlatformRequirements | Set[PlatformRequirements]) bool[source]

Does the platform support the list of requirements.

Parameters:

requirements – Requirements should be a list of PlatformRequirements or a single PlatformRequirements

Returns:

True if all the requirements are supported

is_task_supported(task: ITask) bool[source]

Is a task supported on this platform.

This depends on the task properly setting its requirements. See idmtools.entities.itask.ITask.platform_requirements and idmtools.entities.platform_requirements.PlatformRequirements

Parameters:

task – Task to check support of

Returns:

True if the task is supported, False otherwise.

wait_till_done(item: IRunnableEntity, timeout: int = 86400, refresh_interval: int = 5, progress: bool = True)[source]

Wait for the experiment to be done.

Parameters:
  • item – Experiment/Workitem to wait on

  • refresh_interval – How long to wait between polling.

  • timeout – How long to wait before failing.

  • progress – Should we display progress

See also

idmtools.entities.iplatform.IPlatform.wait_till_done_progress() idmtools.entities.iplatform.IPlatform.__wait_until_done_progress_callback() idmtools.entities.iplatform.IPlatform.__wait_till_callback()

wait_till_done_progress(item: IRunnableEntity, timeout: int = 86400, refresh_interval: int = 5, wait_progress_desc: str | None = None)[source]

Wait on an item to complete with progress bar.

Parameters:
  • item – Item to monitor

  • timeout – Timeout on waiting

  • refresh_interval – How often to refresh

  • wait_progress_desc – Wait Progress Description

Returns:

None

See also

idmtools.entities.iplatform.IPlatform.__wait_until_done_progress_callback() idmtools.entities.iplatform.IPlatform.wait_till_done() idmtools.entities.iplatform.IPlatform.__wait_till_callback()

Retrieve all related objects.

Parameters:
  • item – SSMTWorkItem

  • relation_type – Depends or Create

Returns: dict with key the object type

is_regather_assets_on_modify() bool[source]

Return default behaviour for platform when rerunning experiment and gathering assets.

Returns:

True or false

is_windows_platform(item: IEntity | None = None) bool[source]

Returns whether the target platform is a windows system.

property common_asset_path

Return the path to common assets stored on the platform.

Returns:

Common Asset Path

join_path(*args) str[source]

Join path using platform rules.

Parameters:

*args – List of paths to join

Returns:

Joined path as string

id_from_file(filename: str)[source]

Load just the id portion of an id file.

Parameters:

filename – Filename

Returns:

Item id loaded from file

get_item_from_id_file(id_filename: PathLike | str, item_type: ItemType | None = None) IEntity[source]

Load an item from an id file. This ignores the platform in the file.

Parameters:
  • id_filename – Filename to load

  • item_type – Optional item type

Returns:

Item from id file.

get_defaults_by_type(default_type: Type) List[IPlatformDefault][source]

Returns any platform defaults for specific types.

Parameters:

default_type – Default type

Returns:

List of default of that type

create_sim_directory_map(item_id: str, item_type: ItemType) Dict[source]

Build simulation working directory mapping.

Parameters:
  • item_id – Entity id

  • item_type – ItemType

Returns:

Dict of simulation id as key and working dir as value

create_sim_directory_df(exp_id: str, include_tags: bool = True) DataFrame[source]

Build simulation working directory mapping.

Parameters:
  • exp_id – Experiment id

  • include_tags – True/False

Returns:

DataFrame

save_sim_directory_df_to_csv(exp_id: str, include_tags: bool = True, output: str = <current working directory>, save_header=False, file_name: str | None = None) None[source]

Save simulation directory df to csv file.

Parameters:
  • exp_id – Experiment id

  • include_tags – True/False

  • output – Output directory (defaults to the current working directory)

  • save_header – True/False

  • file_name – User csv file name

Returns:

None

__init__(_uid: str | None = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[~idmtools.core.interfaces.iitem.IItem, ~idmtools.entities.iplatform.IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[~idmtools.core.interfaces.iitem.IItem, ~idmtools.entities.iplatform.IPlatform], None]] = <factory>, _platform_defaults: ~typing.List[~idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _config_block: str | None = None) None
idmtools.entities.iplatform_default module

Here we define platform default interface.

Currently we use this for defaults in the analyzer manager, but later we can extend to other defaults that need to be used lazily

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iplatform_default.IPlatformDefault[source]

Bases: object

Represents default configuration for different types.

__init__() None
idmtools.entities.iplatform_default.default_cpu_count()[source]

Default value for cpu count. We have to wrap this so it doesn’t load during plugin init.

Returns:

Default cpu count

class idmtools.entities.iplatform_default.AnalyzerManagerPlatformDefault(max_workers: int = <factory>)[source]

Bases: IPlatformDefault

Represents defaults for the AnalyzerManager.

max_workers: int
__init__(max_workers: int = <factory>) None
idmtools.entities.itask module

Defines our ITask interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.itask.ITask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>)[source]

Bases: object

ITask represents a task to be run on a remote system.

A Task should provide all the files and command needed to run remotely.

platform_requirements: Set[PlatformRequirements]
common_assets: AssetCollection

Common (experiment-level) assets

transient_assets: AssetCollection

Transient (simulation-level) assets

property command

The Command to run

property metadata_fields

Collect all metadata fields.

Returns: set of metadata field names

add_pre_creation_hook(hook: Callable[[Simulation | IWorkflowItem, IPlatform], NoReturn])[source]

Called before a simulation is created on a platform. Each hook receives either a Simulation or WorkflowTask.

Parameters:

hook – Function to call on event

Returns:

None

add_post_creation_hook(hook: Callable[[Simulation | IWorkflowItem, IPlatform], NoReturn])[source]

Called after a simulation has been created on a platform. Each hook receives either a Simulation or WorkflowTask.

Parameters:

hook – Function to call on event

Returns:

None
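
A minimal sketch of registering a creation hook, assuming a CommandTask and an illustrative callback that tags items:

from idmtools.entities.command_task import CommandTask

def tag_on_create(item, platform):
    # illustrative hook: tag each item before it is created on the platform
    item.tags["created_by"] = "idmtools-hook"

task = CommandTask(command="python model.py")  # model.py is a placeholder
task.add_pre_creation_hook(tag_on_create)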

add_platform_requirement(requirement: PlatformRequirements | str) NoReturn[source]

Adds a platform requirement to a task.

Parameters:

requirement – Requirement to add to the task

Returns:

None
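
A short sketch, assuming a CommandTask; per the signature above, either enum members or strings are accepted:

from idmtools.entities.command_task import CommandTask
from idmtools.entities.platform_requirements import PlatformRequirements

task = CommandTask(command="python --version")
task.add_platform_requirement(PlatformRequirements.PYTHON)
task.add_platform_requirement(PlatformRequirements.LINUX)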

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Optional hook called at the time of task creation. Can be used to set up simulation- and experiment-level hooks.

Parameters:
  • parent – Parent of Item

  • platform – Platform executing the task. Useful for querying platform before execution

Returns:

None

post_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Optional hook called after task creation. Can be used to set up simulation- and experiment-level hooks.

Parameters:
  • parent – Parent of Item

  • platform – Platform executing the task. Useful for querying platform before execution

Returns:

None

abstract gather_common_assets() AssetCollection[source]

Function called at runtime to gather all assets in the collection.

abstract gather_transient_assets() AssetCollection[source]

Function called at runtime to gather all assets in the collection.

gather_all_assets() AssetCollection[source]

Collect all common and transient assets.

Returns: new AssetCollection

copy_simulation(base_simulation: Simulation) Simulation[source]

Called when copying a simulation for batching. Override if your task has specific concerns when copying simulations.

reload_from_simulation(simulation: Simulation)[source]

Optional hook that is called when loading simulations from a platform.

to_simulation()[source]

Convert task to simulation.

Returns: new simulation

pre_getstate()[source]

Return default values for pickle_ignore_fields().

Call before pickling.

Returns: dict

post_setstate()[source]

Post load from pickle.

property pickle_ignore_fields

Pickle ignore fields.

Returns:

List of fields to ignore

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>) None
to_dict() Dict[source]

Select metadata fields and make a new dict.

Returns: dict

idmtools.entities.iworkflow_item module

Defines our IWorkflowItem interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.iworkflow_item.IWorkflowItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None)[source]

Bases: IAssetsEnabled, INamedEntity, IRunnableEntity, ABC

Interface of idmtools work item.

name: str = None

Name of the workflow step

item_name: dataclasses.InitVar[str] = None

Legacy name for workflow items

asset_collection_id: dataclasses.InitVar[str] = None

Legacy name. Use assets instead

tags: Dict[str, Any]

Tags associated with the work item

transient_assets: AssetCollection

Transient assets for the workitem

asset_files: dataclasses.InitVar[FileList] = None

Legacy variable. Going forward, use assets

user_files: dataclasses.InitVar[FileList] = None

Legacy variable. Going forward, use assets

task: ITask = None
related_experiments: list
related_simulations: list
related_suites: list
related_work_items: list
related_asset_collections: list
work_item_type: str = None
item_type: ItemType = 'WorkItem'

Item Type (Experiment, Suite, Asset, etc.)

gather_assets() NoReturn[source]

Function called at runtime to gather all assets in the collection.

add_file(af)[source]

Method used to add a new file.

Parameters:

af – file to add

Returns: None

clear_user_files()[source]

Clear all existing user files.

Returns: None

pre_creation(platform: IPlatform) None[source]

Called before the actual creation of the entity.

to_dict() Dict[source]

Convert IWorkflowItem to a dictionary.

Returns:

Dictionary of WorkflowItem

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None) None
idmtools.entities.platform_requirements module

Defines our PlatformRequirements enum.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.platform_requirements.PlatformRequirements(value)[source]

Bases: Enum

Defines possible requirements a task could need from a platform.

SHELL = 'shell'
NativeBinary = 'NativeBinary'
LINUX = 'Linux'
WINDOWS = 'windows'
GPU = 'gpu'
PYTHON = 'python'
DOCKER = 'docker'
SINGULARITY = 'singularity'
idmtools.entities.relation_type module

Defines our RelationType enum.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.relation_type.RelationType(value)[source]

Bases: Enum

An enumeration representing the type of relationship for related entities.

DependsOn = 0
Created = 1
idmtools.entities.simulation module

Defines our Simulation object.

The simulation object can be thought of as a metadata object. It represents the configuration of a remote job execution. All simulations have a task and, optionally, assets. All simulations should belong to an Experiment.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.simulation.Simulation(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = ItemType.SIMULATION, _platform_object: ~typing.Any = None, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, task: ITask = None, _Simulation__assets_gathered: bool = False, _platform_kwargs: dict = <factory>)[source]

Bases: IAssetsEnabled, INamedEntity

Class that represents a generic simulation.

This class needs to be implemented for each model type with specifics.

task: ITask = None

Task representing the configuration of the command to be executed

item_type: ItemType = 'Simulation'

Item Type. Should not be changed from Simulation

property experiment: Experiment

Get experiment parent.

Returns:

Parent Experiment

pre_creation(platform: IPlatform)[source]

Runs before a simulation is created server side.

Parameters:

platform – Platform the item is being executed on

Returns:

None

post_creation(platform: IPlatform) None[source]

Called after a simulation is created.

Parameters:

platform – Platform simulation is being executed on

Returns:

None

pre_getstate()[source]

Return default values for pickle_ignore_fields(). Call before pickling.

gather_assets()[source]

Gather all the assets for the simulation.

classmethod from_task(task: ITask, tags: Dict[str, Any] = None, asset_collection: AssetCollection = None)[source]

Create a simulation from a task.

Parameters:
  • task – Task to create from

  • tags – Tags to create on the simulation

  • asset_collection – Simulation Assets

Returns:

Simulation using the parameters provided
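
A minimal sketch, assuming a CommandTask:

from idmtools.entities.command_task import CommandTask
from idmtools.entities.simulation import Simulation

task = CommandTask(command="python -m this")
sim = Simulation.from_task(task, tags={"run_number": 1})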

list_static_assets(platform: IPlatform = None, **kwargs) List[Asset][source]

List assets that have been uploaded to a server already.

Parameters:
  • platform – Optional platform to load assets list from

  • **kwargs

Returns:

List of assets

Raises:

ValueError - If you try to list assets for a simulation that hasn't been created/loaded from a remote platform.

to_dict() Dict[source]

Do a lightweight conversion to JSON.

Returns:

Dict representing json of object

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, item_type: ~idmtools.core.enums.ItemType = ItemType.SIMULATION, _platform_object: ~typing.Any = None, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, task: ITask = None, _Simulation__assets_gathered: bool = False, _platform_kwargs: dict = <factory>) None
idmtools.entities.suite module

Defines our Suite object.

The Suite object can be thought of as a metadata object. It represents a container object for Experiments. All Suites should have one or more experiments.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.suite.Suite(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, experiments: ~idmtools.core.interfaces.entity_container.EntityContainer = <factory>, description: str = None)[source]

Bases: INamedEntity, ABC, IRunnableEntity

Class that represents a generic suite (a collection of experiments).

Parameters:

experiments – The child items of this suite.

experiments: EntityContainer
item_type: ItemType = 'Suite'

Item Type (Experiment, Suite, Asset, etc.)

description: str = None
add_experiment(experiment: Experiment) NoReturn[source]

Add an experiment to the suite.

Parameters:

experiment – the experiment to be linked to suite
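
A minimal sketch of composing a suite, assuming the Experiment entity:

from idmtools.entities.experiment import Experiment
from idmtools.entities.suite import Suite

suite = Suite(name="demo suite")
experiment = Experiment(name="demo experiment")
suite.add_experiment(experiment)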

display()[source]

Display the suite.

Returns:

None

pre_creation(platform: IPlatform)[source]

Pre-creation of the suite.

Parameters:

platform – Platform we are creating item on

Returns:

None

post_creation(platform: IPlatform)[source]

Post-creation of the suite.

Parameters:

platform – Platform

Returns:

None

property done

Return whether a suite has finished executing.

Returns:

True if all experiments have run, False otherwise

property succeeded: bool

Return whether a suite has succeeded. A suite has succeeded when all of its experiments have succeeded.

Returns:

True if all experiments have succeeded, False otherwise

to_dict() Dict[source]

Converts suite to a dictionary.

Returns:

Dictionary of suite.

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, experiments: ~idmtools.core.interfaces.entity_container.EntityContainer = <factory>, description: str = None) None
idmtools.entities.task_proxy module

Defines our TaskProxy object.

The TaskProxy object is meant to reduce the memory requirements of large simulation sets/configurations after provisioning. Instead of keeping the full original object in memory, the object is replaced with a proxy object containing the minimal information needed to work with the task.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.entities.task_proxy.TaskProxy(command: str | CommandLine | None = None, is_docker: bool = False, is_gpu: bool = False)[source]

Bases: object

This class is used to reduce the memory footprint of tasks after a simulation has been provisioned.

command: str | CommandLine = None
is_docker: bool = False
is_gpu: bool = False
static from_task(task: ITask)[source]

Create a task proxy from a task.

Parameters:

task – Task to proxy

Returns:

TaskProxy of task
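
A short sketch, assuming a CommandTask:

from idmtools.entities.command_task import CommandTask
from idmtools.entities.task_proxy import TaskProxy

task = CommandTask(command="python model.py")  # model.py is a placeholder
proxy = TaskProxy.from_task(task)  # keeps only the command and docker/gpu flags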

__init__(command: str | CommandLine | None = None, is_docker: bool = False, is_gpu: bool = False) None
idmtools.entities.templated_simulation module

TemplatedSimulations provides a utility to build sets of simulations from a base simulation.

This is meant to be combined with builders.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.entities.templated_simulation.simulation_generator(builders, new_sim_func, additional_sims=None, batch_size=10)[source]

Generates batches of simulations from the templated simulations.

Parameters:
  • builders – List of builders to build

  • new_sim_func – Build new simulation callback

  • additional_sims – Additional simulations

  • batch_size – Batch size

Returns:

Generator for simulations in batches

class idmtools.entities.templated_simulation.TemplatedSimulations(builders: ~typing.Set[~idmtools.builders.simulation_builder.SimulationBuilder] = <factory>, base_simulation: ~idmtools.entities.simulation.Simulation = None, base_task: ~idmtools.entities.itask.ITask = None, parent: Experiment = None, tags: dataclasses.InitVar[typing.Dict] = <property object>, _TemplatedSimulations__extra_simulations: ~typing.List[~idmtools.entities.simulation.Simulation] = <factory>)[source]

Bases: object

Class for building templated simulations and commonly used with SimulationBuilder class.

Examples

Add tags to all simulations via base task:

ts = TemplatedSimulations(base_task=task)
ts.tags = {'a': 'test', 'b': 9}
ts.add_builder(builder)

Add tags to a specific simulation:

experiment = Experiment.from_builder(builder, task, name=expname)
experiment.simulations = list(experiment.simulations)
experiment.simulations[2].tags['test'] = 123

builders: Set[SimulationBuilder]
base_simulation: Simulation = None
base_task: ITask = None
parent: Experiment = None
property builder: SimulationBuilder

For backward-compatibility purposes.

Returns:

The last SimulationBuilder added.

add_builder(builder: SimulationBuilder) None[source]

Add builder to builder collection.

Parameters:

builder – A builder to be added.

Returns:

None

Raises:

ValueError - Builder must be type of SimulationBuilder

property pickle_ignore_fields

Fields that we should ignore on the object.

Returns:

Fields to ignore

display()[source]

Display the templated simulation.

Returns:

None

simulations() Generator[Simulation, None, None][source]

Simulations iterator.

Returns:

Simulation iterator

extra_simulations() List[Simulation][source]

Returns the extra simulations defined on the template.

Returns:

Returns the extra simulations defined

add_simulation(simulation: Simulation)[source]

Add a simulation that was built outside template engine to template generator.

This is useful when you can build most simulations through a template but need some that cannot be templated. This is especially true for large simulation sets.

Parameters:

simulation – Simulation to add

Returns:

None

add_simulations(simulations: List[Simulation])[source]

Add multiple simulations without templating. See add_simulation.

Parameters:

simulations – Simulations to add

Returns:

None

new_simulation()[source]

Return a new simulation object.

The simulation will be copied from the base simulation of the experiment.

Returns:

The created simulation.

property tags

Get tags for the base simulation.

Returns:

Tags for base simulation

__init__(builders: ~typing.Set[~idmtools.builders.simulation_builder.SimulationBuilder] = <factory>, base_simulation: ~idmtools.entities.simulation.Simulation = None, base_task: ~idmtools.entities.itask.ITask = None, parent: Experiment = None, tags: dataclasses.InitVar[typing.Dict] = <property object>, _TemplatedSimulations__extra_simulations: ~typing.List[~idmtools.entities.simulation.Simulation] = <factory>) None
classmethod from_task(task: ITask, tags: Dict[str, Any] | None = None) TemplatedSimulations[source]

Creates a templated simulation from a task.

We use the task as the base_task, and the tags are applied to the internal base simulation.

Parameters:
  • task – Task to use as base task

  • tags – Tags to add to base simulation

Returns:

TemplatedSimulations from the task
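
A minimal end-to-end sketch, assuming a CommandTask and an illustrative sweep callback that tags each simulation with a seed value:

from idmtools.builders import SimulationBuilder
from idmtools.entities.command_task import CommandTask
from idmtools.entities.templated_simulation import TemplatedSimulations

def sweep_seed(simulation, value):
    # sweep callbacks receive a simulation and a value, and return tags to apply
    simulation.tags["seed"] = value
    return {"seed": value}

task = CommandTask(command="python model.py")  # model.py is a placeholder
ts = TemplatedSimulations.from_task(task, tags={"project": "demo"})
builder = SimulationBuilder()
builder.add_sweep_definition(sweep_seed, range(3))
ts.add_builder(builder)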

idmtools.plugins package

idmtools plugins.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.plugins Submodules
idmtools.plugins.git_commit module

Git plugin to add git repo details to items.

idmtools.plugins.git_commit.idmtools_platform_pre_create_item(item: IEntity, kwargs)[source]

Adds git information from local repo as tags to items on creation.

The following options are valid kwargs and configuration options:

  • add_git_tags_to_all – Add git tags to everything

  • add_to_experiments – Add git tags to experiments

  • add_git_tags_to_simulations – Add git tags to simulations

  • add_git_tags_to_workitems – Add git tags to workitems

  • add_git_tags_to_suite – Add git tags to suites

  • add_git_tags_to_asset_collection – Add git tags to asset collections

Every option expects a truthy value: “True”, “False”, “t”, “f”, “1”, “0”, “yes”, or “no”. Any positive value (True, yes, 1, t, y) will enable the option.

When defined in the idmtools.ini, these should be added under the “git_tag” section without the “git_tags” portion. For example:

[git_tag]
add_to_experiments = y

Also, you can do this through environment variables using IDMTOOLS_GIT_TAG_<option>. For example, experiments would be

IDMTOOLS_GIT_TAG_ADD_TO_EXPERIMENTS

Parameters:
• item – Item to add tags to

  • kwargs – Optional kwargs

Returns:

None

idmtools.plugins.git_commit.add_details_using_gitpython()[source]

Support gitpython if installed.

Returns:

Git tags

idmtools.plugins.git_commit.add_details_using_pygit() Dict[str, str][source]

Support pygit if installed.

Returns:

Git tags

idmtools.plugins.item_sequence module

Defines an id generator plugin that generates ids in sequence by item type. To configure, set ‘id_generator’ in the .ini configuration file to ‘item_sequence’:

[COMMON]
id_generator = item_sequence

You can also customize the sequence_file that stores the sequential ids per item type, as well as the id format, using the following parameters in the .ini configuration file:

[item_sequence]
sequence_file = <file_name>.json (ex: index.json)
id_format_str = <custom_str_format> (ex: {item_name}{data[item_name]:06d})

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.plugins.item_sequence.load_existing_sequence_data(sequence_file)[source]

Loads item sequence data from sequence_file into a dictionary.

Parameters:

sequence_file – File that user has indicated to store the sequential ids of items

Returns:

Data loaded from sequence_file as a dictionary

idmtools.plugins.item_sequence.get_plugin_config()[source]

Retrieves the sequence file and format string (for id generation) from the .ini config file.

Returns:

sequence_file: the json file specified in the .ini config in which the id generator keeps track of sequential ids

id_format_str: the string specified in the .ini config by which ids are formatted when assigned to sequential items

idmtools.plugins.item_sequence.idmtools_generate_id(item: IEntity) str[source]

Generates a sequential id for the item.

Parameters:

item – IEntity using the item_sequence plugin

Returns:

ID for the respective item, based on the formatting defined in the id_format_str (in .ini config file)

idmtools.plugins.item_sequence.idmtools_platform_post_run(item: IEntity, kwargs) IEntity[source]

Do a backup of sequence file if it is the id generator.

Parameters:
• item – Item (we only save on experiments/suites at the moment)

  • kwargs – extra args

Returns:

None

idmtools.plugins.uuid_generator module

Defines a uuid generator plugin that generates an item id as a uuid. To configure, set ‘id_generator’ in the .ini configuration file to ‘uuid’:

[COMMON]
id_generator = uuid

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.plugins.uuid_generator.idmtools_generate_id(item: IEntity) str[source]

Generates a UUID.

Parameters:

item – respective item for which we are generating an id

Returns:

uuid str as item id

idmtools.registry package
idmtools.registry Submodules
idmtools.registry.experiment_specification module

ExperimentPluginSpecification provides the definition for the experiment plugin specification, hooks, and plugin manager.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.experiment_specification.ExperimentPluginSpecification[source]

Bases: PluginSpecification, ABC

ExperimentPluginSpecification defines the specification for Experiment plugins.

classmethod get_name(strip_all: bool = True) str[source]

Get name of plugin. By default we remove the ExperimentPluginSpecification portion.

Parameters:
strip_all – When true, ExperimentPluginSpecification and ExperimentPluginSpec are stripped from the name. When false, only Specification and Spec are stripped.

Returns:

Name of plugin

get(configuration: dict) Experiment[source]

Return a new model using the passed in configuration.

Parameters:

configuration – The INI configuration file to use.

Returns:

The new model.

get_type() Type[Experiment][source]

Get Experiment type.

Returns:

Experiment type.

class idmtools.registry.experiment_specification.ExperimentPlugins(strip_all: bool = True)[source]

Bases: object

ExperimentPlugins acts as registry for Experiment plugins.

__init__(strip_all: bool = True) None[source]

Initialize the Experiment Registry. When strip all is false, the full plugin name will be used for names in map.

Parameters:

strip_all – Whether to strip common parts of name from plugins in plugin map

get_plugins() Set[ExperimentPluginSpecification][source]

Get plugins.

Returns:

Experiment plugins.

get_plugin_map() Dict[str, ExperimentPluginSpecification][source]

Get experiment plugin map.

Returns:

Experiment plugin map.

idmtools.registry.functions module

FunctionPluginManager provides the definition for the function plugin specification, hooks, and plugin manager.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.functions.FunctionPluginManager[source]

Bases: PluginManager, SingletonMixin

FunctionPluginManager acts as registry for function based plugins.

__init__()[source]

Initialize function plugin manager.

idmtools.registry.hook_specs module

Defines a list of function-only hook specs. Useful for simple plugins.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.registry.hook_specs.idmtools_platform_pre_create_item(item: IEntity, kwargs) IEntity[source]

This callback is called by the pre_create of each object type on a platform. An item can be a suite, workitem, simulation, asset collection or an experiment.

Parameters:
  • item

  • kwargs – extra args

Returns:

None

idmtools.registry.hook_specs.idmtools_platform_post_create_item(item: IEntity, kwargs) IEntity[source]

This callback is called by the post_create of each object type on a platform. An item can be a suite, workitem, simulation, asset collection or an experiment.

Parameters:
  • item

  • kwargs – extra args

Returns:

None

idmtools.registry.hook_specs.idmtools_platform_post_run(item: IEntity, kwargs) IEntity[source]

This is called when an item finishes calling run on the server.

Parameters:
  • item

  • kwargs – extra args

Returns:

None

idmtools.registry.hook_specs.idmtools_on_start()[source]

Execute on startup when idmtools is first imported.

Returns:

None

idmtools.registry.hook_specs.idmtools_generate_id(item: IEntity) str[source]

Generates an id for an IItem.

Returns:

Id for the item as a string

idmtools.registry.hook_specs.idmtools_runnable_on_done(item: IRunnableEntity, **kwargs)[source]

Called when a runnable item finishes when it was being monitored.

Parameters:
  • item – Item that was running

  • **kwargs

Returns:

None

idmtools.registry.hook_specs.idmtools_runnable_on_succeeded(item: IRunnableEntity, **kwargs)[source]

Executed when a runnable item succeeds.

Parameters:
  • item – Item that was running

  • **kwargs

Returns:

None

idmtools.registry.hook_specs.idmtools_runnable_on_failure(item: IRunnableEntity, **kwargs)[source]

Executed when a runnable item fails.

Parameters:
  • item – Item that was running

  • **kwargs

Returns:

None

idmtools.registry.master_plugin_registry module

Registry to aggregate all plugins to one place.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.master_plugin_registry.MasterPluginRegistry[source]

Bases: SingletonMixin

MasterPluginRegistry indexes all type of plugins into one class.

Notes

TODO - Rename this class

__init__() None[source]

Initialize Master registry.

get_plugin_map() Dict[str, PluginSpecification][source]

Get plugin map.

Returns:

Plugin map

get_plugins() Set[PluginSpecification][source]

Get Plugins map.

Returns:

The full plugin map

idmtools.registry.platform_specification module

PlatformSpecification provides the definition for the platform plugin specification, hooks, and plugin manager.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.platform_specification.PlatformSpecification[source]

Bases: PluginSpecification, ABC

PlatformSpecification for Platform Plugins.

classmethod get_name(strip_all: bool = True) str[source]

Get name of plugin. By default we remove the PlatformSpecification portion.

Parameters:

strip_all – When true, PlatformSpecification is stripped from the name. When false, only Specification is stripped.

Returns:

Name of plugin

example_configuration()[source]

Example configuration for the platform. This is useful in help or error messages.

Returns:

Example configuration

get(configuration: dict) IPlatform[source]

Return a new platform using the passed in configuration.

Parameters:

configuration – The INI configuration file to use.

Returns:

The new platform.

get_type() Type[IPlatform][source]

Get type of the Platform type.

get_configuration_aliases() Dict[str, Dict][source]

Get a list of configuration aliases for the platform.

A configuration alias should be in the form “name” -> (Spec, config options dict), where name is the alias the user will use and the config options are a dictionary of options to be passed to the item.

Returns:

Dictionary of configuration aliases

class idmtools.registry.platform_specification.PlatformPlugins(strip_all: bool = True)[source]

Bases: SingletonMixin

PlatformPlugin registry.

__init__(strip_all: bool = True) None[source]

Initialize the Platform Registry. When strip all is false, the full plugin name will be used for names in map.

Parameters:

strip_all – Whether to strip common parts of name from plugins in plugin map

get_plugins() Set[PlatformSpecification][source]

Get platform plugins.

Returns:

Platform plugins

get_aliases() Dict[str, Tuple[PlatformSpecification, Dict]][source]

Get Platform Configuration Aliases for Platform Plugin.

Returns:

Platform Configuration Aliases

get_plugin_map() Dict[str, PlatformSpecification][source]

Get plugin map for Platform Plugins.

Returns:

Plugin map

idmtools.registry.plugin_specification module

Defines our base plugin definition and specifications.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.plugin_specification.ProjectTemplate(name: str, url: str | List[str], description: str | None = None, info: str | None = None)[source]

Bases: object

Defines a ProjectTemplate that plugins can define.

name: str
url: str | List[str]
description: str = None
info: str = None
static read_templates_from_json_stream(s) List[ProjectTemplate][source]

Read Project Template from stream.

Parameters:

s – Stream where json data resides

Returns:

Templates loaded from json

__init__(name: str, url: str | List[str], description: str | None = None, info: str | None = None) None
class idmtools.registry.plugin_specification.PluginSpecification[source]

Bases: object

Base class for all plugins.

classmethod get_name(strip_all: bool = True) str[source]

Get the name of the plugin. Although it can be overridden, the best practice is to use the class name as the plugin name.

Returns:

The name of the plugin as a string.

get_description() str[source]

Get a brief description of the plugin and its functionality.

Returns:

The plugin description.

get_project_templates() List[ProjectTemplate][source]

Returns a list of project templates related to a plugin.

Returns:

List of project templates

get_example_urls() List[str][source]

Returns a list of URLs from which examples for the plugin can be downloaded.

Returns:

List of urls

get_help_urls() Dict[str, str][source]

Returns a dictionary of topics and links to help.

Returns:

Dict of help urls

static get_version_url(version: str, extra: str | None = None, repo_base_url: str = 'https://github.com/InstituteforDiseaseModeling/idmtools/tree/', nightly_branch: str = 'dev')[source]

Build a url using version.

Here we assume the tag will exist for that specific version.

Parameters:
  • version – Version to look up. If it contains nightly, we default to nightly_branch

  • extra – Extra parts of the url past the base

  • repo_base_url – Optional url

  • nightly_branch – Defaults to dev

Returns:

URL for item

get_version() str[source]

Returns the version of the plugin.

Returns:

Version for the plugin

idmtools.registry.task_specification module

TaskSpecification provides the definition for the task plugin specification, hooks, and plugin manager.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.registry.task_specification.TaskSpecification[source]

Bases: PluginSpecification, ABC

TaskSpecification is spec for Task plugins.

classmethod get_name(strip_all: bool = True) str[source]

Get name of plugin. By default we remove the TaskSpecification portion.

Parameters:
strip_all – When true, TaskSpecification and TaskSpec are stripped from the name. When false, only Specification and Spec are stripped.

Returns:

Name of plugin

get(configuration: dict) ITask[source]

Return a new model using the passed in configuration.

Parameters:

configuration – The INI configuration file to use.

Returns:

The new model.

get_type() Type[ITask][source]

Get task type.

Returns:

Task type

class idmtools.registry.task_specification.TaskPlugins(strip_all: bool = True)[source]

Bases: SingletonMixin

TaskPlugins acts as a registry for Task Plugins.

__init__(strip_all: bool = True) None[source]

Initialize the Task Registry. When strip all is false, the full plugin name will be used for names in map.

Parameters:

strip_all – Whether to strip common parts of name from plugins in plugin map

get_plugins() Set[TaskSpecification][source]

Get plugins for Tasks.

Returns:

Plugins

get_plugin_map() Dict[str, TaskSpecification][source]

Get a map of task plugins.

Returns:

Task plugin map

idmtools.registry.utils module

Provides utilities for plugins.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.registry.utils.is_a_plugin_of_type(value, plugin_specification: Type[PluginSpecification]) bool[source]

Determine if a value of a plugin specification is of type PluginSpecification.

Parameters:
  • value – The value to inspect.

  • plugin_specification – Plugin specification to check against.

Returns:

A Boolean indicating True if the plugin is of a subclass of PluginSpecification, else False.

idmtools.registry.utils.load_plugin_map(entrypoint: str, spec_type: Type[PluginSpecification], strip_all: bool = True) Dict[str, Type[PluginSpecification]][source]

Load plugins from entry point with the indicated type of specification into a map.

Warning

This could cause name collisions if plugins of the same name are installed.

Parameters:
  • entrypoint – The name of the entry point.

  • spec_type – The type of plugin specification.

  • strip_all – Pass through for get_name from Plugins. Changes names in plugin registries

Returns:

Returns a dictionary of name and PluginSpecification.

Return type:

(Dict[str, Type[PluginSpecification]])

idmtools.registry.utils.plugins_loader(entry_points_name: str, plugin_specification: Type[PluginSpecification]) Set[PluginSpecification][source]

Loads all the plugins of type PluginSpecification from entry point name.

idmtools also supports loading plugins through a list of strings representing the paths to modules containing plugins.

Parameters:
  • entry_points_name – Entry point name for plugins.

  • plugin_specification – Plugin specification to load.

Returns:

All the plugins of the type indicated.

Return type:

(Set[PluginSpecification])

idmtools.registry.utils.discover_plugins_from(library: Any, plugin_specification: Type[PluginSpecification]) List[Type[PluginSpecification]][source]

Search a library object for plugins of type PluginSpecification.

Currently it detects modules and classes. In the future, support for strings will be added.

Parameters:
  • library – Library object to discover plugins from.

  • plugin_specification – Specification to search for.

Returns:

List of plugins.

Return type:

List[Type[PluginSpecification]]

idmtools.services package

Internal services for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.services Submodules
idmtools.services.ipersistance_service module

IPersistenceService allows caching of items locally into a diskcache db that does not expire upon deletion.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.services.ipersistance_service.IPersistenceService[source]

Bases: object

IPersistenceService provides a persistent cache. This is useful for network heavy operations.

cache_directory = None
cache_name = None
classmethod retrieve(uid)[source]

Retrieve item with id <uid> from cache.

Parameters:

uid – Id to fetch

Returns:

Item loaded from cache

classmethod save(obj)[source]

Save an item to our cache.

Parameters:

obj – Object to save.

Returns:

Object uid

classmethod delete(uid)[source]

Delete an item from our cache with id <uid>.

Parameters:

uid – Id to delete

Returns:

None

classmethod clear()[source]

Clear our cache.

Returns:

None

classmethod list()[source]

List items in our cache.

Returns:

List of items in our cache

classmethod length()[source]

Total length of our persistence cache.

Returns:

Count of our cache

idmtools.services.platforms module

PlatformPersistService provides cache for platforms.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.services.platforms.PlatformPersistService[source]

Bases: IPersistenceService

Provide a cache for our platforms.

cache_name = 'platforms'
idmtools.utils package

root of utilities for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils Subpackages
idmtools.utils.display package

root of display utilities for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.display.display(obj, settings)[source]

Display an object using our settings.

Parameters:
  • obj – Obj to display

  • settings – Display settings

Returns:

None

idmtools.utils.display Submodules
idmtools.utils.display.displays module

Tools around displays and formatting.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.display.displays.IDisplaySetting(header: str | None = None, field: str | None = None)[source]

Bases: object

Base class for a display setting.

The child class needs to implement the display() method.

Includes:

  • header: Optional header for the display.

  • field: If specified, the get_object() will call getattr for this field on the object.

__init__(header: str | None = None, field: str | None = None)[source]

Initialize our IDisplaySetting.

Parameters:
  • header – Header for display

  • field – Optional field to display instead of object

get_object(obj: Any) Any[source]

Get object or field depending if field is set.

Parameters:

obj – Object to get

Returns:

Either obj.field or obj depending if self.field is set

abstract display(obj: Any) str[source]

Display the object.

Note that the attribute (identified by self.field) should be handled with get_object().

Parameters:

obj – The object to consider for display.

Returns:

A string representing what to show.

class idmtools.utils.display.displays.StringDisplaySetting(header: str | None = None, field: str | None = None)[source]

Bases: IDisplaySetting

Class that displays the object as string.

display(obj)[source]

Display object.

Parameters:

obj – Object to display

Returns:

String of object

class idmtools.utils.display.displays.DictDisplaySetting(header: str | None = None, field: str | None = None, max_items: int = 10, flat: bool = False)[source]

Bases: IDisplaySetting

Class that displays a dictionary.

__init__(header: str | None = None, field: str | None = None, max_items: int = 10, flat: bool = False)[source]

DictDisplay.

Parameters:
  • header – Optional field header.

  • field – The field in the object to consider.

  • max_items – The maximum number of items to display.

  • flat – If False, display as a list; if True, display as a comma-separated list.

display(obj: Any) str[source]

Display a dictionary.

Parameters:

obj – Object to display

Returns:

String display of object

class idmtools.utils.display.displays.TableDisplay(columns, max_rows=5, field=None)[source]

Bases: IDisplaySetting

Class that displays the object as a table.

__init__(columns, max_rows=5, field=None)[source]

Initialize our TableDisplay.

Parameters:
  • columns – A list of display settings.

  • max_rows – The maximum number of rows to display.

  • field – The field of the object to consider.

display(obj) str[source]

Display our object as a table.

Parameters:

obj – Object to display

Returns:

Table represented as a string of the object

idmtools.utils.display.settings module

Defines views for different types of items.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.filters package

defines filter utilities.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.filters Submodules
idmtools.utils.filters.asset_filters module

This module contains all the default filters for the assets.

A filter function needs to take only one argument: an asset. It returns True/False indicating whether to add or filter out the asset.

Some functions take more than just an asset. To use those functions, you must create a partial before adding it to a filters list. For example:

from functools import partial

fname = partial(file_name_is, filenames=["a.txt", "b.txt"])
AssetCollection.from_directory(... filters=[fname], ...)
idmtools.utils.filters.asset_filters.default_asset_file_filter(asset: TAsset) bool[source]

Default filter to leave out Python caching.

This filter is used in the creation of AssetCollection, regardless of user filters.

Returns:

True if the file does not match the default patterns of “__pycache__” and “.pyc”

idmtools.utils.filters.asset_filters.file_name_is(asset: TAsset, filenames: List[str]) bool[source]

Restrict filtering to assets with the indicated filenames.

Parameters:
  • asset – The asset to filter.

  • filenames – List of filenames to filter on.

Returns:

True if asset.filename in filenames

idmtools.utils.filters.asset_filters.file_extension_is(asset: TAsset, extensions: List[str]) bool[source]

Restrict filtering to assets with the indicated filetypes.

Parameters:
  • asset – The asset to filter.

  • extensions – List of extensions to filter on.

Returns:

True if extension in extensions
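
A usage sketch, assuming an "inputs" directory of assets:

from functools import partial

from idmtools.assets import AssetCollection
from idmtools.utils.filters.asset_filters import file_extension_is

only_data = partial(file_extension_is, extensions=["json", "csv"])
ac = AssetCollection.from_directory("inputs", filters=[only_data])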

idmtools.utils.filters.asset_filters.asset_in_directory(asset: TAsset, directories: List[str], base_path: str = None) bool[source]

Restrict filtering to assets within a given directory.

This filter is not strict and simply checks if the directory portion is present in the asset's absolute path.

Parameters:
  • asset – The asset to filter.

  • directories – List of directory portions to include.

  • base_path – base_path

idmtools.utils Submodules
idmtools.utils.caller module

Utility for tracing the stack to find the caller of a function.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.caller.get_caller()[source]

Trace the stack and find the caller.

Returns:

The direct caller.

idmtools.utils.collections module

utilities for collections.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.collections.cut_iterable_to(obj: Iterable, to: int) Tuple[List | Mapping, int][source]

Cut an iterable to a certain length.

Parameters:
  • obj – The iterable to cut.

  • to – The number of elements to return.

Returns:

A list or dictionary (depending on the type of object) of elements and the remaining elements in the original list or dictionary.
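
A short sketch of the expected behavior (the exact return values shown are an assumption based on the description above):

from idmtools.utils.collections import cut_iterable_to

kept, remaining = cut_iterable_to(list(range(10)), 3)
# kept == [0, 1, 2]; remaining == 7 elements left in the original list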

class idmtools.utils.collections.ExperimentParentIterator(lst, parent: IEntity)[source]

Bases: Iterator[Simulation]

Wraps a list of simulations with an iterator that always provides the parent experiment.

__init__(lst, parent: IEntity)[source]

Initializes the ExperimentParentIterator.

Parameters:
  • lst – List of items (simulations) to iterate over

  • parent – Parent of items (Experiment)

append(item: Simulation)[source]

Adds a simulation to an object.

Parameters:

item – Item to add

Returns:

None

Raises:

ValueError when we cannot append because the item is not a simulation or our underlying object doesn't support appending

extend(item: List[Simulation] | TemplatedSimulations)[source]

Extends object.

Parameters:

item – Item to extend

Returns:

None

Raises:

ValueError when the underlying data object doesn't supporting adding additional item

class idmtools.utils.collections.ResetGenerator(generator_init)[source]

Bases: Iterator

Iterator that can be reset by recreating itself from a copy of the original generator.

__init__(generator_init)[source]

Initialize the ResetGenerator from generator_init.

Creates a copy of the generator using tee.

Parameters:

generator_init – Initialize iterator/generator to copy

next_gen()[source]

The original generator/iterator.

Returns:

original generator/iterator.

idmtools.utils.collections.duplicate_list_of_generators(lst: List[Generator])[source]

Copy a list of iterators using tee.

Parameters:

lst – List of generators

Returns:

Tuple with duplicate of iterators

idmtools.utils.command_line module

utilities for command line.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.command_line.suppress_output(stdout=True, stderr=True)[source]

Suppress any print/logging from a block of code.

Parameters:
  • stdout – If True, hide output from stdout; if False, show it.

  • stderr – If True, hide output from stderr; if False, show it.

idmtools.utils.decorators module

Decorators defined for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.decorators.abstractstatic(function)[source]

Bases: staticmethod

A decorator for defining a method both as static and abstract.

__init__(function)[source]

Initialize abstractstatic.

Parameters:

function – Function to wrap as abstract

idmtools.utils.decorators.optional_decorator(decorator: Callable, condition: bool | Callable[[], bool])[source]

A decorator that adds a decorator only if condition is true.

Parameters:
  • decorator – Decorator to add

  • condition – Condition to determine. Condition can be a callable as well

Returns:

Optionally wrapped func.

class idmtools.utils.decorators.SingletonMixin[source]

Bases: object

SingletonMixin defines a singleton that can be added to any class.

As a singleton, only one instance will be made per process.

classmethod instance()[source]

Return the instance of the object.

If the instance has not been created, it will be initialized before returning.

Returns:

The singleton instance
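
A minimal sketch of adding singleton behavior to a class:

from idmtools.utils.decorators import SingletonMixin

class MyRegistry(SingletonMixin):
    pass

assert MyRegistry.instance() is MyRegistry.instance()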

idmtools.utils.decorators.cache_for(ttl=None) Callable[source]

Cache a value for a certain time period.

Parameters:

ttl – Expiration of cache

Returns:

Wrapper Function

idmtools.utils.decorators.optional_yaspin_load(*yargs, **ykwargs) Callable[source]

Adds a CLI spinner to a function based on conditions.

The spinner will be present if

  • yaspin package is present.

  • NO_SPINNER environment variable is not defined.

Parameters:
  • *yargs – Arguments to pass to yaspin constructor.

  • **ykwargs – Keyword arguments to pass to yaspin constructor.

Examples

@optional_yaspin_load(text="Loading test", color="yellow")
def test():
    time.sleep(100)
Returns:

A callable wrapper function.

class idmtools.utils.decorators.ParallelizeDecorator(queue=None, pool_type: ~typing.Type[~concurrent.futures._base.Executor] | None = <class 'concurrent.futures.thread.ThreadPoolExecutor'>)[source]

Bases: object

ParallelizeDecorator allows you to easily parallelize a group of code.

A simple example follows:

Examples

op_queue = ParallelizeDecorator()

class Ops:
    @op_queue.parallelize
    def heavy_op(self):
        time.sleep(10)

    def do_lots_of_heavy(self):
        futures = [self.heavy_op() for i in range(100)]
        results = op_queue.get_results(futures)
__init__(queue=None, pool_type: ~typing.Type[~concurrent.futures._base.Executor] | None = <class 'concurrent.futures.thread.ThreadPoolExecutor'>)[source]

Initialize our ParallelizeDecorator.

Parameters:
  • queue – Queue to use. If not provided, one will be created.

  • pool_type – Pool type to use. Defaults to ThreadPoolExecutor.

parallelize(func)[source]

Wrap a function in parallelization.

Parameters:

func – Function to wrap with parallelization

Returns:

Function wrapped with parallelization object

join()[source]

Join our queue.

Returns:

Join operation from queue

get_results(futures, ordered=False)[source]

Get Results from our decorator.

Parameters:
  • futures – Futures to get results from

  • ordered – Do we want results in order provided or as they complete. Default is as they complete which is False.

Returns:

Results from all the futures.

Decorator to check symlink creation capabilities.

idmtools.utils.dropbox_location module

utilities for pathing of dropbox folders.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.dropbox_location.get_current_user()[source]

Get current user name.

Returns:

Current username

idmtools.utils.dropbox_location.get_dropbox_location()[source]

Get user dropbox location.

Returns:

User dropbox location

idmtools.utils.entities module

utilities for dataclasses.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.entities.get_dataclass_common_fields(src, dest, exclude_none: bool = True) Dict[source]

Extracts fields from a dataclass source object that are also defined on the destination object.

Useful for situations like nested configurations of data class options.

Parameters:
  • src – Source dataclass object

  • dest – Dest dataclass object

  • exclude_none – When true, values of None will be excluded

Returns:

Dictionary of common fields between source and destination object
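
A small sketch using two hypothetical dataclasses; with exclude_none left at its default of True, fields whose value is None are dropped:

from dataclasses import dataclass

from idmtools.utils.entities import get_dataclass_common_fields

@dataclass
class Src:
    a: int = 1
    b: str = "only-on-src"
    c: str = None

@dataclass
class Dest:
    a: int = 0
    c: str = "keep"

common = get_dataclass_common_fields(Src(), Dest())
# expected: {'a': 1} - 'b' is not defined on Dest and 'c' is None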

idmtools.utils.entities.as_dict(src, exclude: List[str] | None = None, exclude_private_fields: bool = True)[source]

Converts a dataclass to a dict while also obeying rules for exclusion.

Parameters:
  • src

  • exclude – List of fields to exclude

• exclude_private_fields – Should fields that start with an underscore be excluded?

Returns:

Data class as dict

idmtools.utils.entities.validate_user_inputs_against_dataclass(field_type, field_value)[source]

Validates user entered data against dataclass fields and types.

Parameters:
  • field_type – Field type

  • field_value – Fields value

Returns:

Validates user values

idmtools.utils.entities.get_default_tags() Dict[str, str][source]

Get common default tags. Currently this is the version of idmtools.

Returns:

Default tags which is idmtools version

idmtools.utils.entities.save_id_as_file_as_hook(item: Experiment | IWorkflowItem, platform: IPlatform)[source]

Predefined hook that will save ids to files for Experiment or WorkItems.

Parameters:
  • item

  • platform

Returns:

None

idmtools.utils.file module

utilities for files.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.file.scan_directory(basedir: str, recursive: bool = True, ignore_directories: List[str] | None = None) Iterable[DirEntry][source]

Scan a directory recursively or not.

Parameters:
  • basedir – The root directory to start from.

  • recursive – True to search the sub-folders recursively; False to stay in the root directory.

  • ignore_directories – Ignore directories

Returns:

An iterator yielding all the files found.

idmtools.utils.file.file_content_to_generator(absolute_path, chunk_size=128) Generator[bytearray, None, None][source]

Create a generator from file contents in chunks (useful for streaming binary data and piping).

Parameters:
  • absolute_path – absolute path to file

  • chunk_size – chunk size

Returns:

Generator that returns bytes in chunks of size chunk_size
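
A short sketch, assuming a local file named "example.bin" exists:

from idmtools.utils.file import file_content_to_generator

total = 0
for chunk in file_content_to_generator("example.bin", chunk_size=128):
    total += len(chunk)  # consume the stream chunk by chunk
print(total)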

idmtools.utils.file.content_generator(content: str | bytes, chunk_size=128) Generator[bytearray, None, None][source]

Create a generator from file contents in chunks (useful for streaming binary data and piping).

Parameters:
  • content – file content

  • chunk_size – chunk size

Returns:

Generator that returns bytes in chunks of size chunk_size

idmtools.utils.file_parser module

File parser utility. Used to automatically load data.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.file_parser.FileParser[source]

Bases: object

FileParser to load contents in analysis.

classmethod parse(filename, content=None)[source]

Parse filename and load the content.

Parameters:
  • filename – Filename to load

  • content – Content to load

Returns:

Content loaded

classmethod load_json_file(filename, content) Dict[source]

Load JSON File.

Parameters:
  • filename – Filename to load

  • content – Content

Returns:

JSON as dict

classmethod load_raw_file(filename, content)[source]

Load content raw.

Parameters:
• filename – Filename (may be None)

  • content – Content to load

Returns:

Content as it was

classmethod load_csv_file(filename, content) DataFrame[source]

Load csv file.

Parameters:
  • filename – Filename to load

• content – Content to load

Returns:

Loaded csv file

classmethod load_xlsx_file(filename, content) Dict[str, ExcelFile][source]

Load excel_file.

Parameters:
  • filename – Filename to load

  • content – Content to load

Returns:

Loaded excel file

classmethod load_txt_file(filename, content)[source]

Load text file.

Parameters:
  • filename – Filename to load

  • content – Content to load

Returns:

Content

classmethod load_bin_file(filename, content)[source]

Load a bin file.

Parameters:
  • filename – Filename to load

  • content – Content to load

Returns:

Loaded bin file

Notes

We should move this to a plugin in emodpy. We need to figure out how to structure that.

idmtools.utils.filter_simulations module

Filtering utility.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.filter_simulations.FilterItem[source]

Bases: object

FilterItem provides a utility to filter items on a platform.

static filter_item(platform: IPlatform, item: IEntity, skip_sims=None, max_simulations: int | None = None, **kwargs)[source]

Filter simulations from an Experiment or Suite. By default it filters for simulations with the Succeeded status.

If the user wants to filter by another status, that can also be done, for example:

filter_item(platform, exp, status=EntityStatus.FAILED)

If the user wants to filter by tags, that can also be done, for example:

filter_item(platform, exp, tags={'Run_Number': '2'})

Parameters:
  • platform – Platform item

  • item – Item to filter

  • skip_sims – list of sim ids

  • max_simulations – Total simulations

  • kwargs – extra filters

Returns: list of simulation ids

classmethod filter_item_by_id(platform: IPlatform, item_id: UUID, item_type: ItemType = ItemType.EXPERIMENT, skip_sims=None, max_simulations: int | None = None, **kwargs)[source]

Filter simulations from Experiment or Suite.

Parameters:
  • platform – COMPSPlatform

  • item_id – Experiment/Suite id

  • item_type – Experiment or Suite

  • skip_sims – list of sim ids

• max_simulations – Maximum number of simulations to return

  • kwargs – extra filters

Returns: list of simulation ids
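
A minimal sketch, assuming a platform alias and a placeholder experiment id:

from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform
from idmtools.utils.filter_simulations import FilterItem

platform = Platform("CALCULON")  # placeholder platform alias
sim_ids = FilterItem.filter_item_by_id(
    platform, "<experiment-id>", ItemType.EXPERIMENT, tags={"Run_Number": "2"}
)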

idmtools.utils.gitrepo module

Utilities for getting information and examples from gitrepos.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.gitrepo.GitRepo(repo_owner: str | None = None, repo_name: str | None = None)[source]

Bases: object

GitRepo allows interaction with remote git repos, mainly for examples.

repo_owner: str = None
repo_name: str = None
property path

Path property.

Returns:

Return path property

property branch

Branch property.

Returns:

Return branch property

property verbose

Return verbose property.

Returns:

Return verbose property

property repo_home_url

Construct repo home url.

Returns: repo home url

property repo_example_url

Construct repo example url.

Returns: repo example url

property api_example_url

Construct api url of the examples for download.

Returns: api url

parse_url(url: str, branch: str | None = None, update: bool = True)[source]

Parse url for owner, repo, branch and example path.

Parameters:
  • url – example url

  • branch – user branch to replace the branch in url

  • update – True/False - update repo or not

Returns: None

list_public_repos(repo_owner: str | None = None, page: int = 1, raw: bool = False)[source]

Utility method to retrieve all public repos.

Parameters:
  • repo_owner – the owner of the repo

  • page – pagination of results

  • raw – bool - return raw data or simplified list

Returns: repo list

list_repo_releases(repo_owner: str | None = None, repo_name: str | None = None, raw: bool = False)[source]

Utility method to retrieve all releases of the repo.

Parameters:
  • repo_owner – the owner of the repo

  • repo_name – the name of repo

  • raw – bool - return raw data or simplified list

Returns: the release list of the repo

download(path: str = '', output_dir: str = './', branch: str = 'main') int[source]

Download files from the example url provided.

Parameters:
  • path – file path within the repo

  • output_dir – user local folder to download files to

  • branch – branch to download files from

Returns: total file count downloaded

peep(path: str = '', branch: str = 'main')[source]

List the files available at the example url provided (without downloading).

Parameters:
  • path – file path within the repo

  • branch – branch to list files from

Returns: None

__init__(repo_owner: str | None = None, repo_name: str | None = None) None
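
Example

A minimal sketch of downloading an examples folder (the owner, repo, and path below are illustrative):

from idmtools.utils.gitrepo import GitRepo

repo = GitRepo(repo_owner='InstituteforDiseaseModeling', repo_name='idmtools')
# Download the repo's examples folder into ./examples from the main branch
count = repo.download(path='examples', output_dir='./examples', branch='main')
print(f'Downloaded {count} files')
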
idmtools.utils.hashing module

Fast hash of Python objects.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.hashing.Hasher(hash_name='md5')[source]

Bases: _Pickler

A subclass of pickler to do hashing, rather than pickling.

__init__(hash_name='md5')[source]

Initialize our hasher.

Parameters:

hash_name – Hash type to use. Defaults to md5

hash(obj, return_digest=True)[source]

Hash an object.

Parameters:
  • obj – Object to hash

  • return_digest – Should the digest be returned?

Returns:

None if return_digest is False, otherwise the hash digest is returned

save(obj)[source]

Save an object to hash.

Parameters:

obj – Obj to save.

Returns:

None

memoize(obj)[source]

Disable memoization for strings so hashing happens on value and not reference.

save_set(set_items)[source]

Save set hashing.

Parameters:

set_items – Set items

Returns:

None

idmtools.utils.hashing.hash_obj(obj, hash_name='md5')[source]

Quick calculation of a hash to uniquely identify Python objects.

Parameters:
  • obj – Object to hash

  • hash_name – The hashing algorithm to use. ‘md5’ is faster; ‘sha1’ is considered safer.
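
Example

A minimal sketch showing that objects with equal values produce equal digests:

from idmtools.utils.hashing import hash_obj

a = {'population': 1000, 'run_number': 1}
b = dict(a)  # equal value, different object identity
assert hash_obj(a) == hash_obj(b)
print(hash_obj(a))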

idmtools.utils.hashing.ignore_fields_in_dataclass_on_pickle(item)[source]

Ignore certain fields for pickling on dataclasses.

Parameters:

item – Item to pickle

Returns:

State of item to pickle

idmtools.utils.hashing.calculate_md5(filename: str, chunk_size: int = 8192) str[source]

Calculate MD5.

Parameters:
  • filename – Filename to calculate md5 for

  • chunk_size – Chunk size

Returns:

md5 as string

idmtools.utils.hashing.calculate_md5_stream(stream: BytesIO | BinaryIO, chunk_size: int = 8192, hash_type: str = 'md5', file_hash=None)[source]

Calculate md5 on stream.

Parameters:
  • chunk_size – Chunk size

  • stream – Stream to calculate the hash for

  • hash_type – Hash function

  • file_hash – File hash

Returns:

md5 of stream
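
Example

A minimal sketch of both helpers (data.bin is a hypothetical local file):

from io import BytesIO

from idmtools.utils.hashing import calculate_md5, calculate_md5_stream

print(calculate_md5('data.bin', chunk_size=8192))
# The stream variant works on any binary stream
print(calculate_md5_stream(BytesIO(b'example bytes')))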

idmtools.utils.info module

Utilities to fetch info about local system such as packages installed.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.info.get_doc_base_url() str[source]

Get base url for documentation links.

Returns:

Doc base url

idmtools.utils.info.get_pip_packages_10_to_6()[source]

Load packages for versions 1.0 to 6 of pip.

Returns:

None

Raises:

ImportError – If the pip version is different.

idmtools.utils.info.get_pip_packages_6_to_9()[source]

Get packages for pip versions 6 through 9.

Returns:

None

Raises:

ImportError – If the pip version is different.

idmtools.utils.info.get_pip_packages_10_to_current()[source]

Get packages for pip versions 10 to current.

Returns:

None

Raises:

ImportError – If the pip version is different.

idmtools.utils.info.get_packages_from_pip()[source]

Attempt to load packages from pip.

Returns:

A list of packages installed.

Return type:

(List[str])

idmtools.utils.info.get_packages_list() List[str][source]

Return a list of installed packages in the current environment.

Currently idmtools depends on pip for this functionality and since it is just used for troubleshooting, errors can be ignored.

Returns:

A list of packages installed.

Return type:

(List[str])

idmtools.utils.info.get_help_version_url(help_path, url_template: str = 'https://docs.idmod.org/projects/idmtools/en/{version}/', version: str | None = None) str[source]

Get the help url for a subject based on a version.

Parameters:
  • help_path – Path to config(minus base url). For example, configuration.html

  • url_template – Template for URL containing version replacement formatter

  • version – Optional version. If not provided, the version of idmtools installed will be used. For development versions, the version will always be nightly

Returns:

Path to url

idmtools.utils.json module

JSON utilities for idmtools such as encoders and decoders.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.json.IDMJSONEncoder(*, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, sort_keys=False, indent=None, separators=None, default=None)[source]

Bases: JSONEncoder

IDMJSONEncoder handles encoding IDM specific items.

default(o)[source]

JSON Encode item.

Parameters:

o – Object to encode

Returns:

JSON encoded object

idmtools.utils.json.load_json_file(path: str) Dict[Any, Any] | List[source]

Load a json object from a file.

Parameters:

path – Path to file

Returns:

Contents of file parsed by JSON
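
Example

A minimal sketch; IDMJSONEncoder plugs into the standard json module through the cls argument (out.json is an illustrative path):

import json

from idmtools.utils.json import IDMJSONEncoder, load_json_file

text = json.dumps({'a': 1, 'b': [2, 3]}, cls=IDMJSONEncoder)
with open('out.json', 'w') as f:
    f.write(text)
print(load_json_file('out.json'))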

idmtools.utils.language module

Tools to format different outputs for human consumption.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.language.on_off(test) str[source]

Print on or off depending on boolean state of test.

Parameters:

test – Boolean/object to check state

Returns:

On or off

idmtools.utils.language.pluralize(word, plural_suffix='s')[source]

Convert word to plural form.

Parameters:
  • word – Word

  • plural_suffix – plural suffix. Defaults to 's'

Returns:

Pluralized string

idmtools.utils.language.verbose_timedelta(delta)[source]

verbose_timedelta provides a millisecond-accurate, human-readable representation of a time delta.

Parameters:

delta – Time delta to format

Returns:

Human-readable time delta string
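
Example

A minimal sketch of the language helpers (the outputs in the comments are the expected forms):

from datetime import timedelta

from idmtools.utils.language import on_off, pluralize, verbose_timedelta

print(on_off(True))             # on
print(pluralize('simulation'))  # simulations
print(verbose_timedelta(timedelta(hours=1, minutes=2, seconds=3)))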

idmtools.utils.language.get_qualified_class_name(cls: Type) str[source]

Return the full class name for an object.

Parameters:

cls – Class object to get name

Returns:

Fully qualified class name

idmtools.utils.language.get_qualified_class_name_from_obj(obj: object) str[source]

Return the full class name from object.

Parameters:

obj – Object

Example

a = Platform('COMPS')
class_name = get_qualified_class_name_from_obj(a)
print(class_name)
'idmtools_platform_comps.comps_platform.COMPSPlatform'

Returns:

Full module path to class of object

idmtools.utils.local_os module

Utilities to determine info about the operating system.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools.utils.local_os.LocalOS[source]

Bases: object

A central class for representing values whose proper access methods may differ between platforms.

exception UnknownOS[source]

Bases: Exception

Unknown os detected.

os_mapping = {'darwin': 'mac', 'linux': 'lin', 'windows': 'win'}
username = 'docs'
name = 'lin'
static is_window() bool[source]

Are we running on a Windows system?

Returns:

True if on windows

idmtools.utils.time module

Timestamp function.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools.utils.time.timestamp(time=None)[source]

Return a timestamp.

Parameters:

time – A time object; if None provided, use now.

Returns:

A string timestamp in UTC, format YYYYMMDD_HHmmSS.
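
Example

A minimal sketch:

from idmtools.utils.time import timestamp

print(timestamp())  # e.g. 20210501_120000 (UTC, format YYYYMMDD_HHmmSS)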

idmtools_models
idmtools_models package

idmtools models package.

This package provides common model tasks, such as Python tasks, R tasks, and templated script tasks.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_models Subpackages
idmtools_models.python package

idmtools python tasks.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_models.python Submodules
idmtools_models.python.json_python_task module

idmtools json configured python task.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.python.json_python_task.JSONConfiguredPythonTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, python_path: str = 'python', parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str | None = '--config', command_line_argument_no_filename: bool = False)[source]

Bases: JSONConfiguredTask, PythonTask

JSONConfiguredPythonTask combines JSONConfiguredTask and PythonTask.

Notes

  • TODO Add examples here
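
Until examples are added here, a minimal sketch (model.py and the platform block are placeholders; model.py is assumed to read its parameters from config.json):

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

platform = Platform("CALCULON")
# model.py is a hypothetical script that reads parameters from config.json
task = JSONConfiguredPythonTask(script_path="model.py", parameters=dict(population=1000))
experiment = Experiment.from_task(name="json_python_example", task=task)
experiment.run(wait_until_done=True)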

configfile_argument: str | None = '--config'
gather_common_assets()[source]

Return the common assets for a JSON Configured Task or a derived class.

Returns:

Assets

gather_transient_assets() AssetCollection[source]

Get Transient assets. This should generally be the config.json.

Returns:

Transient assets

reload_from_simulation(simulation: Simulation, **kwargs)[source]

Reload the task from a simulation.

Parameters:
  • simulation – Simulation to reload from

  • **kwargs

Returns:

None

See Also

idmtools_models.json_configured_task.JSONConfiguredTask.reload_from_simulation()
idmtools_models.python.python_task.PythonTask.reload_from_simulation()

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Pre-creation.

Parameters:
  • parent – Parent of task

  • platform – Platform Python Script is being executed on

Returns:

None

See Also

idmtools_models.json_configured_task.JSONConfiguredTask.pre_creation()
idmtools_models.python.python_task.PythonTask.pre_creation()

post_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Post-creation.

For us, we proxy the underlying JSONConfiguredTask and PythonTask.

Parameters:
  • parent – Parent

  • platform – Platform Python Script is being executed on

Returns:

None

See Also

idmtools_models.json_configured_task.JSONConfiguredTask.post_creation()
idmtools_models.python.python_task.PythonTask.post_creation()

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, python_path: str = 'python', parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str | None = '--config', command_line_argument_no_filename: bool = False) None
class idmtools_models.python.json_python_task.JSONConfiguredPythonTaskSpecification[source]

Bases: TaskSpecification

JSONConfiguredPythonTaskSpecification provides the plugin info for JSONConfiguredPythonTask.

get(configuration: dict) JSONConfiguredPythonTask[source]

Get instance of JSONConfiguredPythonTask with configuration.

Parameters:

configuration – Configuration for task

Returns:

JSONConfiguredPythonTask with configuration

get_description() str[source]

Get description for plugin.

Returns:

Plugin Description

get_type() Type[JSONConfiguredPythonTask][source]

Get Type for Plugin.

Returns:

JSONConfiguredPythonTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_models.python.python_task module

idmtools python task.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.python.python_task.PythonTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, python_path: str = 'python')[source]

Bases: ITask

PythonTask makes running python scripts a bit easier through idmtools.

Notes

TODO - Link examples here
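
Until examples are linked here, a minimal sketch (model.py and the platform block are placeholders):

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.python_task import PythonTask

platform = Platform("CALCULON")
# model.py is a hypothetical script; python_path selects the remote interpreter
task = PythonTask(script_path="model.py", python_path="python3")
experiment = Experiment.from_task(name="python_task_example", task=task)
experiment.run(wait_until_done=True)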

script_path: str = None
python_path: str = 'python'
platform_requirements: Set[PlatformRequirements]
gather_common_assets() AssetCollection[source]

Get the common assets. This should be a set of assets that are common to all tasks in an experiment.

Returns:

AssetCollection

gather_transient_assets() AssetCollection[source]

Gather transient assets. Generally these are the simulation-level assets.

Returns:

Transient assets. Also known as simulation level assets.

reload_from_simulation(simulation: Simulation, **kwargs)[source]

Reloads a python task from a simulation.

Parameters:

simulation – Simulation to reload

Returns:

None

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Called before creation of parent.

Parameters:
  • parent – Parent

  • platform – Platform Python Task is being executed on

Returns:

None

Raises:

ValueError – If script name is not provided

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, python_path: str = 'python') None
class idmtools_models.python.python_task.PythonTaskSpecification[source]

Bases: TaskSpecification

PythonTaskSpecification provides the plugin info for PythonTask.

get(configuration: dict) PythonTask[source]

Get instance of Python Task with specified configuration.

Parameters:

configuration – Configuration for task

Returns:

Python task

get_description() str[source]

Description of the plugin.

Returns:

Description string

get_example_urls() List[str][source]

Return List of urls that have examples using PythonTask.

Returns:

List of urls(str) that point to examples

get_type() Type[PythonTask][source]

Get Type for Plugin.

Returns:

PythonTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_models.r package

R Task and derived versions.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_models.r Submodules
idmtools_models.r.json_r_task module

idmtools JSON R task.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.r.json_r_task.JSONConfiguredRTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False, script_path: str = None, r_path: str = 'Rscript', parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str | None = '--config', command_line_argument_no_filename: bool = False)[source]

Bases: JSONConfiguredTask, RTask

JSONConfiguredRTask combines JSONConfiguredTask and RTask.

Notes

  • TODO Add example references here
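
Until example references are added here, a minimal sketch (model.R, the image name, and the platform block are placeholders; per RTask, this is currently only useful on the local platform):

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.r.json_r_task import JSONConfiguredRTask

platform = Platform("Local")
# model.R is a hypothetical script that reads parameters from config.json
task = JSONConfiguredRTask(
    script_path="model.R",
    image_name="r-base:4.1.2",  # hypothetical docker image
    parameters=dict(rate=0.5)
)
experiment = Experiment.from_task(name="json_r_example", task=task)
experiment.run(wait_until_done=True)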

configfile_argument: str | None = '--config'
gather_common_assets()[source]

Return the common assets for a JSON Configured Task.

Returns:

Assets

gather_transient_assets() AssetCollection[source]

Get Transient assets. This should generally be the config.json.

Returns:

Transient assets

reload_from_simulation(simulation: Simulation, **kwargs)[source]

Reload task details from a simulation. Used in some fetch operations.

Parameters:
  • simulation – Simulation that is parent item

  • **kwargs

Returns:

None

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Pre-creation event.

Proxy calls to JSONConfiguredTask and RTask

Parameters:
  • parent – Parent item

  • platform – Platform item is being created on

Returns:

None

post_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Post-creation of task.

Proxy calls to JSONConfiguredTask and RTask

Parameters:
  • parent – Parent item

  • platform – Platform we are executing on

Returns:

None

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False, script_path: str = None, r_path: str = 'Rscript', parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str | None = '--config', command_line_argument_no_filename: bool = False) None
class idmtools_models.r.json_r_task.JSONConfiguredRTaskSpecification[source]

Bases: TaskSpecification

JSONConfiguredRTaskSpecification provides the plugin info for JSONConfiguredRTask.

get(configuration: dict) JSONConfiguredRTask[source]

Get instance of JSONConfiguredRTaskSpecification with configuration provided.

Parameters:

configuration – Configuration for object

Returns:

JSONConfiguredRTaskSpecification with configuration

get_description() str[source]

Get description of plugin.

Returns:

Description of plugin

get_example_urls() List[str][source]

Get Examples for JSONConfiguredRTask.

Returns:

List of Urls that point to examples for JSONConfiguredRTask

get_type() Type[JSONConfiguredRTask][source]

Get Type for Plugin.

Returns:

JSONConfiguredRTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_models.r.r_task module

idmtools rtask.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.r.r_task.RTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False, script_path: str = None, r_path: str = 'Rscript')[source]

Bases: DockerTask

Defines an RTask for idmtools. Currently only useful for local platform.

Notes

  • TODO rework this to be non-docker

script_path: str = None
r_path: str = 'Rscript'
reload_from_simulation(simulation: Simulation, **kwargs)[source]

Reload RTask from a simulation. Used when fetching a simulation to do a recreation.

Parameters:
  • simulation – Simulation object containing our metadata to rebuild task

  • **kwargs

Returns:

None

gather_common_assets() AssetCollection[source]

Gather R Assets.

Returns:

Common assets

gather_transient_assets() AssetCollection[source]

Gather transient assets. Generally these are the simulation-level assets.

Returns:

Transient assets(Simulation level Assets)

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Called before creation of parent.

Parameters:
  • parent – Parent

  • platform – Platform R Task is executing on

Returns:

None

Raises:

ValueError – If script name is not provided

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, image_name: str = None, build: bool = False, build_path: str | None = None, Dockerfile: str | None = None, pull_before_build: bool = True, use_nvidia_run: bool = False, _DockerTask__image_built: bool = False, script_path: str = None, r_path: str = 'Rscript') None
class idmtools_models.r.r_task.RTaskSpecification[source]

Bases: TaskSpecification

RTaskSpecification defines plugin specification for RTask.

get(configuration: dict) RTask[source]

Get instance of RTask.

Parameters:

configuration – configuration for task

Returns:

RTask with configuration

get_description() str[source]

Returns the Description of the plugin.

Returns:

Plugin Description

get_type() Type[RTask][source]

Get Type for Plugin.

Returns:

RTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_models Submodules
idmtools_models.json_configured_task module

idmtools json configured task.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.json_configured_task.JSONConfiguredTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str = None, command_line_argument_no_filename: bool = False)[source]

Bases: ITask

Defines an extensible simple task that implements functionality through optionally supplied user hooks.

parameters: dict
envelope: str = None
config_file_name: str = 'config.json'
is_config_common: bool = False
configfile_argument: str = None
command_line_argument_no_filename: bool = False
gather_common_assets() AssetCollection[source]

Gather assets common across an Experiment(Set of Simulations).

Returns:

Common AssetCollection

gather_transient_assets() AssetCollection[source]

Gather assets that are unique to this simulation/workitem.

Returns:

Simulation/workitem level AssetCollection

set_parameter(key: str | int | float, value: str | int | float | Dict[str | int | float, Any])[source]

Update a parameter. The type hinting encourages JSON supported types.

Parameters:
  • key – Parameter name

  • value – Parameter value

Returns:

Tags to be defined on the simulation/workitem

get_parameter(key: str | int | float) str | int | float | Dict[str | int | float, Any][source]

Returns a parameter value.

Parameters:

key – Key of parameter

Returns:

Value of parameter

Raises:

KeyError – If the parameter key is not found

update_parameters(values: Dict[str | int | float, str | int | float | Dict[str | int | float, Any]])[source]

Perform bulk update from another dictionary.

Parameters:

values – Values to update as dictionary

Returns:

Values

reload_from_simulation(simulation: Simulation, config_file_name: str | None = None, envelope: str | None = None, **kwargs)[source]

Reload from Simulation.

To do this, the process is

  1. First check for a configfile name from arguments, then tags, or the default name

  2. Load the json config file

  3. Check if we got an envelope argument from parameters or the simulation tags, or on the task object

Parameters:
  • simulation – Simulation object with metadata to load info from

  • config_file_name – Optional name of config file

  • envelope – Optional name of envelope

Returns:

None. The task config is populated from the simulation object.

pre_creation(parent: Simulation | WorkflowItem, platform: IPlatform)[source]

Pre-creation. For JSONConfiguredTask, we finalize our configuration file and command line here.

Parameters:
  • parent – Parent of task

  • platform – Platform task is being created on

Returns:

None

static set_parameter_sweep_callback(simulation: Simulation, param: str, value: Any) Dict[str, Any][source]

Performs a callback with a parameter and a value. Most likely users want to use set_parameter_partial instead of this method.

Parameters:
  • simulation – Simulation object

  • param – Param name

  • value – Value to set

Returns:

Tags to add to simulation

classmethod set_parameter_partial(parameter: str)[source]

Callback to be used when sweeping with a json configured model.

Parameters:

parameter – Param name

Returns:

Partial setting a specific parameter

Notes

  • TODO Reference some examples code here
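
Until example code is referenced here, a minimal sweep sketch (the command and parameter name are placeholders):

from idmtools.builders import SimulationBuilder
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.json_configured_task import JSONConfiguredTask

task = JSONConfiguredTask(command="python model.py")
builder = SimulationBuilder()
# Sweep Run_Number over 1..5 using the partial returned by set_parameter_partial
builder.add_sweep_definition(JSONConfiguredTask.set_parameter_partial("Run_Number"), range(1, 6))
sims = TemplatedSimulations(base_task=task)
sims.add_builder(builder)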

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, parameters: dict = <factory>, envelope: str = None, config_file_name: str = 'config.json', is_config_common: bool = False, configfile_argument: str = None, command_line_argument_no_filename: bool = False) None
class idmtools_models.json_configured_task.JSONConfiguredTaskSpecification[source]

Bases: TaskSpecification

JSONConfiguredTaskSpecification defines the plugin specs for JSONConfiguredTask.

get(configuration: dict) JSONConfiguredTask[source]

Get instance of JSONConfiguredTask with configuration specified.

Parameters:

configuration – Configuration for configuration

Returns:

JSONConfiguredTask with configuration

get_description() str[source]

Get description for plugin.

Returns:

Description of plugin

get_example_urls() List[str][source]

Get list of urls with examples for JSONConfiguredTask.

Returns:

List of urls that point to examples relating to JSONConfiguredTask

get_type() Type[JSONConfiguredTask][source]

Get task type provided by plugin.

Returns:

JSONConfiguredTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_models.templated_script_task module

Provides the TemplatedScriptTask.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_models.templated_script_task.TemplatedScriptTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, script_binary: str = None, template: str = None, template_file: str = None, template_is_common: bool = True, variables: ~typing.Dict[str, ~typing.Any] = <factory>, path_sep: str = '/', extra_command_arguments: str = '', gather_common_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>, gather_transient_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>)[source]

Bases: ITask

Defines a task to run a script using a template. Best suited to shell scripts.

Examples

In this example, we modify the Python path using TemplatedScriptTask and LINUX_PYTHON_PATH_WRAPPER

import os

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.python_task import PythonTask
from idmtools_models.templated_script_task import TemplatedScriptTask, get_script_wrapper_unix_task, LINUX_PYTHON_PATH_WRAPPER


platform = Platform("CALCULON")
# This task can be any type of task that would run python. Here we are running a simple model script that consumes the example
# package "a_package"
task = PythonTask(script_path="model.py", python_path='python3.6')
# add our library. On Comps, you could use RequirementsToAssetCollection as well
task.common_assets.add_asset("a_package.py")
# we request a wrapper script for Unix. The wrapper should match the computation platform's OS
# We also use the built-in LINUX_PYTHON_PATH_WRAPPER template which modifies our PYTHONPATH to load libraries from Assets/site-packages and Assets folders
wrapper_task: TemplatedScriptTask = get_script_wrapper_unix_task(task, template_content=LINUX_PYTHON_PATH_WRAPPER)
# we have to set the bash path remotely
wrapper_task.script_binary = "/bin/bash"

# Now we define our experiment. We could just as easily use this wrapper in a templated simulation builder as well
experiment = Experiment.from_task(name=os.path.basename(__file__), task=wrapper_task)
experiment.run(wait_until_done=True)

In this example, we modify environment variable using TemplatedScriptTask and LINUX_DICT_TO_ENVIRONMENT

import os
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.python_task import PythonTask
from idmtools_models.templated_script_task import get_script_wrapper_unix_task, LINUX_DICT_TO_ENVIRONMENT


platform = Platform("CALCULON")
# here we define the task we want to use the environment variables. In this example we have a simple python script that prints the EXAMPLE environment variable
task = PythonTask(script_path="model.py")
# Get a task to wrap the script in a shell script. Which get_script_wrapper function you use depends on the platform's OS
wrapper_task = get_script_wrapper_unix_task(
    task=task,
    # and set some values here
    variables=dict(EXAMPLE='It works!')
)
# some platforms need you to hint where their script binary is. Usually this is only applicable to Unix platforms (Linux, Mac, etc.)
wrapper_task.script_binary = "/bin/bash"

# Now we define our experiment. We could just as easily use this wrapper in a templated simulation builder as well
experiment = Experiment.from_task(name=os.path.basename(__file__), task=wrapper_task)
experiment.run(wait_until_done=True)
script_path: str = None

Name of script

script_binary: str = None

If the platform requires a path to the script-executing binary (i.e., /bin/bash)

template: str = None

The template contents

template_file: str = None

The template file. You can only use one of template or template_file at a time

template_is_common: bool = True

Controls whether a template should be an experiment or a simulation level asset

variables: Dict[str, Any]
path_sep: str = '/'

Platform path separator. For Windows execution platforms, use \; otherwise use the default of /

extra_command_arguments: str = ''

Extra arguments to add to the command line

gather_common_asset_hooks: List[Callable[[ITask], AssetCollection]]

Hooks to gather common assets

gather_transient_asset_hooks: List[Callable[[ITask], AssetCollection]]

Hooks to gather transient assets

gather_common_assets() AssetCollection[source]

Gather common(experiment-level) assets for task.

Returns:

AssetCollection containing common assets

gather_transient_assets() AssetCollection[source]

Gather transient (simulation-level) assets for task.

Returns:

AssetCollection containing transient assets

reload_from_simulation(simulation: Simulation)[source]

Reload a templated script task. When reloading, you will only have the rendered template available.

Parameters:

simulation

Returns:

None

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Before creating simulation, we need to set our command line.

Parameters:
  • parent – Parent object

  • platform – Platform item is being run on

Returns:

None

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, script_path: str = None, script_binary: str = None, template: str = None, template_file: str = None, template_is_common: bool = True, variables: ~typing.Dict[str, ~typing.Any] = <factory>, path_sep: str = '/', extra_command_arguments: str = '', gather_common_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>, gather_transient_asset_hooks: ~typing.List[~typing.Callable[[~idmtools.entities.itask.ITask], ~idmtools.assets.asset_collection.AssetCollection]] = <factory>) None
class idmtools_models.templated_script_task.ScriptWrapperTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, template_script_task: ~idmtools_models.templated_script_task.TemplatedScriptTask = None, task: ~idmtools.entities.itask.ITask = None)[source]

Bases: ITask

Allows you to wrap a script with another script.

Raises:

ValueError – If the template script task is not defined

template_script_task: TemplatedScriptTask = None
task: ITask = None
static from_dict(task_dictionary: Dict[str, Any])[source]

Load the task from a dictionary.

property command

Our command property. We have to overload this because we wrap another task.

property wrapped_task

Our task we are wrapping with a shell script.

Returns:

Our wrapped task

gather_common_assets()[source]

Gather all the common assets.

Returns:

Common assets(Experiment Assets)

gather_transient_assets() AssetCollection[source]

Gather all the transient assets.

Returns:

Transient Assets(Simulation level assets)

reload_from_simulation(simulation: Simulation)[source]

Reload from simulation.

Parameters:

simulation – simulation

Returns:

None

pre_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Before creation, create the true command by adding the wrapper name.

Here we call both our wrapped task and our template_script_task pre_creation.

Parameters:
  • parent – Parent Task

  • platform – Platform Templated Task is executing on

Returns:

None

post_creation(parent: Simulation | IWorkflowItem, platform: IPlatform)[source]

Post creation of task.

Here we call both our wrapped task and our template_script_task post_creation

Parameters:
  • parent – Parent of task

  • platform – Platform we are running on

Returns:

None

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, template_script_task: ~idmtools_models.templated_script_task.TemplatedScriptTask = None, task: ~idmtools.entities.itask.ITask = None) None
idmtools_models.templated_script_task.get_script_wrapper_task(task: ITask, wrapper_script_name: str, template_content: str | None = None, template_file: str | None = None, template_is_common: bool = True, variables: Dict[str, Any] | None = None, path_sep: str = '/') ScriptWrapperTask[source]

Convenience function that will wrap a task for you with some defaults.

Parameters:
  • task – Task to wrap

  • wrapper_script_name – Wrapper script name

  • template_content – Template Content

  • template_file – Template File

  • template_is_common – Is the template experiment level

  • variables – Variables

  • path_sep – Path separator (Windows or Linux)

Returns:

ScriptWrapperTask wrapping the task

idmtools_models.templated_script_task.get_script_wrapper_windows_task(task: ITask, wrapper_script_name: str = 'wrapper.bat', template_content: str = '{% for key, value in vars.items() %}\nset {{key}}="{{value}}"\n{% endfor %}\necho Running %*\n%*', template_file: str | None = None, template_is_common: bool = True, variables: Dict[str, Any] | None = None) ScriptWrapperTask[source]

Get wrapper script task for windows platforms.

The default content wraps another task with a batch script that exports the variables to the run environment defined in variables. To modify the Python path, use WINDOWS_PYTHON_PATH_WRAPPER

You can adapt this script to modify any pre-scripts you need or call other scripts in succession

Parameters:
  • task – Task to wrap

  • wrapper_script_name – Wrapper script name(defaults to wrapper.bat)

  • template_content – Template Content.

  • template_file – Template File

  • template_is_common – Is the template experiment level

  • variables – Variables for template

Returns:

ScriptWrapperTask

See Also

idmtools_models.templated_script_task.TemplatedScriptTask
idmtools_models.templated_script_task.get_script_wrapper_task()
idmtools_models.templated_script_task.get_script_wrapper_unix_task()

idmtools_models.templated_script_task.get_script_wrapper_unix_task(task: ITask, wrapper_script_name: str = 'wrapper.sh', template_content: str = '{% for key, value in vars.items() %}\nexport {{key}}="{{value}}"\n{% endfor %}\necho Running args $@\n"$@"\n', template_file: str | None = None, template_is_common: bool = True, variables: Dict[str, Any] | None = None)[source]

Get wrapper script task for unix platforms.

The default content wraps another task with a bash script that exports the variables to the run environment defined in variables. To modify the Python path, you can use LINUX_PYTHON_PATH_WRAPPER

You can adapt this script to modify any pre-scripts you need or call other scripts in succession

Parameters:
  • task – Task to wrap

  • wrapper_script_name – Wrapper script name(defaults to wrapper.sh)

  • template_content – Template Content

  • template_file – Template File

  • template_is_common – Is the template experiment level

  • variables – Variables for template

Returns:

ScriptWrapperTask

See Also

idmtools_models.templated_script_task.TemplatedScriptTask
idmtools_models.templated_script_task.get_script_wrapper_task()
idmtools_models.templated_script_task.get_script_wrapper_windows_task()

class idmtools_models.templated_script_task.TemplatedScriptTaskSpecification[source]

Bases: TaskSpecification

TemplatedScriptTaskSpecification provides the plugin specs for TemplatedScriptTask.

get(configuration: dict) TemplatedScriptTask[source]

Get instance of TemplatedScriptTask with configuration.

Parameters:

configuration – configuration for TemplatedScriptTask

Returns:

TemplatedScriptTask with configuration

get_description() str[source]

Get description of plugin.

Returns:

Plugin description

get_example_urls() List[str][source]

Get example urls related to TemplatedScriptTask.

Returns:

List of urls that have examples related to TemplatedScriptTask

get_type() Type[TemplatedScriptTask][source]

Get task type provided by plugin.

Returns:

TemplatedScriptTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

class idmtools_models.templated_script_task.ScriptWrapperTaskSpecification[source]

Bases: TaskSpecification

ScriptWrapperTaskSpecification defines the plugin specs for ScriptWrapperTask.

get(configuration: dict) ScriptWrapperTask[source]

Get instance of ScriptWrapperTask with configuration.

Parameters:

configuration – configuration for ScriptWrapperTask

Returns:

ScriptWrapperTask with configuration

get_description() str[source]

Get description of plugin.

Returns:

Plugin description

get_example_urls() List[str][source]

Get example urls related to ScriptWrapperTask.

Returns:

List of urls that have examples related to ScriptWrapperTask

get_type() Type[ScriptWrapperTask][source]

Get task type provided by plugin.

Returns:

ScriptWrapperTask

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_platform_comps
idmtools_platform_comps package

idmtools comps platform.

We try to load the CLI here but if idmtools-cli is not installed, we fail gracefully.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps Subpackages
idmtools_platform_comps.cli package

idmtools comps cli module.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.cli Submodules
idmtools_platform_comps.cli.cli_functions module

idmtools cli utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.cli.cli_functions.validate_range(value: float, min: float, max: float) Tuple[bool, str][source]

Function used to validate a numeric value between min and max.

Parameters:
  • value – The value set by the user

  • min – Minimum value

  • max – Maximum value

Returns: tuple with validation result and error message if needed
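
Example

A minimal sketch:

from idmtools_platform_comps.cli.cli_functions import validate_range

valid, message = validate_range(5.0, min=0.0, max=10.0)  # a value inside [0, 10]
print(valid, message)  # validation result plus a message when validation fails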

idmtools_platform_comps.cli.cli_functions.environment_list(previous_settings: Dict, current_field: Field) Dict[source]

Allows the CLI to provide a list of available environments.

Uses the previous_settings to get the endpoint to query for environments

Parameters:
  • previous_settings – previous settings set by the user in the CLI.

  • current_field – Current field specs

Returns: updates to the choices and default

idmtools_platform_comps.cli.comps module

idmtools comps cli commands.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.cli.comps.add_item(assets: AssetCollection, file: str)[source]

Add Item.

Parameters:
  • assets – Assets

  • file – File or Directory

Returns:

None

Raises:

FileNotFoundError – If file cannot be found

class idmtools_platform_comps.cli.comps.StaticCredentialPrompt(comps_url, username, password)[source]

Bases: CredentialPrompt

Provides a class to allow login to COMPS with a static or provided username and password.

__init__(comps_url, username, password)[source]

Constructor.

prompt()[source]

Return our stored username and password.

idmtools_platform_comps.comps_operations package

idmtools comps operations module.

The operations define how to interact with specific item types.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.comps_operations Submodules
idmtools_platform_comps.comps_operations.asset_collection_operations module

idmtools comps asset collections operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_operations.asset_collection_operations.CompsPlatformAssetCollectionOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.AssetCollection.AssetCollection'>)[source]

Bases: IPlatformAssetCollectionOperations

Provides AssetCollection Operations to COMPSPlatform.

platform: COMPSPlatform
platform_type

alias of AssetCollection

get(asset_collection_id: UUID | None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) AssetCollection[source]

Get an asset collection by id.

Parameters:
  • asset_collection_id – Id of asset collection

  • load_children – Optional list of children to load. Defaults to assets and tags

  • query_criteria – Optional query_criteria. Ignores children default

  • **kwargs

Returns:

COMPSAssetCollection

platform_create(asset_collection: AssetCollection, **kwargs) AssetCollection[source]

Create AssetCollection.

Parameters:
  • asset_collection – AssetCollection to create

  • **kwargs

Returns:

COMPSAssetCollection

to_entity(asset_collection: AssetCollection | SimulationFile | List[SimulationFile] | OutputFileMetadata | List[WorkItemFile], **kwargs) AssetCollection[source]

Convert COMPS Asset Collection or Simulation File to IDM Asset Collection.

Parameters:
  • asset_collection – Comps asset/asset collection to convert to idm asset collection

  • **kwargs

Returns:

AssetCollection

Raises:

ValueError - If the file is not a SimulationFile or WorkItemFile

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.AssetCollection.AssetCollection'>) None
idmtools_platform_comps.comps_operations.experiment_operations module

idmtools comps experiment operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_operations.experiment_operations.CompsPlatformExperimentOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Experiment.Experiment'>)[source]

Bases: IPlatformExperimentOperations

Provides Experiment operations to the COMPSPlatform.

platform: COMPSPlatform
platform_type

alias of Experiment

get(experiment_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) Experiment[source]

Fetch experiments from COMPS.

Parameters:
  • experiment_id – Experiment ID

  • columns – Optional Columns. If not provided, id, name, and suite_id are fetched

  • load_children – Optional Children. If not provided, tags and configuration are loaded

  • query_criteria – Optional QueryCriteria

  • **kwargs

Returns:

COMPSExperiment with items

pre_create(experiment: Experiment, **kwargs) NoReturn[source]

Pre-create for Experiment. At the moment, validation related to COMPS is all that is done.

Parameters:
  • experiment – Experiment to run pre-create for

  • **kwargs

Returns:

None

platform_create(experiment: Experiment, num_cores: int | None = None, executable_path: str | None = None, command_arg: str | None = None, priority: str | None = None, check_command: bool = True, use_short_path: bool = False, **kwargs) Experiment[source]

Create Experiment on the COMPS Platform.

Parameters:
  • experiment – IDMTools Experiment to create

  • num_cores – Optional num of cores to allocate using MPI

  • executable_path – Executable path

  • command_arg – Command Argument

  • priority – Priority of command

  • check_command – Run task hooks on item

  • use_short_path – When set to true, simulation roots will be set to “$COMPS_PATH(USER)

  • **kwargs – Keyword arguments used to expand functionality. At moment these are usually not used

Returns:

COMPSExperiment that was created

platform_modify_experiment(experiment: Experiment, regather_common_assets: bool = False, **kwargs) Experiment[source]

Executed when an Experiment that is already in a Created, Done, In Progress, or Failed state is run.

Parameters:
  • experiment – Experiment to modify

  • regather_common_assets – Triggers a new AC to be associated with experiment. It is important to note that when using this feature, ensure the previous simulations have finished provisioning. Failure to do so can lead to unexpected behaviour.

Returns:

Modified experiment.

post_create(experiment: Experiment, **kwargs) NoReturn[source]

Post create of experiment.

The default behaviour is to display the experiment url if output is enabled.

post_run_item(experiment: Experiment, **kwargs)[source]

Run after the experiment. Nothing is done on COMPS other than alerting the user to the item.

Parameters:
  • experiment – Experiment to run post run item

  • **kwargs

Returns:

None

get_children(experiment: Experiment, columns: List[str] | None = None, children: List[str] | None = None, **kwargs) List[Simulation][source]

Get children for a COMPSExperiment.

Parameters:
  • experiment – Experiment to get children of Comps Experiment

  • columns – Columns to fetch. If not provided, id, name, experiment_id, and state will be loaded

  • children – Children to load. If not provided, Tags will be loaded

  • **kwargs

Returns:

Simulations belonging to the Experiment

get_parent(experiment: Experiment, **kwargs) Suite[source]

Get Parent of experiment.

Parameters:
  • experiment – Experiment to get parent of

  • **kwargs

Returns:

Suite of the experiment

platform_run_item(experiment: Experiment, **kwargs)[source]

Run experiment on COMPS. Here we commission the experiment.

Parameters:
  • experiment – Experiment to run

  • **kwargs

Returns:

None

send_assets(experiment: Experiment, **kwargs)[source]

Send assets related to the experiment.

Parameters:
  • experiment – Experiment to send assets for

  • **kwargs

Returns:

None

refresh_status(experiment: Experiment, **kwargs)[source]

Reload status for experiment (load simulations).

Parameters:
  • experiment – Experiment to load status for

  • **kwargs

Returns:

None

to_entity(experiment: Experiment, parent: Suite | None = None, children: bool = True, **kwargs) Experiment[source]

Converts a COMPSExperiment to an idmtools Experiment.

Parameters:
  • experiment – COMPS Experiment object to convert

  • parent – Optional suite parent

  • children – Should we load children objects?

  • **kwargs

Returns:

Experiment

get_assets_from_comps_experiment(experiment: Experiment) AssetCollection | None[source]

Get assets for a comps experiment.

Parameters:

experiment – Experiment to get asset collection for.

Returns:

AssetCollection if configuration is set and configuration.asset_collection_id is set.

platform_list_asset(experiment: Experiment, **kwargs) List[Asset][source]

List assets for an experiment.

Parameters:
  • experiment – Experiment to list assets for.

  • **kwargs

Returns:

List of assets

create_sim_directory_map(experiment_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

experiment_id – experiment id

Returns:

Dict of simulation id as key and working dir as value

platform_delete(experiment_id: str) None[source]

Delete platform experiment.

Parameters:

experiment_id – experiment id

Returns:

None

platform_cancel(experiment_id: str) None[source]

Cancel platform experiment.

Parameters:

experiment_id – experiment id

Returns:

None

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Experiment.Experiment'>) None
idmtools_platform_comps.comps_operations.simulation_operations module

idmtools comps simulation operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.comps_operations.simulation_operations.comps_batch_worker(simulations: List[Simulation], interface: CompsPlatformSimulationOperations, executor, num_cores: int | None = None, priority: str | None = None, asset_collection_id: str | None = None, min_time_between_commissions: int = 10, **kwargs) List[Simulation][source]

Run batch worker.

Parameters:
  • simulations – Batch of simulation to process

  • interface – SimulationOperation Interface

  • executor – Thread/process executor

  • num_cores – Optional Number of core to allocate for MPI

  • priority – Optional Priority to set to

  • asset_collection_id – Override asset collection id

  • min_time_between_commissions – Minimum amount of time(in seconds) between calls to commission on an experiment

  • **kwargs – extra info

Returns:

List of Comps Simulations

class idmtools_platform_comps.comps_operations.simulation_operations.CompsPlatformSimulationOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Simulation.Simulation'>)[source]

Bases: IPlatformSimulationOperations

Provides simulation operations to COMPSPlatform.

platform: COMPSPlatform
platform_type

alias of Simulation

get(simulation_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) Simulation[source]

Get Simulation from Comps.

Parameters:
  • simulation_id – ID

  • columns – Optional list of columns to load. Defaults to “id”, “name”, “experiment_id”, “state”

  • load_children – Optional children to load. Defaults to “tags”, “configuration”

  • query_criteria – Optional query_criteria object to use your own custom criteria object

  • **kwargs

Returns:

COMPSSimulation
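These operations are normally reached through the platform's generic item API rather than invoked directly. A sketch, assuming a valid COMPS login (the environment name and simulation id below are hypothetical):

    from idmtools.core import ItemType
    from idmtools.core.platform_factory import Platform

    platform = Platform("CALCULON")  # environment name is an assumption
    # Fetch a simulation by id; internally this calls the get() operation above
    sim = platform.get_item("11111111-2222-3333-4444-555555555555", ItemType.SIMULATION)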

platform_create(simulation: Simulation, num_cores: int | None = None, priority: str | None = None, enable_platform_task_hooks: bool = True, asset_collection_id: str | None = None, **kwargs) Simulation[source]

Create Simulation on COMPS.

Parameters:
  • simulation – Simulation to create

  • num_cores – Optional number of MPI Cores to allocate

  • priority – Optional priority to set

  • enable_platform_task_hooks – Should platform task hooks be run

  • asset_collection_id – Override for asset collection id on sim

  • **kwargs – Expansion fields

Returns:

COMPS Simulation

to_comps_sim(simulation: Simulation, num_cores: int | None = None, priority: str | None = None, config: Configuration | None = None, asset_collection_id: str | None = None, **kwargs)[source]

Convert IDMTools object to COMPS object.

Parameters:
  • simulation – Simulation object to convert

  • num_cores – Optional Num of MPI Cores to allocate

  • priority – Optional Priority

  • config – Optional Configuration object

  • asset_collection_id

  • **kwargs – Additional options for COMPS

Returns:

COMPS Simulation

get_simulation_config_from_simulation(simulation: Simulation, num_cores: int | None = None, priority: str | None = None, asset_collection_id: str | None = None, **kwargs) Configuration[source]

Get the comps configuration for a Simulation Object.

Parameters:
  • simulation – Simulation

  • num_cores – Optional number of cores for MPI

  • priority – Optional Priority

  • asset_collection_id – Override simulation asset_collection_id

  • **kwargs – Additional options for COMPS

Returns:

Configuration

batch_create(simulations: List[Simulation], num_cores: int | None = None, priority: str | None = None, asset_collection_id: str | None = None, **kwargs) List[Simulation][source]

Perform batch creation of Simulations.

Parameters:
  • simulations – Simulations to create

  • num_cores – Optional MPI Cores to allocate per simulation

  • priority – Optional Priority

  • asset_collection_id – Asset collection id for simulations (overrides the experiment)

  • **kwargs – Future expansion

Returns:

List of COMPSSimulations that were created

get_parent(simulation: Any, **kwargs) Experiment[source]

Get the parent of the simulation.

Parameters:
  • simulation – Simulation to load parent for

  • **kwargs

Returns:

COMPSExperiment

platform_run_item(simulation: Simulation, **kwargs)[source]

For simulations on COMPS, there is no separate run step; commissioning happens at the experiment level.

send_assets(simulation: Simulation, comps_sim: Simulation | None = None, add_metadata: bool = False, **kwargs)[source]

Send assets to Simulation.

Parameters:
  • simulation – Simulation to send assets for

  • comps_sim – Optional COMPSSimulation object to prevent reloading it

  • add_metadata – Add idmtools metadata object

  • **kwargs

Returns:

None

refresh_status(simulation: Simulation, additional_columns: List[str] | None = None, **kwargs)[source]

Refresh status of a simulation.

Parameters:
  • simulation – Simulation to refresh

  • additional_columns – Optional additional columns to load from COMPS

  • **kwargs

Returns:

None

to_entity(simulation: Simulation, load_task: bool = False, parent: Experiment | None = None, load_parent: bool = False, load_metadata: bool = False, load_cli_from_workorder: bool = False, **kwargs) Simulation[source]

Convert COMPS simulation object to IDM Tools simulation object.

Parameters:
  • simulation – Simulation object

  • load_task – Should we load tasks? Defaults to False. This can increase load time on fetches

  • parent – Optional parent object to prevent reloads

  • load_parent – Force load of parent (beware: this could cause loading loops)

  • load_metadata – Should we load metadata by default. If load_task is enabled, this is also enabled

  • load_cli_from_workorder – Used with COMPS scheduling where the CLI is defined in our workorder

  • **kwargs

Returns:

Simulation object

get_asset_collection_from_comps_simulation(simulation: Simulation) AssetCollection | None[source]

Get assets from COMPS Simulation.

Parameters:

simulation – Simulation to get assets from

Returns:

Simulation Asset Collection, if any.

get_assets(simulation: Simulation, files: List[str], include_experiment_assets: bool = True, **kwargs) Dict[str, bytearray][source]

Fetch the files associated with a simulation.

Parameters:
  • simulation – Simulation

  • files – List of files to download

  • include_experiment_assets – Should we also load experiment assets?

  • **kwargs

Returns:

Dictionary of filename -> ByteArray
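From user code, the platform's get_files helper exercises this operation and returns the same filename-to-bytes mapping. A sketch, assuming `platform` and `simulation` already exist (the file names are hypothetical):

    # Download two output files from a simulation; keys are the requested paths
    files = platform.get_files(simulation, ["stdout.txt", "output/result.json"])
    print(files["stdout.txt"].decode("utf-8"))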

list_assets(simulation: Simulation, common_assets: bool = False, **kwargs) List[Asset][source]

List assets for a simulation.

Parameters:
  • simulation – Simulation to load data for

  • common_assets – Should we load asset files

  • **kwargs

Returns:

List of assets

retrieve_output_files(simulation: Simulation)[source]

Retrieve the output files for a simulation.

Parameters:

simulation – Simulation to fetch files for

Returns:

List of output files for simulation

all_files(simulation: Simulation, common_assets: bool = False, outfiles: bool = True, **kwargs) List[Asset][source]

Returns all files for a specific simulation, including experiment assets and non-asset files.

Parameters:
  • simulation – Simulation to get all files for

  • common_assets – Include experiment assets

  • outfiles – Include output files

  • **kwargs

Returns:

List of assets

create_sim_directory_map(simulation_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

simulation_id – Simulation id

Returns:

Dict of simulation id as key and working dir as value

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Simulation.Simulation'>) None
idmtools_platform_comps.comps_operations.suite_operations module

idmtools comps suite operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_operations.suite_operations.CompsPlatformSuiteOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Suite.Suite'>)[source]

Bases: IPlatformSuiteOperations

Provides Suite operation to the COMPSPlatform.

platform: COMPSPlatform
platform_type

alias of Suite

get(suite_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) Suite[source]

Get COMPS Suite.

Parameters:
  • suite_id – Suite id

  • columns – Optional list of columns. Defaults to id and name

  • load_children – Optional list of children to load. Defaults to “tags”, “configuration”

  • query_criteria – Optional query criteria

  • **kwargs

Returns:

COMPSSuite

platform_create(suite: Suite, **kwargs) Tuple[Suite, UUID][source]

Create suite on COMPS.

Parameters:
  • suite – Suite to create

  • **kwargs

Returns:

COMPS Suite object and a UUID

get_parent(suite: Suite, **kwargs) Any[source]

Get parent of suite. We always return None on COMPS.

Parameters:
  • suite – Suite to get parent of

  • **kwargs

Returns:

None

get_children(suite: Suite, **kwargs) List[Experiment | WorkItem][source]

Get children for a suite.

Parameters:
  • suite – Suite to get children for

  • **kwargs – Any arguments to pass on to loading functions

Returns:

List of COMPS Experiments/Workitems that are part of the suite

refresh_status(suite: Suite, **kwargs)[source]

Refresh the status of a suite. On COMPS, this is done by refreshing all experiments.

Parameters:
  • suite – Suite to refresh status of

  • **kwargs

Returns:

None

to_entity(suite: Suite, children: bool = True, **kwargs) Suite[source]

Convert a COMPS Suite to an IDM Suite.

Parameters:
  • suite – Suite to Convert

  • children – When true, load child items; false otherwise

  • **kwargs

Returns:

IDM Suite

create_sim_directory_map(suite_id: str) Dict[source]

Build simulation working directory mapping.

Parameters:

suite_id – Suite id

Returns:

Dict of simulation id as key and working dir as value

platform_delete(suite_id: str) None[source]

Delete platform suite.

Parameters:

suite_id – Platform suite id

Returns:

None

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Suite.Suite'>) None
idmtools_platform_comps.comps_operations.workflow_item_operations module

idmtools comps workflow item operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_operations.workflow_item_operations.CompsPlatformWorkflowItemOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.WorkItem.WorkItem'>)[source]

Bases: IPlatformWorkflowItemOperations

Provides IWorkflowItem operations for COMPSPlatform.

platform: COMPSPlatform
platform_type

alias of WorkItem

get(workflow_item_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) WorkItem[source]

Get COMPSWorkItem.

Parameters:
  • workflow_item_id – Item id

  • columns – Optional columns to load. Defaults to “id”, “name”, “state”

  • load_children – Optional list of COMPS Children objects to load. Defaults to “Tags”

  • query_criteria – Optional QueryCriteria

  • **kwargs

Returns:

COMPSWorkItem

platform_create(work_item: IWorkflowItem, **kwargs) Tuple[Any][source]

Creates a workflow_item from an IDMTools work_item object.

Parameters:
  • work_item – WorkflowItem to create

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Created platform item and the UUID of said item

platform_run_item(work_item: IWorkflowItem, **kwargs)[source]

Start to run the COMPS WorkItem created from work_item.

Parameters:

work_item – workflow item

Returns: None

get_parent(work_item: IWorkflowItem, **kwargs) Any[source]

Returns the parent of item. If the platform doesn’t support parents, you should throw a TopLevelItem error.

Parameters:
  • work_item – COMPS WorkItem

  • **kwargs – Optional arguments mainly for extensibility

Returns: item parent

Raises:

TopLevelItem

get_children(work_item: IWorkflowItem, **kwargs) List[Any][source]

Returns the children of a workflow_item object.

Parameters:
  • work_item – WorkflowItem object

  • **kwargs – Optional arguments mainly for extensibility

Returns:

Children of work_item object

refresh_status(workflow_item: IWorkflowItem, **kwargs)[source]

Refresh status for workflow item.

Parameters:

workflow_item – Item to refresh status for

Returns:

None

send_assets(workflow_item: IWorkflowItem, **kwargs)[source]

Add asset as WorkItemFile.

Parameters:

workflow_item – workflow item

Returns: None

list_assets(workflow_item: IWorkflowItem, **kwargs) List[str][source]

Get list of asset files.

Parameters:
  • workflow_item – workflow item

  • **kwargs – Optional arguments mainly for extensibility

Returns: list of assets associated with WorkItem

get_assets(workflow_item: IWorkflowItem, files: List[str], **kwargs) Dict[str, bytearray][source]

Retrieve files association with WorkItem.

Parameters:
  • workflow_item – workflow item

  • files – list of file paths

  • **kwargs – Optional arguments mainly for extensibility

Returns: dict with key/value: file_path/file_content

to_entity(work_item: WorkItem, **kwargs) IWorkflowItem[source]

Converts the platform representation of workflow_item to idmtools representation.

Parameters:
  • work_item – Platform workflow_item object

  • kwargs – optional arguments mainly for extensibility

Returns:

IDMTools workflow item

get_related_items(item: IWorkflowItem, relation_type: RelationType) Dict[source]

Get related WorkItems, Suites, Experiments, Simulations and AssetCollections.

Parameters:
  • item – workflow item

  • relation_type – RelationType

Returns: Dict

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.WorkItem.WorkItem'>) None
idmtools_platform_comps.ssmt_operations package

idmtools ssmt operations.

Since SSMT is the same as COMPS, we only derive the simulation and work item operations to do local file access.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.ssmt_operations Submodules
idmtools_platform_comps.ssmt_operations.simulation_operations module

idmtools simulation operations for ssmt.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_operations.simulation_operations.SSMTPlatformSimulationOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.Simulation.Simulation'>)[source]

Bases: CompsPlatformSimulationOperations

SSMTPlatformSimulationOperations provides Simulation operations to SSMT.

In this case, we only have to redefine get_assets to optimize file usage.

get(simulation_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) Simulation[source]

Get Simulation from Comps.

Parameters:
  • simulation_id – ID

  • columns – Optional list of columns to load. Defaults to “id”, “name”, “experiment_id”, “state”

  • load_children – Optional children to load. Defaults to “tags”, “configuration”

  • query_criteria – Optional query_criteria object to use your own custom criteria object

  • **kwargs

Returns:

COMPSSimulation

get_assets(simulation: Simulation, files: List[str], **kwargs) Dict[str, bytearray][source]

Get assets for Simulation.

Parameters:
  • simulation – Simulation to fetch

  • files – Files to get

  • **kwargs – Any keyword arguments

Returns:

Files fetched

platform: COMPSPlatform
idmtools_platform_comps.ssmt_operations.workflow_item_operations module

idmtools workflow item operations for ssmt.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_operations.workflow_item_operations.SSMTPlatformWorkflowItemOperations(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.WorkItem.WorkItem'>)[source]

Bases: CompsPlatformWorkflowItemOperations

SSMTPlatformWorkflowItemOperations provides IWorkflowItem actions for SSMT Platform.

In IWorkflowItem’s case, we just need to change how get_assets works.

get(workflow_item_id: UUID, columns: List[str] | None = None, load_children: List[str] | None = None, query_criteria: QueryCriteria | None = None, **kwargs) WorkItem[source]

Get COMPSWorkItem.

Parameters:
  • workflow_item_id – Item id

  • columns – Optional columns to load. Defaults to “id”, “name”, “state”, “environment_name”, “working_directory”

  • load_children – Optional list of COMPS Children objects to load. Defaults to “Tags”

  • query_criteria – Optional QueryCriteria

  • **kwargs

Returns:

COMPSWorkItem

get_assets(workflow_item: IWorkflowItem, files: List[str], **kwargs) Dict[str, bytearray][source]

Get Assets for workflow_item.

Parameters:
  • workflow_item – WorkflowItem

  • files – Files to get

  • **kwargs

Returns:

Files requested

__init__(platform: COMPSPlatform, platform_type: ~typing.Type = <class 'COMPS.Data.WorkItem.WorkItem'>) None
platform: COMPSPlatform
idmtools_platform_comps.ssmt_work_items package

idmtools ssmt work items.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.ssmt_work_items Submodules
idmtools_platform_comps.ssmt_work_items.comps_work_order_task module

idmtools CompsWorkOrderTask.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_work_items.comps_work_order_task.CompsWorkOrderTask(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, work_order: ~idmtools_platform_comps.ssmt_work_items.work_order.IWorkOrder = None)[source]

Bases: ITask

Defines a task that is purely work order driven, like Singularity build.

work_order: IWorkOrder = None
gather_common_assets() AssetCollection[source]

Gather common assets.

gather_transient_assets() AssetCollection[source]

Gather transient assets.

reload_from_simulation(simulation: Simulation)[source]

Reload simulation.

__init__(command: str | ~idmtools.entities.command_line.CommandLine = <property object>, platform_requirements: ~typing.Set[~idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, _ITask__pre_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, _ITask__post_creation_hooks: ~typing.List[~typing.Callable[[Simulation | IWorkflowItem, IPlatform], ~typing.NoReturn]] = <factory>, common_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, work_order: ~idmtools_platform_comps.ssmt_work_items.work_order.IWorkOrder = None) None
idmtools_platform_comps.ssmt_work_items.comps_workitems module

idmtools SSMTWorkItem. This is the base of most COMPS workitems.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_work_items.comps_workitems.SSMTWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None)[source]

Bases: ICOMPSWorkflowItem

Defines the SSMT WorkItem.

Notes

  • We have lots of workitem bases. We need to consolidate these a bit.

docker_image: str = None
command: dataclasses.InitVar[str] = None
get_base_work_order()[source]

Build basic work order.

Returns: work order as a dictionary

get_comps_ssmt_image_name()[source]

Build comps ssmt docker image name.

Returns: final validated name

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None) None
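A minimal usage sketch for SSMTWorkItem (the environment name and command are assumptions):

    from idmtools.core.platform_factory import Platform
    from idmtools_platform_comps.ssmt_work_items.comps_workitems import SSMTWorkItem

    platform = Platform("CALCULON")
    # Define a work item that runs a simple command on the SSMT docker worker
    wi = SSMTWorkItem(name="simple ssmt item", command="python --version")
    wi.run(wait_until_done=True, platform=platform)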
class idmtools_platform_comps.ssmt_work_items.comps_workitems.InputDataWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE')[source]

Bases: ICOMPSWorkflowItem

Idm InputDataWorkItem.

Notes

  • TODO add examples

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE') None
work_order: dict
class idmtools_platform_comps.ssmt_work_items.comps_workitems.VisToolsWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE')[source]

Bases: ICOMPSWorkflowItem

Idm VisToolsWorkItem.

Notes

  • TODO add examples

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE') None
work_order: dict
idmtools_platform_comps.ssmt_work_items.icomps_workflowitem module

idmtools ICOMPSWorkflowItem.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_work_items.icomps_workflowitem.ICOMPSWorkflowItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE')[source]

Bases: IWorkflowItem, ABC

Interface of idmtools work item.

name: str = 'idmtools workflow item'

Name of the workflow step

work_order: dict
work_item_type: str = None
plugin_key: str = '1.0.0.0_RELEASE'
get_base_work_order()[source]

Get the base work order.

load_work_order(wo_file)[source]

Load work order from a file.

set_work_order(wo)[source]

Set the work order.

Parameters:

wo – user wo

Returns: None

update_work_order(name, value)[source]

Update wo for the name with value.

Parameters:
  • name – wo arg name

  • value – wo arg value

Returns: None

clear_wo_args()[source]

Clear all existing wo args.

Returns: None
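A short sketch of the work order helpers above, assuming `wi` is an instance of an ICOMPSWorkflowItem subclass such as SSMTWorkItem (the file name and argument are hypothetical):

    wi.load_work_order("workorder.json")  # load a work order dict from a file
    wi.update_work_order("NumNodes", 1)   # set a single work order argument
    wi.clear_wo_args()                    # remove all work order arguments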

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE') None
idmtools_platform_comps.ssmt_work_items.work_order module

idmtools WorkOrder classes.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_work_items.work_order.IWorkOrder(WorkItem_Type: str)[source]

Bases: ABC

Base workorder type.

WorkItem_Type: str
__init__(WorkItem_Type: str) None
class idmtools_platform_comps.ssmt_work_items.work_order.ExecutionDefinition(Command: str, ImageName: str = 'DockerWorker')[source]

Bases: object

Define the execution definition for workorders.

Command: str
ImageName: str = 'DockerWorker'
__init__(Command: str, ImageName: str = 'DockerWorker') None
class idmtools_platform_comps.ssmt_work_items.work_order.DockerWorkOrder(WorkItem_Type: str = 'DockerWorker', Execution: ~idmtools_platform_comps.ssmt_work_items.work_order.ExecutionDefinition = <factory>)[source]

Bases: IWorkOrder

Define the docker worker.

WorkItem_Type: str = 'DockerWorker'
Execution: ExecutionDefinition
__init__(WorkItem_Type: str = 'DockerWorker', Execution: ~idmtools_platform_comps.ssmt_work_items.work_order.ExecutionDefinition = <factory>) None
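Based on the dataclass signatures above, a work order can be built and attached to a CompsWorkOrderTask; a sketch (the command is an assumption):

    from idmtools_platform_comps.ssmt_work_items.comps_work_order_task import CompsWorkOrderTask
    from idmtools_platform_comps.ssmt_work_items.work_order import DockerWorkOrder, ExecutionDefinition

    # A DockerWorker work order that runs a simple command
    wo = DockerWorkOrder(Execution=ExecutionDefinition(Command="python --version"))
    task = CompsWorkOrderTask(work_order=wo)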
class idmtools_platform_comps.ssmt_work_items.work_order.BuildFlags(section: ~typing.List[str] = <factory>, library: str = 'https://library.sylabs.io', Switches: ~typing.List[str] = <factory>)[source]

Bases: object

Define build flags.

section: List[str]
library: str = 'https://library.sylabs.io'
Switches: List[str]
__init__(section: ~typing.List[str] = <factory>, library: str = 'https://library.sylabs.io', Switches: ~typing.List[str] = <factory>) None
class idmtools_platform_comps.ssmt_work_items.work_order.BuildDefinition(Type: str = 'singularity', Input: str | None = None, Flags: ~idmtools_platform_comps.ssmt_work_items.work_order.BuildFlags = <factory>)[source]

Bases: object

Define options for build definitions.

Type: str = 'singularity'
Input: str = None
Flags: BuildFlags
__init__(Type: str = 'singularity', Input: str | None = None, Flags: ~idmtools_platform_comps.ssmt_work_items.work_order.BuildFlags = <factory>) None
class idmtools_platform_comps.ssmt_work_items.work_order.ImageBuilderWorkOrder(WorkItem_Type: str = 'ImageBuilderWorker', Build: str = BuildDefinition(Type='singularity', Input=None, Flags=BuildFlags(section=['all'], library='https://library.sylabs.io', Switches=[])), Output: str = 'image.sif', Tags: ~typing.Dict[str, str] = <factory>, AdditionalMounts: ~typing.List[str] = <factory>, StaticEnvironment: ~typing.Dict[str, str] = <factory>)[source]

Bases: IWorkOrder

Defines our Image Builder service workorder.

WorkItem_Type: str = 'ImageBuilderWorker'
Build: str = BuildDefinition(Type='singularity', Input=None, Flags=BuildFlags(section=['all'], library='https://library.sylabs.io', Switches=[]))
Output: str = 'image.sif'
Tags: Dict[str, str]
AdditionalMounts: List[str]
StaticEnvironment: Dict[str, str]
__init__(WorkItem_Type: str = 'ImageBuilderWorker', Build: str = BuildDefinition(Type='singularity', Input=None, Flags=BuildFlags(section=['all'], library='https://library.sylabs.io', Switches=[])), Output: str = 'image.sif', Tags: ~typing.Dict[str, str] = <factory>, AdditionalMounts: ~typing.List[str] = <factory>, StaticEnvironment: ~typing.Dict[str, str] = <factory>) None
idmtools_platform_comps.utils package

idmtools comps utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils Subpackages
idmtools_platform_comps.utils.assetize_output package

idmtools assetize output.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.assetize_output Submodules
idmtools_platform_comps.utils.assetize_output.assetize_output module

idmtools assetize output work item.

Notes

  • TODO add examples here

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.assetize_output.assetize_output.AssetizeOutput(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>, asset_tags: ~typing.Dict[str, str] = <factory>, asset_collection: ~idmtools.assets.asset_collection.AssetCollection = None)[source]

Bases: FileFilterWorkItem

AssetizeOutput allows creating assets from previously run items in COMPS.

Notes

  • TODO link examples here. A usage sketch appears at the end of this class listing.

asset_tags: Dict[str, str]
asset_collection: AssetCollection = None

The asset collection created by Assetize

run(wait_until_done: bool = False, platform: IPlatform | None = None, wait_on_done_progress: bool = True, **run_opts) AssetCollection | None[source]

Run the AssetizeOutput.

Parameters:
  • wait_until_done – When true, wait for the workitem to complete

  • platform – Platform Object

  • wait_on_done_progress – When set to true, a progress bar will be shown from the item

  • **run_opts – Additional options to pass to Run on platform

Returns:

AssetCollection created if item succeeds

wait(wait_on_done_progress: bool = True, timeout: int | None = None, refresh_interval=None, platform: IPlatform | None = None) AssetCollection | None[source]

Waits on the Assetize WorkItem to finish. This first waits on any dependent items to finish (Experiment/Simulation/WorkItems).

Parameters:
  • wait_on_done_progress – When set to true, a progress bar will be shown from the item

  • timeout – Timeout for waiting on item. If none, wait will be forever

  • refresh_interval – How often to refresh progress

  • platform – Platform

Returns:

AssetCollection created if item succeeds

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>, asset_tags: ~typing.Dict[str, str] = <factory>, asset_collection: ~idmtools.assets.asset_collection.AssetCollection = None) None
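A usage sketch for AssetizeOutput, assuming `experiment` is a completed idmtools Experiment (the environment name and file pattern are assumptions):

    from idmtools.core.platform_factory import Platform
    from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput

    platform = Platform("CALCULON")
    # Turn matching outputs from the experiment's simulations into an asset collection
    ao = AssetizeOutput(file_patterns=["output/*.csv"], related_experiments=[experiment])
    ac = ao.run(wait_until_done=True, platform=platform)
    if ac:
        print(f"Created asset collection {ac.id}")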
idmtools_platform_comps.utils.assetize_output.assetize_ssmt_script module

idmtools ssmt script.

This script is used on the server side only and is not meant to be run on a local machine.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

exception idmtools_platform_comps.utils.assetize_output.assetize_ssmt_script.NoFileFound[source]

Bases: Exception

idmtools_platform_comps.utils.assetize_output.assetize_ssmt_script.create_asset_collection(file_list: Set[Tuple[str, str, UUID, int]], ac_files: List[AssetCollectionFile], tags: Dict[str, str])[source]

Create an asset collection from the gathered files.

Parameters:
  • file_list – Set of file tuples to include

  • ac_files – AC Files

  • tags – Tags to add

Returns:

idmtools_platform_comps.utils.assetize_output.assetize_ssmt_script.get_argument_parser()[source]
idmtools_platform_comps.utils.assetize_output.assetize_ssmt_script.build_asset_tags(parsed_args: Namespace) Dict[str, str][source]

Builds our asset tag dict from tags.

Parameters:

parsed_args – Parsed arguments

Returns:

Dict of tags

idmtools_platform_comps.utils.download package

idmtools download workitem.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.download Submodules
idmtools_platform_comps.utils.download.download module

idmtools download work item output.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.download.download.CompressType(value)[source]

Bases: Enum

Defines the compression types we support.

lzma typically offers the best balance between speed and compression ratio.

lzma = 'lzma'
deflate = 'deflate'
bz = 'bz'
class idmtools_platform_comps.utils.download.download.DownloadWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>, output_path: str = <factory>, extract_after_download: bool = True, delete_after_download: bool = True, zip_name: str = 'output.zip', compress_type: ~idmtools_platform_comps.utils.download.download.CompressType = None)[source]

Bases: FileFilterWorkItem

DownloadWorkItem provides a utility to download items through a workitem with compression.

The main advantage of this over Analyzers is the compression. It is most effective when the items to download contain many similar files. For example, an experiment with 1000 simulations producing similar output can benefit greatly from downloading through this method.

Notes

  • TODO Link examples here. A usage sketch appears at the end of this class listing.

output_path: str
extract_after_download: bool = True
delete_after_download: bool = True
zip_name: str = 'output.zip'
compress_type: CompressType = None
wait(wait_on_done_progress: bool = True, timeout: int | None = None, refresh_interval=None, platform: IPlatform | None = None) None[source]

Waits on the Download WorkItem to finish. This first waits on any dependent items to finish (Experiment/Simulation/WorkItems).

Parameters:
  • wait_on_done_progress – When set to true, a progress bar will be shown from the item

  • timeout – Timeout for waiting on item. If none, wait will be forever

  • refresh_interval – How often to refresh progress

  • platform – Platform

Returns:

None. Files are downloaded locally when the item succeeds.

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>, output_path: str = <factory>, extract_after_download: bool = True, delete_after_download: bool = True, zip_name: str = 'output.zip', compress_type: ~idmtools_platform_comps.utils.download.download.CompressType = None) None
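A usage sketch for DownloadWorkItem, assuming `experiment` is a completed idmtools Experiment (the environment name, patterns, and output path are assumptions):

    from idmtools.core.platform_factory import Platform
    from idmtools_platform_comps.utils.download.download import CompressType, DownloadWorkItem

    platform = Platform("CALCULON")
    dl = DownloadWorkItem(
        related_experiments=[experiment],
        file_patterns=["output/*.csv"],
        output_path="downloaded_output",
        compress_type=CompressType.lzma,  # lzma typically balances speed and ratio
    )
    # Runs the work item, then downloads and extracts the archive locally
    dl.run(wait_until_done=True, platform=platform)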
idmtools_platform_comps.utils.download.download_ssmt module

idmtools download workitem ssmt script.

This script is meant to be run remotely on SSMT, not locally.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.download.download_ssmt.get_argument_parser()[source]
idmtools_platform_comps.utils.download.download_ssmt.create_archive_from_files(args: Namespace, files, files_from_ac, compress_type: str = 'lzma')[source]
idmtools_platform_comps.utils.python_requirements_ac package

idmtools python requirement ac output.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.python_requirements_ac Submodules
idmtools_platform_comps.utils.python_requirements_ac.create_asset_collection module

idmtools create asset collection script.

This is part of the RequirementsToAssetCollection tool. This is run on SSMT to convert installed files to an AssetCollection.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.python_requirements_ac.create_asset_collection.build_asset_file_list(prefix='L')[source]

Utility function to build all library files.

Parameters:

prefix – used to identify library files

Returns: file paths as a list

idmtools_platform_comps.utils.python_requirements_ac.create_asset_collection.get_first_simulation_of_experiment(exp_id)[source]

Retrieve the first simulation from an experiment.

Parameters:

exp_id – user input (experiment id)

Returns: the first simulation of the experiment

idmtools_platform_comps.utils.python_requirements_ac.create_asset_collection.main()[source]

Main entry point for our create asset collection script.

idmtools_platform_comps.utils.python_requirements_ac.install_requirements module

idmtools script to run on Slurm to install python files.

This is part of the RequirementsToAssetCollection tool. This will run on the HPC in an Experiment to install the python requirements as output that will be converted to an AssetCollection later.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.python_requirements_ac.install_requirements.install_packages_from_requirements(python_paths=None)[source]

Install our packages to a local directory.

Parameters:

python_paths – system Python path

Returns: None

idmtools_platform_comps.utils.python_requirements_ac.install_requirements.set_python_dates()[source]

Set Python files to the same dates so we don’t create pyc files with differing dates.

Pyc files embed the date, so this is a workaround for that behaviour.

idmtools_platform_comps.utils.python_requirements_ac.install_requirements.compile_all(python_paths=None)[source]

Compile all the python files to pyc.

This is useful to reduce how often compilation happens, since Python will be an asset.

idmtools_platform_comps.utils.python_requirements_ac.requirements_to_asset_collection module

idmtools requirements to asset collection.

This is the entry point for users to use RequirementsToAssetCollection tool.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.python_requirements_ac.requirements_to_asset_collection.RequirementsToAssetCollection(platform: COMPSPlatform | None = None, name: str = 'install custom requirements', requirements_path: str | None = None, pkg_list: list | None = None, local_wheels: list | None = None, asset_tags: dict | None = None)[source]

Bases: object

RequirementsToAssetCollection provides a utility to install python packages into an asset collection.

Notes

  • TODO - Incorporate examples in these docs. A usage sketch appears after the run method below.

platform: COMPSPlatform = None

Platform object

name: str = 'install custom requirements'

Name of experiment when installing requirements

requirements_path: str = None

Path to requirements file

pkg_list: list = None

list of packages

local_wheels: list = None

list of wheel files locally to upload and install

asset_tags: dict = None
property checksum

Calculate checksum on the requirements file.

Returns:

The md5 of the requirements.

property md5_tag

Get unique key for our requirements + target.

Returns:

The md5 tag.

property requirements

Requirements property. We calculate this using consolidate_requirements.

Returns:

Consolidated requirements.

init_platform()[source]

Initialize the platform.

run(rerun=False)[source]

Run our utility.

The working logic of this utility:
  1. check if asset collection exists for given requirements, return ac id if exists

  2. create an Experiment to install the requirements on COMPS

  3. create a WorkItem to create an Asset Collection

Returns: return ac id based on the requirements if Experiment and WorkItem Succeeded

Raises:

Exception - If an error happens on workitem

Notes

  • TODO The exceptions here should be rewritten to parse errors from remote system like AssetizeOutputs
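A usage sketch for RequirementsToAssetCollection (the environment name and requirements file are assumptions):

    from idmtools.core.platform_factory import Platform
    from idmtools_platform_comps.utils.python_requirements_ac.requirements_to_asset_collection import RequirementsToAssetCollection

    platform = Platform("CALCULON")
    pl = RequirementsToAssetCollection(platform, requirements_path="requirements.txt")
    # Returns the id of the created (or previously cached) asset collection
    ac_id = pl.run()
    print(f"Asset collection id: {ac_id}")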

save_updated_requirements()[source]

Save consolidated requirements to a file requirements_updated.txt.

Returns:

None

retrieve_ac_by_tag(md5_check=None)[source]

Retrieve comps asset collection given ac tag.

Parameters:

md5_check – also can use custom md5 string as search tag

Returns: comps asset collection

retrieve_ac_from_wi(wi)[source]

Retrieve ac id from file ac_info.txt saved by WI.

Parameters:

wi – SSMTWorkItem (which was used to create ac from library)

Returns: COMPS asset collection

add_wheels_to_assets(experiment)[source]

Add wheels to assets of our experiment.

Parameters:

experiment – Experiment to add assets to

Returns:

None

run_experiment_to_install_lib()[source]

Create an Experiment which will run another py script to install requirements.

Returns: Experiment created

run_wi_to_create_ac(exp_id)[source]

Create a WorkItem which will run another py script to create a new asset collection.

Parameters:

exp_id – the Experiment id (which installed requirements)

Returns: work item created

consolidate_requirements()[source]

Combine requirements and dynamic requirements (a list).

We do the following:
  • get the latest version of package if version is not provided

  • dynamic requirements will overwrite the requirements file

Returns: the consolidated requirements (as a list)

__init__(platform: COMPSPlatform | None = None, name: str = 'install custom requirements', requirements_path: str | None = None, pkg_list: list | None = None, local_wheels: list | None = None, asset_tags: dict | None = None) None
idmtools_platform_comps.utils.ssmt_utils package

idmtools ssmt utils.

These tools are meant to be used server-side within SSMT.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.ssmt_utils Submodules
idmtools_platform_comps.utils.ssmt_utils.common module

idmtools common ssmt tools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.ssmt_utils.common.ensure_debug_logging()[source]

Ensure we have debug logging enabled in idmtools.

idmtools_platform_comps.utils.ssmt_utils.common.setup_verbose(args: Namespace)[source]

Setup verbose logging for ssmt.

idmtools_platform_comps.utils.ssmt_utils.common.login_to_env()[source]

Ensure we are logged in to COMPS client.

idmtools_platform_comps.utils.ssmt_utils.common.get_error_handler_dump_config_and_error(job_config)[source]

Define our exception handler for ssmt.

This exception handler writes an “error_reason.json” file to the job that contains error info with additional data.

Parameters:

job_config – Job config used to execute items

Returns:

Error handler for ssmt

idmtools_platform_comps.utils.ssmt_utils.file_filter module

idmtools ssmt file filter tools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.ssmt_utils.file_filter.get_common_parser(app_description)[source]

Creates a common argument parser used with the file filter functions.

idmtools_platform_comps.utils.ssmt_utils.file_filter.gather_files(directory: str, file_patterns: List[str], exclude_patterns: List[str] | None = None, assets: bool = False, prefix: str | None = None, filename_format_func: Callable[[str], str] | None = None) Set[Tuple[str, str, UUID, int]][source]

Gather file_list.

Parameters:
  • directory – Directory to gather from

  • file_patterns – List of file patterns

  • exclude_patterns – List of patterns to exclude

  • assets – Should assets be included

  • prefix – Prefix for file_list

  • filename_format_func – Function that can format the filename

Returns:

Files that match the patterns.

idmtools_platform_comps.utils.ssmt_utils.file_filter.is_file_excluded(filename: str, exclude_patterns: List[str]) bool[source]

Is file excluded by excluded patterns.

Parameters:
  • filename – File to filter

  • exclude_patterns – List of file patterns to exclude

Returns:

True if the file is excluded

Gather files from different related entities.

Parameters:
  • work_item – Work item to gather from

  • file_patterns – List of File Patterns

  • exclude_patterns – List of Exclude patterns

  • assets – Should items be gathered from Assets Directory

  • simulation_prefix_format_str – Format string for prefix of Simulations

  • work_item_prefix_format_str – Format string for prefix of WorkItem

  • entity_filter_func – Function to filter entities

  • filename_format_func – Filename filter function

Returns:

Set of File Tuples in format Filename, Destination Name, and Checksum

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_experiments(assets: bool, entity_filter_func: Callable[[CommissionableEntity], bool], exclude_patterns_compiles: List, file_patterns: List[str], futures: List[Future], pool: ThreadPoolExecutor, simulation_prefix_format_str: str, work_item: WorkItem, filename_format_func: Callable[[str], str])[source]

Filter Experiments outputs using our patterns.

Parameters:
  • assets – Assets to filter

  • entity_filter_func – Function to filter entities

  • exclude_patterns_compiles – List of patterns to exclude

  • file_patterns – File patterns to match

  • futures – Future queue

  • pool – Pool to execute jobs on

  • simulation_prefix_format_str – Format string for prefix of Simulations

  • work_item – Parent WorkItem

  • filename_format_func – Function to filter filenames

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.get_simulation_prefix(parent_work_item: WorkItem, simulation: Simulation, simulation_prefix_format_str: str, experiment: Experiment | None = None) str[source]

Get Simulation Prefix.

Parameters:
  • parent_work_item – Parent workitem

  • simulation – Simulation to form the prefix for

  • simulation_prefix_format_str – Prefix format string

  • experiment – Optional experiment to be used with the

Returns:

Name of the simulation

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_experiment_assets(work_item: WorkItem, assets: bool, entity_filter_func: Callable[[CommissionableEntity], bool], exclude_patterns_compiles: List, experiment: Experiment, file_patterns: List[str], futures: List[Future], pool: ThreadPoolExecutor, simulation_prefix_format_str: str, simulations: List[Simulation], filename_format_func: Callable[[str], str])[source]

Filter experiment assets. This method uses the first simulation to gather experiment assets.

Parameters:
  • work_item – Parent Workitem

  • assets – Whether assets should be matched

  • entity_filter_func – Entity Filter Function

  • exclude_patterns_compiles – List of files to exclude

  • experiment – Experiment

  • file_patterns – File patterns to filter

  • futures – List of futures

  • pool – Pool to submit search jobs to

  • simulation_prefix_format_str – Format string for simulation

  • simulations – List of simulations

  • filename_format_func – Name function for filename

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_simulations_files(assets: bool, entity_filter_func: Callable[[CommissionableEntity], bool], exclude_patterns_compiles: List, file_patterns: List[str], futures: List[Future], pool: ThreadPoolExecutor, simulation_prefix_format_str: str, work_item: WorkItem, filename_format_func: Callable[[str], str])[source]

Filter Simulations files.

Parameters:
  • assets – Whether assets should be matched

  • entity_filter_func – Entity Filter Function

  • exclude_patterns_compiles – List of files to exclude

  • file_patterns – File patterns to filter

  • futures – List of futures

  • pool – Pool to submit search jobs to

  • simulation_prefix_format_str – Format string for simulation

  • work_item

  • filename_format_func – Filename function

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_simulation_list(assets: bool, entity_filter_func: Callable[[CommissionableEntity], bool], exclude_patterns_compiles: List, file_patterns: List[str], futures: List[Future], pool: ThreadPoolExecutor, simulation_prefix_format_str: str, simulations: List[Simulation], work_item: WorkItem, experiment: Experiment | None = None, filename_format_func: Callable[[str], str] | None = None)[source]

Filter simulations list. This method is used for experiments and simulations.

Parameters:
  • assets – Whether assets should be matched

  • entity_filter_func – Entity Filter Function

  • exclude_patterns_compiles – List of files to exclude

  • file_patterns – File patterns to filter

  • futures – List of futures

  • pool – Pool to submit search jobs to

  • simulation_prefix_format_str – Format string for simulation

  • simulations – List of simulations

  • work_item – Parent workitem

  • experiment – Optional experiment.

  • filename_format_func – Filename function

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_work_items_files(assets: bool, entity_filter_func: Callable[[CommissionableEntity], bool], exclude_patterns_compiles: List, file_patterns: List[str], futures: List[Future], pool: ThreadPoolExecutor, work_item: WorkItem, work_item_prefix_format_str: str, filename_format_func: Callable[[str], str])[source]

Filter work items files.

Parameters:
  • assets – Whether assets should be matched

  • entity_filter_func – Entity Filter Function

  • exclude_patterns_compiles – List of files to exclude

  • file_patterns – File patterns to filter

  • futures – List of futures

  • pool – Pool to submit search jobs to

  • work_item – WorkItem

  • work_item_prefix_format_str – WorkItemPrefix

  • filename_format_func – Filename function

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_ac_files(wi: WorkItem, patterns, exclude_patterns) List[AssetCollectionFile][source]

Filter Asset Collection File.

Parameters:
  • wi – WorkItem

  • patterns – File patterns

  • exclude_patterns – Exclude patterns

Returns:

List of filtered asset collection files

idmtools_platform_comps.utils.ssmt_utils.file_filter.get_asset_file_path(file)[source]

Get the asset file path, which combines the relative path and filename if a relative path is set.

Otherwise we use just the filename.

Parameters:

file – Filename

Returns:

Filename

exception idmtools_platform_comps.utils.ssmt_utils.file_filter.DuplicateAsset[source]

Bases: Exception

Error raised when we encounter output paths that overlap.

idmtools_platform_comps.utils.ssmt_utils.file_filter.ensure_no_duplicates(ac_files, files)[source]

Ensure there are no duplicate assets.

Parameters:
  • ac_files – Ac files

  • files – Simulation/Experiment/Workitem files

Returns:

None

Raises:

DuplicateAsset - if an asset with the same output path is found

idmtools_platform_comps.utils.ssmt_utils.file_filter.print_results(ac_files, files)[source]

Print Results.

Parameters:
  • ac_files – Ac Files

  • files – Files

Returns:

None

idmtools_platform_comps.utils.ssmt_utils.file_filter.apply_custom_filters(args: Namespace)[source]

Apply user defined custom filter functions.

The function performs the following workflow:

  1. Check whether any pre_run_func(s) are defined. If so, run each of them.

  2. Check for an entity_filter_func. This function lets us filter items (Experiments/Simulations/etc) directly. If not defined, we use a default function that returns True.

  3. If a filename format function is defined, we use it; otherwise we use the default, which just keeps the original file name.

Parameters:

args – argparse namespace.

Returns:

entity_filter_func and filename format func

idmtools_platform_comps.utils.ssmt_utils.file_filter.parse_filter_args_common(args: Namespace)[source]

Parse filter arguments from an argparse namespace.

We need this because we use filtering across multiple scripts.

Parameters:

args – Argparse args

Returns:

entity_filter_func and filename format func

idmtools_platform_comps.utils.ssmt_utils.file_filter.filter_files_and_assets(args: Namespace, entity_filter_func: Callable[[CommissionableEntity], bool], wi: WorkItem, filename_format_func: Callable[[str], str]) Tuple[Set[Tuple[str, str, UUID, int]], List[AssetCollectionFile]][source]

Filter files and assets using provided parameters.

Parameters:
  • args – Argparse details

  • entity_filter_func – Optional filter function for entities. This function is run on every item. If it returns True, we include the item

  • wi – WorkItem we are running in

  • filename_format_func – Filename format function that allows you to customize how we format filenames for output.

Returns:

Files and assets that match the filter.

idmtools_platform_comps.utils Submodules
idmtools_platform_comps.utils.download_experiment module

idmtools download experiment tools.

This allows downloading experiments for local testing.

Notes

  • We need some details around this somewhere. Maybe some documentation?

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.download_experiment.get_script_extension()[source]

Determine extension to write out file as.

idmtools_platform_comps.utils.download_experiment.download_asset(asset, path)[source]

Download a single asset.

idmtools_platform_comps.utils.download_experiment.write_script(simulation: Simulation, path)[source]

Writes a shell script to execute a simulation.

Parameters:
  • simulation

  • path

Returns:

None

idmtools_platform_comps.utils.download_experiment.write_experiment_script(experiment: Experiment, path: str)[source]

Write an experiment script.

Parameters:
  • experiment

  • path

Returns:

None

idmtools_platform_comps.utils.download_experiment.download_experiment(experiment: Experiment, destination: str)[source]

Downloads an experiment to a local directory.

Useful for troubleshooting experiments.

Parameters:
  • experiment – Experiment to download

  • destination – Destination directory

Returns:

None
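Example (a hedged sketch; the experiment id is a placeholder, and an authenticated COMPS platform with a matching configuration block is assumed):

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_platform_comps.utils.download_experiment import download_experiment

platform = Platform("COMPS")  # assumes a matching configuration block or alias
experiment = Experiment.from_id("11111111-2222-3333-4444-555555555555")  # hypothetical id
download_experiment(experiment, destination="./downloaded_experiment")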

idmtools_platform_comps.utils.file_filter_workitem module

idmtools FileFilterWorkItem is an interface for SSMT commands that act on files in WorkItems using filters.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

exception idmtools_platform_comps.utils.file_filter_workitem.CrossEnvironmentFilterNotSupport[source]

Bases: Exception

Defines the cross-environment error raised when a user tries to filter across multiple COMPS environments.

exception idmtools_platform_comps.utils.file_filter_workitem.AtLeastOneItemToWatch[source]

Bases: Exception

Defines the error raised when there are no items being watched by a FileFilterWorkItem.

class idmtools_platform_comps.utils.file_filter_workitem.FileFilterWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>)[source]

Bases: SSMTWorkItem, ABC

Defines our filtering workitem base that is used by assetize outputs and download work items.

file_patterns: List[str]

List of glob patterns. See https://docs.python.org/3.7/library/glob.html for details on the patterns.

exclude_patterns: List[str]
include_assets: bool = False

Include Assets directories. This allows patterns to also include items from the assets directory

simulation_prefix_format_str: str = '{simulation.id}'

Formatting pattern for directory names. Simulations tend to have similar outputs, so by default the workitem puts each simulation's outputs in a directory named with the simulation id

work_item_prefix_format_str: str = None

WorkFlowItem outputs will not have a folder prefix by default. If you are filtering multiple work items, you may want to set this to “{workflow_item.id}”

no_simulation_prefix: bool = False

Simulation outputs will not have a folder prefix. Useful when you are filtering a single simulation

verbose: bool = False

Enable verbose

pre_run_functions: List[Callable]

Python functions that will be run before the filtering script. Each function must be named

entity_filter_function: Callable[[CommissionableEntity], bool] = None

Python function to filter entities. The function receives a COMPS CommissionableEntity and returns True to include the item or False to exclude it

filename_format_function: Callable[[str], str] = None

Allows you to pass a custom function that is called on each filename. This can be used for advanced mapping or renaming of files

dry_run: bool = False

Enables running the job without executing the filter. It instead produces a list of the files that would be included in the final output
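Example (a hedged sketch of configuring these fields on AssetizeOutput, a concrete subclass of this base; the import path and the entity filter below are illustrative assumptions, not a documented contract):

from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput

def keep_baseline(entity) -> bool:
    # hypothetical filter: include only entities whose name contains "baseline"
    return "baseline" in (entity.name or "")

ao = AssetizeOutput(
    file_patterns=["output/*.csv"],   # glob patterns to include
    exclude_patterns=["*.log"],       # patterns to drop
    simulation_prefix_format_str="{simulation.id}",
    entity_filter_function=keep_baseline,
    dry_run=True,                     # list what would be included without creating assets
)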

create_command() str[source]

Builds our command line for the SSMT Job.

Returns:

Command string

clear_exclude_patterns()[source]

Clear Exclude Patterns will remove all current rules.

Returns:

None

pre_creation(platform: IPlatform) None[source]

Pre-Creation.

Parameters:

platform – Platform

Returns:

None

total_items_watched() int[source]

Returns the number of items watched.

Returns:

Total number of items watched

run_after_by_id(item_id: str, item_type: ItemType, platform: COMPSPlatform | None = None)[source]

Runs the workitem after an existing item finishes.

Parameters:
  • item_id – ItemId

  • item_type – ItemType

  • platform – Platform

Returns:

None

Raises:

ValueError - If item_type is not an experiment, simulation, or workflow item

from_items(item: Experiment | Simulation | IWorkflowItem | List[Experiment | Simulation | IWorkflowItem])[source]

Add items to load assets from.

Parameters:

item – Item or list of items to watch.

Returns:

None

Raises:

ValueError - If any items specified are not an Experiment, Simulation or WorkItem

Notes

We should add suite support in the future if possible. This should be done client-side by converting the suite to a list of experiments.

wait(wait_on_done_progress: bool = True, timeout: int | None = None, refresh_interval=None, platform: COMPSPlatform | None = None) None[source]

Waits on the Filter Workitem to finish. This first waits on any dependent items to finish (Experiments/Simulations/WorkItems).

Parameters:
  • wait_on_done_progress – When set to true, a progress bar will be shown from the item

  • timeout – Timeout for waiting on item. If none, wait will be forever

  • refresh_interval – How often to refresh progress

  • platform – Platform

Returns:

AssetCollection created if item succeeds

fetch_error(print_error: bool = True) Dict[source]

Fetches the error from a WorkItem.

Parameters:

print_error – Whether the error should be printed. If False, the error is only returned

Returns:

Error info

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = 'idmtools workflow item', assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', docker_image: str = None, command: dataclasses.InitVar[str] = None, file_patterns: ~typing.List[str] = <factory>, exclude_patterns: ~typing.List[str] = <factory>, include_assets: bool = False, simulation_prefix_format_str: str = '{simulation.id}', work_item_prefix_format_str: str = None, no_simulation_prefix: bool = False, verbose: bool = False, pre_run_functions: ~typing.List[~typing.Callable] = <factory>, entity_filter_function: ~typing.Callable[[~COMPS.Data.CommissionableEntity.CommissionableEntity], bool] = None, filename_format_function: ~typing.Callable[[str], str] = None, dry_run: bool = False, _ssmt_script: str = None, _ssmt_depends: ~typing.List[str] = <factory>) None
idmtools_platform_comps.utils.general module

idmtools general status.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.general.fatal_code(e: Exception) bool[source]

Used to determine whether we should stop retrying, based on the request status code.

Parameters:

e – Exception to check

Returns:

True if the exception is a request error whose status code is 404
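Example (fatal_code is shaped like a giveup predicate for the backoff library; a hedged sketch of that usage, with an illustrative retry policy):

import backoff
import requests
from idmtools_platform_comps.utils.general import fatal_code

@backoff.on_exception(backoff.expo, requests.exceptions.RequestException, max_tries=5, giveup=fatal_code)
def fetch(url: str) -> requests.Response:
    response = requests.get(url)
    response.raise_for_status()  # raises HTTPError (a RequestException) on 4xx/5xx
    return response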

idmtools_platform_comps.utils.general.convert_comps_status(comps_status: SimulationState) EntityStatus[source]

Convert status from COMPS to IDMTools.

Parameters:

comps_status – Status in Comps

Returns:

EntityStatus

idmtools_platform_comps.utils.general.convert_comps_workitem_status(comps_status: WorkItemState) EntityStatus[source]

Convert status from COMPS to IDMTools.

Created = 0  # WorkItem has been saved to the database
CommissionRequested = 5  # WorkItem is ready to be processed by the next available worker of the correct type
Commissioned = 10  # WorkItem has been commissioned to a worker of the correct type and is beginning execution
Validating = 30  # WorkItem is being validated
Running = 40  # WorkItem is currently running
Waiting = 50  # WorkItem is waiting for dependent items to complete
ResumeRequested = 60  # Dependent items have completed and WorkItem is ready to be processed by the next available worker of the correct type
CancelRequested = 80  # WorkItem cancellation was requested
Canceled = 90  # WorkItem was successfully canceled
Resumed = 100  # WorkItem has been claimed by a worker of the correct type and is resuming
Canceling = 120  # WorkItem is in the process of being canceled by the worker
Succeeded = 130  # WorkItem completed successfully
Failed = 140  # WorkItem failed

Parameters:

comps_status – Status in Comps

Returns:

EntityStatus

idmtools_platform_comps.utils.general.clean_experiment_name(experiment_name: str) str[source]

Enforce any COMPS-specific demands on experiment names.

Parameters:

experiment_name – name of the experiment

Returns:

the experiment name allowed for use

idmtools_platform_comps.utils.general.get_file_from_collection(platform: IPlatform, collection_id: UUID, file_path: str) bytearray[source]

Retrieve a file from an asset collection.

Parameters:
  • platform – Platform object to use

  • collection_id – Asset Collection ID

  • file_path – Path within collection

Examples:

>>> import uuid
>>> get_file_from_collection(platform, uuid.UUID("fc461146-3b2a-441f-bc51-0bff3a9c1ba0"), "StdOut.txt")

Returns:

Object Byte Array

idmtools_platform_comps.utils.general.get_file_as_generator(file: SimulationFile | AssetCollectionFile | AssetFile | WorkItemFile | OutputFileMetadata, chunk_size: int = 128, resume_byte_pos: int | None = None) Generator[bytearray, None, None][source]

Get file as a generator.

Parameters:
  • file – File to stream contents through a generator

  • chunk_size – Size of chunks to load

  • resume_byte_pos – Optional start of download

Returns:

Generator for file content
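Example (a hedged sketch of streaming a remote file to disk; obtaining sim_file, a COMPS SimulationFile, is assumed and not shown):

from idmtools_platform_comps.utils.general import get_file_as_generator

with open("StdOut.txt", "wb") as out:
    for chunk in get_file_as_generator(sim_file, chunk_size=8192):
        out.write(chunk)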

class idmtools_platform_comps.utils.general.Workitem[source]

Bases: object

SimpleItem to define workitem for proxy purposes.

Notes

  • TODO deprecate this if possible

idmtools_platform_comps.utils.general.get_asset_for_comps_item(platform: IPlatform, item: IEntity, files: List[str], cache=None, load_children: List[str] | None = None, comps_item: Experiment | Workitem | Simulation | None = None) Dict[str, bytearray][source]

Retrieve assets from an Entity(Simulation, Experiment, WorkItem).

Parameters:
  • platform – Platform Object to use

  • item – Item to fetch assets from

  • files – List of file names to retrieve

  • cache – Cache object to use

  • load_children – Optional Load children fields

  • comps_item – Optional comps item

Returns:

Dictionary in structure of filename -> bytearray

idmtools_platform_comps.utils.general.update_item(platform: IPlatform, item_id: str, item_type: ItemType, tags: dict | None = None, name: str | None = None)[source]

Utility function to update an existing COMPS experiment/simulation/workitem's tags.

For example, you can add or update a simulation's tags once its post-processing is done, to mark the simulation with more meaningful text.

Parameters:
  • platform – Platform

  • item_id – experiment/simulation/workitem id

  • item_type – The type of the object to be retrieved

  • tags – tags dict for update

  • name – name of experiment/simulation/workitem

Returns:

None
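Example (a hedged sketch; the simulation id is a placeholder and platform is an already-constructed COMPSPlatform):

from idmtools.core import ItemType
from idmtools_platform_comps.utils.general import update_item

update_item(platform, "11111111-2222-3333-4444-555555555555", ItemType.SIMULATION, tags={"post_processed": "yes"})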

idmtools_platform_comps.utils.general.generate_ac_from_asset_md5(file_name: str, asset_md5: [<class 'str'>, <class 'uuid.UUID'>], platform: ~idmtools.entities.iplatform.IPlatform | None = None, tags: dict | None = None)[source]

Get an asset collection by asset id (md5).

Parameters:
  • file_name – file name string

  • asset_md5 – asset md5 string

  • platform – Platform object

  • tags – tags dict for asset collection

Returns:

COMPS AssetCollection

idmtools_platform_comps.utils.general.generate_ac_from_asset_md5_file(file_path: str)[source]

Get an asset collection by file path.

Parameters:

file_path – file path

Returns:

COMPS AssetCollection

idmtools_platform_comps.utils.general.save_sif_asset_md5_from_ac_id(ac_id: str)[source]

Get the md5 of the asset in the singularity asset collection.

Parameters:

ac_id – asset collection id

idmtools_platform_comps.utils.linux_mounts module

idmtools set linux mounts.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.linux_mounts.set_linux_mounts(platform: IPlatform, linux_environment: str = None) None[source]

For the COMPS Platform, check and set linux mounts.

Parameters:
  • platform – idmtools COMPS Platform

  • linux_environment – platform environment

Returns:

None

idmtools_platform_comps.utils.linux_mounts.clear_linux_mounts(platform: IPlatform, linux_environment: str = None) None[source]

For the COMPS Platform, check and clear linux mounts.

Parameters:
  • platform – idmtools COMPS Platform

  • linux_environment – platform environment

Returns:

None

idmtools_platform_comps.utils.linux_mounts.get_workdir_from_simulations(platform: IPlatform, comps_simulations: List[Simulation]) Dict[str, str][source]

Get COMPS simulations' working directories.

Parameters:
  • platform – idmtools COMPS Platform

  • comps_simulations – COMPS Simulations

Returns:

dictionary with simulation id as key and working directory as value

idmtools_platform_comps.utils.lookups module

idmtools comps lookups.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.lookups.get_experiment_by_id(exp_id, query_criteria: QueryCriteria = None) Experiment[source]

Get an experiment by id.

idmtools_platform_comps.utils.lookups.get_simulation_by_id(sim_id, query_criteria: QueryCriteria = None) Simulation[source]

Fetches simulation by id and optional query criteria.

Wrapped in additional retry logic; used by other lookup methods.

Parameters:
  • sim_id

  • query_criteria – Optional QueryCriteria to search with

Returns:

Simulation with ID

idmtools_platform_comps.utils.lookups.get_all_experiments_for_user(user: str) List[Experiment][source]

Returns all the experiments for a specific user.

Parameters:

user – username to locate

Returns:

Experiments for a user

idmtools_platform_comps.utils.lookups.get_simulations_from_big_experiments(experiment_id)[source]

Get simulations for a large experiment. This allows us to pull simulations in chunks.

Parameters:

experiment_id – Experiment id to load

Returns:

List of simulations
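Example (a hedged sketch combining these lookups; an authenticated COMPS session is assumed and the username is a placeholder):

from idmtools_platform_comps.utils.lookups import get_all_experiments_for_user, get_simulations_from_big_experiments

for exp in get_all_experiments_for_user("jdoe"):  # hypothetical username
    sims = get_simulations_from_big_experiments(exp.id)
    print(exp.id, len(sims))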

idmtools_platform_comps.utils.package_version module

idmtools tools to filter package versions for asset collection requirements.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.package_version.PackageHTMLParser[source]

Bases: HTMLParser, ABC

Base Parser for our other parsers.

previous_tag = None
__init__()[source]

Constructor.

pkg_version = None
class idmtools_platform_comps.utils.package_version.LinkHTMLParser[source]

Bases: PackageHTMLParser

Parse hrefs from links.

handle_starttag(tag, attrs)[source]

Parse links and extra hrefs.

class idmtools_platform_comps.utils.package_version.LinkNameParser[source]

Bases: PackageHTMLParser

Provides parsing of packages from pypi/artifactory.

We parse links that match version patterns

ver_pattern = re.compile('^[\\d\\.brcdev\\+nightly]+$')
handle_starttag(tag, attrs)[source]

Handle begin of links.

handle_endtag(tag)[source]

End link tags.

handle_data(data)[source]

Process links.

idmtools_platform_comps.utils.package_version.get_latest_package_version_from_pypi(pkg_name, display_all=False)[source]

Utility to get the latest version for a given package name.

Parameters:
  • pkg_name – package name given

  • display_all – determine if output all package releases

Returns: the latest version of the given package

idmtools_platform_comps.utils.package_version.get_latest_pypi_package_version_from_artifactory(pkg_name, display_all=False, base_version: str | None = None)[source]

Utility to get the latest version for a given package name.

Parameters:
  • pkg_name – package name given

  • display_all – determine if output all package releases

  • base_version – Base version

Returns: the latest version of the given package

idmtools_platform_comps.utils.package_version.get_pypi_package_versions_from_artifactory(pkg_name, display_all=False, base_version: str | None = None, exclude_pre_release: bool = True)[source]

Utility to get versions of a package in artifactory.

Parameters:
  • pkg_name – package name given

  • display_all – determine if output all package releases

  • base_version – Base version

  • exclude_pre_release – Exclude any prerelease versions

Returns: the matching versions of the given package

idmtools_platform_comps.utils.package_version.get_latest_ssmt_image_version_from_artifactory(pkg_name='comps_ssmt_worker', base_version: str | None = None, display_all=False)[source]

Utility to get the latest version for a given package name.

Parameters:
  • pkg_name – package name given

  • base_version – Optional base version. Versions above this will not be added.

  • display_all – determine if output all package releases

Returns: the latest version of the given package

idmtools_platform_comps.utils.package_version.get_docker_manifest(image_path='idmtools/comps_ssmt_worker', repo_base='https://packages.idmod.org/artifactory/list/docker-production')[source]

Get the docker manifest from IDM Artifactory. It mimics the latest tag even when no latest tag is defined.

Parameters:
  • image_path – Path of docker image we want

  • repo_base – Base of the repo

Returns:

None

Raises:

ValueError - When the manifest cannot be found

idmtools_platform_comps.utils.package_version.get_digest_from_docker_hub(repo, tag='latest')[source]

Get the digest for image from docker.

Parameters:
  • repo – string, repository (e.g. ‘library/fedora’)

  • tag – string, tag of the repository (e.g. ‘latest’)

idmtools_platform_comps.utils.package_version.fetch_versions_from_server(pkg_url: str, parser: ~typing.Type[~idmtools_platform_comps.utils.package_version.PackageHTMLParser] = <class 'idmtools_platform_comps.utils.package_version.LinkHTMLParser'>) List[str][source]

Fetch all versions from server.

Parameters:
  • pkg_url – Url to fetch

  • parser – Parser to use

Returns:

All the releases for a package

idmtools_platform_comps.utils.package_version.fetch_versions_from_artifactory(pkg_name: str, parser: ~typing.Type[~idmtools_platform_comps.utils.package_version.PackageHTMLParser] = <class 'idmtools_platform_comps.utils.package_version.LinkHTMLParser'>) List[str][source]

Fetch all versions from server.

Parameters:
  • pkg_name – Package name to fetch

  • parser – Parser to use

Returns:

Available releases

idmtools_platform_comps.utils.package_version.get_versions_from_site(pkg_url, base_version: str | None = None, display_all=False, parser: ~typing.Type[~idmtools_platform_comps.utils.package_version.PackageHTMLParser] = <class 'idmtools_platform_comps.utils.package_version.LinkNameParser'>, exclude_pre_release: bool = True)[source]

Utility to get the available versions for a package.

The default properties filter out pre-releases. You can also include a base version to only list items starting with a particular version

Parameters:
  • pkg_url – package name given

  • base_version – Optional base version. Versions above this will not be added. For example, to get versions 1.18.5, 1.18.4, 1.18.3, 1.18.2 pass 1.18

  • display_all – determine if output all package releases

  • parser – Parser needs to be a HTMLParser that returns a pkg_versions

  • exclude_pre_release – Exclude prerelease versions

Returns: the available versions of the given package

Raises:

ValueError - If the latest version cannot be determined

idmtools_platform_comps.utils.package_version.get_latest_version_from_site(pkg_url, base_version: str | None = None, display_all=False, parser: ~typing.Type[~idmtools_platform_comps.utils.package_version.PackageHTMLParser] = <class 'idmtools_platform_comps.utils.package_version.LinkNameParser'>, exclude_pre_release: bool = True)[source]

Utility to get the latest version for a given package name.

Parameters:
  • pkg_url – package name given

  • base_version – Optional base version. Versions above this will not be added.

  • display_all – determine if output all package releases

  • parser – Parser needs to be a HTMLParser that returns a pkg_versions

  • exclude_pre_release – Exclude pre-release versions

Returns: the latest version of the given package

idmtools_platform_comps.utils.package_version.fetch_package_versions_from_pypi(pkg_name)[source]

Utility to fetch all versions of a given package name from PyPI.

Parameters:

pkg_name – package name given

Returns: the versions of the given package

idmtools_platform_comps.utils.package_version.fetch_package_versions(pkg_name, is_released=True, sort=True, display_all=False)[source]

Utility to fetch versions for a given package name.

Parameters:
  • pkg_name – package name given

  • is_released – get released version only

  • sort – make version sorted or not

  • display_all – determine if output all package releases

Returns: the versions of the given package

idmtools_platform_comps.utils.package_version.get_highest_version(pkg_requirement: str)[source]

Utility to get the highest valid version for a given package requirement.

Parameters:

pkg_requirement – package requirement given

Returns: the highest valid version of the package

idmtools_platform_comps.utils.package_version.get_latest_version(pkg_name)[source]

Utility to get the latest version for a given package name.

Parameters:

pkg_name – package name given

Returns: the latest version of the package
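Example (a hedged sketch of these version helpers; network access to PyPI/Artifactory is assumed, and the requirement string is illustrative):

from idmtools_platform_comps.utils.package_version import get_highest_version, get_latest_version

print(get_latest_version("idmtools"))          # latest released version of a package
print(get_highest_version("idmtools~=1.6.0"))  # highest version satisfying a hypothetical requirement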

idmtools_platform_comps.utils.package_version.get_latest_compatible_version(pkg_name, base_version=None, versions=None, validate=True)[source]

Utility to get the latest compatible version from a given version list.

Parameters:
  • base_version – Optional base version. Versions above this will not be added.

  • pkg_name – package name given

  • versions – user input of version list

  • validate – bool, if True, will validate base_version

Returns: the latest compatible version from versions

Raises:

Exception - If we cannot find a version

Notes

  • TODO - Make custom exception or use ValueError

idmtools_platform_comps.utils.python_version module

idmtools special comps hooks.

Notes

  • TODO update this to use new function plugins

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.python_version.platform_task_hooks(task, platform)[source]

Update python task with proper python executable.

Parameters:
  • task – PythonTask or CommandTask

  • platform – the platform the user is using

Returns: the rebuilt task

Notes

  • TODO revisit with SingularityTasks later

idmtools_platform_comps.utils.scheduling module

idmtools scheduling utils for comps.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_comps.utils.scheduling.default_add_workerorder_sweep_callback(simulation, file_name, file_path)[source]

Utility function to add an updated WorkOrder.json to each simulation as a linked file via the simulation task.

It first loads the original workorder file from local disk, then updates its Command field from each simulation object's simulation.task.command.cmd, then writes the updated command to WorkOrder.json and loads this file into the simulation

Parameters:
  • simulation – Simulation we are configuring

  • file_name – Filename to use

  • file_path – Path to file

Returns:

None

idmtools_platform_comps.utils.scheduling.default_add_schedule_config_sweep_callback(simulation, command: str | None = None, node_group_name: str = 'idm_cd', num_cores: int = 1, **config_opts)[source]

Default callback to be used for sweeps that affect a scheduling config.

idmtools_platform_comps.utils.scheduling.scheduled(simulation: Simulation)[source]

Determine if scheduling is defined on the simulation.

Parameters:

simulation – Simulation to check

Returns:

True if simulation.scheduling is defined and true.

idmtools_platform_comps.utils.scheduling.add_work_order(item: Experiment | Simulation | TemplatedSimulations, file_name: str = 'WorkOrder.json', file_path: str | PathLike = './WorkOrder.json')[source]

Adds workorder.json.

Parameters:
  • item – Item to add work order to

  • file_name – Workorder file name

  • file_path – Path to the file (locally)

Returns:

None

Raises:

ValueError - If the experiment is empty, or if the item is not an experiment, simulation, or TemplatedSimulations
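Example (a hedged sketch of attaching a local WorkOrder.json to an experiment built elsewhere):

from idmtools_platform_comps.utils.scheduling import add_work_order

add_work_order(experiment, file_name="WorkOrder.json", file_path="./WorkOrder.json")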

idmtools_platform_comps.utils.scheduling.add_schedule_config(item: Experiment | Simulation | TemplatedSimulations, command: str | None = None, node_group_name: str = 'idm_cd', num_cores: int = 1, **config_opts)[source]

Add scheduling config to an Item.

Scheduling config supports adding to Experiments, Simulations, and TemplatedSimulations

Parameters:
  • item – Item to add scheduling config to

  • command – Command to run

  • node_group_name – Node group name

  • num_cores – Num of cores to use

  • **config_opts – Additional config options

Returns:

None

Raises:

ValueError - If the experiment is empty, or if the item is not an experiment, simulation, or TemplatedSimulations

Notes

  • TODO refactor to reuse add_work_order if possible. The complication is the simulation command possibly
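Example (a hedged sketch; the extra keyword NumProcesses is an illustrative scheduling option passed through **config_opts, not a documented parameter):

from idmtools_platform_comps.utils.scheduling import add_schedule_config

add_schedule_config(experiment, command="python Assets/run.py", node_group_name="idm_cd", num_cores=2, NumProcesses=1)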

idmtools_platform_comps.utils.singularity_build module

idmtools singularity build workitem.

Notes

  • TODO add examples here.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.singularity_build.SingularityBuildWorkItem(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', definition_file: ~os.PathLike | str = None, definition_content: str = None, is_template: bool = False, template_args: ~typing.Dict[str, str] = <factory>, image_url: dataclasses.InitVar[str] = <property object>, image_name: str = None, image_tags: ~typing.Dict[str, str] = <factory>, library: str = None, section: ~typing.List[str] = <factory>, fix_permissions: bool = False, asset_collection: ~idmtools.assets.asset_collection.AssetCollection = None, additional_mounts: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, force: bool = False, disable_default_tags: bool = None, run_id: ~uuid.UUID = <factory>, _SingularityBuildWorkItem__digest: ~typing.Dict[str, str] = None, _SingularityBuildWorkItem__image_tag: str = None, _SingularityBuildWorkItem__rendered_template: str = None)[source]

Bases: InputDataWorkItem

Provides a wrapper to build utilizing the COMPS build server.

Notes

  • TODO add references to examples

definition_file: PathLike | str = None

Path to definition file

definition_content: str = None

Definition content. An alternative to the definition file

is_template: bool = False

Enables Jinja parsing of the definition file or content

template_args: Dict[str, str]

template_args

image_name: str = None

Destination image name

name: str = None

Name of the workitem

image_tags: Dict[str, str]

Tags to add to the container asset collection

library: str = None

Allows you to set a different library. The default library is "https://library.sylabs.io". See https://sylabs.io/guides/3.5/user-guide/cli/singularity_build.html

section: List[str]

only run specific section(s) of definition file (setup, post, files, environment, test, labels, none) (default [all])

fix_permissions: bool = False

build using user namespace to fake root user (requires a privileged installation)

asset_collection: AssetCollection = None
additional_mounts: List[str]

Additional Mounts

environment_variables: Dict[str, str]

Environment vars for remote build

force: bool = False

Force build

disable_default_tags: bool = None

Don’t include default tags

run_id: UUID
get_container_info() Dict[str, str][source]

Get container info.

Notes

  • TODO remove this

property image_url

Image Url

context_checksum() str[source]

Calculate the context checksum of a singularity build.

The context is the checksum of all the assets defined for input, the singularity definition file, and the environment variables

Returns:

Context checksum.

render_template() str | None[source]

Render the template. Only applies when is_template is True; the template is rendered using Jinja and the result is cached.

Returns:

Rendered Template

static find_existing_container(sbi: SingularityBuildWorkItem, platform: IPlatform = None) AssetCollection | None[source]

Find existing container.

Parameters:
  • sbi – SingularityBuildWorkItem to find existing container matching config

  • platform – Platform To load the object from

Returns:

Existing Asset Collection

pre_creation(platform: IPlatform) None[source]

Pre-Creation item.

Parameters:

platform – Platform object

Returns:

None

run(wait_until_done: bool = True, platform: IPlatform = None, wait_on_done_progress: bool = True, **run_opts) AssetCollection | None[source]

Run the build.

Parameters:
  • wait_until_done – Wait until build completes

  • platform – Platform to run on

  • wait_on_done_progress – Show progress while waiting

  • **run_opts – Extra run options

Returns:

Asset collection that was created if successful

wait(wait_on_done_progress: bool = True, timeout: int = None, refresh_interval=None, platform: IPlatform = None, wait_progress_desc: str = None) AssetCollection | None[source]

Waits on Singularity Build Work item to finish and fetches the resulting asset collection.

Parameters:
  • wait_on_done_progress – When set to true, a progress bar will be shown from the item

  • timeout – Timeout for waiting on item. If none, wait will be forever

  • refresh_interval – How often to refresh progress

  • platform – Platform

  • wait_progress_desc – Wait Progress Description Text

Returns:

AssetCollection created if item succeeds

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, platform_id: str = None, _platform: IPlatform = None, parent_id: str = None, _parent: IEntity = None, status: ~idmtools.core.enums.EntityStatus = None, tags: ~typing.Dict[str, ~typing.Any] = <factory>, _platform_object: ~typing.Any = None, _IRunnableEntity__pre_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, _IRunnableEntity__post_run_hooks: ~typing.List[~typing.Callable[[IRunnableEntity, IPlatform], None]] = <factory>, name: str = None, assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, item_name: dataclasses.InitVar[str] = None, asset_collection_id: dataclasses.InitVar[str] = None, transient_assets: ~idmtools.assets.asset_collection.AssetCollection = <factory>, asset_files: dataclasses.InitVar[FileList] = None, user_files: dataclasses.InitVar[FileList] = None, task: ~idmtools.entities.itask.ITask = None, related_experiments: list = <factory>, related_simulations: list = <factory>, related_suites: list = <factory>, related_work_items: list = <factory>, related_asset_collections: list = <factory>, work_item_type: str = None, work_order: dict = <factory>, plugin_key: str = '1.0.0.0_RELEASE', definition_file: ~os.PathLike | str = None, definition_content: str = None, is_template: bool = False, template_args: ~typing.Dict[str, str] = <factory>, image_url: dataclasses.InitVar[str] = <property object>, image_name: str = None, image_tags: ~typing.Dict[str, str] = <factory>, library: str = None, section: ~typing.List[str] = <factory>, fix_permissions: bool = False, asset_collection: ~idmtools.assets.asset_collection.AssetCollection = None, additional_mounts: ~typing.List[str] = <factory>, environment_variables: ~typing.Dict[str, str] = <factory>, force: bool = False, disable_default_tags: bool = None, run_id: ~uuid.UUID = <factory>, _SingularityBuildWorkItem__digest: ~typing.Dict[str, str] = None, _SingularityBuildWorkItem__image_tag: str = None, _SingularityBuildWorkItem__rendered_template: str = None) None
get_id_filename(prefix: str | None = None) str[source]

Determine the id filename. Mostly used when the user does not provide one.

The logic combines the prefix with either the definition file name minus its extension, or the image url with parts filtered out of the name.

Parameters:

prefix – Optional prefix.

Returns:

id file name

Raises:

ValueError - When the filename cannot be calculated

to_id_file(filename: str | PathLike | None = None, save_platform: bool = False)[source]

Create an ID File.

If the filename is not provided, it will be calculated from the definition file or from the docker pull

Parameters:
  • filename – Filename

  • save_platform – Save Platform info to file as well

Returns:

None
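Example (a hedged end-to-end sketch of a remote SIF build; the definition file name and the configuration block are placeholders):

from idmtools.core.platform_factory import Platform
from idmtools_platform_comps.utils.singularity_build import SingularityBuildWorkItem

platform = Platform("COMPS")  # assumes a matching configuration block or alias
sbi = SingularityBuildWorkItem(name="build ubuntu", definition_file="ubuntu.def", image_name="ubuntu.sif")
ac = sbi.run(wait_until_done=True, platform=platform)  # AssetCollection if the build succeeds
if ac:
    sbi.to_id_file("ubuntu.sif.id")  # record the resulting asset collection id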

idmtools_platform_comps.utils.spatial_output module

idmtools utility.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.utils.spatial_output.SpatialOutput[source]

Bases: object

The SpatialOutput class is used to parse data from a binary file (.bin).

__init__()[source]

Initialize an instance of SpatialOutput. This constructor does not take any parameters other than the implicit ‘self’.

classmethod from_bytes(bytes, filtered=False)[source]

Convert from bytes to class object.

Parameters:
  • bytes – bytes

  • filtered – flag for applying filter

to_dict()[source]

Convert to a dict.

Returns:

dict
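Example (a hedged sketch of parsing a spatial report; the report filename is a placeholder):

from idmtools_platform_comps.utils.spatial_output import SpatialOutput

with open("SpatialReport_Prevalence.bin", "rb") as f:  # hypothetical report file
    so = SpatialOutput.from_bytes(f.read())
print(so.to_dict().keys())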

idmtools_platform_comps Submodules
idmtools_platform_comps.comps_cli module

Define the comps cli spec.

Notes

  • We eventually need to deprecate this

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_cli.CompsCLI[source]

Bases: IPlatformCLI

Defines our CLI interface for COMPS using IPlatformCLI.

get_experiment_status(*args, **kwargs) NoReturn[source]

Experiment status command.

get_simulation_status(*args, **kwargs) NoReturn[source]

Simulation status command.

get_platform_information() dict[source]

Platform info.

class idmtools_platform_comps.comps_cli.COMPSCLISpecification[source]

Bases: PlatformCLISpecification

Provides plugin definition for CompsCLI.

get(configuration: dict) CompsCLI[source]

Get our CLI plugin with config.

get_additional_commands() NoReturn[source]

Get our CLI commands.

get_description() str[source]

Get COMPS CLI plugin description.

idmtools_platform_comps.comps_platform module

idmtools COMPSPlatform.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.comps_platform.COMPSPriority(value)[source]

Bases: Enum

An enumeration.

Lowest = 'Lowest'
BelowNormal = 'BelowNormal'
Normal = 'Normal'
AboveNormal = 'AboveNormal'
Highest = 'Highest'
class idmtools_platform_comps.comps_platform.COMPSPlatform(*args, **kwargs)[source]

Bases: IPlatform, CacheEnabled

Represents the platform allowing to run simulations on COMPS.

MAX_SUBDIRECTORY_LENGTH = 35
endpoint: str = 'https://comps2.idmod.org'
environment: str = 'Bayesian'
priority: str = 'Lowest'
simulation_root: str = '$COMPS_PATH(USER)\\output'
node_group: str = None
num_retries: int = 0
num_cores: int = 1
max_workers: int = 16
batch_size: int = 10
min_time_between_commissions: int = 15
exclusive: bool = False
docker_image: str = None
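Example (these defaults mirror what you would pass through the Platform factory; a minimal hedged sketch, assuming the alias or configuration block "COMPS" exists):

from idmtools.core.platform_factory import Platform

platform = Platform("COMPS", endpoint="https://comps2.idmod.org", environment="Bayesian", priority="Lowest")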
post_setstate()[source]

Function called after restoring the state if additional initialization is required.

get_username()[source]
is_windows_platform(item: IEntity | None = None) bool[source]

Returns whether the target platform is a Windows system.

validate_item_for_analysis(item: object, analyze_failed_items=False)[source]

Check if item is valid for analysis.

Parameters:
  • item – item to check

  • analyze_failed_items – bool

Returns: bool

get_files(item: Simulation | WorkItem | AssetCollection, files: Set[str] | List[str], output: str | None = None, **kwargs) Dict[str, Dict[str, bytearray]] | Dict[str, bytearray][source]

Get files for a platform entity.

Parameters:
  • item – Item to fetch files for

  • files – List of file names to get

  • output – directory to save the files to

  • kwargs – Platform arguments

Returns:

For simulations, this returns a dictionary with the filename as key and the values being the binary data from the file (or a dict).

For experiments, this returns a dictionary keyed by simulation id, with the values being the per-simulation dicts described above

flatten_item(item: object, raw=False, **kwargs) List[object][source]

Flatten an item: resolve the children until getting to the leaves.

For example, for an experiment, this will return all the simulations. For a suite, it will return all the simulations contained in the suite's experiments.

Parameters:
  • item – Which item to flatten

  • raw – bool

  • kwargs – extra parameters

Returns:

List of leaves

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _platform_defaults: ~typing.List[~idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _config_block: str = None, endpoint: str = 'https://comps2.idmod.org', environment: str = 'Bayesian', priority: str = 'Lowest', simulation_root: str = '$COMPS_PATH(USER)\\output', node_group: str = None, num_retries: int = 0, num_cores: int = 1, max_workers: int = 16, batch_size: int = 10, min_time_between_commissions: int = 15, exclusive: bool = False, docker_image: str = None, _skip_login: bool = False) None
idmtools_platform_comps.plugin_info module

idmtools comps platform plugin definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.plugin_info.COMPSPlatformSpecification[source]

Bases: PlatformSpecification

Provide the plugin definition for COMPSPlatform.

__init__()[source]

Constructor.

get_description() str[source]

Get description.

get(**configuration) COMPSPlatform[source]

Get COMPSPlatform object with configuration.

example_configuration()[source]

Get example config.

get_type() Type[COMPSPlatform][source]

Get COMPSPlatform type.

get_example_urls() List[str][source]

Get Comps examples.

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

get_configuration_aliases() Dict[str, Dict][source]

Provides configuration aliases that exist in COMPS.

class idmtools_platform_comps.plugin_info.SSMTPlatformSpecification[source]

Bases: COMPSPlatformSpecification

Provides the plugin spec for SSMTPlatform.

__init__()[source]

Constructor.

get_description() str[source]

Provide description of SSMT plugin.

get(**configuration) SSMTPlatform[source]

Get an instance of SSMTPlatform using the configuration.

example_configuration()[source]

Get example config.

get_type() Type[SSMTPlatform][source]

Get SSMT type.

get_example_urls() List[str][source]

Get SSMT example urls.

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

get_configuration_aliases() Dict[str, Dict][source]

Provides configuration aliases that exist in COMPS.

idmtools_platform_comps.ssmt_platform module

Define the SSMT platform.

The SSMT platform is the same as the COMPS platform, but file access is local.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_comps.ssmt_platform.SSMTPlatform(*args, **kwargs)[source]

Bases: COMPSPlatform

Represents the platform allowing to run simulations on SSMT.

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _platform_defaults: ~typing.List[~idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _simulations: ~idmtools_platform_comps.ssmt_operations.simulation_operations.SSMTPlatformSimulationOperations = None, _workflow_items: ~idmtools_platform_comps.ssmt_operations.workflow_item_operations.SSMTPlatformWorkflowItemOperations = None, _config_block: str = None, endpoint: str = 'https://comps2.idmod.org', environment: str = 'Bayesian', priority: str = 'Lowest', simulation_root: str = '$COMPS_PATH(USER)\\output', node_group: str = None, num_retries: int = 0, num_cores: int = 1, max_workers: int = 16, batch_size: int = 10, min_time_between_commissions: int = 15, exclusive: bool = False, docker_image: str = None, _skip_login: bool = False) None
idmtools_platform_slurm
idmtools_platform_slurm package
idmtools_platform_slurm Subpackages
idmtools_platform_slurm.assets package

SlurmPlatform utilities.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.assets.generate_batch(platform: SlurmPlatform, experiment: Experiment, max_running_jobs: int | None = None, array_batch_size: int | None = None, dependency: bool | None = None, template: Path | str = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/envs/stable/lib/python3.9/site-packages/idmtools_platform_slurm/assets/batch.sh.jinja2'), **kwargs) None[source]

Generate the bash script file batch.sh.

Parameters:
  • platform – Slurm Platform

  • experiment – idmtools Experiment

  • max_running_jobs – int, how many jobs are allowed to run at the same time

  • array_batch_size – int, array size for the slurm job

  • dependency – bool, determines whether Slurm jobs depend on each other

  • template – template to be used to build the batch file

  • kwargs – keyword arguments used to expand functionality

Returns:

None

idmtools_platform_slurm.assets.generate_script(platform: SlurmPlatform, experiment: Experiment, max_running_jobs: int | None = None, template: Path | str = PosixPath('/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/envs/stable/lib/python3.9/site-packages/idmtools_platform_slurm/assets/sbatch.sh.jinja2'), **kwargs) None[source]

Generate the batch file sbatch.sh.

Parameters:
  • platform – Slurm Platform

  • experiment – idmtools Experiment

  • max_running_jobs – int, how many jobs are allowed to run at the same time

  • template – template to be used to build the batch file

  • kwargs – keyword arguments used to expand functionality

Returns:

None

idmtools_platform_slurm.assets.generate_simulation_script(platform: SlurmPlatform, simulation, retries: int | None = None) None[source]

Generate the batch file _run.sh.

Parameters:
  • platform – Slurm Platform

  • simulation – idmtools Simulation

  • retries – int

Returns:

None

idmtools_platform_slurm.cli package

idmtools slurm cli module.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.cli Submodules
idmtools_platform_slurm.cli.slurm module

idmtools slurm cli commands.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.platform_operations package
idmtools_platform_slurm.platform_operations Submodules
idmtools_platform_slurm.platform_operations.asset_collection_operations module

Here we implement the SlurmPlatform asset collection operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.asset_collection_operations.SlurmPlatformAssetCollectionOperations(platform: SlurmPlatform, platform_type: Type = None)[source]

Bases: IPlatformAssetCollectionOperations

Provides AssetCollection Operations to SlurmPlatform.

platform: SlurmPlatform
platform_type: Type = None
get(asset_collection_id: str | None, **kwargs) AssetCollection[source]

Get an asset collection by id.

Parameters:
  • asset_collection_id – id of the asset collection

  • kwargs – keyword arguments used to expand functionality

Returns:

AssetCollection

platform_create(asset_collection: AssetCollection, **kwargs) AssetCollection[source]

Create AssetCollection. :param asset_collection: AssetCollection to create :param kwargs: keyword arguments used to expand functionality.

Returns:

AssetCollection

Link directory/files. :param simulation: Simulation :param common_asset_dir: the common asset folder path

Returns:

None

get_assets(simulation: Simulation | SlurmSimulation, files: List[str], **kwargs) Dict[str, bytearray][source]

Get assets for simulation. :param simulation: Simulation or SlurmSimulation :param files: files to be retrieved :param kwargs: keyword arguments used to expand functionality.

Returns:

Dict[str, bytearray]

list_assets(item: Experiment | Simulation, exclude: List[str] | None = None, **kwargs) List[Asset][source]

List assets for Experiment/Simulation. :param item: Experiment/Simulation :param exclude: list of file paths to exclude :param kwargs: keyword arguments used to expand functionality.

Returns:

list of Asset

static copy_asset(src: Asset | Path | str, dest: Path | str) None[source]

Copy asset/file to destination. :param src: source asset or file path :param dest: destination file path

Returns:

None

dump_assets(item: Experiment | Simulation, **kwargs) None[source]

Dump item’s assets. :param item: Experiment/Simulation :param kwargs: keyword arguments used to expand functionality.

Returns:

None

__init__(platform: SlurmPlatform, platform_type: Type = None) None
idmtools_platform_slurm.platform_operations.experiment_operations module

Here we implement the SlurmPlatform experiment operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.experiment_operations.SlurmPlatformExperimentOperations(platform: 'SlurmPlatform', platform_type: Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmExperiment'>)[source]

Bases: IPlatformExperimentOperations

platform: SlurmPlatform
platform_type

alias of SlurmExperiment

get(experiment_id: str, **kwargs) Dict[source]

Gets an experiment from the Slurm platform. :param experiment_id: experiment id :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Experiment object

platform_create(experiment: Experiment, **kwargs) SlurmExperiment[source]

Creates an experiment on Slurm Platform. :param experiment: idmtools experiment :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Experiment object created

get_children(experiment: SlurmExperiment, parent: Experiment | None = None, raw=True, **kwargs) List[Any][source]

Fetch slurm experiment’s children. :param experiment: Slurm experiment :param raw: True/False :param parent: the parent of the simulations :param kwargs: keyword arguments used to expand functionality

Returns:

List of slurm simulations

get_parent(experiment: SlurmExperiment, **kwargs) SlurmSuite[source]

Fetches the parent of an experiment. :param experiment: Slurm experiment :param kwargs: keyword arguments used to expand functionality

Returns:

The Suite being the parent of this experiment.

platform_run_item(experiment: Experiment, dry_run: bool = False, **kwargs)[source]

Run experiment. :param experiment: idmtools Experiment :param dry_run: True/False :param kwargs: keyword arguments used to expand functionality

Returns:

None

send_assets(experiment: Experiment, **kwargs)[source]

Copy our experiment assets; this is handled by self.platform._assets.dump_assets(experiment). :param experiment: idmtools Experiment :param kwargs: keyword arguments used to expand functionality

Returns:

None

list_assets(experiment: Experiment, **kwargs) List[Asset][source]

List assets for an experiment. :param experiment: Experiment to get assets for :param kwargs: keyword arguments used to expand functionality

Returns:

List[Asset]

get_assets_from_slurm_experiment(experiment: SlurmExperiment) AssetCollection[source]

Get assets for a Slurm experiment. :param experiment: Experiment to get asset collection for.

Returns:

AssetCollection if configuration is set and configuration.asset_collection_id is set.

to_entity(slurm_exp: SlurmExperiment, parent: Suite | None = None, children: bool = True, **kwargs) Experiment[source]

Convert a SlurmExperiment to idmtools Experiment. :param slurm_exp: experiment to convert :param parent: optional suite object :param children: bool, whether to load child simulations :param kwargs: keyword arguments used to expand functionality

Returns:

Experiment object

refresh_status(experiment: Experiment, **kwargs)[source]

Refresh status of experiment. :param experiment: idmtools Experiment :param kwargs: keyword arguments used to expand functionality

Returns:

None

create_sim_directory_map(experiment_id: str) Dict[source]

Build simulation working directory mapping. :param experiment_id: experiment id

Returns:

Dict of simulation id as key and working dir as value

platform_delete(experiment_id: str) None[source]

Delete platform experiment. :param experiment_id: platform experiment id

Returns:

None

platform_cancel(experiment_id: str, force: bool = True) None[source]

Cancel platform experiment’s slurm job. :param experiment_id: experiment id :param force: bool, True/False

Returns:

None

post_run_item(experiment: Experiment, **kwargs)[source]

Triggered right after the experiment is commissioned on the platform.

Parameters:
  • experiment – Experiment just commissioned

  • kwargs – keyword arguments used to expand functionality

Returns:

None

__init__(platform: SlurmPlatform, platform_type: ~typing.Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmExperiment'>) None
idmtools_platform_slurm.platform_operations.json_metadata_operations module

Here we implement the JSON Metadata operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.json_metadata_operations.JSONMetadataOperations(platform: 'SlurmPlatform', platform_type: Type = None, metadata_filename: str = 'metadata.json')[source]

Bases: IMetadataOperations

platform: SlurmPlatform
platform_type: Type = None
metadata_filename: str = 'metadata.json'
get_metadata_filepath(item: Suite | Experiment | Simulation) Path[source]

Retrieve item’s metadata file path. :param item: idmtools entity (Suite, Experiment and Simulation)

Returns:

item’s metadata file path

get(item: Suite | Experiment | Simulation) Dict[source]

Obtain item’s metadata. :param item: idmtools entity (Suite, Experiment and Simulation)

Returns:

key/value dict of metadata from the given item

dump(item: Suite | Experiment | Simulation) None[source]

Save item’s metadata to a file. :param item: idmtools entity (Suite, Experiment and Simulation)

Returns:

None

load(item: Suite | Experiment | Simulation) Dict[source]

Load item’s metadata from its file. :param item: idmtools entity (Suite, Experiment and Simulation)

Returns:

key/value dict of metadata from the given item

load_from_file(metadata_filepath: Path | str) Dict[source]

Obtain the metadata for the given filepath. :param metadata_filepath: str

Returns:

key/value dict of metadata from the given filepath

update(item: Suite | Experiment | Simulation, metadata: Dict = {}, replace=True) None[source]

Update or replace item’s metadata file. :param item: idmtools entity (Suite, Experiment and Simulation) :param metadata: dict to be updated or replaced :param replace: True/False

Returns:

None

clear(item: Suite | Experiment | Simulation) None[source]

Clear the item’s metadata file. :param item: idmtools entity (Suite, Experiment and Simulation)

Returns:

None

get_children(item: Suite | Experiment) List[Dict][source]

Fetch item’s children. :param item: idmtools entity (Suite, SlurmSuite, Experiment, SlurmExperiment)

Returns:

List of metadata

get_all(item_type: ItemType) List[Dict][source]

Obtain all the metadata for a given item type. :param item_type: the type of metadata to search for matches (simulation, experiment, suite, etc)

Returns:

list of metadata with given item type

filter(item_type: ItemType, property_filter: Dict | None = None, tag_filter: Dict | None = None, meta_items: List[Dict] | None = None, ignore_none=True) List[Dict][source]

Obtain all items that match the given property key/value pairs. The two filters are combined with AND logic. :param item_type: the type of items to search for matches (simulation, experiment, suite, etc) :param property_filter: a dict of property key/value pairs for exact match searching :param tag_filter: a dict of tag key/value pairs for exact match searching :param meta_items: list of metadata :param ignore_none: True/False (ignore None values or not)

Returns:

a list of metadata matching the properties key/value with given item type

__init__(platform: SlurmPlatform, platform_type: Type = None, metadata_filename: str = 'metadata.json') None
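
The AND behavior of the two filters can be seen in a short sketch. This is illustrative only: the platform block name, job directory, property, and tag values are assumptions, and _metas is the JSONMetadataOperations instance that a SlurmPlatform holds internally (as referenced elsewhere in these docs).

from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform

platform = Platform("SLURM_LOCAL", job_directory="/home/user/example_dir")
# keep only experiments whose metadata matches BOTH filters (AND logic)
matches = platform._metas.filter(
    item_type=ItemType.EXPERIMENT,
    property_filter={"name": "my_experiment"},  # hypothetical property
    tag_filter={"number_tag": 123},             # hypothetical tag
)
print(len(matches))
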
idmtools_platform_slurm.platform_operations.simulation_operations module

Here we implement the SlurmPlatform simulation operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.simulation_operations.SlurmPlatformSimulationOperations(platform: 'SlurmPlatform', platform_type: Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmSimulation'>)[source]

Bases: IPlatformSimulationOperations

platform: SlurmPlatform
platform_type

alias of SlurmSimulation

get(simulation_id: str, **kwargs) Dict[source]

Gets a simulation from the Slurm platform. :param simulation_id: Simulation id :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Simulation object

platform_create(simulation: Simulation, **kwargs) SlurmSimulation[source]

Create the simulation on Slurm Platform. :param simulation: Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Simulation object created.

get_parent(simulation: SlurmSimulation, **kwargs) SlurmExperiment[source]

Fetches the parent of a simulation. :param simulation: Slurm Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

The Experiment being the parent of this simulation.

platform_run_item(simulation: Simulation, **kwargs)[source]

For simulations on Slurm, we let the experiment execute with sbatch. :param simulation: idmtools Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

None

send_assets(simulation: Simulation, **kwargs)[source]

Send assets; this is handled by self.platform._metas.dump(simulation). :param simulation: idmtools Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

None

get_assets(simulation: Simulation, files: List[str], **kwargs) Dict[str, bytearray][source]

Get assets for simulation. :param simulation: idmtools Simulation :param files: files to be retrieved :param kwargs: keyword arguments used to expand functionality

Returns:

Dict[str, bytearray]

list_assets(simulation: Simulation, **kwargs) List[Asset][source]

List assets for simulation. :param simulation: idmtools Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

List[Asset]

to_entity(slurm_sim: SlurmSimulation, parent: Experiment | None = None, **kwargs) Simulation[source]

Convert a SlurmSimulation object to idmtools Simulation.

Parameters:
  • slurm_sim – simulation to convert

  • parent – optional experiment object

  • kwargs – keyword arguments used to expand functionality

Returns:

Simulation object

refresh_status(simulation: Simulation, **kwargs)[source]

Refresh simulation status: we do not actually refresh the simulation’s status directly. :param simulation: idmtools Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

None

create_sim_directory_map(simulation_id: str) Dict[source]

Build simulation working directory mapping. :param simulation_id: simulation id

Returns:

Dict of simulation id as key and working dir as value

platform_delete(sim_id: str) None[source]

Delete platform simulation. :param sim_id: platform simulation id

Returns:

None

__init__(platform: SlurmPlatform, platform_type: ~typing.Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmSimulation'>) None
platform_cancel(sim_id: str, force: bool = False) Any[source]

Cancel platform simulation’s slurm job. :param sim_id: simulation id :param force: bool, True/False

Returns:

Any

idmtools_platform_slurm.platform_operations.suite_operations module

Here we implement the SlurmPlatform suite operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.suite_operations.SlurmPlatformSuiteOperations(platform: SlurmPlatform, platform_type: ~typing.Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmSuite'>)[source]

Bases: IPlatformSuiteOperations

Provides Suite operations to the SlurmPlatform.

platform: SlurmPlatform
platform_type

alias of SlurmSuite

get(suite_id: str, **kwargs) Dict[source]

Get a suite from the Slurm platform. :param suite_id: Suite id :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Suite object

platform_create(suite: Suite, **kwargs) Tuple[source]

Create suite on Slurm Platform. :param suite: idmtools suite :param kwargs: keyword arguments used to expand functionality

Returns:

Slurm Suite object created

platform_run_item(suite: Suite, **kwargs)[source]

Called during commissioning of an item. This should perform what is needed to commission the job on the platform. :param suite: idmtools suite

Returns:

None

get_parent(suite: SlurmSuite, **kwargs) Any[source]

Fetches the parent of a suite. :param suite: Slurm suite :param kwargs: keyword arguments used to expand functionality

Returns:

None

get_children(suite: SlurmSuite, parent: Suite | None = None, raw=True, **kwargs) List[Any][source]

Fetch Slurm suite’s children. :param suite: Slurm suite :param raw: True/False :param parent: the parent of the experiments :param kwargs: keyword arguments used to expand functionality

Returns:

List of Slurm experiments

to_entity(slurm_suite: SlurmSuite, children: bool = True, **kwargs) Suite[source]

Convert a SlurmSuite object to idmtools Suite. :param slurm_suite: suite to convert :param children: bool True/False :param kwargs: keyword arguments used to expand functionality

Returns:

Suite object

refresh_status(suite: Suite, **kwargs)[source]

Refresh the status of a suite. This is done by refreshing all of its experiments. :param suite: idmtools suite :param kwargs: keyword arguments used to expand functionality

Returns:

None

create_sim_directory_map(suite_id: str) Dict[source]

Build simulation working directory mapping. :param suite_id: suite id

Returns:

Dict of simulation id as key and working dir as value

platform_delete(suite_id: str) None[source]

Delete platform suite. :param suite_id: platform suite id

Returns:

None

platform_cancel(suite_id: str, force: bool = False) None[source]

Cancel platform suite’s slurm job. :param suite_id: suite id :param force: bool, True/False

Returns:

None

__init__(platform: SlurmPlatform, platform_type: ~typing.Type = <class 'idmtools_platform_slurm.platform_operations.utils.SlurmSuite'>) None
idmtools_platform_slurm.platform_operations.utils module

SlurmPlatform operations utilities.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.platform_operations.utils.SlurmItem(metas: Dict)[source]

Bases: object

Represent Slurm Object

__init__(metas: Dict)[source]
get_platform_object()[source]
class idmtools_platform_slurm.platform_operations.utils.SlurmSuite(metas: Dict)[source]

Bases: SlurmItem

Represent Slurm Suite

__init__(metas: Dict)[source]
class idmtools_platform_slurm.platform_operations.utils.SlurmExperiment(metas: Dict)[source]

Bases: SlurmItem

Represent Slurm Experiment

__init__(metas: Dict)[source]
class idmtools_platform_slurm.platform_operations.utils.SlurmSimulation(metas: Dict)[source]

Bases: SlurmItem

Represent Slurm Simulation

__init__(metas: Dict)[source]
idmtools_platform_slurm.platform_operations.utils.clean_experiment_name(experiment_name: str) str[source]

Handle some special characters in experiment names. :param experiment_name: name of the experiment

Returns:

the experiment name allowed for use

idmtools_platform_slurm.platform_operations.utils.add_dummy_suite(experiment: Experiment, suite_name: str | None = None, tags: Dict | None = None) Suite[source]

Create a Suite parent for the given experiment. :param experiment: idmtools Experiment :param suite_name: new Suite name :param tags: new Suite tags

Returns:

Suite

idmtools_platform_slurm.platform_operations.utils.get_max_array_size()[source]

Get Slurm MaxArraySize from configuration. :returns: Slurm system MaxArraySize

idmtools_platform_slurm.platform_operations.utils.check_home(directory: str) bool[source]

Check if a directory is under HOME. :param directory: a directory

Returns:

True/False

idmtools_platform_slurm.slurm_operations package

Here we implement the SlurmPlatform operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.slurm_operations Submodules
idmtools_platform_slurm.slurm_operations.bridged_operations module

Here we implement the SlurmPlatform bridged operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.slurm_operations.bridged_operations.create_bridged_job(working_directory, bridged_jobs_directory, results_directory, cleanup_results: bool = True) None[source]

Creates a bridged job.

Parameters:
  • working_directory – Work Directory

  • bridged_jobs_directory – Jobs Directory

  • results_directory – Results directory

  • cleanup_results – Should we clean up results file

Returns:

None

idmtools_platform_slurm.slurm_operations.bridged_operations.cancel_bridged_job(job_ids: str | List[str], bridged_jobs_directory, results_directory, cleanup_results: bool = True) Any[source]

Cancel a bridged job.

Parameters:
  • job_ids – slurm job list

  • bridged_jobs_directory – Jobs Directory

  • results_directory – Results directory

  • cleanup_results – Should we clean up results file

Returns:

Result from scancel job

class idmtools_platform_slurm.slurm_operations.bridged_operations.BridgedLocalSlurmOperations(platform: 'SlurmPlatform', platform_type: Type = None)[source]

Bases: LocalSlurmOperations

submit_job(item: Experiment | Simulation, **kwargs) None[source]

Submit a Slurm job. :param item: idmtools Experiment or Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

None

cancel_job(job_ids: str | List[str], **kwargs) Any[source]

Cancel Slurm jobs generated from the item. :param job_ids: Slurm job ids :param kwargs: keyword arguments used to expand functionality

Returns:

Any

__init__(platform: SlurmPlatform, platform_type: Type = None) None
platform: SlurmPlatform
idmtools_platform_slurm.slurm_operations.local_operations module

Here we implement the SlurmPlatform local operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.slurm_operations.local_operations.LocalSlurmOperations(platform: 'SlurmPlatform', platform_type: Type = None)[source]

Bases: SlurmOperations

get_directory(item: Suite | Experiment | Simulation) Path[source]

Get item’s path. :param item: Suite, Experiment, Simulation

Returns:

item file directory

get_directory_by_id(item_id: str, item_type: ItemType) Path[source]

Get item’s path. :param item_id: entity id (Suite, Experiment, Simulation) :param item_type: the type of items (Suite, Experiment, Simulation)

Returns:

item file directory

mk_directory(item: Suite | Experiment | Simulation | None = None, dest: Path | str | None = None, exist_ok: bool | None = None) None[source]

Make a new directory. :param item: Suite/Experiment/Simulation :param dest: the folder path :param exist_ok: True/False

Returns:

None

Link files. :param target: the source file path :param link: the file path

Returns:

None

Link directory/files. :param target: the source folder path. :param link: the folder path

Returns:

None

static update_script_mode(script_path: Path | str, mode: int = 511) None[source]

Change file mode. :param script_path: script path :param mode: permission mode

Returns:

None

make_command_executable(simulation: Simulation) None[source]

Make the simulation command executable. :param simulation: idmtools Simulation

Returns:

None

create_batch_file(item: Experiment | Simulation, max_running_jobs: int | None = None, retries: int | None = None, array_batch_size: int | None = None, dependency: bool = True, **kwargs) None[source]

Create batch file. :param item: the item to build batch file for :param max_running_jobs: int, how many jobs are allowed to run at the same time :param retries: int, number of retries :param array_batch_size: int, array size for the slurm job :param dependency: bool, determines whether Slurm jobs depend on each other :param kwargs: keyword arguments used to expand functionality.

Returns:

None

submit_job(item: Experiment | Simulation, **kwargs) None[source]

Submit a Slurm job. :param item: idmtools Experiment or Simulation :param kwargs: keyword arguments used to expand functionality

Returns:

None

get_simulation_status(sim_id: str, **kwargs) EntityStatus[source]

Retrieve simulation status. :param sim_id: Simulation ID :param kwargs: keyword arguments used to expand functionality

Returns:

EntityStatus

create_file(file_path: str, content: str) None[source]

Create a file with given content and file path.

Parameters:
  • file_path – the full path of the file to be created

  • content – file content

Returns:

Nothing

static cancel_job(job_ids: str | List[str]) Any[source]

Cancel Slurm jobs for the given job ids. :param job_ids: slurm job ids

Returns:

Any

get_job_id(item_id: str, item_type: ItemType) List[source]

Retrieve the job id for an item that has been run. :param item_id: id of experiment/simulation :param item_type: ItemType (Experiment or Simulation)

Returns:

List of slurm job ids

__init__(platform: SlurmPlatform, platform_type: Type = None) None
platform: SlurmPlatform
idmtools_platform_slurm.slurm_operations.operations_interface module

Here we implement the SlurmPlatform operations interfaces.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.slurm_operations.operations_interface.SlurmOperations(platform: 'SlurmPlatform', platform_type: Type = None)[source]

Bases: ABC

platform: SlurmPlatform
platform_type: Type = None
abstract get_directory(item: IEntity) Path[source]
abstract get_directory_by_id(item_id: str, item_type: ItemType) Path[source]
abstract mk_directory(item: IEntity, exist_ok: bool = False) None[source]
abstract update_script_mode(script_path: Path | str, mode: int) None[source]
abstract make_command_executable(simulation: Simulation) None[source]
abstract create_batch_file(item: IEntity, **kwargs) None[source]
abstract submit_job(item: Experiment | Simulation, **kwargs) None[source]
abstract get_simulation_status(sim_id: str) Any[source]
abstract create_file(file_path: str, content: str) None[source]
abstract get_job_id(item_id: str, item_type: ItemType) str[source]
__init__(platform: SlurmPlatform, platform_type: Type = None) None
abstract cancel_job(job_id: str) Any[source]
idmtools_platform_slurm.slurm_operations.remote_operations module

Here we implement the SlurmPlatform remote operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.slurm_operations.remote_operations.RemoteSlurmOperations(platform: 'SlurmPlatform', platform_type: Type = None, hostname: str = None, username: str = None, key_file: str = None, port: int = 22)[source]

Bases: SlurmOperations

hostname: str = None
username: str = None
key_file: str = None
port: int = 22
get_directory(item: IEntity) Path[source]
get_directory_by_id(item_id: str, item_type: ItemType) Path[source]
mk_directory(item: IEntity) None[source]
update_script_mode(script_path: Path | str, mode: int) None[source]
make_command_executable(simulation: Simulation) None[source]
create_batch_file(item: IEntity, **kwargs) None[source]
submit_job(item: Experiment | Simulation, **kwargs) Any[source]
get_simulation_status(sim_id: str) Any[source]
create_file(file_path: str, content: str) None[source]
get_job_id(item_id: str, item_type: ItemType) str[source]
cancel_job(job_id: str) Any[source]
__init__(platform: SlurmPlatform, platform_type: Type = None, hostname: str = None, username: str = None, key_file: str = None, port: int = 22) None
idmtools_platform_slurm.slurm_operations.slurm_constants module

Here we implement the SlurmPlatform operations constants.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.slurm_operations.slurm_constants.SlurmOperationalMode(value)[source]

Bases: Enum

An enumeration.

SSH = 'ssh'
LOCAL = 'local'
BRIDGED = 'bridged'
idmtools_platform_slurm.utils package

idmtools slurm utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.utils Subpackages
idmtools_platform_slurm.utils.slurm_job package

idmtools SlurmPlatform SlurmJob utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.utils.slurm_job.create_slurm_indicator() NoReturn[source]

Set the slurm indicator environment variable. :returns: None

idmtools_platform_slurm.utils.slurm_job.remove_slurm_indicator() NoReturn[source]

Remove the slurm indicator environment variable. :returns: None

idmtools_platform_slurm.utils.slurm_job.check_slurm_indicator() bool[source]

Check if the slurm indicator environment variable is set to ‘1’. :returns: True/False

idmtools_platform_slurm.utils.slurm_job.slurm_installed() bool[source]

Check if Slurm system is installed or available. :returns: True/False

idmtools_platform_slurm.utils.slurm_job.run_script_on_slurm(platform: SlurmPlatform, run_on_slurm: bool = False, cleanup: bool = True) bool[source]

This is a utility that wraps SlurmJob creation and running. :param platform: idmtools Platform :param run_on_slurm: True/False :param cleanup: True/False to delete the generated slurm job related files

Returns:

True/False
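
A common pattern, sketched below under assumptions (the platform block name and job directory are hypothetical), is to call run_script_on_slurm near the top of a script so the script re-submits itself as a Slurm job and the current process exits:

from idmtools.core.platform_factory import Platform
from idmtools_platform_slurm.utils.slurm_job import run_script_on_slurm

platform = Platform("SLURM_LOCAL", job_directory="/home/user/example_dir")

# If this returns True, the script was wrapped as a SlurmJob and submitted;
# the current process should exit and let the Slurm job run the script.
if run_script_on_slurm(platform, run_on_slurm=True):
    exit(0)

# ... normal experiment creation and run code continues here ...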

idmtools_platform_slurm.utils.slurm_job Submodules
idmtools_platform_slurm.utils.slurm_job.slurm_job module

This is a SlurmPlatform utility.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.utils.slurm_job.slurm_job.generate_script(platform: SlurmPlatform, command: str, template: Path | str = 'script_sbatch.sh.jinja2', batch_dir: str = None, **kwargs) None[source]

Generate batch file sbatch.sh. :param platform: Slurm Platform :param command: execution command :param template: template to be used to build the batch file :param batch_dir: batch file directory :param kwargs: keyword arguments used to expand functionality

Returns:

None

class idmtools_platform_slurm.utils.slurm_job.slurm_job.SlurmJob(script_path: os.PathLike, platform: 'SlurmPlatform' = None, executable: str = 'python3', script_params: List[str] = None, cleanup: bool = True)[source]

Bases: object

script_path: PathLike
platform: SlurmPlatform = None
executable: str = 'python3'
script_params: List[str] = None
cleanup: bool = True
initialization()[source]
run(dry_run: bool = False, **kwargs) NoReturn[source]
clean(cwd: str = '/home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs')[source]

Delete generated slurm job related files. :param cwd: the directory containing the files

Returns:

None

__init__(script_path: PathLike, platform: SlurmPlatform = None, executable: str = 'python3', script_params: List[str] = None, cleanup: bool = True) None
idmtools_platform_slurm.utils.status_report package

idmtools SlurmPlatform utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.utils.status_report Submodules
idmtools_platform_slurm.utils.status_report.status_report module

This is a SlurmPlatform simulation status utility.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.utils.status_report.status_report.StatusViewer(platform: IPlatform, scope: Tuple[str, ItemType] = None)[source]

Bases: object

A class to wrap the functions involved in retrieving simulations status.

platform: IPlatform
scope: Tuple[str, ItemType] = None
initialize() None[source]

Determine the experiment and build a dictionary with basic info. :returns: None

apply_filters(status_filter: Tuple[str] | None = None, job_filter: Tuple[str] | None = None, sim_filter: Tuple[str] | None = None, root: str = 'sim', verbose: bool = True) None[source]

Filter simulations. :param status_filter: tuple with target status :param job_filter: tuple with slurm job id :param sim_filter: tuple with simulation id :param root: dictionary root key: ‘sim’ or ‘job’ :param verbose: True/False to include simulation directory

Returns:

None

static output_definition() None[source]

Output the status definition. :returns: None

output_summary() None[source]

Output slurm job id, suite/experiment id and job directory. :returns: None

output_status_report(status_filter: Tuple[str] | None = None, job_filter: Tuple[str] | None = None, sim_filter: Tuple[str] | None = None, root: str = 'sim', verbose: bool = True, display: bool = True, display_count: int = 20) None[source]

Output simulations status with possible override parameters. :param status_filter: tuple with target status :param job_filter: tuple with slurm job id :param sim_filter: tuple with simulation id :param root: dictionary root key: ‘sim’ or ‘job’ :param verbose: True/False to include simulation directory :param display: True/False to print the searched results :param display_count: how many to print

Returns:

None

__init__(platform: IPlatform, scope: Tuple[str, ItemType] = None) None
idmtools_platform_slurm.utils.status_report.status_report.generate_status_report(platform: IPlatform, scope: Tuple[str, ItemType] = None, status_filter: Tuple[str] = None, job_filter: Tuple[str] = None, sim_filter: Tuple[str] = None, root: str = 'sim', verbose: bool = True, display: bool = True, display_count: int = 20) None[source]

The entry point of status viewer. :param platform: idmtools Platform :param scope: the search base :param status_filter: tuple with target status :param job_filter: tuple with slurm job id :param sim_filter: tuple with simulation id :param root: dictionary with root key: ‘sim’ or ‘job’ :param verbose: True/False to include simulation directory :param display: True/False to print the search results :param display_count: how many to print

Returns:

None
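
A minimal usage sketch, assuming a Slurm platform and an existing experiment id (both placeholders here):

from idmtools.core import ItemType
from idmtools.core.platform_factory import Platform
from idmtools_platform_slurm.utils.status_report.status_report import generate_status_report

platform = Platform("SLURM_LOCAL", job_directory="/home/user/example_dir")
# scope pairs an item id with its ItemType; the id below is a placeholder
generate_status_report(platform, scope=("<experiment-id>", ItemType.EXPERIMENT))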

idmtools_platform_slurm.utils.status_report.utils module

This is a SlurmPlatform utility.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_slurm.utils.status_report.utils.get_latest_experiment(platform: IPlatform) Dict[source]

Find the latest experiment. :param platform: idmtools Platform

Returns:

Dictionary with experiment info

idmtools_platform_slurm.utils.status_report.utils.check_status(platform: IPlatform, exp_id: str = None, display: bool = False) None[source]

List simulation statuses. :param platform: Platform :param exp_id: experiment id :param display: True/False

Returns:

None

idmtools_platform_slurm Submodules
idmtools_platform_slurm.plugin_info module

idmtools slurm platform plugin definition.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.plugin_info.SlurmPlatformSpecification[source]

Bases: PlatformSpecification

get_description() str[source]

Get a brief description of the plugin and its functionality.

Returns:

The plugin description.

get(**configuration) IPlatform[source]

Build our slurm platform from the passed-in configuration object.

We import the platform here to avoid any import issues. :param configuration: configuration keyword arguments for the platform

Returns:

Platform built from the configuration

example_configuration()[source]

Example configuration for the platform. This is useful in help or error messages.

Returns:

Example configuration

get_type() Type[SlurmPlatform][source]

Get the type of the platform.

get_version() str[source]

Returns the version of the plugin.

Returns:

Plugin Version

get_configuration_aliases() Dict[str, Dict][source]

Provides configuration aliases that exist in SLURM.

idmtools_platform_slurm.slurm_platform module

Here we implement the SlurmPlatform object.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_slurm.slurm_platform.SlurmPlatform(_uid: str = None, _IItem__pre_creation_hooks: List[Callable[[ForwardRef('IItem'), ForwardRef('IPlatform')], NoneType]] = <factory>, _IItem__post_creation_hooks: List[Callable[[ForwardRef('IItem'), ForwardRef('IPlatform')], NoneType]] = <factory>, _platform_defaults: List[idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _config_block: str = None, job_directory: str = None, bridged_jobs_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge'), bridged_results_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge/results'), mode: idmtools_platform_slurm.slurm_operations.slurm_constants.SlurmOperationalMode = None, mail_type: Optional[str] = None, mail_user: Optional[str] = None, nodes: Optional[int] = None, ntasks: Optional[int] = None, cpus_per_task: Optional[int] = None, ntasks_per_core: Optional[int] = None, max_running_jobs: Optional[int] = None, mem: Optional[int] = None, mem_per_cpu: Optional[int] = None, partition: Optional[str] = None, constraint: Optional[str] = None, time: str = None, account: str = None, exclusive: bool = False, requeue: bool = True, retries: int = 1, sbatch_custom: Optional[str] = None, modules: list = <factory>, dir_exist_ok: bool = False, array_batch_size: int = None, run_on_slurm: bool = False)[source]

Bases: IPlatform

job_directory: str = None
bridged_jobs_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge')

Needed for bridge mode

bridged_results_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge/results')
mode: SlurmOperationalMode = None
mail_type: str | None = None
mail_user: str | None = None
nodes: int | None = None
ntasks: int | None = None
cpus_per_task: int | None = None
ntasks_per_core: int | None = None
max_running_jobs: int | None = None
mem: int | None = None
mem_per_cpu: int | None = None
partition: str | None = None
constraint: str | None = None
time: str = None
account: str = None
exclusive: bool = False
requeue: bool = True
retries: int = 1
sbatch_custom: str | None = None
modules: list
dir_exist_ok: bool = False
array_batch_size: int = None
run_on_slurm: bool = False
post_setstate()[source]

Function called after restoring the state if additional initialization is required.

property slurm_fields

Get the list of fields that have sbatch metadata. :returns: Set of fields that have sbatch metadata

get_slurm_configs(**kwargs) Dict[str, Any][source]

Identify the Slurm config parameters from the fields. :param kwargs: additional parameters

Returns:

slurm config dict

flatten_item(item: IEntity, raw=False, **kwargs) List[object][source]

Flatten an item: resolve the children until getting to the leaves.

For example, for an experiment, this will return all of its simulations. For a suite, it will return all the simulations contained in the suite’s experiments.

Parameters:
  • item – Which item to flatten

  • raw – bool

  • kwargs – extra parameters

Returns:

List of leaves

validate_item_for_analysis(item: Simulation | SlurmSimulation, analyze_failed_items=False)[source]

Check if item is valid for analysis.

Parameters:
  • item – which item to verify status

  • analyze_failed_items – bool

Returns: bool

get_directory(item: Suite | Experiment | Simulation) Path[source]

Get item’s path. :param item: Suite, Experiment, Simulation

Returns:

item file directory

get_directory_by_id(item_id: str, item_type: ItemType) Path[source]

Get item’s path. :param item_id: entity id (Suite, Experiment, Simulation) :param item_type: the type of items (Suite, Experiment, Simulation)

Returns:

item file directory

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _platform_defaults: ~typing.List[~idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _config_block: str = None, job_directory: str = None, bridged_jobs_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge'), bridged_results_directory: str = PosixPath('/home/docs/.idmtools/singularity-bridge/results'), mode: ~idmtools_platform_slurm.slurm_operations.slurm_constants.SlurmOperationalMode = None, mail_type: str | None = None, mail_user: str | None = None, nodes: int | None = None, ntasks: int | None = None, cpus_per_task: int | None = None, ntasks_per_core: int | None = None, max_running_jobs: int | None = None, mem: int | None = None, mem_per_cpu: int | None = None, partition: str | None = None, constraint: str | None = None, time: str = None, account: str = None, exclusive: bool = False, requeue: bool = True, retries: int = 1, sbatch_custom: str | None = None, modules: list = <factory>, dir_exist_ok: bool = False, array_batch_size: int = None, run_on_slurm: bool = False) None
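
In practice, a SlurmPlatform is usually obtained through the platform factory rather than constructed directly. A minimal sketch, assuming a hypothetical job directory (SLURM_LOCAL is a commonly used block/alias name; check your own configuration):

from idmtools.core.platform_factory import Platform

# job_directory: where the platform writes suite/experiment/simulation files
# (the path here is a placeholder)
platform = Platform("SLURM_LOCAL", job_directory="/home/user/example_dir")
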
idmtools_slurm_utils
idmtools_slurm_utils package

idmtools slurm utils package.

idmtools_slurm_utils Submodules
idmtools_slurm_utils.bash module

Handles interaction with bash command.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.bash.command_bash(info: Dict) Dict[source]

Process command request for bash commands to be executed.

Parameters:

info – Info command

Returns:

Result dict

idmtools_slurm_utils.bash.run_bash(working_directory: Path)[source]

Run a bash script in the given working directory.

Parameters:

working_directory – Working directory

idmtools_slurm_utils.cli module

Handles CLI portion of idmtools-slurm-bridge.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.cli.setup_loggers(config_directory: Path, console_level: int, file_level: int)[source]

Set up loggers.

Parameters:
  • config_directory – Directory where config lives

  • console_level – Level to log to console

  • file_level – Level to log to file

Returns:

None

idmtools_slurm_utils.cli.existing_process_running(pid_file: Path)[source]

Check whether an existing process is running, based on the pid file.

Parameters:

pid_file – Path to the pid file

Returns:

None

idmtools_slurm_utils.cli.main()[source]

CLI main.

idmtools_slurm_utils.cli.cleanup(*args, **kwargs)[source]

Clean up the pid file when the user tries to kill the process.

Parameters:

config_directory – The base config directory

idmtools_slurm_utils.sbatch module

Handles interaction with sbatch command.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.sbatch.command_sbatch(info: Dict) Dict[source]

Process command request for sbatch commands to be executed.

Parameters:

info – Info command

Returns:

Result dict

idmtools_slurm_utils.sbatch.run_sbatch(working_directory: Path)[source]

Run an sbatch script in the given working directory.

Parameters:

working_directory – Working directory

idmtools_slurm_utils.scancel module

Handles interaction with scancel command.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.scancel.command_scancel(info: Dict) Dict[source]

Interacts with the scancel command.

Parameters:

info – Info on what to cancel

Returns:

Result from cancel

idmtools_slurm_utils.scancel.run_cancel(job_ids: str | List[str]) Tuple[source]

Invoke the slurm scancel command.

Parameters:

job_ids – slurm job id list

idmtools_slurm_utils.singularity_bridge module

Allows bridged mode for idmtools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.utils module

Utils for slurm bridge.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.utils.process_job(job_path, result_dir, cleanup_job: bool = True)[source]

Process a job.

Parameters:
  • job_path – Path to the job

  • result_dir – Result directory

  • cleanup_job – If True, clean up the job when done; if False, leave it in place.

idmtools_slurm_utils.utils.get_job_result(job_path: PathLike) Dict[source]

Read a job file in from path and return a result.

Parameters:

job_path – Path

Returns:

Result

idmtools_slurm_utils.utils.write_result(result: Dict, result_name: Path)[source]

Write the result of a job to a directory.

Parameters:
  • result – Result to write

  • result_name – Path to write result to.

idmtools_slurm_utils.verify module

Helps user verify environment information.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_slurm_utils.verify.command_verify(info: Dict)[source]

Processes the verify command.

Parameters:

info – Info from container

Returns:

Info about server

idmtools_slurm_utils.watcher module

Provides facility to watch bridge files.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_slurm_utils.watcher.IdmtoolsJobWatcher(directory_to_watch: PathLike, directory_for_status: PathLike, check_every: int = 5)[source]

Bases: object

Watches the bridge directory and communicates jobs to slurm.

__init__(directory_to_watch: PathLike, directory_for_status: PathLike, check_every: int = 5)[source]

Creates our watcher.

Parameters:
  • directory_to_watch – Directory to sync from

  • directory_for_status – Directory for status messages

  • check_every – How often the directory should be synced

run()[source]

Run the watcher.

class idmtools_slurm_utils.watcher.IdmtoolsJobHandler(directory_for_status: PathLike, cleanup_job: bool = True)[source]

Bases: FileSystemEventHandler

Handles events for new job messages.

__init__(directory_for_status: PathLike, cleanup_job: bool = True)[source]

Creates handler.

Parameters:
  • directory_for_status – Directory to use for status

  • cleanup_job – Should the job be cleaned up after submission

on_created(event)[source]

On Created events.

Parameters:

event – Event details.

Plugin Documentation

ID Generation Plugins

1. Create a file to host the plugin callback for the generator (under idmtools_core/idmtools/plugins). The plugin must have the following format:

from idmtools.core.interfaces.ientity import IEntity
from idmtools.registry.hook_specs import function_hook_impl


@function_hook_impl
def idmtools_generate_id(item: IEntity) -> str:
    """
    Generate an id for the given item.

    Args:
        item: Item for which ID is being generated

    Returns:
        The generated ID
    """
    return <your id implementation here>

The key things in this file are:

@function_hook_impl
def idmtools_generate_id(item: 'IEntity') -> str:

This registers the plugin type with idmtools. By using the name idmtools_generate_id, idmtools knows you are defining a callback for ids. The callback must match the expected signature.

2. Modify setup.py ‘idmtools_hooks’ to include the new id generation plugin:

entry_points=dict(
    idmtools_hooks=[
        "idmtools_id_generate_<name> = <path to plugin>"
    ]
),

The label of the id plugin must start with idmtools_id_generate_. The letters after idmtools_id_generate_ will be used to select the generator in the config.

3. Modify .ini config file to specify the desired id generator.

In the .ini configuration file under the ‘COMMON’ section, use the ‘id_generator’ option to specify the desired id plugin.

For example, if you want to use the uuid generation plugin (‘idmtools_id_generate_uuid’), in the .ini file, you would set the following:

[COMMON]
id_generator = uuid

Similarly, if you want to use the item_sequence plugin (‘idmtools_id_generate_item_sequence’), you would specify the following in the .ini file:

[COMMON]
id_generator = item_sequence

The item_sequence plugin allows you to use sequential ids for items in your experiment (experiments themselves as well as simulations, etc). You can customize use of this plugin by defining an ‘item_sequence’ section in the .ini file and using the variables:

  • sequence_file: Json file that is used to store the last-used numbers for item ids. For example, if we have one experiment that was defined with two simulations, this file would keep track of the most recently used ids with the following: {“Simulation”: 2, “Experiment”: 1}. To note: the sequences start at 0. The default value for this filename (if it is not defined by the user) is index.json, which would be created in the user’s home directory (at ‘.idmtools/itemsequence/index.json’). If a sequence_file IS specified, it is stored in the current working directory unless otherwise specified by a full path. If an item is generated that does not have the item_type attribute (i.e. Platform), its sequence will be stored under the ‘Unknown’ key in this json file. After an experiment is run, there will be a backup of this sequence file generated at the same location ({sequence_file_name}.json.bak); this is called as a post_run hook (specified under ‘idmtools_platform_post_run’ in item_sequence.py).

  • id_format_str: This defines the desired format of the item ids (using the sequential id numbers stored in the sequence_file). In this string, one may access the sequential ids by using ‘data[item_name]’ (which would resolve to the next id #) as well as the ‘item_name’ (i.e. ‘Simulation’, ‘Experiment’). The default for this value is ‘{item_name}{data[item_name]:07d}’ (which would yield ids of ‘Simulation0000000’, ‘Simulation0000001’, etc).

Configuration format:

[item_sequence]
sequence_file = <custom file name>.json
id_format_str = '<custom string format>'

The configuration string format should be a jinja2 template. See https://jinja.palletsprojects.com/
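
As a rough illustration of how the placeholders in the default id_format_str resolve, the sketch below uses a Python f-string, whose syntax the default template mirrors; the actual rendering inside the plugin may differ:

# Illustrative only, not the plugin's implementation.
# 'data' mimics the counters stored in the sequence_file.
data = {"Experiment": 0, "Simulation": 1}
for item_name in ("Experiment", "Simulation"):
    print(f"{item_name}{data[item_name]:07d}")
# Experiment0000000
# Simulation0000001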

User recipes

The following user recipes provide specific examples of accomplishing common tasks and scenarios with idmtools.

Asset collections

Modifying asset collections

The following demonstrates how to modify an existing asset collection.

# This recipe demos how to extend/modify an existing AssetCollection
from idmtools.assets import AssetCollection, Asset
from idmtools.core.platform_factory import Platform

with Platform("CALCULON") as platform:
    # first we start by loading our existing asset collection
    existing_ac = AssetCollection.from_id("50002755-20f1-ee11-aa12-b88303911bc1")  # comps asset id
# now we want to add one file to it. Since asset collections on the server are immutable, what we can do is the following
    #
    # create a new asset collection object
    ac = AssetCollection(existing_ac)
    # or
    # ac = AssetCollection.from_id("98d329b5-95d6-ea11-a2c0-f0921c167862", as_copy=True)
    # ac = existing_ac.copy()
    # ac = AssetCollection()
    # ac += existing_ac
    # add our items to the new collection
    ac.add_asset(Asset(filename="Example", content="Blah"))

    # then depending on the workflow, we can create directly or use within an Experiment/Task/Simulation
    platform.create_items(ac)

    # Experiment
    # e = Experiment.from_task(..., assets=ac)

    # Task
    # task = CommandTask(common_assets = ac)
    # or
    # task.common_assets = ac

Builders

Simulation Builder

The following demonstrates how to build a sweep using the standard builder, SimulationBuilder.

This example uses the following model.

import json
import os
import sys

current_dir = os.path.abspath(os.getcwd())

# create 'output' dir in COMPS under current working dir which is one dir above "Assets" dir
output_dir = os.path.join(current_dir, "output")
config_path = os.path.join(current_dir, "config.json")
if not os.path.exists(output_dir):
    os.mkdir(output_dir)

config = json.load(open(config_path, "r"))
print(config)

# write each configs to result.json in comps's simulation output
with open(os.path.join(output_dir, "result.json"), "w") as fp:
    json.dump(config, fp)

sys.exit(0)

It then builds the sweeps with SimulationBuilder.

import os
import sys

from idmtools.assets import AssetCollection
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

with platform('Calculon'):
    base_task = JSONConfiguredPythonTask(
        script_path=os.path.join(COMMON_INPUT_PATH, "compsplatform", "working_model.py"),
        # add common assets from existing collection
        common_assets=AssetCollection.from_id('41c1b14d-0a04-eb11-a2c7-c4346bcb1553', as_copy=True)
    )

    ts = TemplatedSimulations(base_task=base_task)
    # sweep parameter
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("min_x"), range(-2, 0))
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("max_x"), range(1, 3))
    ts.add_builder(builder)

    e = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])
    e.run(wait_until_done=True)
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

See SimulationBuilder for more details.

Arm Experiment Builder

The following demonstrates how to build a sweep using ArmSimulationBuilder.

This example uses the following model.

import json
import os
import sys

current_dir = os.path.abspath(os.getcwd())

# create 'output' dir in COMPS under current working dir which is one dir above "Assets" dir
output_dir = os.path.join(current_dir, "output")
config_path = os.path.join(current_dir, "config.json")
if not os.path.exists(output_dir):
    os.mkdir(output_dir)

config = json.load(open(config_path, "r"))
print(config)

# write each configs to result.json in comps's simulation output
with open(os.path.join(output_dir, "result.json"), "w") as fp:
    json.dump(config, fp)

sys.exit(0)

It then builds the sweeps through arms.

"""
        This file demonstrates how to use ArmExperimentBuilder in PythonExperiment's builder.
        We are then adding the builder to PythonExperiment.

        |__sweep arm1
            |_ a = 1
            |_ b = [2,3]
            |_ c = [4,5]
        |__ sweep arm2
            |_ a = [6,7]
            |_ b = 2
        Expect sims with parameters:
            sim1: {a:1, b:2, c:4}
            sim2: {a:1, b:2, c:5}
            sim3: {a:1, b:3, c:4}
            sim4: {a:1, b:3, c:5}
            sim5: {a:6, b:2}
            sim6: {a:7, b:2}
        Note:
            arm1 and arm2 are adding to total simulations
"""
import os
import sys
from functools import partial

from idmtools.builders import SweepArm, ArmType, ArmSimulationBuilder
from idmtools.core.platform_factory import platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

# define specific callbacks for a, b, and c
setA = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="a")
setB = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="b")
setC = partial(JSONConfiguredPythonTask.set_parameter_sweep_callback, param="c")


if __name__ == "__main__":
    with platform('CALCULON'):
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"))
        # define that we are going to create multiple simulations from this task
        ts = TemplatedSimulations(base_task=base_task)

        # define our first sweep Sweep Arm
        arm1 = SweepArm(type=ArmType.cross)
        builder = ArmSimulationBuilder()
        arm1.add_sweep_definition(setA, 1)
        arm1.add_sweep_definition(setB, [2, 3])
        arm1.add_sweep_definition(setC, [4, 5])
        builder.add_arm(arm1)

        # adding more simulations with sweeping
        arm2 = SweepArm(type=ArmType.cross)
        arm2.add_sweep_definition(setA, [6, 7])
        arm2.add_sweep_definition(setB, [2])
        builder.add_arm(arm2)

        # add our builders to our template
        ts.add_builder(builder)

        # create experiment from the template
        experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1],
                                              tags={"string_tag": "test", "number_tag": 123, "KeyOnly": None})
        # run the experiment
        experiment.run()
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.wait()
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)

See ArmSimulationBuilder for more details

Multiple argument sweep

The following demonstrates how to build a sweep when multiple arguments are required at the same time. Typically, defining sweeps per argument, as in the Simulation Builder example, is best; but in some cases, such as when we need all parameters to create an object, we want these parameters passed to a single callback at the same time. This example uses the following model.

import json
import os
import sys

current_dir = os.path.abspath(os.getcwd())

# create 'output' dir in COMPS under current working dir which is one dir above "Assets" dir
output_dir = os.path.join(current_dir, "output")
config_path = os.path.join(current_dir, "config.json")
if not os.path.exists(output_dir):
    os.mkdir(output_dir)

config = json.load(open(config_path, "r"))
print(config)

# write each configs to result.json in comps's simulation output
with open(os.path.join(output_dir, "result.json"), "w") as fp:
    json.dump(config, fp)

sys.exit(0)

We then make a class within our example script that requires the parameters a, b, and c be defined at creation time. With this defined, we then add our sweep callback.

"""
This file demonstrates doing a multi-argument sweep

Sometimes you need multiple parameters at the same time, usually to create objects within a callback. The *
"""

import os
import sys
from dataclasses import dataclass
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH


@dataclass
class ModelConfig:
    a: int
    b: int
    c: int


# define a custom sweep callback that sets a, b, and c at the same time
def param_update(simulation, a_value, b_value, c_value):
    mc = ModelConfig(a_value, b_value, c_value)
    simulation.task.set_parameter('a', mc.a)
    simulation.task.set_parameter('b', mc.b)
    simulation.task.set_parameter('c', mc.c)
    return dict(a=a_value, b=b_value, c=c_value)


if __name__ == "__main__":
    # define what platform we want to use. Here we use a context manager but if you prefer you can
    # use objects such as Platform('Calculon') instead
    with Platform('Calculon'):
        # define our base task
        base_task = JSONConfiguredPythonTask(script_path=os.path.join(COMMON_INPUT_PATH, "python", "model1.py"),
                                             parameters=dict())
# define our sweep
        builder = SimulationBuilder()
        # we can use add_sweep_definition call to do multiple parameter sweeping now
        builder.add_sweep_definition(param_update, range(2), range(2), range(2))
        #builder.add_multiple_parameter_sweep_definition(param_update, range(2), range(2), range(2))

        # define our experiment with its metadata
        experiment = Experiment.from_builder(
            builders=[builder], base_task=base_task,
            name=os.path.split(sys.argv[0])[1],
            tags={"string_tag": "test", "number_tag": 123}
        )

        # run experiment
        experiment.run()
        # wait until done with longer interval
        # in most real scenarios, you probably do not want to wait as this will wait until all simulations
        # associated with an experiment are done. We do it in our examples to show feature and to enable
        # testing of the scripts
        experiment.wait(refresh_interval=10)
        # use system status as the exit code
        sys.exit(0 if experiment.succeeded else -1)

COMPS Recipes

Assetize outputs

For an overview of assetizing outputs, see Assetize outputs Workitem.

Excluding files from assetizing

Sometimes some of the files overlap with patterns you would like to include in a destination. In the following example, there are 100 files created with the .out extension. We would like all the files except “3.out” and “5.out”. We can just add these two files to the exclude patterns of an AssetizeOutput object.

from idmtools.core.platform_factory import Platform
from idmtools.entities.command_task import CommandTask
from idmtools.entities.experiment import Experiment
from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput

task = CommandTask(command="python Assets/model.py")
task.common_assets.add_asset("model.py")

platform = Platform("CALCULON")
experiment = Experiment.from_task(task)

# Since we have one simulation in our experiment, we can "flatten" the output by disabling the simulation prefix
ao = AssetizeOutput(file_patterns=["*.out"], related_experiments=[experiment], no_simulation_prefix=True)
# Exclude some output files while preserving the default exclusions of stdout.txt and stderr.txt
ao.exclude_patterns.append("3.out")
ao.exclude_patterns.append("5.out")
ao.run(wait_until_done=True)

if ao.succeeded:
    for asset in sorted(ao.asset_collection, key=lambda sa: sa.short_remote_path().rjust(6)):
        print(asset.short_remote_path().rjust(6))
else:
    print('Item failed. Check item output')

Using with experiments

The following demonstrates assetizing the output of experiments. An important thing to remember with experiments is that they typically contain multiple simulations. To avoid conflicts in the assetized output, the default behavior is to use the simulation.id as a folder name for each simulation's output. We can also include the original experiment asset collection in filtering by using the include_assets parameter, as shown in the sketch after the example.

from functools import partial
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.command_task import CommandTask
from idmtools.entities.experiment import Experiment
from idmtools_platform_comps.utils.assetize_output.assetize_output import AssetizeOutput


############## Setup outputs to assetize in demo

base_task = CommandTask(command="python3 model.py")
base_task.common_assets.add_asset("model.py")
# Command tasks have no config. Since the task is a Python object, we attach our own config item
base_task.config = dict(a=1, b=1)

# define a template for our commands
command_template = "python Assets/model.py --a {a} --b {b}"


# Define a function that renders our command line as we build simulations
def create_command_line_hook(simulation, platform):
    # we receive the simulation object; use its task config to render the command line
    simulation.task.command = command_template.format(**simulation.task.config)


# Define sweeps
def set_parameter(simulation, parameter, value):
    simulation.task.config[parameter] = value


# add hook to our base task
base_task.add_pre_creation_hook(create_command_line_hook)
builder = SimulationBuilder()
builder.add_sweep_definition(partial(set_parameter, parameter="a"), range(3))
builder.add_sweep_definition(partial(set_parameter, parameter="b"), range(3))

platform = Platform("CALCULON")

experiment = Experiment.from_builder(builders=builder, base_task=base_task, name="Create example output")

############### Demo of assetizing experiment outputs
# Define which outputs we want to assetize
ao = AssetizeOutput(file_patterns=["output.json"], related_experiments=[experiment])
# run the Assetize job. It will ensure other items are run first if they are entities (Experiment, Simulation, or WorkItem)
ao.run(wait_until_done=True)
print(f"Asset Collection: {ao.id}")

SSMT Recipes

Run Analysis Remotely (Platform Analysis)

The following example demonstrates using the PlatformAnalysis object to run AnalyzerManager server-side. Running server-side has the advantage of not needing to download the files required for analysis, and it provides additional computational power.

In this example, we are going to run the following analyzers:

import json
import os
from typing import Any, Dict, Union
from idmtools.entities.ianalyzer import IAnalyzer as BaseAnalyzer
import matplotlib as mpl
from idmtools.entities.iworkflow_item import IWorkflowItem
from idmtools.entities.simulation import Simulation

mpl.use('Agg')


class AdultVectorsAnalyzer(BaseAnalyzer):

    def __init__(self, name='hi'):
        super().__init__(filenames=["output\\InsetChart.json"])
        print(name)

    def initialize(self):
        """
        Perform the Initialization of Analyzer
        Here we ensure our output directory exists
        Returns:

        """
        if not os.path.exists(os.path.join(self.working_dir, "output")):
            os.mkdir(os.path.join(self.working_dir, "output"))

    def map(self, data: Dict[str, Any], item: Union[IWorkflowItem, Simulation]) -> Any:
        """
        Select the Adult Vectors channel for the InsertChart

        Args:
            data: A dictionary that contains a mapping of filename to data
            item: Item can be a Simulation or WorkItem

        Returns:

        """
        return data[self.filenames[0]]["Channels"]["Adult Vectors"]["Data"]

    def reduce(self, all_data: Dict[Union[IWorkflowItem, Simulation], Any]) -> Any:
        """
        Creates the final adult_vectors.json and Plot

        Args:
            all_data: Dictionary mapping our Items to the mapped data

        Returns:

        """
        output_dir = os.path.join(self.working_dir, "output")
        with open(os.path.join(output_dir, "adult_vectors.json"), "w") as fp:
            json.dump({str(s.uid): v for s, v in all_data.items()}, fp)

        import matplotlib.pyplot as plt

        fig = plt.figure()
        ax = fig.add_subplot(111)

        for pop in list(all_data.values()):
            ax.plot(pop)
        ax.legend([str(s.uid) for s in all_data.keys()])
        fig.savefig(os.path.join(output_dir, "adult_vectors.png"))

The second analyzer, PopulationAnalyzer, extracts the Statistical Population channel:

import json
import os
from typing import Dict, Any, Union
from idmtools.entities.ianalyzer import IAnalyzer as BaseAnalyzer

import matplotlib as mpl
from idmtools.entities.iworkflow_item import IWorkflowItem
from idmtools.entities.simulation import Simulation

mpl.use('Agg')


class PopulationAnalyzer(BaseAnalyzer):

    def __init__(self, title='idm'):
        super().__init__(filenames=["output\\InsetChart.json"])
        print(title)

    def initialize(self):
        """
        Initialize our Analyzer. At the moment, this just creates our output folder
        Returns:

        """
        if not os.path.exists(os.path.join(self.working_dir, "output")):
            os.mkdir(os.path.join(self.working_dir, "output"))

    def map(self, data: Dict[str, Any], item: Union[IWorkflowItem, Simulation]) -> Any:
        """
        Extracts the Statistical Population, Data channel from InsetChart.
        Called for Each WorkItem/Simulation.

        Args:
            data: Data mapping str to content of file
            item: Item to Extract Data from(Usually a Simulation)

        Returns:

        """
        return data[self.filenames[0]]["Channels"]["Statistical Population"]["Data"]

    def reduce(self, all_data: Dict[Union[IWorkflowItem, Simulation], Any]) -> Any:
        """

        Create the Final Population JSON and Plot
        Args:
            all_data: Populate data from all the Simulations

        Returns:
            None
        """
        output_dir = os.path.join(self.working_dir, "output")
        with open(os.path.join(output_dir, "population.json"), "w") as fp:
            json.dump({str(s.uid): v for s, v in all_data.items()}, fp)
        import matplotlib.pyplot as plt
        fig = plt.figure()
        ax = fig.add_subplot(111)
        for pop in list(all_data.values()):
            ax.plot(pop)
        ax.legend([str(s.uid) for s in all_data.keys()])
        fig.savefig(os.path.join(output_dir, "population.png"))

This idmtools orchestration script runs both analyzers server-side with PlatformAnalysis:

from examples.ssmt.simple_analysis.analyzers.AdultVectorsAnalyzer import AdultVectorsAnalyzer
from examples.ssmt.simple_analysis.analyzers.PopulationAnalyzer import PopulationAnalyzer
from idmtools.core.platform_factory import Platform
from idmtools.analysis.platform_anaylsis import PlatformAnalysis

if __name__ == "__main__":
    platform = Platform('CALCULON')
    analysis = PlatformAnalysis(
        platform=platform, experiment_ids=["b3e4fceb-bb71-ed11-aa00-b88303911bc1"],
        analyzers=[PopulationAnalyzer, AdultVectorsAnalyzer], analyzers_args=[{'title': 'idm'}, {'name': 'global good'}],
        analysis_name="SSMT Analysis Simple 1",
        # You can pass any additional arguments needed to AnalyzerManager through the extra_args parameter
        extra_args=dict(max_workers=8)
    )

    analysis.analyze(check_status=True)
    wi = analysis.get_work_item()
    print(wi)

See idmtools.analysis.platform_anaylsis.PlatformAnalysis.

Environment

Modifying environment variables on platforms without native support

Some platforms do not support changing environment variables in a job definition. idmtools provides a utility through TemplatedScriptTask to allow you to still modify environment variables, as demonstrated in the following example.

Here is our model.py. It prints the environment variable “EXAMPLE”.

import os

print(os.environ['EXAMPLE'])

This is our idmtools orchestration script that defines our Python task and wrapper task with additional variables.

import os
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.python_task import PythonTask
from idmtools_models.templated_script_task import get_script_wrapper_unix_task, LINUX_DICT_TO_ENVIRONMENT


platform = Platform("CALCULON")
# here we define the task that will use the environment variables. In this example we have a simple Python script that prints the EXAMPLE environment variable
task = PythonTask(script_path="model.py")
# Get a task to wrap the script in a shell script. Which get_script_wrapper function you use depends on the platform's OS
wrapper_task = get_script_wrapper_unix_task(
    task=task,
    # and set some values here
    variables=dict(EXAMPLE='It works!')
)
# some platforms need a hint about where their script binary is. Usually this only applies to Unix platforms (Linux, Mac, etc.)
wrapper_task.script_binary = "/bin/bash"

# Now we define our experiment. We could just as easily use this wrapper in a templated simulation builder as well
experiment = Experiment.from_task(name=os.path.basename(__file__), task=wrapper_task)
experiment.run(wait_until_done=True)

Experiments

Adding simulations to an existing experiment

The following example creates and runs an experiment, then adds new simulations to it and runs them.

import os

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# For demonstration, first create and run an experiment with completed simulations
with Platform('Calculon', node_group="idm_48cores", priority="Highest"):
    # Create First Experiment
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("a"),
                                 [i * i for i in range(5)])
    model_path = os.path.join("..", "..", "python_model", "inputs", "python_model_with_deps", "Assets", "model.py")
    sims_template = TemplatedSimulations(base_task=JSONConfiguredPythonTask(script_path=model_path))
    sims_template.add_builder(builder=builder)

    experiment = Experiment.from_template(sims_template)
    experiment.run(wait_until_done=True)

    # You could start with experiment = Experiment.from_id(....)
    # create a new sweep for new simulations
    sims_template = TemplatedSimulations(base_task=JSONConfiguredPythonTask(script_path=model_path))
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("a"),
                                 [i * i for i in range(6, 10)])
    sims_template.add_builder(builder=builder)
    experiment.simulations.extend(sims_template)

    # If you are adding a large number of simulations through a template, the following is recommended
    # instead, to preserve generator use and keep the memory footprint low:
    #
    # sims_template.add_simulations(experiment.simulations)
    # experiment.simulations = sims_template

    # run all simulations in the experiment that have not run before and wait
    experiment.run(wait_until_done=True)

Adding simulations to an existing experiment with new common assets

The following example adds simulations that use new common assets to an existing experiment.

import os

from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# For demonstration, first create and run an experiment with completed simulations
with Platform('CALCULON'):
    # Create First Experiment
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("a"),
                                 [i for i in range(5)])
    model_path = os.path.join("..", "..", "python_model", "inputs", "python_model_with_deps", "Assets", "model.py")
    sims_template = TemplatedSimulations(base_task=JSONConfiguredPythonTask(script_path=model_path))
    sims_template.add_builder(builder=builder)

    experiment = Experiment.from_template(sims_template)
    experiment.run(wait_until_done=True)

    # IMPORTANT NOTE:
    # Currently it is important that you wait for existing simulations to finish provisioning before
    # changing assets later. idmtools cannot detect this state at the moment, and it can lead to
    # unexpected behaviour.


    # You could start with experiment = Experiment.from_id(...., copy_assets=True)
    # Changing the Common Assets
    experiment.assets = experiment.assets.copy()
    # Add new simulations to the experiment
    model_path = os.path.join("..", "..", "python_model", "inputs", "python_model_with_deps", "Assets", "newmodel2.py")
    sims_template = TemplatedSimulations(base_task=JSONConfiguredPythonTask(script_path=model_path))
    builder = SimulationBuilder()
    builder.add_sweep_definition(JSONConfiguredPythonTask.set_parameter_partial("a"),
                                 [i for i in range(6, 10)])
    sims_template.add_builder(builder=builder)
    experiment.simulations.extend(sims_template)

    # If you are adding a large number of simulations through a template, the following is recommended
    # instead, to preserve generator use and keep the memory footprint low:
    #
    # sims_template.add_simulations(experiment.simulations)
    # experiment.simulations = sims_template

    # run all simulations in the experiment that have not run before and wait
    experiment.run(wait_until_done=True, regather_common_assets=True)

Creating experiments with one simulation

When you want to run a single simulation instead of a set, you can create an experiment directly from a task.

from idmtools.core.platform_factory import Platform
from idmtools.entities.command_task import CommandTask
from idmtools.entities.experiment import Experiment

platform = Platform('CALCULON')
task = CommandTask(command="python --version")
experiment = Experiment.from_task(task=task, name="Example of experiment from task")
experiment.run(wait_until_done=True)

Creating an experiment with pre- and post-creation hooks

Prior to running an experiment or a work item, you can add pre or post creation hooks to the item.

import os
import sys
import typing
from idmtools.builders import SimulationBuilder
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.templated_simulation import TemplatedSimulations
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools.utils.entities import save_id_as_file_as_hook
from time import time
if typing.TYPE_CHECKING:
    from idmtools_platform_comps.comps_platform import COMPSPlatform

# Define our base task
task = JSONConfiguredPythonTask(script_path=os.path.join("..", "..", "python_model", "inputs", "python_model_with_deps",
                                                         "Assets", "model.py"), parameters=dict(c=0))
ts = TemplatedSimulations(base_task=task)

# Build sweeps for our simulations
builder = SimulationBuilder()

class setParam:
    def __init__(self, param):
        self.param = param

    def __call__(self, simulation, value):
        return simulation.task.set_parameter(self.param, value)


builder.add_sweep_definition(setParam("a"), [1, 2, 3])
builder.add_sweep_definition(setParam("b"), [1, 2])

ts.add_builder(builder)

# Create our experiment
experiment = Experiment.from_template(ts, name=os.path.split(sys.argv[0])[1])

# Add our pre-creation and post-creation hooks:
#   Our pre-creation hook adds the date as a tag to our experiment
#   Our post-creation hook saves the experiment ID within a file titled '{item.item_type}.{item.name}.id' (within current directory)
ran_at = str(time())

def add_date_as_tag(experiment: Experiment, platform: 'COMPSPlatform'):
    experiment.tags['date'] = ran_at

# Add a pre/post creation hook to either an experiment or a work item using the 'add_pre_creation_hook' or 'add_post_creation_hook' methods
experiment.add_pre_creation_hook(add_date_as_tag)
experiment.add_post_creation_hook(save_id_as_file_as_hook)

with Platform('CALCULON'):
    experiment.run(wait_until_done=True)
    sys.exit(0 if experiment.succeeded else -1)

Logging

Enabling/Disabling/Changing Log Level at Runtime

Sometimes you want to enable console logging or change the logging level directly in a script, without needing an idmtools.ini file. The following example shows how to do that.

from logging import getLogger

from idmtools.core.logging import setup_logging, IdmToolsLoggingConfig

# At any point in running you can import the setup logging to reset logging
# In this example, we enable the console logger at run time
logger = getLogger()
logger.debug("This will not be visible at the command line")
setup_logging(IdmToolsLoggingConfig(console=True, level='DEBUG', force=True))
logger.debug("You should be able to see this at the command line")

See Logging overview for details on configuring logging through the idmtools.ini.

Python

Adding items to the Python path

The example below runs a simple model that depends on a user-produced package. It uses a wrapper script to add the package to the PYTHONPATH environment variable so that it can be imported into model.py.

Here is our dummy package. It just has a variable we are going to use in model.py

a = "123"

Here is our model.py. It imports our package and then prints the variable defined in the package

import a_package
print(a_package.a)

This is our idmtools orchestration script that adds our package, defines our Python task, and wraps the task with a bash script.

import os

from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools_models.python.python_task import PythonTask
from idmtools_models.templated_script_task import TemplatedScriptTask, get_script_wrapper_unix_task, LINUX_PYTHON_PATH_WRAPPER


platform = Platform("CALCULON")
# This task can be any type of task that runs Python. Here we are running a simple model script that consumes the example
# package "a_package"
task = PythonTask(script_path="model.py", python_path='python3.6')
# add our library. On Comps, you could use RequirementsToAssetCollection as well
task.common_assets.add_asset("a_package.py")
# we request a wrapper script for Unix. The wrapper should match the computation platform's OS
# We also use the built-in LINUX_PYTHON_PATH_WRAPPER template, which modifies our PYTHONPATH to load libraries from the Assets/site-packages and Assets folders
wrapper_task: TemplatedScriptTask = get_script_wrapper_unix_task(task, template_content=LINUX_PYTHON_PATH_WRAPPER)
# we have to set the bash path remotely
wrapper_task.script_binary = "/bin/bash"

# Now we define our experiment. We could just as easily use this wrapper in a templated simulation builder as well
experiment = Experiment.from_task(name=os.path.basename(__file__), task=wrapper_task)
experiment.run(wait_until_done=True)

CLI reference

Templates

You can use the cookiecutter templates included with idmtools to get started with Python projects and idmtools. These templates provide a logical, reasonably standardized, but flexible project structure for doing and sharing data science work. To see the list of included cookiecutter templates, type the following at a command prompt.

$ idmtools init --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools init [OPTIONS] COMMAND [ARGS]...

  Commands to help start or extend projects through templating.

Options:
  --help  Show this message and exit.

Commands:
  data-science          A logical, reasonably standardized, but flexible...
  docker-science        This project is a tiny template for machine...
  reproducible-science  A boilerplate for reproducible and transparent...

CLI COMPS

The COMPS platform-related commands can be accessed with either idmtools comps or comps-cli. All COMPS commands require a target configuration block or alias to configure the connection to COMPS. See the top-level command help below:

$ idmtools comps --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools comps [OPTIONS] CONFIG_BLOCK COMMAND [ARGS]...

  Commands related to managing the COMPS platform.

  CONFIG_BLOCK - Name of configuration section or alias to load COMPS
  connection information from

Options:
  --help  Show this message and exit.

Commands:
  ac-exist          Check ac existing based on requirement file Args:...
  assetize-outputs  Allows assetizing outputs from the command line
  download          Allows Downloading outputs from the command line
  login             Login to COMPS
  req2ac            Create ac from requirement file Args: asset_tag: tag...
  singularity       Singularity commands

You can log in to a COMPS environment by using the idmtools comps CONFIG_BLOCK login command. See the help below:

$ idmtools comps CONFIG_BLOCK login --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools comps CONFIG_BLOCK login [OPTIONS]

  Login to COMPS

Options:
  --username TEXT  Username  [required]
  --password TEXT  Password
  --help           Show this message and exit.
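
For example, the following hypothetical invocation logs in to the Calculon alias (you are prompted for the password when it is not supplied):

$ idmtools comps CALCULON login --username user@example.com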

You can assetize outputs from the CLI by running idmtools comps CONFIG_BLOCK assetize-outputs:

$ idmtools comps CONFIG_BLOCK assetize-outputs --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools comps CONFIG_BLOCK assetize-outputs [OPTIONS]

  Allows assetizing outputs from the command line

Options:
  --pattern TEXT                  File patterns
  --exclude-pattern TEXT          File patterns
  --experiment TEXT               Experiment ids to assetize
  --simulation TEXT               Simulation ids to assetize
  --work-item TEXT                WorkItems ids to assetize
  --asset-collection TEXT         Asset Collection ids to assetize
  --dry-run / --no-dry-run        Gather a list of files that would be
                                  assetized instead of actually assetizing
  --wait / --no-wait              Wait on item to finish
  --include-assets / --no-include-assets
                                  Scan common assets of WorkItems and
                                  Experiments when filtering
  --verbose / --no-verbose        Enable verbose output in worker
  --json / --no-json              Outputs File list as JSON when used with dry
                                  run
  --simulation-prefix-format-str TEXT
                                  Simulation Prefix Format str. Defaults to
                                  '{simulation.id}'. For no prefix, pass a
                                  empty string
  --work-item-prefix-format-str TEXT
                                  WorkItem Prefix Format str. Defaults to ''
  --tag <TEXT TEXT>...            Tags to add to the created asset collection
                                  as pairs.
  --name TEXT                     Name of AssetizeWorkitem. If not provided,
                                  one will be generated
  --id-file / --no-id-file        Enable or disable writing out an id file
  --id-filename TEXT              Name of ID file to save build as. Required
                                  when id file is enabled
  --help                          Show this message and exit.
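
For example, the following hypothetical invocation assetizes all .out files from an experiment and waits for the work item to finish:

$ idmtools comps CALCULON assetize-outputs --experiment <experiment-id> --pattern "*.out" --wait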

You can download from the CLI by running idmtools comps CONFIG_BLOCK download:

$ idmtools comps CONFIG_BLOCK download --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools comps CONFIG_BLOCK download [OPTIONS]

  Allows Downloading outputs from the command line

Options:
  --pattern TEXT                  File patterns
  --exclude-pattern TEXT          File patterns
  --experiment TEXT               Experiment ids to filter for files to
                                  download
  --simulation TEXT               Simulation ids to filter for files to
                                  download
  --work-item TEXT                WorkItems ids to filter for files to
                                  download
  --asset-collection TEXT         Asset Collection ids to filter for files to
                                  download
  --dry-run / --no-dry-run        Gather a list of files that would be
                                  downloaded instead of actually downloading
  --wait / --no-wait              Wait on item to finish
  --include-assets / --no-include-assets
                                  Scan common assets of WorkItems and
                                  Experiments when filtering
  --verbose / --no-verbose        Enable verbose output in worker
  --json / --no-json              Outputs File list as JSON when used with dry
                                  run
  --simulation-prefix-format-str TEXT
                                  Simulation Prefix Format str. Defaults to
                                  '{simulation.id}'. For no prefix, pass a
                                  empty string
  --work-item-prefix-format-str TEXT
                                  WorkItem Prefix Format str. Defaults to ''
  --name TEXT                     Name of Download Workitem. If not provided,
                                  one will be generated
  --output-path TEXT              Output path to save zip
  --delete-after-download / --no-delete-after-download
                                  Delete the workitem used to gather files
                                  after download
  --extract-after-download / --no-extract-after-download
                                  Extract zip after download
  --zip-name TEXT                 Name of zipfile
  --help                          Show this message and exit.
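
For example, the following hypothetical invocation downloads output.json from every simulation in an experiment and extracts the zip locally:

$ idmtools comps CALCULON download --experiment <experiment-id> --pattern "output.json" --output-path ./downloads --extract-after-download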

CLI Examples

You can use the idmtools CLI to download the included Python example scripts from GitHub to a local folder using the idmtools gitrepo command. To see the list of commands and options for idmtools gitrepo, type the following at a command prompt:

$ idmtools gitrepo --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools gitrepo [OPTIONS] COMMAND [ARGS]...

  Contains command to download examples.

Options:
  --help  Show this message and exit.

Commands:
  download  Download files from GitHub repo to user location.
  peep      Display all current files/dirs of the repo folder (not...
  releases  Display all the releases of the repo.
  repos     Display all public repos of the owner.
  view      Display all idmtools available examples.

or view examples by typing idmtools examples list

$ idmtools examples list
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini

COMPSPlatform
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/ssmt
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/workitem
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/vistools

SSMTPlatform
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/ssmt
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/vistools

JSONConfiguredRTask
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/r_model

JSONConfiguredTask
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/python_model
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/load_lib

CommandTask
    - https://github.com/InstituteforDiseaseModeling/corvid-idmtools

PythonTask
    - https://github.com/InstituteforDiseaseModeling/idmtools/tree/v1.7.1/examples/load_lib

To see the list of commands and options for downloading examples, type the following at a command prompt:

$ idmtools gitrepo download --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools gitrepo download [OPTIONS]

  Download files from GitHub repo to user location.

  Args:     type: Object type (COMPSPlatform, PythonTask, etc)     url: GitHub
  repo files url     output: Local folder

  Returns: Files download count

Options:
  --type TEXT    Download examples by type (COMPSPlatform, PythonTask, etc)
  --url TEXT     Repo files url
  --output TEXT  Files download destination
  --help         Show this message and exit.

or

$ idmtools examples download --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools examples download [OPTIONS]

  Download examples from specified location.

  Args:     type: Object type (COMPSPlatform, PythonTask, etc)     url: GitHub
  repo files url     output: Local folder

  Returns: Files download count

Options:
  --type TEXT    Download examples by type (COMPSPlatform, PythonTask, etc)
  --url TEXT     Repo files url
  --output TEXT  Files download destination
  --help         Show this message and exit.
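
For example, the following hypothetical invocation downloads the PythonTask examples to a local folder:

$ idmtools examples download --type PythonTask --output ./examples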

Troubleshooting

You can use troubleshooting commands to get information about plugins (CLI, Platform, and Task) and to get detailed system information. To see the list of troubleshooting commands, type the following at a command prompt:

$ idmtools info --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools info [OPTIONS] COMMAND [ARGS]...

  Troubleshooting and debugging information

Options:
  --help  Show this message and exit.

Commands:
  plugins  Commands to get information about installed IDM-Tools plugins
  system   Provide an output with details about your current execution...

To see the list of troubleshooting commands and options for the plugins command, type the following at a command prompt:

$ idmtools info plugins --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools info plugins [OPTIONS] COMMAND [ARGS]...

  Commands to get information about installed IDM-Tools plugins

Options:
  --help  Show this message and exit.

Commands:
  cli               List CLI plugins
  platform          List Platform plugins
  platform-aliases  List Platform plugins configuration aliases
  task              List Task plugins

To see the list of troubleshooting options for the system command, type the following at a command prompt:

$ idmtools info system --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools info system [OPTIONS]

  Provide an output with details about your current execution platform and
  IDM-Tools install

Options:
  --copy-to-clipboard / --no-copy-to-clipboard
                                  Copy output to clipboard
  --no-format-for-gh / --format-for-gh
                                  When copying to clipboard, do we want to
                                  formatted for Github
  --issue / --no-issue            Copy data and format for github alias
  --output-filename TEXT          Output filename
  --help                          Show this message and exit.

To see the versions of idmtools and related modules, along with the plugins they provide, you can use the version command. Here is an example of the output:

$ idmtools version
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
idmtools                             Version: 1.7.10                          
  Plugins:
    CommandTask               
idmtools-cli                         Version: 1.7.10                          
idmtools-models                      Version: 1.7.10                          
  Plugins:
    JSONConfiguredPythonTask  
    JSONConfiguredRTask       
    JSONConfiguredTask        
    PythonTask                
    RTask                     
    ScriptWrapperTask         
    TemplatedScriptTask       
idmtools-platform-comps              Version: 1.7.10                          
  Plugins:
    COMPSPlatform             
    SSMTPlatform              
idmtools-platform-slurm              Version: 1.7.10                          
  Plugins:
    SlurmPlatform

To see a list of the predefined configurations from platform plugins, use the command:

$ idmtools info plugins platform-aliases
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
+---------------------------+-------------------------------------------------------------------------+
| Platform Plugin Aliases   | Configuration Options                                                   |
|---------------------------+-------------------------------------------------------------------------|
| SLURM_LOCAL               | {'mode': 'local', 'job_directory': '/home/docs'}                        |
| SLURM_BRIDGED             | {'mode': 'bridged', 'job_directory': '/home/docs'}                      |
| BELEGOST                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'Belegost'}      |
| CALCULON                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| IDMCLOUD                  | {'endpoint': 'https://comps.idmod.org', 'environment': 'IDMcloud'}      |
| NDCLOUD                   | {'endpoint': 'https://comps.idmod.org', 'environment': 'NDcloud'}       |
| BMGF_IPMCLOUD             | {'endpoint': 'https://comps.idmod.org', 'environment': 'BMGF_IPMcloud'} |
| QSTART                    | {'endpoint': 'https://comps.idmod.org', 'environment': 'Qstart'}        |
| BAYESIAN                  | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Bayesian'}     |
| SLURMSTAGE                | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| CUMULUS                   | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Cumulus'}      |
| SLURM                     | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| SLURM2                    | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BOXY                      | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BELEGOST_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'Belegost'}      |
| CALCULON_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| IDMCLOUD_SSMT             | {'endpoint': 'https://comps.idmod.org', 'environment': 'IDMcloud'}      |
| NDCLOUD_SSMT              | {'endpoint': 'https://comps.idmod.org', 'environment': 'NDcloud'}       |
| BMGF_IPMCLOUD_SSMT        | {'endpoint': 'https://comps.idmod.org', 'environment': 'BMGF_IPMcloud'} |
| QSTART_SSMT               | {'endpoint': 'https://comps.idmod.org', 'environment': 'Qstart'}        |
| BAYESIAN_SSMT             | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Bayesian'}     |
| SLURMSTAGE_SSMT           | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| CUMULUS_SSMT              | {'endpoint': 'https://comps2.idmod.org', 'environment': 'Cumulus'}      |
| SLURM_SSMT                | {'endpoint': 'https://comps.idmod.org', 'environment': 'Calculon'}      |
| SLURM2_SSMT               | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
| BOXY_SSMT                 | {'endpoint': 'https://comps2.idmod.org', 'environment': 'SlurmStage'}   |
+---------------------------+-------------------------------------------------------------------------+

idmtools includes a command-line interface (CLI) with options and commands to assist with getting started, managing and monitoring, and troubleshooting simulations and experiments. After you’ve installed idmtools, you can view the available options and commands by typing the following at a command prompt:

$ idmtools --help
INI File Used: /home/docs/checkouts/readthedocs.org/user_builds/institute-for-disease-modeling-idmtools/checkouts/stable/docs/idmtools.ini
Usage: idmtools [OPTIONS] COMMAND [ARGS]...

  Allows you to perform multiple idmtools commands.

Options:
  --debug / --no-debug  When selected, enables console level logging
  --help                Show this message and exit.

Commands:
  comps        COMPS Related Commands
  config       Contains commands related to the creation of idmtools.ini.
  examples     Display a list of examples organized by plugin type
  experiment   Contains commands related to experiments for Local Platform.
  gitrepo      Contains commands related to examples download
  info         Troubleshooting and debugging information
  init         Commands to help start or extend projects through templating.
  init-export  Export list of project templates
  package      Contains commands related to package versions
  simulation   Contains commands related to simulations.
  slurm        SLURM Related Commands
  version      List version info about idmtools and plugins

Convert scripts from DTK-Tools

Understanding some of the similarities and differences between DTK-Tools and idmtools will help when converting scripts from DTK-Tools to use within idmtools.

Configuration .ini files have different names: simtools.ini in DTK-Tools and idmtools.ini in idmtools. simtools.ini is required, while idmtools.ini is optional.

Platform configuration, HPC or local, in DTK-Tools is set using the SetupParser class while in idmtools the Platform class is used.

Simulation configuration, such as intervention parameters and reports, is set in DTK-Tools using the DTKConfigBuilder class, while in idmtools it’s configured through a base task object.

Configuration .ini files

The following diagram illustrates some of the differences between the required simtools.ini used in DTK-Tools and the optional idmtools.ini used in idmtools.

_images/5653c6118518043be343d6b5c87662f5f77b4344d0f2e90ea9cf07f13b617d1c.svg

Platform configuration

In addition to using INI files for platform configuration parameters, you can also use Python class objects: SetupParser in DTK-Tools and Platform in idmtools. If platform configuration parameters are configured both in an INI file and in a Python class object, the parameters in the Python object take priority.

DTK-Tools:

SetupParser.default_block = 'HPC'

idmtools:

platform = Platform("Belegost")

When using Platform you can specify a predefined configuration alias, such as Belegost, when using the COMPS platform. To see a list of aliases, use the cli command idmtools info plugins platform-aliases.

Simulation configuration

DTKConfigBuilder in DTK-Tools is used for setting the intervention parameters and reports for simulations run with DTK-Tools while idmtools uses task objects. For example, when using emodpy the EMODTask class is used.

_images/364abdbe0d0dc1baa41c2c85a1ff866cc85a129a6850f5b6eb9c3ead8711183a.svg
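
For illustration, a minimal sketch of task-based configuration in idmtools, using the JSONConfiguredPythonTask seen in earlier examples (with emodpy, EMODTask plays the same role):

from idmtools_models.python.json_python_task import JSONConfiguredPythonTask

# simulation configuration lives on the task object rather than a DTKConfigBuilder
task = JSONConfiguredPythonTask(script_path="model.py", parameters=dict(a=1))
task.set_parameter("b", 2)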

Example

To see an applied example of the previously described information, see a DTK-Tools CSV analyzer converted to idmtools, along with additional information on converting analyzers, here: Convert analyzers from DTK-Tools.

Frequently asked questions

As you get started with idmtools, you may have questions. The most common questions are answered below. If you are using idmtools with emodpy packages, see the FAQs from those packages for additional guidance.

Why am I receiving the error: “ImportError: DLL load failed: The specified module could not be found.”?

This error can be caused when using Microsoft Visual C++ runtime version 14.0.24151.1 and running analyzers, such as test_ssmt_platforanalysis.py. Workarounds are to either use pip install msvc-runtime to install the latest Microsoft Visual C++ runtime version or to install the latest Microsoft Build Tools.

Why am I getting an “ImportError: cannot import name ‘NoReturn’” error when importing idmtools?

Because a version of Python earlier than 3.6.5 64-bit is installed somewhere and you are running with it, perhaps accidentally.

How do I specify the number of cores?

You can specify the num_cores parameter in COMPSPlatform. It is not an EMOD configuration parameter.
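
For example, a minimal sketch (extra keyword arguments to the Platform factory are passed through to the COMPSPlatform configuration):

from idmtools.core.platform_factory import Platform

platform = Platform('CALCULON', num_cores=4)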

Glossary

The following terms describe both the features and functionality of the idmtools software and information relevant to using idmtools.

analyzer

Functionality that uses the MapReduce framework to process large data sets in parallel, typically on a high-performance computing (HPC) cluster. For example, if you would like to focus on specific data points from all simulations in one or more experiments then you can do this using analyzers with idmtools and plot the final output.

asset collection

A collection of user created input files, such as demographics, temperature, weather, binaries, and overlay files. These files are stored in COMPS and can be available for use by other users.

assets

See asset collection.

builder

A function, and a list of values with which to call that function, used to sweep through parameter values in a simulation.
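
For example, a minimal sketch using SimulationBuilder, as in the recipes above:

from idmtools.builders import SimulationBuilder

builder = SimulationBuilder()
# sweep parameter "a" over the values 0, 1, 2; the callback returns the tags to record
builder.add_sweep_definition(lambda simulation, value: simulation.task.set_parameter("a", value), range(3))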

calibration

The process of adjusting the parameters of a simulation to better match the data from a particular time and place.

EMOD

An agent-based mechanistic disease transmission model built by IDM that can be used with idmtools. See the EMOD GitHub repo.

entity

Each of the interfaces or classes that are well-defined models, types, and validations for idmtools items, such as simulations, analyzers, or tasks.

experiment

Logical grouping of simulations. This allows for managing numerous simulations as a single unit or grouping.

high-performance computing (HPC)

The use of parallel processing for running advanced applications efficiently, reliably, and quickly.

parameter sweep

An iterative process in which simulations are run repeatedly using different values of the parameter(s) of choice. This process enables the modeler to determine a parameter’s “best” value or range of values.

platform

The computing resource on which the simulation runs. See Supported platforms for more information on those that are currently supported.

server-side modeling tools (SSMT)

Modeling tools used with COMPS that handle computation on the server side, rather than the client side, to speed up analysis.

simulation

An individual run of a model. Generally, multiple simulations are run as part of an experiment.

suite

Logical grouping of experiments. This allows for managing multiple experiments as a single unit or grouping.

task

The individual actions that are processed for each simulation.

Changelog

0.1.0

Analyzers
  • #0060 - Analyzer base class

Bugs
  • #0095 - idmtools is not working for python 3.6

  • #0096 - pytest (and pytest-runner) should be installed by setup

  • #0105 - UnicodeDecodeError when run python example in LocalPlatform mode

  • #0114 - It should be possible to set base_simulation in the PythonExperiment constructor

  • #0115 - PythonSimulation constructor should abstract the parameters dict

  • #0124 - Can not run teststest_python_simulation.py from console

  • #0125 - relative_path for AssetCollection does not work

  • #0126 - Same test in issue #125 does not working for localPlatform

  • #0129 - new python model root node changed from “config” to “parameters”

  • #0137 - PythonExperiment fails if pass assets

  • #0138 - test_sir.py does not set parameter

  • #0142 - experiment.batch_simulations seems not to be batching

  • #0143 - COMPSPlatform’s refresh_experiment_status() get called too much from ExperimentManager’s wait_till_done() mathod

  • #0150 - missing pandas package

  • #0151 - log throw error from IPersistanceService.py’s save method

  • #0161 - tests/test_python_simulation.py’s test_add_dirs_to_assets_comps() return different asset files for windows and linux

  • #0171 - Workflow: fix loop detection

  • #0203 - Running new builds on Linux fails in Bamboo due to datapostgres-data file folder permissions

  • #0206 - test_python_simulation.py failed for all local test in windows

CLI
  • #0007 - Command line functions definition

  • #0118 - Add the printing of children in the EntityContainer

Configuration
  • #0047 - Configuration file read on a per-folder basis

  • #0048 - Validation for the configuration file

  • #0049 - Configuration file is setting correct parameters in platform

Core
  • #0006 - Service catalog

  • #0014 - Package organization and pre-requisites

  • #0081 - Allows the sweeps to be created in arms

  • #0087 - Raise an exception if we have 2 files with the same relative path in the asset collection

  • #0091 - Refactor the Experiment/Simulation objects to not persist the simulations

  • #0092 - Generalize the simulations/experiments for Experiment/Suite

  • #0102 - [Local Runner] Retrieve simulations for experiment

  • #0107 - LocalPlatform does not detect duplicate files in AssetCollectionFile for pythonExperiment

  • #0140 - Fetch simulations at runtime

  • #0148 - Add python tasks

  • #0180 - switch prettytable for tabulate

Documentation
  • #0004 - Notebooks exploration for examples

  • #0085 - Setup Sphinx and GitHub pages for the docs

  • #0090 - “Development installation steps” missing some steps

Models
  • #0008 - Which models support out of the box?

  • #0136 - Create an envelope argument for the PythonSimulation

Platforms
  • #0068 - [Local Runner] Simulation status monitoring

  • #0069 - [Local Runner] Database

  • #0094 - Batch and parallelize simulation creation in the COMPSPlatform

1.0.0

Analyzers
  • #0034 - Create the Plotting step

  • #0057 - Output files retrieval

  • #0196 - Filtering

  • #0197 - Select_simulation_data

  • #0198 - Finalize

  • #0279 - Port dtk-tools analyze system to idmtools

  • #0283 - Fix up all platform-based test due to analyzer/platform refactor/genericization

  • #0337 - Change AnalyzeManager to support passing ids (Experiment, Simulation, Suite)

  • #0338 - Two AnalyzeManager files - one incorrect and needs to be removed

  • #0340 - Cleanup DownloadAnalyzer

  • #0344 - AnalyzeManager configuration should be option parameter

  • #0589 - Rename suggestion: example_analysis_multiple_cases => example_analysis_MultipleCases

  • #0592 - analyzers error on platform.get_files for COMPS: argument of type ‘NoneType’ is not iterable

  • #0594 - analyzer error multiprocessing pool StopIteration error in finalize_results

  • #0614 - Convenience function to exclude items in analyze manager

  • #0619 - Ability to get exp sim object ids in analyzers

Bugs
  • #0124 - Can not run teststest_python_simulation.py from console

  • #0125 - relative_path for AssetCollection does not work

  • #0129 - new python model root node changed from “config” to “parameters”

  • #0142 - experiment.batch_simulations seems not to be batching

  • #0143 - COMPSPlatform’s refresh_experiment_status() get called too much from ExperimentManager’s wait_till_done() mathod

  • #0150 - missing pandas package

  • #0184 - Missing ‘data’ dir for test_experiment_manager test. (TestPlatform)

  • #0223 - UnicodeDecodeError for testcases in test_dtk.py when run with LocalPlatform

  • #0236 - LocalRunner: ExperimentsClient get_all method should have parameter ‘tags’ not ‘tag’

  • #0265 - load_files for DTKExperiment create nested ‘parameters’ in config.json

  • #0266 - load_files for demographics.json does not work

  • #0272 - diskcache objects cause cleanup failure if used in failing processes

  • #0294 - Docker containers failed to start if they are created but stopped

  • #0299 - Sometime in Windows command line, local docker runner stuck and no way to stop from command line

  • #0302 - Local Platform delete is broken

  • #0318 - Postgres Connection error on Local Platform

  • #0320 - COMPSPlatform Asset handling - currently DuplicatedAssetError content is not same

  • #0323 - idmtools is not retro-compatible with pre-idmtools experiments

  • #0332 - with large number of simulations, local platform either timeout on dramatiq or stuck on persistamceService save method

  • #0339 - Analyzer tests fails on AnalyzeManager analyze len(self.potential_items) == 0

  • #0341 - AnalyzeManager Runtime error on worker_pool

  • #0346 - UnknownItemException for analyzers on COMPSPlatform PythonExperiments

  • #0350 - RunTask in local platform should catch exception

  • #0351 - AnalyzeManager finalize_results Process cannot access the cache.db because it is being used by another process

  • #0352 - Current structure of code leads to circular dependencies or classes as modules

  • #0367 - Analyzer does not work with reduce method with no hashable object

  • #0375 - AnalyzerManager does not work for case to add experiment to analyzermanager

  • #0376 - AnalyzerManager does not work for simulation

  • #0378 - experiment/simulation display and print are messed up in latest dev

  • #0386 - Local platform cannot create more than 20 simulations in a given experiment

  • #0398 - Ensure that redis and postgres ports work as expected

  • #0399 - PopulaionAnalyzer does not return all items in reduce mathod in centos platform

  • #0424 - ExperimentBuilder’s add_sweep_definition is not flexible enough to take more parameters

  • #0427 - Access to the experiment object in analyzers

  • #0453 - cli: “idmtools local down –delete-data” not really delete any .local_data in user default dir

  • #0458 - There is no way to add custom tags to simulations

  • #0465 - BuilderExperiment for sweep “string” is wrong

  • #0545 - pymake docker-local always fail in centos

  • #0553 - BLOCKING: idmtools_model_r does not get built with make setup-dev

  • #0560 - docker-compose build does not work for r-model example

  • #0562 - workflow_item_operations get workitem querycriteria fails

  • #0564 - typing is missing in asset_collection.py which almost break every tests

  • #0565 - missing ‘copy’ in local_platform.py

  • #0566 - test_tasks.py fail for case test_command_is_required

  • #0567 - ‘platform_supports’ is missing for test_comps_plugin.py in idmtools_platform_comps/tests

  • #0570 - webui for localhost:5000 got 403 error

  • #0572 - python 3.7.3 less version will fail for task type changing

  • #0585 - print(platform) throws exception for Python 3.6

  • #0588 - Running the dev installation in a virtualenv “installs” it globally

  • #0598 - CSVAnalyzer pass wrong value to parse in super().__init__ call

  • #0602 - Analyzer doesn’t work for my Python SEIR model

  • #0605 - When running multiple analyzers together, ‘data’ in one analyzer should not contains data from other analyzer

  • #0606 - can not import cached_property

  • #0608 - Cannot add custom tag to AssetCollection in idmtools

  • #0613 - idmtools webui does not working anymore

  • #0616 - AssetCollection pre_creation failed if no tag

  • #0617 - AssetCollection’s find_index_of_asset is wrong

  • #0618 - analyzer-manager should fail if map status return False

  • #0641 - Remove unused code in the python_requirements_ac

  • #0644 - Platform cannot run workitem directly

  • #0646 - platform.get_items(ac) not return tags

  • #0667 - analyzer_manager could stuck on _run_and_wait_for_reducing

CLI
  • #0009 - Boilerplate command

  • #0118 - Add the printing of children in the EntityContainer

  • #0187 - Move the CLI package to idmtools/cli

  • #0190 - Add a platform attribute to the CLI commands

  • #0191 - Create a PlatformFactory

  • #0241 - CLI should be distinct package and implement as plugins

  • #0251 - Setup for the CLI package should provide a entrypoint for easy use of commands

  • #0252 - Add –debug to cli main level

Configuration
  • #0248 - Logging needs to support user configuration through the idmtools.ini

  • #0392 - Improve IdmConfigParser: make decorator for ensure_ini() method…

  • #0597 - Platform should not be case sensitive.

Core
  • #0032 - Create NextPointAlgorithm Step

  • #0042 - Stabilize the IStep object

  • #0043 - Create the generic Workflow object

  • #0044 - Implement validation for the Steps of a workflow based on Marshmallow

  • #0058 - Filtering system for simulations

  • #0081 - Allows the sweeps to be created in arms

  • #0091 - Refactor the Experiment/Simulation objects to not persist the simulations

  • #0141 - Standard Logging throughout tools

  • #0169 - Handle 3.6 requirements automatically

  • #0172 - Decide what state to store for tasks

  • #0173 - workflows: Decide on state storage scheme

  • #0174 - workflows: Reimplement state storage

  • #0175 - workflows: Create unit tests of core classes and behaviors

  • #0176 - workflows: reorganize files into appropriate repo/directory

  • #0180 - switch prettytable for tabulate

  • #0200 - Platforms should be plugins

  • #0238 - Simulations of Experiment should be made pickle ignored

  • #0244 - Inputs values needs to be validated when creating a Platform

  • #0257 - CsvExperimentBuilder does not handle csv field with empty space

  • #0268 - demographics filenames should be loaded to asset collection

  • #0274 - Unify id attribute naming scheme

  • #0281 - Improve Platform to display selected Block info when creating a platform

  • #0297 - Fix issues with platform factory

  • #0308 - idmtools: Module names should be consistent

  • #0315 - Basic support of suite in the tools

  • #0357 - ExperimentPersistService.save are not consistent

  • #0359 - SimulationPersistService is not used in Idmtools

  • #0361 - assets in Experiment should be made “pickle-ignore”

  • #0362 - base_simulation in Experiment should be made “pickle-ignore”

  • #0368 - PersistService should support clear() method

  • #0369 - The method create_simulations of Experiment should consider pre-defined max_workers and batch_size in idmtools.ini

  • #0370 - Add unit test for deepcopy on simulations

  • #0371 - Wrong type for platform_id in IEntity definition

  • #0391 - Improve Asset and AssetCollection classes by using @dataclass (field) for clear comparison

  • #0394 - Remove the ExperimentPersistService

  • #0438 - Support pulling Eradication from URLs and bamboo

  • #0518 - Add a task class.

  • #0520 - Rename current experiment builders to sweep builders

  • #0526 - Create New Generic Experiment Class

  • #0527 - Create new Generic Simulation Class

  • #0528 - Remove old Experiments/Simulations

  • #0529 - Create New Task API

  • #0530 - Rename current model api to simulation/experiment API.

  • #0538 - Refactor platform interface into subinterfaces

  • #0681 - idmtools should have way to query comps with filter

Developer/Test
  • #0631 - Ensure setup.py is consistent throughout

Documentation
  • #0100 - Installation steps documented for users

  • #0312 - idmtools: there is a typo in README

  • #0360 - The tools should refer to “EMOD” not “DTK”

  • #0474 - Stand alone builder

  • #0486 - Overview of the analysis in idmtools

  • #0510 - Local platform options

  • #0512 - SSMT platform options

  • #0578 - Add installation for users

  • #0593 - Simple Python SEIR model demo example

  • #0632 - Update idmtools_core setup.py to remove model emod from idm install

Feature Request
  • #0061 - Built-in DownloadAnalyzer

  • #0064 - Support of CSV files

  • #0070 - [Local Runner] Output files serving

  • #0233 - Add local runner timeout

  • #0437 - Prompt users for docker credentials when not available

  • #0603 - Implement install custom requirement libs to asset collection with WorkItem

Models
  • #0021 - Python model

  • #0024 - R Model support

  • #0053 - Support of demographics files

  • #0212 - Models should be plugins

  • #0287 - Add info about support models/docker support to platform

  • #0288 - Create DockerExperiment and subclasses

  • #0519 - Move experiment building to ExperimentBuilder

  • #0521 - Create Generic Dictionary Config Task

  • #0522 - Create PythonTask

  • #0523 - Create PythonDictionaryTask

  • #0524 - Create RTask

  • #0525 - Create EModTask

  • #0535 - Create DockerTask

Platforms
  • #0025 - LOCAL Platform

  • #0027 - SSMT Platform

  • #0094 - Batch and parallelize simulation creation in the COMPSPlatform

  • #0122 - Ability to create an AssetCollection based on a COMPS asset collection id

  • #0130 - User configuration and data storage location

  • #0186 - The local_runner client should move to the idmtools package

  • #0194 - COMPS Files retrieval system

  • #0195 - LOCAL Files retrieval system

  • #0221 - Local runner for experiment/simulations have different file hierarchy than COMPS

  • #0254 - Local Platform Asset should be implemented via API or Docker socket

  • #0264 - idmtools_local_runner’s tasks/run.py should have better handling of unhandled exceptions

  • #0276 - Docker services should be started for end-users without needing to use docker-compose

  • #0280 - Generalize sim/exp/suite format of ISimulation, IExperiment, IPlatform

  • #0286 - Add special GPU queue to Local Platform

  • #0305 - Create a website for local platform

  • #0306 - AssetCollection’s assets_from_directory logic is wrong if flatten and relative path are set at the same time

  • #0313 - idmtools: MAX_SUBDIRECTORY_LENGTH = 35 should be made Global in COMPSPlatform definition

  • #0314 - Fix local platform to work with latest analyze/platform updates

  • #0316 - Integrate website with Local Runner Container

  • #0321 - COMPSPlatform _retrieve_experiment errors on experiments with and without suites

  • #0329 - Experiment level status

  • #0330 - Paging on simulation/experiment APIs for better UI experience

  • #0333 - ensure pyComps allows compatible releases

  • #0364 - Local platform should use production artifactory for docker images

  • #0381 - Support Work Items in COMPS Platform

  • #0387 - Local platform webUI only shows simulations up to 20

  • #0393 - local platform tests keep getting EOFError while logger is in DEBUG and console is on

  • #0405 - Support analysis of data from Work Items in Analyze Manager

  • #0407 - Support Service Side Analysis through SSMT

  • #0447 - Set limitation for docker container’s access to memory

  • #0532 - Make updates to ExperimentManager/Platform to support tasks

  • #0540 - Create initial SSMT Plaform from COMPS Platform

  • #0596 - COMPSPlatform.get_files(item,..) not working for Experiment or Suite

  • #0635 - Update SSMT base image

  • #0639 - Add a way for the python_requirements_ac to use additional wheel file

  • #0676 - ssmt missing QueryCriteria support

  • #0677 - ssmt: refresh_status returns None

User Experience
  • #0457 - Option to analyze failed simulations

1.0.1

Analyzers
  • #0778 - Add support for context platforms to analyzer manager

Bugs
  • #0637 - pytest: ValueError: I/O operation on closed file, Printed at the end of tests.

  • #0663 - SSMT PlatformAnalysis can not put 2 analyzers in same file as main entry

  • #0696 - Rename num_retires to num_retries on COMPS Platform

  • #0702 - Can not analyze workitem

  • #0739 - Logging should load defaults when default config block is missing

  • #0741 - MAX_PATH issues with RequirementsToAssetCollection WI create_asset_collection

  • #0752 - type hint in analyzer_manager is wrong

  • #0758 - Workitem config should be validated on WorkItem for PythonAsset Collection

  • #0776 - Fix hook execution order for pre_creation

  • #0779 - Additional Sims are not being detected on TemplatedSimulations

  • #0788 - Correct requirements on core

  • #0791 - Missing asset file with RequirementsToAssetCollection

Core
  • #0343 - Genericize experiment_factory to work for other items

  • #0611 - Consider excluding idmtools.log and COMPS_log.log on SSMT WI submission

  • #0737 - Remove standalone builder in favor of regular python

Developer/Test
  • #0083 - Setup python linting for the Pull requests

  • #0671 - Python Linting

  • #0735 - Tag or remove local tests in idmtools-core tests

  • #0736 - Mark set of smoke tests to run in github actions

  • #0773 - Move model-emod to new repo

  • #0794 - build idmtools_platform_local fails with idmtools_webui error

Documentation
  • #0015 - Add cookiecutter projects

  • #0423 - Create a clear document on what features are provided by what packages

  • #0473 - Create sweep without builder

  • #0476 - ARM builder

  • #0477 - CSV builder

  • #0478 - YAML builder

  • #0487 - Creation of an analyzer

  • #0488 - Base analyzer - Constructor

  • #0489 - Base analyzer - Filter function

  • #0490 - Base analyzer - Parsing

  • #0491 - Base analyzer - Working directory

  • #0492 - Base analyzer - Map function

  • #0493 - Base analyzer - Reduce function

  • #0494 - Base analyzer - per group function

  • #0495 - Base analyzer - Destroy function

  • #0496 - Features of AnalyzeManager - Overview

  • #0497 - Features of AnalyzeManager - Partial analysis

  • #0498 - Features of AnalyzeManager - Max items

  • #0499 - Features of AnalyzeManager - Working directory forcing

  • #0500 - Features of AnalyzeManager - Adding items

  • #0501 - Built-in analyzers - InsetChart analyzer

  • #0502 - Built-in analyzers - CSV Analyzer

  • #0503 - Built-in analyzers - Tags analyzer

  • #0504 - Built-in analyzers - Download analyzer

  • #0508 - Logging and Debugging

  • #0509 - Global parameters

  • #0511 - COMPS platform options

  • #0629 - Update docker endpoint on ssmt/local platform to use external endpoint for pull/running

  • #0630 - Investigate packaging idmtools as wheel file

  • #0714 - Document the Versioning details

  • #0717 - Sweep Simulation Builder

  • #0720 - Documentation on Analyzing Failed experiments

  • #0721 - AddAnalyzer should have an example in its self-documentation

  • #0722 - CSVAnalyzer should have an example in its self-documentation

  • #0723 - DownloadAnalyzer should have an example in its self-documentation

  • #0724 - PlatformAnalysis should have an explanation of its use documented

  • #0727 - SimulationBuilder Sweep builder documentation

  • #0734 - idmtools does not install dataclasses on python3.6

  • #0751 - Switch to apidoc generated RSTs for modules and remove from source control

Feature Request
  • #0059 - Chaining of Analyzers

  • #0097 - Ability to batch simulations within simulation

  • #0704 - There is no way to load custom wheel using the RequirementsToAssets utility

  • #0784 - Remove default node_group value ‘emod_abcd’ from platform

  • #0786 - Improve Suite support

Platforms
  • #0277 - Need way to add tags to COMPSPlatform ACs after creation

  • #0638 - Change print statement to logger in python_requirements_ac utility

  • #0640 - Better error reporting when the python_requirements_ac fails

  • #0651 - A user should not need to specify the default SSMT image

  • #0688 - Load Custom Library Utility should support install packages from Artifactory

  • #0705 - Should have way to regenerate AssetCollection id from RequirementsToAssetCollection

  • #0757 - Set PYTHONPATH on Slurm

User Experience
  • #0760 - Email for issues and feature requests

  • #0781 - Suites should support run on object

  • #0787 - idmtools should print experiment id by default in console

1.1.0

Additional Changes
  • #0845 - Sprint 1 Retrospective Results

Bugs
  • #0430 - test_docker_operations.test_port_taken_has_coherent_error fails in Linux VM with no host machine

  • #0650 - analyzer_manager.py _run_and_wait_for_mapping fail frequently in bamboo

  • #0706 - Correct the number of simulations being submitted in the progress bar

  • #0846 - Checking for platform not installed

  • #0872 - python executable is not correct for slurm production

CLI
  • #0342 - Add list of tasks to cli

  • #0543 - Develop idm cookiecutter templates

  • #0820 - Add examples url to plugins specifications and then each plugin if they have examples

  • #0869 - CLI: idmtools gitrepo view - CommandTask points to /corvid-idmtools

Core
  • #0273 - Add kwargs functionality to CacheEnabled

  • #0818 - Create Download Examples Core Functionality

  • #0828 - Add a master plugin registry

Developer/Test
  • #0652 - Packing process should be fully automated

  • #0731 - Add basic testing to Github Actions to Pull Requests

  • #0785 - Add a miniconda agent to the bamboo testing of idmtools

  • #0833 - Add emodpy to idm and full extra installs in core

  • #0844 - For make setup-dev, we may want to put login to artifactory first

Documentation
  • #0729 - Move local platform worker container to Github Actions

  • #0814 - High Level Diagram of Packages/Repos for idmtools

  • #0858 - Fix doc publish to ghpages

  • #0861 - emodpy - add updated api diagram (API class specifications) to architecture doc

Platforms
  • #0728 - Restructure local platform docker container build for Github Action

  • #0730 - Move SSMT Image build to github actions

  • #0826 - SSMT Build as part of GithubActions

User Experience
  • #0010 - Configuration file creation command

  • #0684 - Create process for Changelog for future releases

  • #0819 - Create Download Examples CLI Command

  • #0821 - Provide plugin method to get Help URLs for plugin

1.2.0

Bugs
  • #0859 - After installing idmtools, still cannot find module ‘idmtools’

  • #0873 - Task Plugins all need a get_type

  • #0877 - Change RequirementsToAssetCollection to link AssetCollection and retrieve Id more reliably

  • #0881 - With CommandTask, experiment must have an asset to run

  • #0882 - CommandTask totally ignores common_assets

  • #0893 - CommandTask: with transient asset hook, it will ignore user’s transient_assets

Developer/Test
  • #0885 - Platform to lightly execute tasks locally to enable better testing of Task life cycle

Documentation
  • #0482 - Running experiments locally

  • #0768 - Update breadcrumbs for docs

  • #0860 - Create .puml files for UML doc examples within docs, add new files to existing .puml in diagrams directory, link to files

  • #0867 - Examples - document cli download experience for example scripts

  • #0870 - CLI - update documentation to reflect latest changes

  • #0875 - Enable JSON Documentation Builds on Help for future Help Features

  • #0889 - Parameter sweeps with EMOD

  • #0896 - Add version to docs build

  • #0903 - Add version to documentation

Feature Request
  • #0832 - Implement underlying API needed for reload_from_simulation

  • #0876 - Add option to optionally rebuild tasks on reload

  • #0883 - Add new task type TemplateScriptTask to support Templated Scripts

Platforms
  • #0692 - Get Docker Public Repo naming aligned with others

User Experience
  • #0713 - Move all user output to customer logger

1.2.2

Dependencies
  • #0929 - Update psycopg2-binary requirement from ~=2.8.4 to ~=2.8.5

  • #0930 - Bump pandas from 0.24.2 to 1.0.5

  • #0931 - Bump docker from 4.0.1 to 4.2.1

  • #0932 - Update beautifulsoup4 requirement from ~=4.8.2 to ~=4.9.1

  • #0933 - Update pytest requirement from ~=5.4.1 to ~=5.4.3

  • #0942 - Update pyperclip requirement from ~=1.7 to ~=1.8

  • #0943 - Update packaging requirement from ~=20.3 to ~=20.4

1.3.0

Bugs
  • #0921 - PlatformAnalysis requires login before execution

  • #0937 - RequirementsToAssetCollection fails with Max length

  • #0946 - Upgrade pycomps to 2.3.7

  • #0972 - Template script wrapper task should proxy calls where possible

  • #0984 - Make idmtools_metadata.json default to off

Documentation
  • #0481 - Overview of the local platform

  • #0483 - Monitoring local experiments

  • #0910 - Add documentation on plotting analysis output using matplotlib as an example

  • #0925 - Platform Local - add documentation (getting started, run example, etc)

  • #0965 - Add Analysis Output Format Support Table

  • #0969 - Create base documentation for creating a new platform plugin

Feature Request
  • #0830 - Support for python 3.8

  • #0924 - YamlSimulationBuilder should accept a single function to be mapped to all values

Models
  • #0834 - Add a COVASIM example with idmtools

Platforms
  • #0852 - Add emodpy to SSMT image

User Experience
  • #0682 - Support full query criteria on COMPS items

1.4.0

Bugs
  • #1012 - Asset.py length returns wrong value

  • #1034 - AssetCollections should not be mutable after save

  • #1035 - RequirementsToAssetCollection run returns same ac_id between SLURM and COMPS

  • #1046 - print(ac) causes maximum recursion depth exceeded

  • #1047 - datetime type is missing from IDMJSONEncoder

  • #1048 - Refresh Status bug on additional columns

  • #1049 - The text should be generic not specific to asset collection in method from_id(…)

Dependencies
  • #1007 - Update flask-sqlalchemy requirement from ~=2.4.3 to ~=2.4.4

  • #1009 - Update matplotlib requirement from ~=3.2.2 to ~=3.3.0

  • #1015 - Bump pandas from 1.0.5 to 1.1.0

  • #1024 - Update pytest requirement from ~=6.0.0 to ~=6.0.1

  • #1031 - Update yaspin requirement from ~=0.18.0 to ~=1.0.0

  • #1032 - Update tqdm requirement from ~=4.48.1 to ~=4.48.2

  • #1033 - Update pygithub requirement from ~=1.51 to ~=1.52

  • #1053 - Update sphinx requirement from ~=3.1.2 to ~=3.2.0

Documentation
  • #0970 - idmtools.ini documentation - review current docs and possibly make changes

  • #1043 - Update build of doc to be more ReadTheDocs Friendly

Feature Request
  • #1020 - Requirements to Asset Collection should first check what assets exist before uploading

1.5.0

Bugs
  • #0459 - There is no way to add simulations to existing experiment

  • #0840 - Experiment and Suite statuses not updated properly after success

  • #0841 - Reloaded experiments and simulations have incorrect status

  • #0842 - Reloaded simulations (likely all children) do not have their platform set

  • #0866 - Recursive simulation loading bug

  • #0898 - Update Experiment#add_new_simulations() to accept additions in any state

  • #1046 - print(ac) causes maximum recursion depth exceeded while calling a Python object

  • #1047 - datetime type is missing from IDMJSONEncoder

  • #1048 - typo/bug: cols.append(cols)

  • #1049 - The text should be generic not specific to asset collection in method from_id(…)

  • #1066 - User logging should still be initialized if missing_ok is supplied when loading configuration/platform

  • #1071 - Detect if an experiment needs commissioning

  • #1076 - wi_ac creates ac with wrong tag for Calculon

  • #1094 - AssetCollection should check checksums when checking for duplicates

  • #1098 - Add experiment id to CSVAnalyzer and TagAnalyzer

Dependencies
  • #1075 - Update sphinx requirement from ~=3.2.0 to ~=3.2.1

  • #1077 - Update sqlalchemy requirement from ~=1.3.18 to ~=1.3.19

  • #1078 - Update pygithub requirement from ~=1.52 to ~=1.53

  • #1080 - Bump docker from 4.3.0 to 4.3.1

  • #1087 - Update more-itertools requirement from ~=8.4.0 to ~=8.5.0

  • #1088 - Update paramiko requirement from ~=2.7.1 to ~=2.7.2

  • #1101 - Update psycopg2-binary requirement from ~=2.8.5 to ~=2.8.6

  • #1102 - Bump pandas from 1.1.1 to 1.1.2

  • #1103 - Bump diskcache from 5.0.2 to 5.0.3

  • #1107 - Update tqdm requirement from ~=4.48.2 to ~=4.49.0

  • #1108 - Update pytest requirement from ~=6.0.1 to ~=6.0.2

Documentation
  • #1073 - Update example and tests to use platform context

Feature Request
  • #1064 - Allow running without an idmtools.ini file

  • #1068 - COMPSPlatform should allow commissioning as it goes

1.5.1

Bugs
  • #1166 - Properly remove/replace unsupported characters on the COMPS platform in experiment names

  • #1173 - Ensure assets are not directories on creation of Asset

Documentation
  • #1191 - Remove idmtools.ini from examples to leverage configuration aliases. This change allows executing examples with minimal local configuration

Feature Request
  • #1127 - Remove emodpy from idmtools[full] and idmtools[idm] install options. This allows more control of the packages used in projects

  • #1179 - Supply multiple default templates for template script wrapper. See the examples in the cookbook.

  • #1180 - Support Configuration Aliases. This provides out-of-the-box configurations for common platform setups; for example, COMPS environments have predefined aliases such as Calculon, Belegost, etc. (a minimal sketch follows below)
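
  As a quick illustration, here is a minimal sketch of using a configuration alias; the alias shown is one of the predefined COMPS aliases noted above:

    # Minimal sketch: create a platform from a predefined configuration alias,
    # so no platform block is needed in idmtools.ini.
    from idmtools.core.platform_factory import Platform

    platform = Platform("Calculon")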

Known Issues
  • PlatformAnalysis requires an idmtools.ini

Upcoming breaking changes in 1.6.0
  • Assets will no longer support both absolute_path and content; these will be mutually exclusive going forward

  • The task API pre_creation method has a new parameter to pass the platform object. All tasks implementing the API will need to update their pre_creation method (see the sketch after this list)

  • Deprecation of the delete function from AssetCollection in favor of remove.
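
  For the pre_creation change, the sketch below shows the shape of an updated task; the class name and body are illustrative assumptions, and only the added platform parameter reflects the actual API change:

    # Illustrative, partial sketch of the 1.6.0 pre_creation signature change.
    # MyTask is hypothetical; only the added platform parameter reflects
    # the breaking change described above.
    from idmtools.entities.itask import ITask

    class MyTask(ITask):
        def pre_creation(self, parent, platform):
            # Before 1.6.0 this method received only parent; the platform
            # object is now passed so tasks can query it during creation.
            super().pre_creation(parent, platform)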

Upcoming features in the coming releases
  • Ability to query the platform from task for items such as OS, supported workflows, etc

  • Utility to Asset-ize outputs within COMPS. This should make it into 1.6.0

  • HPC Container build and run utilities. Slated for next few releases

  • Better integration of errors with references to relevant documentation (ongoing)

  • Improved support for macOS

1.5.2

Bugs
  • #1271 - Fix default SSMT image detection for platform COMPS

1.6.0

Bugs
  • #0300 - Canceling simulations using cli’s Restful api throws Internal server error (Local Platform)

  • #0462 - Redis port configuration not working (Local Platform)

  • #0988 - Fix issues with multi-threading and requests on mac in python 3.7 or lower

  • #1104 - Run AnalyzeManager outputs ini file used multiple times

  • #1111 - File path missing in logger messages when level set to INFO

  • #1154 - Add option for experiment run in COMPS to use the minimal execution path

  • #1156 - COMPS should dynamically add Windows and LINUX Requirements based on environments

  • #1195 - PlatformAnalysis should support aliases as well

  • #1198 - PlatformAnalysis should find user’s idmtools.ini instead of searching the current directory

  • #1230 - Fix parsing of executable on commandline

  • #1244 - Logging should fall back to console if the log file cannot be opened

CLI
  • #1167 - idmtools config CLI command should have option to use global path

  • #1237 - Add ability to suppress outputs for CLI commands that might generate pipe-able output

  • #1234 - Add AssetizeOutputs as COMPS Cli command

  • #1236 - Add COMPS Login command to CLI

Configuration
  • #1242 - Enable loading configuration options from environment variables

Core
  • #0571 - Support multi-cores(MPI) on COMPS through num_cores

  • #1220 - Workflow items should use name

  • #1221 - Workflow items should use Assets instead of asset_collection_id

  • #1222 - Workflow items should use transient assets vs user_files

  • #1223 - Commands from WorkflowItems should support Tasks

  • #1224 - Support creating AssetCollection from list of file paths (see the sketch below)
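
  A minimal sketch of the idea behind #1224, assuming a list of local file paths (the file names are hypothetical):

    # Sketch: build an AssetCollection from plain file paths by wrapping
    # each path in an Asset. The file names are hypothetical.
    from idmtools.assets import Asset, AssetCollection

    paths = ["inputs/config.json", "inputs/demographics.json"]
    ac = AssetCollection([Asset(absolute_path=p) for p in paths])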

Dependencies
  • #1136 - Remove marshmallow as a dependency

  • #1207 - Update pytest requirement from ~=6.1.0 to ~=6.1.1

  • #1209 - Update flake8 requirement from ~=3.8.3 to ~=3.8.4

  • #1211 - Bump pandas from 1.1.2 to 1.1.3

  • #1214 - Update bump2version requirement from ~=1.0.0 to ~=1.0.1

  • #1216 - Update tqdm requirement from ~=4.50.0 to ~=4.50.2

  • #1226 - Update pycomps requirement from ~=2.3.8 to ~=2.3.9

  • #1227 - Update sqlalchemy requirement from ~=1.3.19 to ~=1.3.20

  • #1228 - Update colorama requirement from ~=0.4.1 to ~=0.4.4

  • #1246 - Update yaspin requirement from ~=1.1.0 to ~=1.2.0

  • #1251 - Update junitparser requirement from ~=1.4.1 to ~=1.4.2

Documentation
  • #1134 - Add a copy to clipboard option to source code and command line examples in documentation

Feature Request
  • #1121 - Experiment should error if no simulations are defined

  • #1148 - Support global configuration file for idmtools from user home directory/local app directory or specified using an Environment Variable

  • #1158 - Pass platform to pre_creation and post_creation methods to allow dynamic querying from platform

  • #1193 - Support Asset-izing Outputs through WorkItems

  • #1194 - Add support for post_creation hooks on Experiments/Simulation/Workitems

  • #1231 - Allow setting command from string on Task (see the sketch at the end of this list)

  • #1232 - Add a function to determine if target is Windows to platform

  • #1233 - Add property to grab the common asset path from a platform

  • #1247 - Add support for singularity to the local platform
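
  For example, a sketch of the command-from-string support (#1231); the command line itself is a placeholder:

    # Sketch: set a task's command directly from a string (per #1231).
    from idmtools.entities.command_task import CommandTask

    task = CommandTask(command="python --version")
    # After #1231, assigning a plain string to an existing task should
    # also be accepted:
    task.command = "python --version"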

Platforms
  • #0230 - Entities should support created_on/modified_on fields on the Local Platform

  • #0324 - Detect changes to Local Platform config

User Experience
  • #1127 - IDMtools install should not include emodpy, emodapi, etc when installing with idmtools[full]

  • #1141 - Add warning when user is using a development version of idmtools

  • #1160 - get_script_wrapper_unix_task should use default template that adds assets to python path

  • #1200 - Log idmtools core version when in debug mode

  • #1240 - Give clear units for progress bars

  • #1241 - Support disabling progress bars with environment variable or config

Special Notes
  • If you encounter an issue with matplotlib after install, you may need to run pip install matplotlib --force-reinstall

  • Workitems will require a Task starting in 1.7.0

  • Container support on COMPS and early singularity support will be coming in 1.6.1

1.6.1

Additional Changes
  • #1165 - Support basic building of singularity images

  • #1315 - Assets should always return paths using posix style

  • #1321 - Comps CLI should have singularity build support

Bugs
  • #1271 - COMPS SSMT Version fetch should fetch latest compatible idmtools image

  • #1303 - Fix platform object assignment on AssetCollection

  • #1312 - Update analyze_manager.py to remove iterkeys in support of diskcache 5.1.0

  • #1313 - Support tags in prefix on AssetizeOutputs

Dependencies
  • #1281 - Update pytest requirement from ~=6.1.1 to ~=6.1.2

  • #1287 - Update allure-pytest requirement from ~=2.8.18 to ~=2.8.19

  • #1288 - Update junitparser requirement from ~=1.6.0 to ~=1.6.1

  • #1289 - Update sphinx-copybutton requirement from ~=0.3.0 to ~=0.3.1

  • #1290 - Bump pandas from 1.1.3 to 1.1.4

  • #1291 - Update more-itertools requirement from ~=8.5.0 to ~=8.6.0

  • #1293 - Latest numpy 1.19.4 (released 11/2/2020) breaks all idmtools tests in windows

  • #1298 - Update junitparser requirement from ~=1.6.1 to ~=1.6.2

  • #1299 - Update pygit2 requirement from ~=1.3.0 to ~=1.4.0

  • #1300 - Bump diskcache from 5.0.3 to 5.1.0

  • #1307 - Update requests requirement from ~=2.24.0 to ~=2.25.0

  • #1308 - Update matplotlib requirement from ~=3.3.2 to ~=3.3.3

  • #1309 - Update sphinx requirement from ~=3.3.0 to ~=3.3.1

  • #1310 - Update pytest-html requirement from ~=2.1.1 to ~=3.0.0

  • #1311 - Update tqdm requirement from ~=4.51.0 to ~=4.52.0

  • #1327 - Update allure-pytest requirement from ~=2.8.19 to ~=2.8.20

Documentation
  • #1279 - Add examples to override config values

  • #1285 - Examples should use Calculon instead of SLURM alias

  • #1302 - cookbook link for modifying-asset-collection is wrong

Platforms
  • #1264 - Comps CLI should have singularity build support

User Experience
  • #1170 - Add progress bar to upload of Assets through new callback in pyCOMPS

  • #1320 - Add progress bar to workitems

1.6.2

Bugs
  • #1343 - Singularity Build CLI should write AssetCollection ID to file

  • #1345 - Loading a platform within a Snakefile throws an exception

  • #1348 - We should be able to download files using glob patterns from comps from the CLI

  • #1351 - Add support to detect if target platform is windows or linux on COMPS taking into account if it is an SSMT job

  • #1363 - Ensure the lookup for latest version uses only pypi not artifactory api

  • #1368 - idmtools log rotation can crash in multi process environments

Developer/Test
  • #1367 - Support installing SSMT packages dynamically on workitems

Feature Request
  • #1344 - Singularity Build CLI command should support writing workitem to file

  • #1349 - Add support for PathLike in add_asset in Assets

  • #1350 - Setup global exception handler

  • #1353 - Add “Assets” directory to the PYTHONPATH by default on idmtools SSMT image

  • #1366 - Support adding git commit, branch, and url to Experiments, Simulations, Workitems, or other taggable entities as tags

Platforms
  • #0990 - Support creating and retrieving container images in AssetCollections

  • #1352 - Redirect calls to task.command to wrapped command in TemplatedScriptTask

  • #1354 - AssetizeOutputs CLI should support writing to id files

1.6.3

Bugs
  • #1396 - requirements to ac should default to one core

  • #1403 - Progress bar displayed when expecting only json on AssetizeOutput

  • #1404 - Autocompletion of cli does not work due to warning

  • #1408 - GA fail for local platform

  • #1416 - Default batch create_items method does not support kwargs

  • #1417 - ITask To_Dict depends on platform_comps

  • #1436 - Package order is important in req2ac utility

CLI
  • #1430 - Update yaspin requirement from ~=1.2.0 to ~=1.3.0

Dependencies
  • #1340 - Bump docker from 4.3.1 to 4.4.0

  • #1374 - Update humanfriendly requirement from ~=8.2 to ~=9.0

  • #1387 - Update coloredlogs requirement from ~=14.0 to ~=15.0

  • #1414 - Update dramatiq[redis,watch] requirement from ~=1.9.0 to ~=1.10.0

  • #1418 - Update docker requirement from <=4.4.0,>=4.3.1 to >=4.3.1,<4.5.0

  • #1435 - Update gevent requirement from ~=20.12.1 to ~=21.1.2

  • #1442 - Update pygit2 requirement from ~=1.4.0 to ~=1.5.0

  • #1444 - Update pyyaml requirement from <5.4,>=5.3.0 to >=5.3.0,<5.5

  • #1448 - Update matplotlib requirement from ~=3.3.3 to ~=3.3.4

  • #1449 - Update jinja2 requirement from ~=2.11.2 to ~=2.11.3

  • #1450 - Update sqlalchemy requirement from ~=1.3.22 to ~=1.3.23

  • #1457 - Update more-itertools requirement from ~=8.6.0 to ~=8.7.0

  • #1466 - Update tabulate requirement from ~=0.8.7 to ~=0.8.9

  • #1467 - Update yaspin requirement from <1.4.0,>=1.2.0 to >=1.2.0,<1.5.0

Developer/Test
  • #1390 - Update pytest requirement from ~=6.1.2 to ~=6.2.0

  • #1391 - Update pytest-html requirement from ~=3.1.0 to ~=3.1.1

  • #1394 - Update pytest-xdist requirement from ~=2.1 to ~=2.2

  • #1398 - Update pytest requirement from ~=6.2.0 to ~=6.2.1

  • #1411 - Update build tools to 1.0.3

  • #1413 - Update idm-buildtools requirement from ~=1.0.1 to ~=1.0.3

  • #1424 - Update twine requirement from ~=3.2.0 to ~=3.3.0

  • #1428 - Update junitparser requirement from ~=1.6.3 to ~=2.0.0

  • #1434 - Update pytest-cov requirement from ~=2.10.1 to ~=2.11.1

  • #1443 - Update pytest requirement from ~=6.2.1 to ~=6.2.2

  • #1446 - Update coverage requirement from ~=5.3 to ~=5.4

  • #1458 - Update pytest-runner requirement from ~=5.2 to ~=5.3

  • #1463 - Update allure-pytest requirement from ~=2.8.33 to ~=2.8.34

  • #1468 - Update coverage requirement from <5.5,>=5.3 to >=5.3,<5.6

  • #1478 - Update flake8 requirement from ~=3.8.4 to ~=3.9.0

  • #1481 - Update twine requirement from ~=3.4.0 to ~=3.4.1

Documentation
  • #1259 - Provide examples container and development guide

  • #1347 - Read the Docs build broken, having issues with Artifactory/pip installation

  • #1423 - Update sphinx-rtd-theme requirement from ~=0.5.0 to ~=0.5.1

  • #1474 - Update sphinx requirement from ~=3.4.3 to ~=3.5.2

Feature Request
  • #1384 - Add assets should ignore common directories through an option

  • #1392 - RequirementsToAssetCollection should allow creating user tags

  • #1437 - req2ac utility should support getting compatible version (~=) of a package

Platforms
  • #0558 - Develop Test Harness for SSMT platform

1.6.4

Additional Changes
  • #1407 - import get_latest_package_version_from_pypi throws exception

  • #1593 - Pandas items as defaults cause issue with Simulation Builder

Analyzers
  • #1097 - Analyzer may get stuck on error

  • #1506 - DownloadAnalyzer should not stop if one sim fails, but try to download all sims independently.

  • #1540 - Convert AnalyzeManager to use futures and future pool

  • #1594 - Disable log re-initialization in subthreads

  • #1596 - PlatformAnalysis should support extra_args to be passed to AnalyzeManager on the server

  • #1608 - CSVAnalyzer should not allow users to override parse value as it is required

Bugs
  • #1452 - Make idmtools work with the new slurm scheduling mechanism

  • #1518 - CommandLine add_argument should convert arguments to strings

  • #1522 - Load command line from work order on load when defined

Core
  • #1586 - Fix the help on the top-level makefile

Dependencies
  • #1440 - Update diskcache requirement from ~=5.1.0 to ~=5.2.1

  • #1490 - Update flask-sqlalchemy requirement from ~=2.4.4 to ~=2.5.1

  • #1498 - Update yaspin requirement from <1.5.0,>=1.2.0 to >=1.2.0,<1.6.0

  • #1520 - Update docker requirement from <4.5.0,>=4.3.1 to >=4.3.1,<5.1.0

  • #1545 - Update pygithub requirement from ~=1.54 to ~=1.55

  • #1552 - Update matplotlib requirement from ~=3.4.1 to ~=3.4.2

  • #1555 - Update sqlalchemy requirement from ~=1.4.14 to ~=1.4.15

  • #1562 - Bump werkzeug from 1.0.1 to 2.0.1

  • #1563 - Update jinja2 requirement from ~=2.11.3 to ~=3.0.1

  • #1566 - Update cookiecutter requirement from ~=1.7.2 to ~=1.7.3

  • #1568 - Update more-itertools requirement from ~=8.7.0 to ~=8.8.0

  • #1570 - Update dramatiq[redis,watch] requirement from ~=1.10.0 to ~=1.11.0

  • #1585 - Update psycopg2-binary requirement from ~=2.8.6 to ~=2.9.1

Developer/Test
  • #1511 - Add document linting to rules

  • #1549 - Update pytest requirement from ~=6.2.3 to ~=6.2.4

  • #1554 - Update flake8 requirement from ~=3.9.1 to ~=3.9.2

  • #1567 - Update allure-pytest requirement from <2.9,>=2.8.34 to >=2.8.34,<2.10

  • #1577 - Update junitparser requirement from ~=2.0.0 to ~=2.1.1

  • #1587 - update docker python version

Documentation
  • #0944 - Set up intersphinx to link emodpy and idmtools docs

  • #1445 - Enable intersphinx for idmtools

  • #1499 - Update sphinx requirement from ~=3.5.2 to ~=3.5.3

  • #1510 - Update sphinxcontrib-programoutput requirement from ~=0.16 to ~=0.17

  • #1516 - Update sphinx-rtd-theme requirement from ~=0.5.1 to ~=0.5.2

  • #1531 - Update sphinx requirement from ~=3.5.3 to ~=3.5.4

  • #1584 - Update sphinx-copybutton requirement from ~=0.3.1 to ~=0.4.0

Feature Request
  • #0831 - Support for python 3.9

Platforms
  • #1604 - idmtools_platform_local run “make docker” failed

User Experience
  • #1485 - Add files and libraries to an Asset Collection - new documentation

1.6.5

Analyzers
  • #1674 - Analyzers stalling with failed simulations

Bugs
  • #1543 - Control output of pyCOMPS logs

  • #1551 - workitem reference asset_file to user_file

  • #1579 - [Logging] section in idmtools seems not to work

  • #1600 - idmtools.log does not honor file_level

  • #1618 - A special case in idmtools logging system with user_output = off

  • #1620 - idmtools logging throws random errors

  • #1633 - Two issues noticed in idmtools logging

  • #1634 - idmtools: logging should honor level parameter

Dependencies
  • #1569 - Update flask-restful requirement from ~=0.3.8 to ~=0.3.9

  • #1682 - Update click requirement from ~=7.1.2 to ~=8.1.2

  • #1688 - Update gevent requirement from <=21.2.0,>=20.12.1 to >=20.12.1,<21.13.0

Developer/Test
  • #1689 - Update pytest-timeout requirement from ~=1.4.2 to ~=2.1.0

Platforms
  • #0703 - Slurm simulation_operations needs to be refactored

  • #1615 - for calibra repo, if console=on, it will not print experiment url

  • #1644 - console comps client logging is too chatty

1.6.6

Analyzers
  • #1546 - idmtools AnalyzerManager took much longer to start analyzers than dtktools AnalyzerManager with the same input data

Dependencies
  • #1682 - Update pyComps requirement from ~=2.5.0 to ~=2.6.0

1.6.7

Bugs
  • #1762 - hash_obj causes maximum recursion exception

Dependencies
  • #1601 - Update packaging requirement from <21.0,>=20.4 to >=20.4,<22.0

  • #1694 - Update pygit2 requirement from <1.6.0,>=1.4.0 to >=1.4.0,<1.10.0

  • #1695 - Update psycopg2-binary requirement from ~=2.9.1 to ~=2.9.3

  • #1702 - Bump moment from 2.24.0 to 2.29.2 in /idmtools_platform_local/idmtools_webui

  • #1703 - Bump async from 2.6.3 to 2.6.4 in /idmtools_platform_local/idmtools_webui

  • #1734 - Update sqlalchemy requirement from ~=1.4.15 to ~=1.4.37

  • #1742 - Bump eventsource from 1.0.7 to 1.1.2 in /idmtools_platform_local/idmtools_webui

  • #1743 - Bump url-parse from 1.4.7 to 1.5.10 in /idmtools_platform_local/idmtools_webui

  • #1744 - Bump follow-redirects from 1.10.0 to 1.15.1 in /idmtools_platform_local/idmtools_webui

  • #1745 - Bump postcss from 7.0.26 to 7.0.39 in /idmtools_platform_local/idmtools_webui

  • #1746 - Bump markupsafe from 2.0.1 to 2.1.1

  • #1747 - Update yaspin requirement from <1.6.0,>=1.2.0 to >=1.2.0,<2.2.0

  • #1748 - Update pyyaml requirement from <5.5,>=5.3.0 to >=5.3.0,<6.1

  • #1777 - Update cookiecutter requirement from ~=1.7.3 to ~=2.1.1

  • #1778 - Update jinja2 requirement from ~=3.0.1 to ~=3.1.2

  • #1780 - Update sqlalchemy requirement from ~=1.4.37 to ~=1.4.39

  • #1781 - Update colorama requirement from ~=0.4.4 to ~=0.4.5

  • #1782 - Update pandas requirement from <1.2,>=1.1.4 to >=1.1.4,<1.5

  • #1783 - Update dramatiq[redis,watch] requirement from ~=1.11.0 to ~=1.13.0

  • #1784 - Update pygit2 requirement from <1.10.0,>=1.4.0 to >=1.4.0,<1.11.0

  • #1786 - Bump numpy from 1.18.1 to 1.22.0 in /idmtools_platform_comps/tests/inputs/simple_load_lib_example

  • #1787 - Bump moment from 2.29.2 to 2.29.4 in /idmtools_platform_local/idmtools_webui

  • #1788 - Update more-itertools requirement from ~=8.8.0 to ~=8.13.0

Developer/Test
  • #1789 - Update coverage requirement from <5.6,>=5.3 to >=5.3,<6.5

  • #1792 - Update pytest-runner requirement from ~=5.3 to ~=6.0

  • #1793 - Update flake8 requirement from ~=3.9.2 to ~=4.0.1

1.7.0

Additional Changes
  • #1671 - experiment post-creation hooks do not get invoked

Bugs
  • #1581 - We should default console=on for logging when using an alias platform

  • #1614 - User logger should only be used for verbose or higher messages

  • #1806 - batch load module with wrong variable

  • #1807 - get_children missing status refresh

  • #1811 - Suite metadata not written when an experiment is run directly on slurm platform

  • #1812 - Running a suite does not run its children (experiments)

  • #1820 - Handle empty status messages

CLI
  • #1774 - need a patch release to update pandas requirement

Core
  • #1757 - Suite to_dict method does not need to output experiment details

Dependencies
  • #1749 - Update pluggy requirement from ~=0.13.1 to ~=1.0.0

  • #1794 - Bump pipreqs from 0.4.10 to 0.4.11

  • #1867 - Update sqlalchemy requirement from ~=1.4.39 to ~=1.4.41

  • #1870 - Update yaspin requirement from <2.2.0,>=1.2.0 to >=1.2.0,<2.3.0

  • #1873 - Update docker requirement from <5.1.0,>=4.3.1 to >=4.3.1,<6.1.0

  • #1878 - Update natsort requirement from ~=8.1.0 to ~=8.2.0

  • #1880 - Update diskcache requirement from ~=5.2.1 to ~=5.4.0

  • #1882 - Update flask requirement from ~=2.1.3 to ~=2.2.2

  • #1883 - Update backoff requirement from <1.11,>=1.10.0 to >=1.10.0,<2.2

  • #1885 - Bump async from 2.6.3 to 2.6.4 in /idmtools_platform_local/idmtools_webui

Developer/Test
  • #1795 - Update twine requirement from ~=3.4.1 to ~=4.0.1

  • #1830 - Update pytest requirement from ~=6.2.4 to ~=7.1.3

  • #1831 - Update pytest-xdist requirement from ~=2.2 to ~=2.5

  • #1868 - Update flake8 requirement from ~=4.0.1 to ~=5.0.4

  • #1874 - Update allure-pytest requirement from <2.10,>=2.8.34 to >=2.8.34,<2.11

  • #1884 - Update junitparser requirement from ~=2.1.1 to ~=2.8.0

Documentation
  • #1750 - Slurm Documentation skeleton

Feature Request
  • #1691 - Feature request: Add existing experiments to suite

  • #1809 - Add cpus_per_task to SlurmPlatform

  • #1818 - Improve the output to user after a job is executed

  • #1821 - Status improvement: make “checking slurm finish” configurable

Platforms
  • #1038 - Slurm experiment operations needs updating with newest API

  • #1039 - Slurm needs to implement some basic asset operations

  • #1040 - Slurm Simulations operations is out of date

  • #1041 - Implement suite operations on Slurm Platform

  • #1675 - File Operations: Link Operations

  • #1676 - Move metadata operation to its own class for future API

  • #1678 - Retry logic for slurm

  • #1693 - Abstract file operations in a way the underlying implementation can be changed and shared across platforms

  • #1697 - Create a new metadata operations API

  • #1717 - Formalize shell script for SLURM job submission

  • #1737 - Cleanup Metadata Operations

  • #1738 - Integrate Metadata, FileOperations, and Slurm Script into Slurm Platform

  • #1758 - Document how to cancel jobs on slurm using slurm docs

  • #1764 - Update the sbatch script to dump the SARRAY job id

  • #1765 - Update the simulation script to dump the Job id into a file within each simulation directory

  • #1770 - Develop base singularity image

  • #1822 - COMPSPlatform suite operation: platform_create returns Tuple[COMPSSuite, UUID]

1.7.1

Bugs
  • #1907 - Make cache directory configurable

1.7.3

Additional Changes
  • #1835 - Do the release of 1.7.0.pre

  • #1837 - Release 1.7.0

  • #1855 - Generate Changelog for 1.7.0

  • #1857 - Test final singularity image

  • #1858 - Complete basic use of idmtools-slurm-bridge docs

  • #1863 - Presentation for Jaline

  • #1876 - Build new singularity image

  • #1947 - Utility code to support running on COMPS/Slurm

Bugs
  • #1623 - We should not generate debug log for _detect_command_line_from_simulation in simulation_operations.py

  • #1661 - Script seems to require pwd module but it is not included in requirements.txt

  • #1666 - logging.set_file_logging should pass level to create_file_handler()

  • #1756 - Suite Operation run_item doesn’t pass kwargs to sub-calls

  • #1813 - Writing experiment parent id in experiment metadata records the wrong suite id

  • #1877 - Revert sphinx to 4 and pin in dependabot

  • #1907 - Make cache directory configurable

  • #1915 - run_simulation.sh should be copied over instead of linked

Core
  • #1826 - Update to require at least Python 3.7

Dependencies
  • #1906 - Update pygithub requirement from ~=1.55 to ~=1.56

  • #1910 - Update flask-sqlalchemy requirement from ~=2.5.1 to ~=3.0.2

  • #1911 - Update sqlalchemy requirement from ~=1.4.41 to ~=1.4.42

  • #1912 - Update gevent requirement from <21.13.0,>=20.12.1 to >=20.12.1,<22.11.0

  • #1914 - Update more-itertools requirement from ~=8.14.0 to ~=9.0.0

  • #1920 - Update psycopg2-binary requirement from ~=2.9.4 to ~=2.9.5

  • #1921 - Update pytest-html requirement from ~=3.1.1 to ~=3.2.0

  • #1922 - Update pycomps requirement from ~=2.8 to ~=2.9

  • #1923 - Update colorama requirement from ~=0.4.5 to ~=0.4.6

  • #1933 - Update pytest-xdist requirement from ~=2.5 to ~=3.0

  • #1934 - Update pytest requirement from ~=7.1.3 to ~=7.2.0

  • #1942 - Update sqlalchemy requirement from ~=1.4.42 to ~=1.4.43

  • #1943 - Update pygithub requirement from ~=1.56 to ~=1.57

Developer/Test
  • #1649 - github action test failed because it can not retrieve the latest ssmt image

  • #1652 - Changelog not showing after 1.6.2 release

Documentation
  • #1378 - Container Python Package development guide

  • #1453 - emodpy example for the local platform

Feature Request
  • #1359 - PlatformFactory should save extra args to an object to be able to be serialized later

Platforms
  • #1853 - Add utils to platform-comps Utils

  • #1854 - Add utils to platform-slurm utils

  • #1864 - Document user installed packages in Singularity images

  • #1963 - slurm job count issue with add_multiple_parameter_sweep_definition

User Experience
  • #1804 - Default root for run/job directories in slurm local platform is ‘.’

  • #1805 - Slurm local platform should make containing experiments/suites as needed

1.7.4

Core
  • #1977 - Disable simulations in Experiment metadata for now

Feature Request
  • #1817 - Feature request: better to have a utility to display simulation status

  • #2007 - Feature request: Make batch_size and max_workers configurable with run method

  • #2008 - Add new slurm parameter: constraint

Platforms
  • #1829 - Performance issue: slurm commission is too slow

  • #1996 - Hotfix Slurm commission performance issue

1.7.5

Bugs
  • #1395 - Time for Simulation Creation Increases with Python Requirements

Platforms
  • #2006 - Hotfix SlurmPlatform memory space issue

  • #2000 - slurm commission takes too much memory, which can exceed head node’s max memory

1.7.6

Additional Changes
  • #1954 - idmtools object ids should be unique strings, not UUIDs

  • #2022 - Add example for ssmt with extra packages

Core
  • #1810 - Support alternate id generators

  • #1930 - Add idmtools_platform_post_create_item to plugin registry

  • #1939 - Add idmtools_platform_post_create_item hook

Dependencies
  • #1943 - Update pygithub requirement from ~=1.56 to ~=1.57

  • #1946 - Update pygit2 requirement from <1.11.0,>=1.4.0 to >=1.4.0,<1.12.0

  • #1948 - Update sqlalchemy requirement from ~=1.4.43 to ~=1.4.44

  • #1949 - Bump sphinx-copybutton from 0.5.0 to 0.5.1

  • #1957 - Update pycomps requirement from ~=2.9 to ~=2.10

  • #1958 - Update allure-pytest requirement from <2.12,>=2.8.34 to >=2.8.34,<2.13

  • #1959 - Update flake8 requirement from ~=5.0.4 to ~=6.0.0

  • #1961 - Update twine requirement from ~=4.0.1 to ~=4.0.2

  • #1967 - Update pytest-xdist requirement from ~=3.0 to ~=3.1

  • #1976 - Bump qs from 6.5.2 to 6.5.3 in /idmtools_platform_local/idmtools_webui

  • #1978 - Update sqlalchemy requirement from ~=1.4.44 to ~=1.4.45

  • #1983 - Bump express from 4.17.1 to 4.18.2 in /idmtools_platform_local/idmtools_webui

Developer/Test
  • #2020 - Add example for ssmt with extra packages based on Clinton’s example

  • #2023 - Add unittests for idmtools_platform_file and add/update github actions

  • #2026 - Fix test in File platform

  • #2039 - Add file platform cli tests

  • #2045 - Add unittests and examples for file and process platforms

Feature Request
  • #1817 - Feature request: better to have a utility to display simulation status

  • #2004 - Implement SlurmPlatform Status utility

  • #2025 - File platform: implemented experiment execution (batch and status, etc.)

  • #1928 - Design: File Only Platform

  • #2029 - Add support for ProcessPlatform

  • #1938 - File platform: implementation of CLI utility

  • #2044 - Platform-General: implementation of ProcessPlatform

1.7.7

Bugs
  • #2084 - Potential issue with mismatch version of pandas and matplotlib

Dependencies
  • #2013 - Update yaspin requirement from <2.3.0,>=1.2.0 to >=1.2.0,<2.4.0

  • #2024 - Update coverage requirement from <6.6,>=5.3 to >=5.3,<7.3

Documentation
  • #2000 - slurm commission takes too much memory, which can exceed head node’s max memory

  • #2042 - Write doc: run main script as SLURM job

Feature Request
  • #1998 - Potential issue with max count of simulations in slurm platform

  • #2043 - Write Python utility to run main script as SLURM job

  • #2041 - Write workaround steps: run main script as SLURM job

  • #2095 - Add singularity bind experiment by default for slurm

  • #2096 - Add a few more COMPS server aliases

1.7.8

Additional Changes
  • #2100 - Setup.py does not conform to newest pip in python requires

  • #2101 - Deprecate 3.6 references from idmtools

  • #2102 - Doc fix

Bugs
  • #2083 - python 3.11 issue with dataclass

1.7.9

Bugs
  • #1657 - Analyzer manager failed to retrieve bin file to map()

  • #2135 - COMPSPlatform: not invoke idmtools_platform_post_create_item for Experiment

  • #2136 - SLURM/FILE Platforms: idmtools_platform_pre/post_create_item are not invoked for AssetCollection

  • #2137 - LOGGING_STARTED never gets updated to True

  • #2138 - Log typo when invoking hook

  • #2140 - suite retrieved from COMPS has duplicated experiments

Core
  • #1936 - Add a constant for path to user home “.idmtools” directory

Dependencies
  • #1985 - Bump json5 from 1.0.1 to 1.0.2 in /idmtools_platform_local/idmtools_webui

  • #2046 - Bump @hapi/hoek from 8.5.0 to 8.5.1 in /idmtools_platform_local/idmtools_webui

  • #2072 - Update pytest-xdist requirement from ~=3.1 to ~=3.3

  • #2081 - Bump crypto-js from 3.1.9-1 to 3.3.0 in /idmtools_platform_local/idmtools_webui

  • #2085 - Update natsort requirement from ~=8.2.0 to ~=8.4.0

  • #2099 - Bump pipreqs from 0.4.11 to 0.4.12 in /examples

  • #2111 - Update sqlalchemy requirement from ~=1.4.45 to ~=2.0.18

  • #2113 - Update more-itertools requirement from ~=9.0.0 to ~=9.1.0

  • #2114 - Update pluggy requirement from ~=1.0.0 to ~=1.2.0

  • #2115 - Update psycopg2-binary requirement from ~=2.9.5 to ~=2.9.6

  • #2116 - Update flask requirement from ~=2.2.2 to ~=2.3.2

  • #2117 - Bump sphinx-copybutton from 0.5.1 to 0.5.2

  • #2118 - Update allure-pytest requirement from <2.13,>=2.8.34 to >=2.8.34,<2.14

  • #2076 - Bump markupsafe from 2.1.1 to 2.1.3

Developer/Test
  • #2155 - Remove Bayesian tests

  • #2150 - Add hook tests to comps, file, slurm platforms

Platforms
  • #1950 - Cannot find Eradication when using shared folder

  • #1925 - PlatformAnalysis cannot do single simulation Analysis

  • #2108 - add metadata for python3.11 to all setup.py package

Documentation
  • #2141 - docs/gitignores

  • #2142 - Read the Docs Sphinx updates

  • #2151 - changed default search behavior to search only across this doc project