Welcome to idmtools_platform_local#

idmtools_platform_local is the idmtools local runner, which allows execution of tasks in a local Docker container and provides a platform that is somewhat similar to COMPS, though much more limited.

Workflows for using idmtools_platform_local are user-dependent and can include any of the tasks listed below.

Contents#

Installation#

You can install idmtools_platform_local in two different ways. If you intend to use idmtools_platform_local as Institute for Disease Modeling (IDM) builds it, follow the instructions in Basic installation. However, if you intend to modify the idmtools_platform_local source code to add new functionality, follow the instructions in Developer installation. Whichever installation method you choose, the prerequisites are the same.

Prerequisites#

idmtools_platform_local uses Docker to run idmtools_platform_local within a container, keeping the idmtools_platform_local environment securely isolated. You must also have Python 3.7 or 3.8 (64-bit) and Python virtual environments installed to isolate your idmtools_platform_local installation in a separate Python environment. If you do not already have these installed, see the links below for instructions.

  • Windows 10 Pro or Enterprise

  • Python 3.7 or 3.8, 64-bit (https://www.python.org/downloads/release)

  • Python virtual environments

    Python virtual environments enable you to isolate your Python environments from one another and give you the option to run multiple versions of Python on the same computer. When using a virtual environment, you can indicate the version of Python you want to use and the packages you want to install, which will remain separate from other Python environments. You may use virtualenv, which requires a separate installation, but venv is recommended and included with Python 3.3+.

  • Docker (https://docs.docker.com/)

Basic installation#

Follow the steps below if you will use idmtools_platform_local to run and analyze simulations, but will not make source code changes.

  1. Open a command prompt and create a virtual environment in any directory you choose. The command below names the environment “idmtools_local”, but you may use any desired name:

    python -m venv idmtools_local
    
  2. Activate the virtual environment:

    • For Windows, enter the following:

      idmtools_local\Scripts\activate
      
    • For Linux, enter the following:

      source idmtools_local/bin/activate
      
  3. Install idmtools_platform_local packages:

    pip install idmtools_platform_local --index-url=https://packages.idmod.org/api/pypi/pypi-production/simple
    

    Note

    When reinstalling idmtools_platform_local, you should use the --no-cache-dir and --force-reinstall options, for example: pip install idmtools_platform_local --index-url=https://packages.idmod.org/api/pypi/pypi-production/simple --no-cache-dir --force-reinstall. Otherwise, you may see the error, idmtools_platform_local not found, when attempting to open and run one of the example Python scripts.

  4. Verify the installation with pip list; you should see the idmtools_platform_local package:

    pip list
    
  5. When you are finished, deactivate the virtual environment by entering the following at a command prompt:

    deactivate
    
Developer installation#

Follow the steps below if you intend to modify the idmtools_platform_local source code to add new functionality.

  1. Open a command prompt and create a virtual environment in any directory you choose. The command below names the environment “idmtools_local”, but you may use any desired name:

    python -m venv idmtools_local
    
  2. Activate the virtual environment:

    • For Windows, enter the following:

      idmtools_local\Scripts\activate
      
    • For Linux, enter the following:

      source idmtools_local/bin/activate
      
  3. Clone the repository:

    git clone https://github.com/InstituteforDiseaseModeling/idmtools_local.git
    
  4. Run bootstrap:

    python dev_scripts/bootstrap.py
    
  5. Verify the installation with pip list; you should see the idmtools_platform_local package:

    pip list
    
  6. When you are finished, deactivate the virtual environment by entering the following at a command prompt:

    deactivate
    

Configuration#

The configuration of idmtools_platform_local is set in the idmtools.ini file. This file is normally located in the project directory, but idmtools_platform_local will search up through the directory hierarchy and, lastly, check ~/.idmtools.ini on Linux and %LOCALAPPDATA%\idmtools_local\idmtools.ini on Windows. If no configuration file is found, an error is displayed. To suppress this error, set IDMTOOLS_NO_CONFIG_WARNING=1.

Global parameters#

The idmtools.ini file includes some global parameters that drive features within idmtools_platform_local. These primarily control features around workers and threads and are defined within the [COMMON] section of idmtools.ini. Most likely, you will not need to change these.

The following includes an example of the [COMMON] section of idmtools.ini with the default settings:

[COMMON]
max_threads = 16
sims_per_thread = 20
max_local_sims = 6
max_workers = 16
batch_size = 10
  • max_threads - Maximum number of threads for analysis and other multi-threaded activities.

  • sims_per_thread - How many simulations per thread during simulation creation.

  • max_local_sims - Maximum simulations to run locally.

  • max_workers - Maximum number of workers processing in parallel.

  • batch_size - Maximum batch size to retrieve simulations.
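
If you need to read these values from a script, a minimal sketch is shown below; it assumes the IdmConfigParser helper from the core idmtools package, which resolves idmtools.ini using the same search order described above.

from idmtools.config import IdmConfigParser

# Read a value from the [COMMON] section of idmtools.ini
# (the file is located using the normal idmtools search order)
max_threads = IdmConfigParser.get_option("COMMON", "max_threads")
print(max_threads)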

Logging overview#

idmtools_platform_local includes built-in logging, which is configured in the [LOGGING] section of the idmtools.ini file, and includes the following parameters: level, console, and filename. Default settings are shown in the following example:

[LOGGING]
level = INFO
console = off
filename = idmtools.log

Logging verbosity is controlled by setting the level parameter to one of the options listed below. They are in order of increasing verbosity: the lower the item in the list, the more verbose the logging.

CRITICAL
ERROR
WARNING
INFO
DEBUG

Console logging is enabled by configuring the parameter, console, to “on”. The filename parameter can be configured to something other than the default filename, “idmtools.log”.

Below is an example idmtools.ini configuration file:

[COMMON]
# Number of threads idmtools will use for analysis and other multithreaded activities
max_threads = 16

# How many simulations per threads during simulation creation
sims_per_thread = 20

# Maximum number of local simulations run simultaneously
max_local_sims = 6


[Local]
type = Local

# This is a test we used to validate loading local from section block
[Custom_Local]
type = Local

[Logging]
level = DEBUG
console = on

Supported platforms#

idmtools_platform_local currently supports running on the following platform:

Local: You can run simulations and analysis locally on your computer within a Docker container, rather than on a remote high-performance computing (HPC) cluster. For more information about these modules, see idmtools_platform_local.

For additional information about configuring idmtools.ini, see Configuration.

Local#

To run simulations and experiments on the local platform you must have met the installation prerequisites. For more information, see Installation. In addition, the Docker client must be running. For more information, see InstituteforDiseaseModeling/idmtools.

Verify local platform is running#

Type the following at a command prompt to verify that local platform is running:

idmtools local status

You should see a status of running for each of the following Docker containers:

  • idmtools_redis

  • idmtools_postgres

  • idmtools_workers

If not, you may need to run:

idmtools local start

Run examples#

To run the included examples on the local platform, you must configure the Platform to Local, such as:

platform = Platform('Local')

And, you must include the following block in the idmtools.ini file:

[Local]
type = Local

View simulations and experiments#

You can use the dashboard or the CLI for idmtools_platform_local to view and monitor the status of your simulations and experiments.

The dashboard runs on a localhost server on port 5000 (http://localhost:5000). It is recommended that you use Google Chrome to open the dashboard.

The CLI command to see the status of simulations is:

idmtools simulation --platform Local status

And, for experiments:

idmtools experiment --platform Local status

Create platform plugin#

You can add a new platform to idmtools by creating a new platform plugin, as described in the following sections:

Adding fields to the config CLI#

If you are developing a new platform plugin, you will need to add some metadata to the Platform class’ fields. All fields with a help key in the metadata will be picked up by the idmtools config block command line and allow a user to set a value. help should contain the help text that will be displayed. A choices key can optionally be present to restrict the available choices.

For example, for the given platform:

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    field1: int = field(default=1, metadata={"help": "This is the first field."})
    internal_field: int = field(default=2)
    field2: str = field(default="a", metadata={"help": "This is the second field.", "choices": ["a", "b", "c"]})

The CLI wizard will pick up field1 and field2 and ask the user to provide values. The type of the field will be enforced and for field2, the user will have to select among the choices.

Modify fields metadata at runtime#

Now, what happens if we want to change the help text, choices, or default value of a field based on a previously set field? For example, let’s consider a platform where the user needs to specify an endpoint. This endpoint is used to retrieve a list of environments, and we want the user to select one of them.

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    endpoint: str = field(default="https://myapi.com", metadata={"help": "Enter the URL of the endpoint."})
    environment: str = field(metadata={"help": "Select an environment."})

The list of environments is dependent on the endpoint value. To achieve this, we need to provide a callback function to the metadata. This function will receive all the previously set user parameters, and will have the opportunity to modify the current field’s choices, default, and help parameters.

Let’s create a function that queries the endpoint to get the list of environments and sets them as choices, selecting the first one as the default.

def environment_list(previous_settings:Dict, current_field:Field) -> Dict:
    """
    Allows the CLI to provide a list of available environments.
    Uses the previous_settings to get the endpoint to query for environments.
    Args:
        previous_settings: Previous settings set by the user in the CLI.
        current_field: Current field specs.

    Returns:
        Updates to the choices and default.
    """
    # Retrieve the endpoint set by the user
    # The key of the previous_settings is the name of the field we want the value of
    endpoint = previous_settings["endpoint"]

    # Query the API for environments
    client.connect(endpoint)
    environments = client.get_all_environments()

    # If the current field doesn't already have a valid default, set one by using the first environment
    # If the field in the platform class has a default, consider it first
    if current_field.default not in environments:
        default_env = environments[0]
    else:
        default_env = current_field.default

    # Return a dictionary that will be applied to the current field,
    # setting the new choices and default at runtime
    return {"choices": environments, "default": default_env}

We can then use this function on the field, and the user will be prompted with the correct list of available environments.

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    endpoint: str = field(default="https://myapi.com", metadata={"help": "Enter the URL of the endpoint"})
    environment: str = field(metadata={"help": "Select an environment ", "callback": environment_list})
Fields validation#

By default, the CLI provides validation on type. For example, an int field will only accept an integer value. To fine-tune this validation, we can leverage the validation key of the metadata.

For example, if you want to create a field that has an integer value between 1 and 10, you can pass a validation function as shown:

def validate_number(value):
    if 1 <= value <= 10:
        return True, ''
    return False, "The value needs to be bewtween 1 and 10."

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    custom_validation: int = field(default=1, metadata={"help": "Enter a number between 1 and 10.", "validation":validate_number})

The validation function will receive the user input as value and is expected to return a bool representing the result of the validation (True if the value is correct, False if not) and a string to give an error message to the user.

We can leverage Python partials to make the validation function more generic and reusable across multiple fields:

from functools import partial

def validate_range(value, min, max):
    if min <= value <= max:
        return True, ''
    return False, f"The value needs to be between {min} and {max}."

@dataclass(repr=False)
class MyNewPlatform(IPlatform, CacheEnabled):
    custom_validation: int = field(default=1, metadata={"help": "Enter a number between 1 and 10.", "validation":partial(validate_range, min=1, max=10)})
    custom_validation2: int = field(default=100, metadata={"help": "Enter a number between 100 and 500.", "validation":partial(validate_range, min=100, max=500)})

Parameter sweeps and model iteration#

In modeling, parameter sweeps are an important method for fine-tuning parameter values, exploring parameter space, and calibrating simulations to data. A parameter sweep is an iterative process in which simulations are run repeatedly using different values of the parameter(s) of choice. This process enables the modeler to determine a parameter’s “best” value (or range of values), or even where in parameter space the model produces desirable (or non-desirable) behaviors.

When fitting models to data, it is likely that there will be numerous parameters that do not have a pre-determined value. Some parameters will have a range of values that are biologically plausible, or have been determined from previous experiments; however, selecting a particular numerical value to use in the model may not be feasible or realistic. Therefore, the best practice involves using a parameter sweep to narrow down the range of possible values or to provide a range of outcomes for those possible values.

idmtools provides an automated approach to parameter sweeps. With a few lines of code, it is possible to test the model over any range of parameter values with any combination of parameters.

With a stochastic model (such as EMOD), it is especially important to utilize parameter sweeps, not only for calibration to data or parameter selection, but to fully explore the stochasticity in output. Single model runs may appear to provide good fits to data, but variation will arise and multiple runs are necessary to determine the appropriate range of parameter values necessary to achieve desired outcomes. Multiple iterations of a single set of parameter values should be run to determine trends in simulation output: a single simulation output could provide results that are due to random chance.

How to do parameter sweeps#

With idmtools, you can do parameter sweeps with builders or without builders using a base task to set your simulation parameters.

The typical ‘output’ of idmtools is a config.json file for each created simulation, which contains the parameter values assigned: both the constant values and those being swept.

Using builders#

In this release, to support parameter sweeps for models, we have the following builders to assist you:

  1. SimulationBuilder - you set your sweep parameters in your scripts and it generates a config.json file with your sweeps for your experiment/simulations to use

  2. CsvExperimentBuilder - you can use a CSV file to do your parameter sweeps

  3. YamlSimulationBuilder - you can use a Yaml file to do your parameter sweeps

  4. ArmSimulationBuilder - for cross and pair parameters, which allows you to cross parameters, like you cross your arms.

There are two types of sweeping: cross and pair. Cross means every value of one parameter is combined with every value of another, for example 3 x 3 = 9 sets of parameters; pair means values are matched positionally, 3 + 3 = 3 pairs of parameters, for example a, b, c and d, e, f.

For cross sweeping, let’s say again you have parameters a, b, c and d, e, f that you want to cross, so you would have the following possible combinations:

  • a & d
  • a & e
  • a & f
  • b & d
  • b & e
  • b & f
  • c & d
  • c & e
  • c & f
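
Below is a minimal sketch of such a cross sweep using SimulationBuilder; the set_param callback and the parameter names are illustrative and assume a task, such as JSONConfiguredPythonTask, that exposes set_parameter.

from functools import partial

from idmtools.builders import SimulationBuilder

def set_param(simulation, value, param):
    # The callback receives the simulation and the swept value;
    # the parameter name is bound with functools.partial
    return simulation.task.set_parameter(param, value)

builder = SimulationBuilder()
# The two sweep definitions are crossed: 3 x 3 = 9 simulations
builder.add_sweep_definition(partial(set_param, param="first"), ["a", "b", "c"])
builder.add_sweep_definition(partial(set_param, param="second"), ["d", "e", "f"])

The builder can then be attached to an experiment, for example through Experiment.from_builder, together with a base task.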

For Python models, we also support sweeps using a JSONConfiguredPythonTask. In the future, we will support additional configured tasks for Python and R models.

Add sweep definition#

You can use the following two different methods for adding a sweep definition to a builder object:

  • add_sweep_definition

  • add_multiple_parameter_sweep_definition

Generally, add_sweep_definition is used; however, in scenarios where you need to add multiple parameters to the sweep definition, you use add_multiple_parameter_sweep_definition, as seen in idmtools.examples.python_model.multiple_parameter_sweeping.py. More specifically, add_multiple_parameter_sweep_definition is used for sweeping with the same definition callback that takes multiple parameters, where the parameters can be a list of arguments or a list of keyword arguments. The sweep function will do cross-product sweeps between the parameters, as sketched below.
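
A minimal sketch, assuming a task that exposes set_parameter (such as JSONConfiguredPythonTask) and that keyword-argument lists are cross-producted as described above:

from idmtools.builders import SimulationBuilder

def update_parameters(simulation, a, b):
    # Set both parameters on the task; the returned tags are recorded by the builder
    simulation.task.set_parameter("a", a)
    return simulation.task.set_parameter("b", b)

builder = SimulationBuilder()
# Values of a and b are cross-producted: 3 x 2 = 6 simulations
builder.add_multiple_parameter_sweep_definition(update_parameters, a=range(3), b=[10, 20])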

Creating sweeps without builders#

You can also create sweeps without using builders, as in this example:

"""
        This file demonstrates how to create param sweeps without builders.

        we create base task including our common assets, e.g. our python model to run
        we create 5 simulations and for each simulation, we set parameter 'a' = [0,4] and 'b' = a + 10 using this task
        then we are adding this to task to our Experiment to run our simulations
"""
import copy
import os
import sys

from idmtools.assets import AssetCollection
from idmtools.core.platform_factory import Platform
from idmtools.entities.experiment import Experiment
from idmtools.entities.simulation import Simulation
from idmtools_models.python.json_python_task import JSONConfiguredPythonTask
from idmtools_test import COMMON_INPUT_PATH

if __name__ == "__main__":

    # define our platform
    platform = Platform('Local')

    # create experiment object and define some extra assets
    assets_path = os.path.join(COMMON_INPUT_PATH, "python", "Assets")
    e = Experiment(name=os.path.split(sys.argv[0])[1],
                   tags={"string_tag": "test", "number_tag": 123},
                   assets=AssetCollection.from_directory(assets_path))

    # define paths to the model and an extra assets folder containing more common assets
    model_path = os.path.join(COMMON_INPUT_PATH, "python", "model.py")

    # define our base task including the common assets. We could also add these assets to the experiment above
    base_task = JSONConfiguredPythonTask(script_path=model_path, envelope='parameters')
    base_simulation = Simulation.from_task(base_task)

    # now build our simulations
    for i in range(5):
        # first copy the simulation
        sim = copy.deepcopy(base_simulation)
        # configure it
        sim.task.set_parameter("a", i)
        sim.task.set_parameter("b", i + 10)
        # and add it to the simulations
        e.simulations.append(sim)

    # run the experiment
    e.run(platform=platform)
    # wait on it
    # in most real scenarios, you probably do not want to wait as this will wait until all simulations
    # associated with an experiment are done. We do it in our examples to show feature and to enable
    # testing of the scripts
    e.wait()
    # use system status as the exit code
    sys.exit(0 if e.succeeded else -1)

Architecture and packages reference#

idmtools is built in Python and includes an architecture designed for ease of use, flexibility, and extensibility. You can quickly get up and running and see the capabilities of idmtools by using one of the many included example Python scripts demonstrating the functionality of the packages.

idmtools is built in a modular fashion, as seen in the diagrams below. idmtools design includes multiple packages and APIs, providing both the flexibility to only include the necessary packages for your modeling needs and the extensibility by using the APIs for any needed customization.

Packages overview#

(Diagram: overview of the idmtools packages)

Packages and APIs#

The following diagrams help illustrate the primary packages and associated APIs available for modeling and development with idmtools_platform_local:

Local platform#
(Diagram: local platform packages and APIs)

API Documentation#

idmtools_platform_local#
idmtools_platform_local package#

idmtools local platform.

The local platform allows running experiments locally using docker.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local Subpackages#
idmtools_platform_local.cli package#

idmtools cli module.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.cli Submodules#
idmtools_platform_local.cli.experiment module#

idmtools cli experiment tools.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.cli.experiment.prettify_experiment(experiment: Dict[str, Any])[source]#

Prettifies a JSON Experiment object for printing on a console.

This includes:

  • Making a pretty progress bar

  • URL-ifying the data paths

  • Sorting the columns

Parameters:

experiment – JSON representation of the Experiment(from API)

Returns:

Prettified experiment

idmtools_platform_local.cli.experiment.status(id: str | None, tags: List[Tuple[str, str]] | None)[source]#

List the status of experiment(s) with the ability to filter by experiment id and tags.

Parameters:
  • id (Optional[str]) – Optional ID of the experiment you want to filter by

  • tags (Optional[List[Tuple[str, str]]]) – Optional list of tuples in the form tag_name tag_value to use to filter experiments

idmtools_platform_local.cli.experiment.extra_commands()[source]#

This ensures our local platform specific commands are loaded.

idmtools_platform_local.cli.local module#

idmtools local platform group.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.cli.local.LocalCliContext(config=None)[source]#

Bases: object

Local CLI Context.

__init__(config=None)[source]#

Constructor.

do: DockerIO = None#
sm: DockerServiceManager = None#
idmtools_platform_local.cli.local.cli_command_type#

alias of LocalCliContext

idmtools_platform_local.cli.local.stop_services(cli_context: LocalCliContext, delete_data)[source]#

Stop services.

idmtools_platform_local.cli.local.container_status_text(name, container)[source]#

Print container status in color based on state.

Parameters:
  • name – Name to display

  • container – Container to status

Returns:

None

idmtools_platform_local.cli.simulation module#

idmtools local platform simulation cli commands.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.cli.simulation.prettify_simulation(simulation: Dict[str, Any])[source]#

Prettifies a JSON Simulation object for printing on a console.

This includes:

  • Making a pretty progress bar

  • URL-ifying the data paths

Parameters:

simulation – JSON representation of the Simulation (from API)

Returns:

Prettified simulation

idmtools_platform_local.cli.simulation.status(id: str | None, experiment_id: str | None, status: str | None, tags: List[Tuple[str, str]] | None)[source]#

List of statuses for simulation(s) with the ability to filter by id, experiment_id, status, and tags.

Parameters:
  • id (Optional[str]) – Optional Id of simulation

  • experiment_id (Optional[str]) – Optional experiment id

  • status (Optional[str]) – Optional status string to filter by

  • tags (Optional[List[Tuple[str, str]]]) – Optional list of tuples in the form tag_name tag_value to use to filter simulations

Returns:

None

idmtools_platform_local.cli.utils module#

idmtools local platform cli utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.cli.utils.get_service_info(service_manger: DockerServiceManager, diff: bool, logs: bool) str[source]#

Get info about services for statusing.

Parameters:
  • service_manger – Service manager to use

  • diff – Should we run a diff on the container.

  • logs – Should logs be used

Returns:

Local platform container info.

idmtools_platform_local.cli.utils.colorize_status(status: Status) str[source]#

Colorizes a status for the console.

Parameters:

status (Status) – Status to colorize

Returns:

Unicode colorized string of the status

Return type:

str

idmtools_platform_local.cli.utils.parent_status_to_progress(status: Dict[Status, int], width: int = 12) str[source]#

Convert a status object into a colorized progress bar for the console.

Parameters:
  • status (Dict[Status, int]) – Status dictionary. The dictionary should have Status values for keys, and the values should be the total number of simulations in the specific status. An example would be {Status.done: 30, Status.created: 1}

  • width (int) – The desired width of the progress bar

Returns:

Progress bar of the status

Return type:

str

idmtools_platform_local.cli.utils.urlize_data_path(path: str) str[source]#

URL-ize a data path so it can be made clickable in the console (if the console supports it).

Parameters:

path (str) – path to urlize

Returns:

Path as URL.

Return type:

str

idmtools_platform_local.client package#
idmtools_platform_local.client Submodules#
idmtools_platform_local.client.base module#

idmtools local platform API Client.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.client.base.BaseClient[source]#

Bases: object

Base Local platform client.

base_url = 'http://localhost:5000/api'#
classmethod get(path, **kwargs) Response[source]#

Get request.

classmethod post(path, **kwargs) Response[source]#

Post request.

classmethod put(path, **kwargs) Response[source]#

Put request.

classmethod delete(path, **kwargs) Response[source]#

Delete request.

idmtools_platform_local.client.experiments_client module#

idmtools local platform experiment API Client.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.client.experiments_client.ExperimentsClient[source]#

Bases: BaseClient

Provides API client for Experiments.

path_url = 'experiments'#
classmethod get_all(tags: List[Tuple[str, str]] | None = None, page: int | None = None, per_page: int | None = None) List[Dict[str, Any]][source]#

Get all experiments with options to filter by tags.

Parameters:
  • per_page – How many experiments to return per page

  • page – Which page

  • tags (Optional[List[Tuple[str, str]]]) – List of tags/values to filter experiment by

Returns:

returns list of experiments

Return type:

List[Dict[str, Any]]

classmethod get_one(id: str, tags: List[Tuple[str, str]] | None = None) Dict[str, Any][source]#

Convenience method to get one experiment.

Parameters:
  • id (str) – ID of the experiment

  • tags (Optional[List[Tuple[str, str]]]) – List of tags/values to filter experiment by

Returns:

Dictionary containing the experiment objects

Return type:

dict

classmethod delete(id: str, delete_data: bool = False, ignore_doesnt_exist: bool = True) bool[source]#

Delete an experiment. Optionally you can delete the experiment data. WARNING: Deleting the data is irreversible.

Parameters:
  • id (str) – ID of the experiments

  • delete_data (bool) – Delete data directory including simulations

  • ignore_doesnt_exist – Ignore error if the specific experiment doesn’t exist

Returns:

True if deletion succeeded
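
A short usage sketch of ExperimentsClient against a running local platform is shown below; the experiment_id key used when printing is a hypothetical field name for illustration.

from idmtools_platform_local.client.experiments_client import ExperimentsClient

# List experiments tagged type=test (the local platform must be running)
experiments = ExperimentsClient.get_all(tags=[("type", "test")], per_page=10)
for experiment in experiments:
    # experiment_id is a hypothetical key name for illustration
    print(experiment["experiment_id"])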

idmtools_platform_local.client.healthcheck_client module#

idmtools local platform healthcheck API Client.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.client.healthcheck_client.HealthcheckClient[source]#

Bases: BaseClient

Provides Healthcheck API client.

path_url = 'healthcheck'#
classmethod get_all() List[Dict[str, Any]][source]#

Get all health check info.

Returns:

returns list of experiments

Return type:

List[Dict[str, Any]]

classmethod get_one() Dict[str, Any][source]#

Convenience method to get one specific healthcheck.

Returns:

Dictionary containing the experiment objects

Return type:

dict

classmethod delete(*args, **kwargs) bool[source]#

Delete request.

classmethod post(*args, **kwargs) bool[source]#

Post request.

idmtools_platform_local.client.simulations_client module#

idmtools local platform simulations API Client.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.client.simulations_client.SimulationsClient[source]#

Bases: BaseClient

Provide API client for Simulations.

path_url = 'simulations'#
classmethod get_all(experiment_id: str | None = None, status: Status | None = None, tags: List[Tuple[str, str]] | None = None, page: int | None = None, per_page: int | None = None) List[Dict[str, Any]][source]#

Get all simulations matching the criteria.

Parameters:
  • experiment_id – ID of the simulation

  • status – Optional status

  • tags – List of tags/values to filter experiment by

  • page – page

  • per_page – items per page

Returns:

return list of simulations

Return type:

List[Dict[str, Any]]

classmethod get_one(simulation_id: str, experiment_id: str | None = None, status: Status | None = None, tags: List[Tuple[str, str]] | None = None) Dict[str, Any][source]#

Get one simulation.

Parameters:
  • simulation_id (str) – ID of the simulation

  • experiment_id (Optional[str]) – ID of the experiment

  • status (Optional[Status]) – Optional status

  • tags (Optional[List[Tuple[str, str]]]) – List of tags/values to filter experiment by

Returns:

the simulation as a dict

Return type:

Dict[str, Any]

classmethod cancel(simulation_id: str) Dict[str, Any][source]#

Marks a simulation to be canceled. Canceled jobs are only truly canceled when the queue message is processed.

Parameters:

simulation_id (str) – ID of the simulation to cancel

Returns:

Cancel result
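
A short usage sketch combining get_all and cancel is shown below; the simulation_uid key is a hypothetical field name for illustration.

from idmtools_platform_local.client.simulations_client import SimulationsClient

# Fetch all simulations for an experiment and request cancellation of the first
simulations = SimulationsClient.get_all(experiment_id="ABC123")
if simulations:
    # simulation_uid is a hypothetical key name for illustration
    result = SimulationsClient.cancel(simulations[0]["simulation_uid"])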

idmtools_platform_local.infrastructure package#

idmtools infrastructure module.

This module manages all our internal services/docker container for the local platform including

  • Postgres

  • Redis

  • The API/Workers

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.infrastructure Submodules#
idmtools_platform_local.infrastructure.base_service_container module#

idmtools base service container.

This defines the base docker container. We use this for each service to build our containers as needed.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.infrastructure.base_service_container.BaseServiceContainer(container_name: str | None = None, image: str | None = None, client: DockerClient | None = None, config_prefix: str | None = None, network: str | None = None)[source]#

Bases: ABC

Provides the base abstract class for local platform container objects/managers.

container_name: str = None#
image: str = None#
client: DockerClient = None#
config_prefix: str = None#
network: str = None#
static get_common_config(container_name: str, image: str, network: str, port_bindings: Dict | None = None, volumes: Dict | None = None, mem_limit: str | None = None, mem_reservation: str | None = None, environment: List[str] | None = None, extra_labels: Dict | None = None, **extras) dict[source]#

Returns portions of docker container configs that are common between all the different containers used within our platform.

Parameters:
  • container_name – Container name

  • image – Image to use

  • network – Network to use

  • port_bindings – Port binding

  • volumes – Volume definitions

  • mem_limit – Memory limit

  • mem_reservation – Memory reservation

  • environment – Environment vars

  • extra_labels – Extra labels to use

  • **extras

Returns:

Common configuration object to use when creating a container.

Notes

Memory strings should match those used by docker. See --memory details at https://docs.docker.com/engine/reference/run/#runtime-constraints-on-resources

get() Container | None[source]#

Get the container.

Returns:

Container if it is running.

get_or_create(spinner=None) Container[source]#

Get or Create a container.

Parameters:

spinner – Optional spinner to display

Returns:

Docker container object representing service container

ensure_container_is_running(container: Container, spinner=None) Container[source]#

Ensures the container is running.

Parameters:
  • container – container to check if it is running

  • spinner – Optional spinner to show we are busy while checking.

Returns:

Container

has_different_config(container, show_diff: bool = True)[source]#

Detect if the config is different than that of the running container.

Parameters:
  • container – Container

  • show_diff – Should we display diff

Returns:

True if there is differing configuration

get_running_config(container)[source]#

Fetches the config used to start a container.

Parameters:

container – Container to use

Returns:

The config from the running container.

static copy_config_to_container(container: Container, config: dict)[source]#

Copy the configuration used to create container to a container for future reference.

Parameters:
  • container – Target container

  • config – Config to copy

Returns:

None

create(spinner=None) Container[source]#

Create our container.

Parameters:

spinner – Optional spinner. When provided, we will use it to indicate we are busy during long-running tasks.

Returns:

Created Container.

Raises:
  • EnvironmentError - If a container tries to start with a port in use

  • ValueError - Unknown cause

Notes

  • TODO - Add Exception for environment error with doc_link

static wait_on_status(container, sleep_interval: float = 0.2, max_time: float = 2, statutes_to_wait_for: List[str] | None = None)[source]#

Wait on a container to achieve a specific status.

Parameters:
  • container – Target container

  • sleep_interval – How long to wait between checks

  • max_time – Max time to wait (defaults to 2 seconds)

  • statutes_to_wait_for – List of statuses to wait for. When not set, we use starting and created.

Returns:

None

stop(remove=False, container: Container | None = None)[source]#

Stop a container.

Parameters:
  • remove – When true, the container will be removed after being stopped

  • container – Container to stop

Returns:

None

restart(container: Container | None = None)[source]#

Restart a container.

Parameters:

container – Container to restart.

Returns:

None

get_logs()[source]#

Get logs from container.

Returns:

Container logs as a string.

abstract get_configuration() Dict[source]#

Get configuration.

Each sub-class should define this to provide their own specific configuration.

Returns:

Configuration dict

__init__(container_name: str | None = None, image: str | None = None, client: DockerClient | None = None, config_prefix: str | None = None, network: str | None = None) None#
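
To illustrate the pattern, below is a minimal, hypothetical subclass sketch; the container name, image, and memory limit are placeholders, and get_common_config is used as documented above.

from typing import Dict

from idmtools_platform_local.infrastructure.base_service_container import BaseServiceContainer

class MyServiceContainer(BaseServiceContainer):
    # Hypothetical service container, for illustration only
    container_name: str = 'idmtools_myservice'
    image: str = 'alpine:3.12'
    config_prefix: str = 'myservice_'

    def get_configuration(self) -> Dict:
        # Build on the docker config shared by all platform containers
        return self.get_common_config(
            container_name=self.container_name,
            image=self.image,
            network=self.network,
            mem_limit='64m',
        )
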
idmtools_platform_local.infrastructure.docker_io module#

idmtools docker input/output operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.infrastructure.docker_io.DockerIO(host_data_directory: str = '/home/docs/.local_data')[source]#

Bases: object

Provides most of our file operations for our docker containers/local platform.

host_data_directory: str = '/home/docs/.local_data'#
delete_files_below_level(directory, target_level=1, current_level=1)[source]#

Delete files below a certain depth in a target directory.

Parameters:
  • directory – Target directory

  • target_level – Depth to begin deleting. Default to 1

  • current_level – Current level. We call recursively so this should be 1 on initial call.

Returns:

None

Raises:

PermissionError - Raised if there are issues deleting a file.

cleanup(delete_data: bool = True, shallow_delete: bool = False) NoReturn[source]#

Stops the running services, removes local data, and removes network.

You can optionally disable the deleting of local data.

Parameters:
  • delete_data (bool) – When true, deletes local data

  • shallow_delete (bool) – Deletes the data but not the container folders(redis, workers). Preferred to preserve permissions and resolve docker issues

Returns:

(NoReturn)

copy_to_container(container: Container, destination_path: str, file: str | bytes | None = None, content: str | bytes = None, dest_name: str | None = None) bool[source]#

Copies a physical file or content in memory to a container.

You can also choose a different name for the destination file by using the dest_name option.

Parameters:
  • container – Container to copy the file to

  • file – Path to the file to copy

  • content – Content to copy

  • destination_path – Path within the container to copy the file to(should be a directory)

  • dest_name – Optional parameter for destination filename. By default, the source filename is used

Returns:

(bool) True if the copy succeeds, False otherwise

sync_copy(futures)[source]#

Sync the copy operations queue in the io_queue.

This allows us to take advantage of multi-threaded copying while also making it convenient to have sync points, such as uploading the assets in parallel but pausing just before sync point.

Parameters:

futures – Futures to wait on.

Returns:

Final results of copy operations.

copy_multiple_to_container(container: Container, files: Dict[str, Dict[str, Any]], join_on_copy: bool = True)[source]#

Copy multiple items to a container.

Parameters:
  • container – Target container

  • files – Files to copy in form target directory -> filename -> data

  • join_on_copy – Should we join the threading on copy(treat as an atomic unit)

Returns:

True if copy succeeded, false otherwise

static create_archive_from_bytes(content: bytes | BytesIO | BinaryIO, name: str) BytesIO[source]#

Create a tar archive from bytes. Used to copy to docker.

Parameters:
  • content – Content to copy into tar

  • name – Name for file in archive

Returns:

(BytesIO) Return bytesIO object

create_directory(dir: str) bool[source]#

Create a directory in a container.

Parameters:

dir – Path to directory to create

Returns:

(ExecResult) Result of the mkdir operation

__init__(host_data_directory: str = '/home/docs/.local_data') None#
idmtools_platform_local.infrastructure.postgres module#

idmtools postgres service. Used for experiment data.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.infrastructure.postgres.PostgresContainer(container_name: str = 'idmtools_postgres', image: str = 'postgres:11.4', client: DockerClient | None = None, config_prefix: str = 'postgres_', network: str | None = None, host_data_directory: str | None = None, port: int = 5432, mem_limit: str = '128m', mem_reservation: str = '32m', run_as: str | None = None, password: str = 'idmtools', data_volume_name: str = 'idmtools_local_postgres')[source]#

Bases: BaseServiceContainer

Defines the postgres container for the local platform.

host_data_directory: str = None#
port: int = 5432#
mem_limit: str = '128m'#
mem_reservation: str = '32m'#
run_as: str = None#
image: str = 'postgres:11.4'#
container_name: str = 'idmtools_postgres'#
password: str = 'idmtools'#
data_volume_name: str = 'idmtools_local_postgres'#
config_prefix: str = 'postgres_'#
get_configuration() Dict[source]#

Returns the docker config for the postgres container.

Returns:

(dict) Dictionary representing the docker config for the postgres container

create(spinner=None) Container[source]#

Create our postgres container.

Here we create the postgres data volume before creating the container.

create_postgres_volume() NoReturn[source]#

Creates our postgres volume.

Returns:

None

__init__(container_name: str = 'idmtools_postgres', image: str = 'postgres:11.4', client: DockerClient | None = None, config_prefix: str = 'postgres_', network: str | None = None, host_data_directory: str | None = None, port: int = 5432, mem_limit: str = '128m', mem_reservation: str = '32m', run_as: str | None = None, password: str = 'idmtools', data_volume_name: str = 'idmtools_local_postgres') None#
idmtools_platform_local.infrastructure.redis module#

idmtools redis service.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.infrastructure.redis.RedisContainer(container_name: str = 'idmtools_redis', image: str = 'redis:5.0.4-alpine', client: DockerClient | None = None, config_prefix: str = 'redis_', network: str | None = None, host_data_directory: str | None = None, mem_limit: str = '256m', mem_reservation: str = '64m', run_as: str | None = None, port: int = 6379, data_volume_name: str | None = None)[source]#

Bases: BaseServiceContainer

Provides the redis container for local platform.

host_data_directory: str = None#
mem_limit: str = '256m'#
mem_reservation: str = '64m'#
run_as: str = None#
port: int = 6379#
image: str = 'redis:5.0.4-alpine'#
data_volume_name: str = None#
container_name: str = 'idmtools_redis'#
config_prefix: str = 'redis_'#
get_configuration() dict[source]#

Get our configuration to run redis.

Returns:

Redis config.

__init__(container_name: str = 'idmtools_redis', image: str = 'redis:5.0.4-alpine', client: DockerClient | None = None, config_prefix: str = 'redis_', network: str | None = None, host_data_directory: str | None = None, mem_limit: str = '256m', mem_reservation: str = '64m', run_as: str | None = None, port: int = 6379, data_volume_name: str | None = None) None#
idmtools_platform_local.infrastructure.service_manager module#

idmtools service manager. Manages all our local platform services.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.infrastructure.service_manager.DockerServiceManager(client: DockerClient, host_data_directory: str = '/home/docs/.local_data', network: str = 'idmtools', redis_image: str = 'redis:5.0.4-alpine', heartbeat_timeout: int = 15, redis_port: int = 6379, runtime: str | None = 'runc', redis_mem_limit: str = '256m', redis_mem_reservation: str = '32m', postgres_image: str = 'postgres:11.4', postgres_mem_limit: str = '128m', postgres_mem_reservation: str = '32m', postgres_port: str | None = 5432, workers_image: str | None = None, workers_ui_port: int = 5000, workers_mem_limit: str | None = None, workers_mem_reservation: str = '64m', run_as: str | None = None, enable_singularity_support: bool = False, _services: Dict[str, BaseServiceContainer] | None = None)[source]#

Bases: object

Provides single interface to manage all the local platform services.

client: DockerClient#
host_data_directory: str = '/home/docs/.local_data'#
network: str = 'idmtools'#
redis_image: str = 'redis:5.0.4-alpine'#
heartbeat_timeout: int = 15#
redis_port: int = 6379#
runtime: str | None = 'runc'#
redis_mem_limit: str = '256m'#
redis_mem_reservation: str = '32m'#
postgres_image: str = 'postgres:11.4'#
postgres_mem_limit: str = '128m'#
postgres_mem_reservation: str = '32m'#
postgres_port: str | None = 5432#
workers_image: str = None#
workers_ui_port: int = 5000#
workers_mem_limit: str = None#
workers_mem_reservation: str = '64m'#
run_as: str | None = None#
enable_singularity_support: bool = False#
init_services()[source]#

Start all the containers we should have running.

cleanup(delete_data: bool = False, tear_down_brokers: bool = False) NoReturn[source]#

Stops the containers and removes the network.

Optionally the postgres data container can be deleted as well as closing any active Redis connections.

Parameters:
  • delete_data – Delete postgres data

  • tear_down_brokers – True to close redis brokers, false otherwise

Returns:

NoReturn

static setup_broker(heartbeat_timeout)[source]#

Start the broker to send data to workers.

static restart_brokers(heartbeat_timeout)[source]#

Restart brokers talking to workers.

create_services(spinner=None) NoReturn[source]#

Create all the components of local platform.

Our architecture is as depicted in the UML diagram below

(UML diagram of the local platform service architecture)
Returns:

(NoReturn)

wait_on_ports_to_open(ports: List[str], wait_between_tries: int | float = 0.2, max_retries: int = 5, sleep_after: int | float = 0.5) bool[source]#

Polls a list of port attributes (e.g. postgres_port, redis_port) and checks if they are currently open.

We use this to verify postgres/redis are ready for our workers.

Parameters:
  • ports – List of port attributes

  • wait_between_tries – Time between port checks

  • max_retries – Max checks

  • sleep_after – Sleep after all ports are found open (Postgres starts accepting connections before it is actually ready)

Returns:

True if ports are ready

stop_services(spinner=None) NoReturn[source]#

Stops all running IDM Tools services.

Returns:

(NoReturn)

get(container_name: str, create=True) Container[source]#

Get the container with the specified name.

Parameters:
  • container_name – Name of container

  • create – Create if it doesn’t exist

Returns:

Container

get_container_config(service: BaseServiceContainer, opts=None)[source]#

Get the container config for the service.

Parameters:
  • service – Service to get config for

  • opts – Opts to Extract. Should be a fields object

Returns:

Container config

restart_all(spinner=None) NoReturn[source]#

Restart all the IDM-Tools services.

Returns:

(NoReturn)

static is_port_open(host: str, port: int) bool[source]#

Check if a port is open.

Parameters:
  • host – Host to check

  • port – Port to check

Returns:

True if port is open, False otherwise

static stop_service_and_wait(service) bool[source]#

Stop a service and wait.

Parameters:

service – Service to stop

Returns:

True

get_network() Network[source]#

Fetches the IDM Tools network.

Returns:

(Network) Return Docker network object

__init__(client: DockerClient, host_data_directory: str = '/home/docs/.local_data', network: str = 'idmtools', redis_image: str = 'redis:5.0.4-alpine', heartbeat_timeout: int = 15, redis_port: int = 6379, runtime: str | None = 'runc', redis_mem_limit: str = '256m', redis_mem_reservation: str = '32m', postgres_image: str = 'postgres:11.4', postgres_mem_limit: str = '128m', postgres_mem_reservation: str = '32m', postgres_port: str | None = 5432, workers_image: str | None = None, workers_ui_port: int = 5000, workers_mem_limit: str | None = None, workers_mem_reservation: str = '64m', run_as: str | None = None, enable_singularity_support: bool = False, _services: Dict[str, BaseServiceContainer] | None = None) None#
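
A minimal usage sketch is shown below; the host data directory is a placeholder, and the manager is driven through the documented create_services and stop_services methods.

import docker

from idmtools_platform_local.infrastructure.service_manager import DockerServiceManager

# Connect to the local docker daemon and bring up the platform services
client = docker.from_env()
manager = DockerServiceManager(client, host_data_directory='/tmp/idmtools_data')
manager.create_services()

# ... do work against the local platform, then stop the services
manager.stop_services()
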
idmtools_platform_local.infrastructure.workers module#

idmtools workers container.

The workers container has our UI, API, and queue workers.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.infrastructure.workers.get_worker_image_default()[source]#

Get our default worker image.

We first check if it is nightly. Nightly will ALWAYS use staging.

If we are not using nightly, we then use production with the current version.

class idmtools_platform_local.infrastructure.workers.WorkersContainer(container_name: str = 'idmtools_workers', image: str = 'docker-production.packages.idmod.org/idmtools/local_workers:1.7.8', client: DockerClient | None = None, config_prefix: str = 'workers_', network: str | None = None, host_data_directory: str | None = None, postgres_port: int = 5432, redis_port: int = 6379, ui_port: int = 5000, mem_limit: str = '16g', mem_reservation: str = '64m', run_as: str | None = None, debug_api: bool = True, data_volume_name: str | None = None, enable_singularity_support: bool = False)[source]#

Bases: BaseServiceContainer

Provides the Workers container definition.

host_data_directory: str = None#
postgres_port: int = 5432#
redis_port: int = 6379#
ui_port: int = 5000#
mem_limit: str = '16g'#
mem_reservation: str = '64m'#
run_as: str = None#
debug_api: bool = True#
image: str = 'docker-production.packages.idmod.org/idmtools/local_workers:1.7.8'#
container_name: str = 'idmtools_workers'#
data_volume_name: str = None#
config_prefix: str = 'workers_'#
enable_singularity_support: bool = False#
get_configuration() Dict[source]#

Get our configuration.

create(spinner=None) Container[source]#

Create our workers container.

Raises:

EnvironmentError - If the local platform isn't in a ready state.

__init__(container_name: str = 'idmtools_workers', image: str = 'docker-production.packages.idmod.org/idmtools/local_workers:1.7.8', client: DockerClient | None = None, config_prefix: str = 'workers_', network: str | None = None, host_data_directory: str | None = None, postgres_port: int = 5432, redis_port: int = 6379, ui_port: int = 5000, mem_limit: str = '16g', mem_reservation: str = '64m', run_as: str | None = None, debug_api: bool = True, data_volume_name: str | None = None, enable_singularity_support: bool = False) None#
idmtools_platform_local.platform_operations package#

idmtools local platform operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.platform_operations Submodules#
idmtools_platform_local.platform_operations.experiment_operations module#

idmtools local platform experiment operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.platform_operations.experiment_operations.LocalPlatformExperimentOperations(platform: LocalPlatform, platform_type: type = <class 'idmtools_platform_local.platform_operations.utils.ExperimentDict'>)[source]#

Bases: IPlatformExperimentOperations

Provide Experiment operation for the LocalPlatform.

platform: LocalPlatform#
platform_type#

alias of ExperimentDict

get(experiment_id: UUID, **kwargs) ExperimentDict[source]#

Get the experiment object by id.

Parameters:
  • experiment_id – Id

  • **kwargs

Returns:

Experiment Dict object

platform_create(experiment: Experiment, **kwargs) Dict[source]#

Create an experiment.

Parameters:
  • experiment – Experiment to create

  • **kwargs

Returns:

Created experiment object and UUID

get_children(experiment: Dict, **kwargs) List[SimulationDict][source]#

Get children for an experiment.

Parameters:
  • experiment – Experiment to get chidren for

  • **kwargs

Returns:

List of simulation dicts

get_parent(experiment: Any, **kwargs) None[source]#

Experiments on the local platform have no parents, so this returns None.

Parameters:
  • experiment

  • **kwargs

Returns:

None

platform_run_item(experiment: Experiment, **kwargs)[source]#

Run the experiment.

Parameters:

experiment – experiment to run

Returns:

None

send_assets(experiment: Experiment, **kwargs)[source]#

Sends assets for specified experiment.

Parameters:

experiment – Experiment to send assets for

Returns:

None

refresh_status(experiment: Experiment, **kwargs)[source]#

Refresh status of experiment.

Parameters:

experiment – Experiment to refresh status for

Returns:

None

static from_experiment(experiment: Experiment) Dict[source]#

Create an experiment dictionary from an Experiment object.

Parameters:

experiment – Experiment object

Returns:

Experiment as a local platform dict

to_entity(experiment: Dict, children: bool = True, **kwargs) Experiment[source]#

Convert an ExperimentDict to an Experiment.

Parameters:
  • experiment – Experiment to convert

  • **kwargs

Returns:

object as an IExperiment object

list_assets(experiment: Experiment, **kwargs) List[Asset][source]#

List assets for an experiment.

Parameters:

experiment – Experiment object

Returns:

List of assets.

__init__(platform: LocalPlatform, platform_type: type = <class 'idmtools_platform_local.platform_operations.utils.ExperimentDict'>) None#
idmtools_platform_local.platform_operations.simulation_operations module#

idmtools local platform simulation operations.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.platform_operations.simulation_operations.LocalPlatformSimulationOperations(platform: LocalPlatform, platform_type: type = <class 'idmtools_platform_local.platform_operations.utils.SimulationDict'>)[source]#

Bases: IPlatformSimulationOperations

Provides Simulation Operations to the Local Platform.

platform: LocalPlatform#
platform_type#

alias of SimulationDict

get(simulation_id: UUID, **kwargs) Dict[source]#

Fetch simulation with specified id.

Parameters:
  • simulation_id – simulation id

  • **kwargs

Returns:

SimulationDict

platform_create(simulation: Simulation, **kwargs) Dict[source]#

Create a simulation object.

Parameters:
  • simulation – Simulation to create

  • **kwargs

Returns:

Simulation dict and created id

batch_create(sims: List[Simulation], **kwargs) List[SimulationDict][source]#

Batch creation of simulations.

This is optimized by bulk uploading assets after creating all of the assets

Parameters:
  • sims – List of sims to create

  • **kwargs

Returns:

List of SimulationDict object and their IDs

get_parent(simulation: SimulationDict, **kwargs) ExperimentDict[source]#

Get the parent of a simulation, aka its experiment.

Parameters:
  • simulation – Simulation to get parent from

  • **kwargs

Returns:

ExperimentDict object

platform_run_item(simulation: Simulation, **kwargs)[source]#

On the local platform, simulations are run via a queue and commissioned through create.

Parameters:

simulation

Returns:

None

send_assets(simulation: Simulation, worker: Container | None = None, **kwargs)[source]#

Transfer assets to local sim folder for simulation.

Parameters:
  • simulation – Simulation object

  • worker – docker worker containers. Useful in batches

Returns:

None

refresh_status(simulation: Simulation, **kwargs)[source]#

Refresh status of a sim.

Parameters:

simulation

Returns:

None

get_assets(simulation: Simulation, files: List[str], **kwargs) Dict[str, bytearray][source]#

Get assets for a specific simulation.

Parameters:
  • simulation – Simulation object to fetch files for

  • files – List of files to fetch

Returns:

Returns a dict containing a mapping of filename->bytearray

list_assets(simulation: Simulation, **kwargs) List[Asset][source]#

List assets for a sim.

Parameters:

simulation – Simulation object

Returns:

List of assets

to_entity(local_sim: Dict, load_task: bool = False, parent: Experiment | None = None, **kwargs) Simulation[source]#

Convert a sim dict object to an ISimulation.

Parameters:
  • local_sim – simulation to convert

  • load_task – Load the Task object as well. This can take much longer and loads more data from the platform

  • parent – optional experiment object

  • **kwargs

Returns:

ISimulation object

__init__(platform: LocalPlatform, platform_type: type = <class 'idmtools_platform_local.platform_operations.utils.SimulationDict'>) None#
idmtools_platform_local.platform_operations.utils module#

idmtools local platform operations utils.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.platform_operations.utils.ExperimentDict[source]#

Bases: dict

Alias for local platform experiment objects.

class idmtools_platform_local.platform_operations.utils.SimulationDict[source]#

Bases: dict

Alias for local platform simulation objects.

idmtools_platform_local.platform_operations.utils.local_status_to_common(status) EntityStatus[source]#

Convert local platform status to idmtools status.

Parameters:

status

Returns:

Common idmtools status

idmtools_platform_local.platform_operations.utils.download_lp_file(filename: str, buffer_size: int = 128) Generator[bytes, None, None][source]#

Returns a generator to download files on the local platform.

Parameters:
  • filename – Filename to download

  • buffer_size – Buffer size

Returns:

Generator for file content

idmtools_platform_local Submodules#
idmtools_platform_local.config module#

idmtools local platform common configuration.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

idmtools_platform_local.config.get_api_path()[source]#

Get Path of API.

idmtools_platform_local.local_cli module#

idmtools local platform cli interface.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.local_cli.LocalCLI[source]#

Bases: IPlatformCLI

Provides the LocalCLI implementation of the common PlatformCLI interface.

get_experiment_status(id: str | None, tags: List[Tuple[str, str]] | None) NoReturn[source]#

Get experiment status.

get_simulation_status(id: str | None, experiment_id: str | None, status: str | None, tags: List[Tuple[str, str]] | None) NoReturn[source]#

Get simulation status.

get_platform_information(platform: LocalPlatform) dict[source]#

Get platform information.

class idmtools_platform_local.local_cli.LocalCLISpecification[source]#

Bases: PlatformCLISpecification

Provides plugin spec for LocalCLI.

get(configuration: dict) LocalCLI[source]#

Get CLI using provided configuration.

get_additional_commands() NoReturn[source]#

Load our cli commands.

get_description() str[source]#

Get description of our cli plugin.

idmtools_platform_local.local_platform module#

idmtools local platform.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.local_platform.LocalPlatform(*args, **kwargs)[source]#

Bases: IPlatform

Represents the platform that allows simulations to run locally.

host_data_directory: str = '/home/docs/.local_data'#

Directory where data for local platform such as files and postgres data will be stored

network: str = 'idmtools'#

Name of the docker network to use

redis_image: str = 'redis:5.0.9-alpine'#

What redis image should we use

redis_port: int = 6379#

Port for redis to bind to

runtime: str | None = None#

Runtime. Defaults to runc, but can be set to nvidia-docker

redis_mem_limit: str = '128m'#

Memory limit for redis

redis_mem_reservation: str = '64m'#

How much memory should redis preallocate

postgres_image: str = 'postgres:11.9'#

Postgres image to use

postgres_mem_limit: str = '64m'#

Postgres memory limit

postgres_mem_reservation: str = '32m'#

How much memory should postgres reserve

postgres_port: str | None = 5432#

What port should postgres listen to

workers_mem_limit: str = '16g'#

Memory limit for workers

workers_mem_reservation: str = '128m'#

How much memory should the workers pre-allocate

workers_image: str = None#

Workers image to use. Defaults to version compatible with current idmtools version

workers_ui_port: int = 5000#

Workers UI port

heartbeat_timeout: int = 15#

Heartbeat timeout

default_timeout: int = 45#

Default timeout when talking to local platform

launch_created_experiments_in_browser: bool = False#

Launch created experiments in the browser

auto_remove_worker_containers: bool = True#

Allows the user to specify auto-removal of Docker worker containers

enable_singularity_support: bool = False#

Enables Singularity support. This requires elevated privileges on Docker and should only be used when using Singularity within workflows. This feature is in BETA, so it may be unstable

cleanup(delete_data: bool = False, shallow_delete: bool = False, tear_down_brokers: bool = False)[source]#

Cleanup the platform.

If delete data is true, the local platform data is deleted. If shallow delete is true, just the data within the local platform data directory is deleted. If tear down brokers is true, the task brokers talking to the server are destroyed.

Parameters:
  • delete_data – Should data be deleted

  • shallow_delete – Should just the files be deleted

  • tear_down_brokers – Should we destroy our brokers

post_setstate()[source]#

Post set-state.

__init__(_uid: str = None, _IItem__pre_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _IItem__post_creation_hooks: ~typing.List[~typing.Callable[[IItem, IPlatform], None]] = <factory>, _platform_defaults: ~typing.List[~idmtools.entities.iplatform_default.IPlatformDefault] = <factory>, _config_block: str = None, host_data_directory: str = '/home/docs/.local_data', network: str = 'idmtools', redis_image: str = 'redis:5.0.9-alpine', redis_port: int = 6379, runtime: str | None = None, redis_mem_limit: str = '128m', redis_mem_reservation: str = '64m', postgres_image: str = 'postgres:11.9', postgres_mem_limit: str = '64m', postgres_mem_reservation: str = '32m', postgres_port: str | None = 5432, workers_mem_limit: str = '16g', workers_mem_reservation: str = '128m', workers_image: str = None, workers_ui_port: int = 5000, heartbeat_timeout: int = 15, default_timeout: int = 45, launch_created_experiments_in_browser: bool = False, auto_remove_worker_containers: bool = True, enable_singularity_support: bool = False) None#
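
A minimal sketch of constructing the platform with a few of the options above overridden; keyword arguments passed to the Platform factory take precedence over values from the configuration block, and the block name and values shown are illustrative.

from idmtools.core.platform_factory import Platform

# Unspecified options keep the defaults documented above
platform = Platform(
    "Local",
    workers_mem_limit="8g",
    workers_ui_port=5000,
    auto_remove_worker_containers=True,
)
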
idmtools_platform_local.plugin_info module#

idmtools local platform plugin spec.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.plugin_info.LocalPlatformSpecification[source]#

Bases: PlatformSpecification

Provide plugin spec for the LocalPlatform.

get_description() str[source]#

Get plugin description.

get(**configuration) IPlatform[source]#

Build our local platform from the passed in configuration object.

We do our import of platform here to avoid any weird import issues on plugin load.

Parameters:

configuration – Configuration to use with the local platform.

Returns:

Local platform object created.

example_configuration()[source]#

Get our example configuration.

get_type() Type[LocalPlatform][source]#

Get local platform type.

get_version() str[source]#

Returns the version of the plugin.

Returns:

Plugin Version

idmtools_platform_local.status module#

idmtools local platform statuses.

Copyright 2021, Bill & Melinda Gates Foundation. All rights reserved.

class idmtools_platform_local.status.Status(value)[source]#

Bases: Enum

Our status enum for jobs.

created = 'created'#
in_progress = 'in_progress'#
canceled = 'canceled'#
failed = 'failed'#
done = 'done'#
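
A minimal sketch of converting one of these statuses to the common idmtools status, assuming local_status_to_common() accepts a Status value.

from idmtools_platform_local.platform_operations.utils import local_status_to_common
from idmtools_platform_local.status import Status

# Map the local "done" status to the equivalent idmtools EntityStatus
common_status = local_status_to_common(Status.done)
print(common_status)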

CLI reference#

Simulations#

You can use the idmtools simulation command to get the status of simulations for the local platform. To see the list of options, type the following at a command prompt.

$ idmtools simulation --platform Local status --help
Usage: idmtools simulation status [OPTIONS]

  List of statuses for simulation(s) with the ability to filter by id,
  experiment_id, status, and tags.

  For example, to get the status of simulations for the platform using the
  local platform defaults, you would run idmtools simulation --platform Local
  status

  Another example would be to use a platform defined in a configuration block
  while also filtering tags where a == 0 idmtools simulation --config-block
  Local status --tags a 0

  Multiple tags idmtools simulation --config-block Local status --tags a 0
  --tags a 3

Options:
  --id TEXT             Filter status by simulation ID
  --experiment-id TEXT  Filter status by experiment ID
  --tags TEXT...        Tag to filter by. This should be in the form name
                        value. For example, if you have a tag type=PythonTask
                        you would use --tags type PythonTask. In addition, you
                        can provide multiple tags, e.g., --tags a 1 --tags b 2.
                        This will perform an AND-based query on the tags,
                        meaning only jobs containing ALL the specified tags
                        will be displayed
  --help                Show this message and exit.
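
For example, to check the status of simulations in a specific experiment that carry the tag type=PythonTask (the experiment ID below is a placeholder):

$ idmtools simulation --platform Local status --experiment-id <experiment_id> --tags type PythonTask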

Experiments#

You can use the idmtools experiment command to get the status of, and to delete, experiments for the local platform. The local platform must be running to use these commands. To see the list of commands and options for status, type the following at a command prompt.

$ idmtools experiment --platform Local status --help
Usage: idmtools experiment status [OPTIONS]

  List the status of experiment(s) with the ability to filter by experiment id
  and tags.

  For example, to get the status of experiments using the local platform
  defaults, you would run idmtools experiment --platform Local status

  Another example would be to use a platform defined in a configuration block
  while also filtering tags where a == 0 idmtools experiment --config-block
  Local status --tags a 0

  Multiple tags: idmtools experiment --config-block Local status --tags a 0
  --tags a 3

Options:
  --id TEXT       Filter status by experiment ID
  --tags TEXT...  Tag to filter by. This should be in the form name value. For
                  example, if you have a tag type=PythonTask you would use
                  --tags type PythonTask. In addition, you can provide
                  multiple tags, e.g., --tags a 1 --tags b 2. This will perform
                  an AND-based query on the tags, meaning only jobs containing
                  ALL the specified tags will be displayed
  --help          Show this message and exit.

To see the list of commands and options for delete, type the following at a command prompt.

$ idmtools experiment --platform Local delete --help
Usage: idmtools experiment delete [OPTIONS] EXPERIMENT_ID

  Delete an experiment, and optionally, its data

Options:
  --data / --no-data  Should we delete the data as well?
  --help              Show this message and exit.
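
For example, to delete an experiment along with its data (the experiment ID below is a placeholder):

$ idmtools experiment --platform Local delete <experiment_id> --data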

CLI Local Platform#

idmtools includes commands for managing the local platform. To see the list of commands, type the following at a command prompt.

$ idmtools local --help
Usage: idmtools local [OPTIONS] COMMAND [ARGS]...

  Commands related to managing the local platform

Options:
  --run-as TEXT  Change the default user you run docker containers as. Useful
                 in situations where you need to access docker with sudo.
                 Example values are "1000:1000"
  --help         Show this message and exit.

Commands:
  down     Shut down the local execution platform (and optionally delete data).
  info
  restart  Restart the local execution platform.
  start    Start the local execution platform.
  status   Check the status of the local execution platform.
  stop     Stop the local platform.

The platform settings are contained in the idmtools.ini file. For more information, see Configuration.
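
For reference, a local platform block in idmtools.ini might look like the following; the block name and values are illustrative, and any options you omit fall back to the defaults documented above.

[Local]
type = Local
workers_mem_limit = 8g
workers_ui_port = 5000
auto_remove_worker_containers = True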

idmtools includes a command-line interface (CLI) with options and commands to assist with getting started, managing and monitoring, and troubleshooting simulations and experiments. After you’ve installed idmtools_cli, you can view the available options and commands for the local platform by typing idmtools local --help at a command prompt, as shown above.


Glossary#

The following terms describe the features and functionality of the idmtools software, as well as information relevant to using idmtools.

analyzer#

Functionality that uses the MapReduce framework to process large data sets in parallel, typically on a high-performance computing (HPC) cluster. For example, if you would like to focus on specific data points from all simulations in one or more experiments, you can do so using analyzers with idmtools and plot the final output.

assets#

See asset collection.

builder#

A function, together with the list of values with which to call it, that is used to sweep through parameter values in a simulation.

calibration#

The process of adjusting the parameters of a simulation to better match the data from a particular time and place.

entity#

Each of the interfaces or classes that are well-defined models, types, and validations for idmtools items, such as simulations, analyzers, or tasks.

experiment#

Logical grouping of simulations. This allows for managing numerous simulations as a single unit or grouping.

high-performance computing (HPC)#

The use of parallel processing for running advanced applications efficiently, reliably, and quickly.

parameter sweep#

An iterative process in which simulations are run repeatedly using different values of the parameter(s) of choice. This process enables the modeler to determine a parameter’s “best” value or range of values.

platform#

The computing resource on which the simulation runs. See Supported platforms for more information on those that are currently supported.

simulation#

An individual run of a model. Generally, multiple simulations are run as part of an experiment.

suite#

Logical grouping of experiments. This allows for managing multiple experiments as a single unit or grouping.

task#

The individual actions that are processed for each simulation.

Changelog#

0.1.0#

Analyzers#
  • #0060 - Analyzer base class

Bugs#
  • #0095 - idmtools is not working for python 3.6

  • #0096 - pytest (and pytest-runner) should be installed by setup

  • #0105 - UnicodeDecodeError when run python example in LocalPlatform mode

  • #0114 - It should be possible to set base_simulation in the PythonExperiment constructor

  • #0115 - PythonSimulation constructor should abstract the parameters dict

  • #0124 - Can not run teststest_python_simulation.py from console

  • #0125 - relative_path for AssetCollection does not work

  • #0126 - Same test in issue #125 does not working for localPlatform

  • #0129 - new python model root node changed from “config” to “parameters”

  • #0137 - PythonExperiment fails if pass assets

  • #0138 - test_sir.py does not set parameter

  • #0142 - experiment.batch_simulations seems not to be batching

  • #0143 - COMPSPlatform’s refresh_experiment_status() get called too much from ExperimentManager’s wait_till_done() mathod

  • #0150 - missing pandas package

  • #0151 - log throw error from IPersistanceService.py’s save method

  • #0161 - tests/test_python_simulation.py’s test_add_dirs_to_assets_comps() return different asset files for windows and linux

  • #0171 - Workflow: fix loop detection

  • #0203 - Running new builds on Linux fails in Bamboo due to datapostgres-data file folder permissions

  • #0206 - test_python_simulation.py failed for all local test in windows

CLI#
  • #0007 - Command line functions definition

  • #0118 - Add the printing of children in the EntityContainer

Configuration#
  • #0047 - Configuration file read on a per-folder basis

  • #0048 - Validation for the configuration file

  • #0049 - Configuration file is setting correct parameters in platform

Core#
  • #0006 - Service catalog

  • #0014 - Package organization and pre-requisites

  • #0081 - Allows the sweeps to be created in arms

  • #0087 - Raise an exception if we have 2 files with the same relative path in the asset collection

  • #0091 - Refactor the Experiment/Simulation objects to not persist the simulations

  • #0092 - Generalize the simulations/experiments for Experiment/Suite

  • #0102 - [Local Runner] Retrieve simulations for experiment

  • #0107 - LocalPlatform does not detect duplicate files in AssetCollectionFile for pythonExperiment

  • #0140 - Fetch simulations at runtime

  • #0148 - Add python tasks

  • #0180 - switch prettytable for tabulate

Documentation#
  • #0004 - Notebooks exploration for examples

  • #0085 - Setup Sphinx and GitHub pages for the docs

  • #0090 - “Development installation steps” missing some steps

Models#
  • #0008 - Which models support out of the box?

  • #0136 - Create an envelope argument for the PythonSimulation

Platforms#
  • #0068 - [Local Runner] Simulation status monitoring

  • #0069 - [Local Runner] Database

  • #0094 - Batch and parallelize simulation creation in the COMPSPlatform

1.0.0#

Analyzers#
  • #0034 - Create the Plotting step

  • #0057 - Output files retrieval

  • #0196 - Filtering

  • #0197 - Select_simulation_data

  • #0198 - Finalize

  • #0279 - Port dtk-tools analyze system to idmtools

  • #0283 - Fix up all platform-based test due to analyzer/platform refactor/genericization

  • #0337 - Change AnalyzeManager to support passing ids (Experiment, Simulation, Suite)

  • #0338 - Two AnalyzeManager files - one incorrect and needs to be removed

  • #0340 - Cleanup DownloadAnalyzer

  • #0344 - AnalyzeManager configuration should be option parameter

  • #0589 - Rename suggestion: example_analysis_multiple_cases => example_analysis_MultipleCases

  • #0592 - analyzers error on platform.get_files for COMPS: argument of type ‘NoneType’ is not iterable

  • #0594 - analyzer error multiprocessing pool StopIteration error in finalize_results

  • #0614 - Convenience function to exclude items in analyze manager

  • #0619 - Ability to get exp sim object ids in analyzers

Bugs#
  • #0124 - Can not run teststest_python_simulation.py from console

  • #0125 - relative_path for AssetCollection does not work

  • #0129 - new python model root node changed from “config” to “parameters”

  • #0142 - experiment.batch_simulations seems not to be batching

  • #0143 - COMPSPlatform’s refresh_experiment_status() get called too much from ExperimentManager’s wait_till_done() mathod

  • #0150 - missing pandas package

  • #0184 - Missing ‘data’ dir for test_experiment_manager test. (TestPlatform)

  • #0223 - UnicodeDecodeError for testcases in test_dtk.py when run with LocalPlatform

  • #0236 - LocalRunner: ExperimentsClient get_all method should have parameter ‘tags’ not ‘tag’

  • #0265 - load_files for DTKExperiment create nested ‘parameters’ in config.json

  • #0266 - load_files for demographics.json does not work

  • #0272 - diskcache objects cause cleanup failure if used in failing processes

  • #0294 - Docker containers failed to start if they are created but stopped

  • #0299 - Sometime in Windows command line, local docker runner stuck and no way to stop from command line

  • #0302 - Local Platform delete is broken

  • #0318 - Postgres Connection error on Local Platform

  • #0320 - COMPSPlatform Asset handling - currently DuplicatedAssetError content is not same

  • #0323 - idmtools is not retro-compatible with pre-idmtools experiments

  • #0332 - with large number of simulations, local platform either timeout on dramatiq or stuck on persistamceService save method

  • #0339 - Analyzer tests fails on AnalyzeManager analyze len(self.potential_items) == 0

  • #0341 - AnalyzeManager Runtime error on worker_pool

  • #0346 - UnknownItemException for analyzers on COMPSPlatform PythonExperiments

  • #0350 - RunTask in local platform should catch exception

  • #0351 - AnalyzeManager finalize_results Process cannot access the cache.db because it is being used by another process

  • #0352 - Current structure of code leads to circular dependencies or classes as modules

  • #0367 - Analyzer does not work with reduce method with no hashable object

  • #0375 - AnalyzerManager does not work for case to add experiment to analyzermanager

  • #0376 - AnalyzerManager does not work for simulation

  • #0378 - experiment/simulation display and print are messed up in latest dev

  • #0386 - Local platform cannot create more than 20 simulations in a given experiment

  • #0398 - Ensure that redis and postgres ports work as expected

  • #0399 - PopulaionAnalyzer does not return all items in reduce mathod in centos platform

  • #0424 - ExperimentBuilder’s add_sweep_definition is not flexible enough to take more parameters

  • #0427 - Access to the experiment object in analyzers

  • #0453 - cli: “idmtools local down –delete-data” not really delete any .local_data in user default dir

  • #0458 - There is no way to add custom tags to simulations

  • #0465 - BuilderExperiment for sweep “string” is wrong

  • #0545 - pymake docker-local always fail in centos

  • #0553 - BLOCKING: idmtools_model_r does not get built with make setup-dev

  • #0560 - docker-compose build does not work for r-model example

  • #0562 - workflow_item_operations get workitem querycriteria fails

  • #0564 - typing is missing in asset_collection.py which almost break every tests

  • #0565 - missing ‘copy’ in local_platform.py

  • #0566 - test_tasks.py fail for case test_command_is_required

  • #0567 - ‘platform_supports’ is missing for test_comps_plugin.py in idmtools_platform_comps/tests

  • #0570 - webui for localhost:5000 got 403 error

  • #0572 - python 3.7.3 less version will fail for task type changing

  • #0585 - print(platform) throws exception for Python 3.6

  • #0588 - Running the dev installation in a virtualenv “installs” it globally

  • #0598 - CSVAnalyzer pass wrong value to parse in super().__init__ call

  • #0602 - Analyzer doesn’t work for my Python SEIR model

  • #0605 - When running multiple analyzers together, ‘data’ in one analyzer should not contains data from other analyzer

  • #0606 - can not import cached_property

  • #0608 - Cannot add custom tag to AssetCollection in idmtools

  • #0613 - idmtools webui does not working anymore

  • #0616 - AssetCollection pre_creation failed if no tag

  • #0617 - AssetCollection’s find_index_of_asset is wrong

  • #0618 - analyzer-manager should fail if map status return False

  • #0641 - Remove unused code in the python_requirements_ac

  • #0644 - Platform cannot run workitem directly

  • #0646 - platform.get_items(ac) not return tags

  • #0667 - analyzer_manager could stuck on _run_and_wait_for_reducing

CLI#
  • #0009 - Boilerplate command

  • #0118 - Add the printing of children in the EntityContainer

  • #0187 - Move the CLI package to idmtools/cli

  • #0190 - Add a platform attribute to the CLI commands

  • #0191 - Create a PlatformFactory

  • #0241 - CLI should be distinct package and implement as plugins

  • #0251 - Setup for the CLI package should provide a entrypoint for easy use of commands

  • #0252 - Add –debug to cli main level

Configuration#
  • #0248 - Logging needs to support user configuration through the idmtools.ini

  • #0392 - Improve IdmConfigParser: make decorator for ensure_ini() method…

  • #0597 - Platform should not be case sensitive.

Core#
  • #0032 - Create NextPointAlgorithm Step

  • #0042 - Stabilize the IStep object

  • #0043 - Create the generic Workflow object

  • #0044 - Implement validation for the Steps of a workflow based on Marshmallow

  • #0058 - Filtering system for simulations

  • #0081 - Allows the sweeps to be created in arms

  • #0091 - Refactor the Experiment/Simulation objects to not persist the simulations

  • #0141 - Standard Logging throughout tools

  • #0169 - Handle 3.6 requirements automatically

  • #0172 - Decide what state to store for tasks

  • #0173 - workflows: Decide on state storage scheme

  • #0174 - workflows: Reimplement state storage

  • #0175 - workflows: Create unit tests of core classes and behaviors

  • #0176 - workflows: reorganize files into appropriate repo/directory

  • #0180 - switch prettytable for tabulate

  • #0200 - Platforms should be plugins

  • #0238 - Simulations of Experiment should be made pickle ignored

  • #0244 - Inputs values needs to be validated when creating a Platform

  • #0257 - CsvExperimentBuilder does not handle csv field with empty space

  • #0268 - demographics filenames should be loaded to asset collection

  • #0274 - Unify id attribute naming scheme

  • #0281 - Improve Platform to display selected Block info when creating a platform

  • #0297 - Fix issues with platform factory

  • #0308 - idmtools: Module names should be consistent

  • #0315 - Basic support of suite in the tools

  • #0357 - ExperimentPersistService.save are not consistent

  • #0359 - SimulationPersistService is not used in Idmtools

  • #0361 - assets in Experiment should be made “pickle-ignore”

  • #0362 - base_simulation in Experiment should be made “pickle-ignore”

  • #0368 - PersistService should support clear() method

  • #0369 - The method create_simulations of Experiment should consider pre-defined max_workers and batch_size in idmtools.ini

  • #0370 - Add unit test for deepcopy on simulations

  • #0371 - Wrong type for platform_id in IEntity definition

  • #0391 - Improve Asset and AssetCollection classes by using @dataclass (field) for clear comparison

  • #0394 - Remove the ExperimentPersistService

  • #0438 - Support pulling Eradication from URLs and bamboo

  • #0518 - Add a task class.

  • #0520 - Rename current experiment builders to sweep builders

  • #0526 - Create New Generic Experiment Class

  • #0527 - Create new Generic Simulation Class

  • #0528 - Remove old Experiments/Simulations

  • #0529 - Create New Task API

  • #0530 - Rename current model api to simulation/experiment API.

  • #0538 - Refactor platform interface into subinterfaces

  • #0681 - idmtools should have way to query comps with filter

Developer/Test#
  • #0631 - Ensure setup.py is consistent throughout

Documentation#
  • #0100 - Installation steps documented for users

  • #0312 - idmtools: there is a typo in README

  • #0360 - The tools should refer to “EMOD” not “DTK”

  • #0474 - Stand alone builder

  • #0486 - Overview of the analysis in idmtools

  • #0510 - Local platform options

  • #0512 - SSMT platform options

  • #0578 - Add installation for users

  • #0593 - Simple Python SEIR model demo example

  • #0632 - Update idmtools_core setup.py to remove model emod from idm install

Feature Request#
  • #0061 - Built-in DownloadAnalyzer

  • #0064 - Support of CSV files

  • #0070 - [Local Runner] Output files serving

  • #0233 - Add local runner timeout

  • #0437 - Prompt users for docker credentials when not available

  • #0603 - Implement install custom requirement libs to asset collection with WorkItem

Models#
  • #0021 - Python model

  • #0024 - R Model support

  • #0053 - Support of demographics files

  • #0212 - Models should be plugins

  • #0287 - Add info about support models/docker support to platform

  • #0288 - Create DockerExperiment and subclasses

  • #0519 - Move experiment building to ExperimentBuilder

  • #0521 - Create Generic Dictionary Config Task

  • #0522 - Create PythonTask

  • #0523 - Create PythonDictionaryTask

  • #0524 - Create RTask

  • #0525 - Create EModTask

  • #0535 - Create DockerTask

Platforms#
  • #0025 - LOCAL Platform

  • #0027 - SSMT Platform

  • #0094 - Batch and parallelize simulation creation in the COMPSPlatform

  • #0122 - Ability to create an AssetCollection based on a COMPS asset collection id

  • #0130 - User configuration and data storage location

  • #0186 - The local_runner client should move to the idmtools package

  • #0194 - COMPS Files retrieval system

  • #0195 - LOCAL Files retrieval system

  • #0221 - Local runner for experiment/simulations have different file hierarchy than COMPS

  • #0254 - Local Platform Asset should be implemented via API or Docker socket

  • #0264 - idmtools_local_runner’s tasks/run.py should have better handle for unhandled exception

  • #0276 - Docker services should be started for end-users without needing to use docker-compose

  • #0280 - Generalize sim/exp/suite format of ISimulation, IExperiment, IPlatform

  • #0286 - Add special GPU queue to Local Platform

  • #0305 - Create a website for local platform

  • #0306 - AssetCollection’s assets_from_directory logic wrong if set flatten and relative path at same time

  • #0313 - idmtools: MAX_SUBDIRECTORY_LENGTH = 35 should be made Global in COMPSPlatform definition

  • #0314 - Fix local platform to work with latest analyze/platform updates

  • #0316 - Integrate website with Local Runner Container

  • #0321 - COMPSPlatform _retrieve_experiment errors on experiments with and without suites

  • #0329 - Experiment level status

  • #0330 - Paging on simulation/experiment APIs for better UI experience

  • #0333 - ensure pyComps allows compatible releases

  • #0364 - Local platform should use production artfactory for docker images

  • #0381 - Support Work Items in COMPS Platform

  • #0387 - Local platform webUI only show simulations up to 20

  • #0393 - local platform tests keep getting EOFError while logger is in DEBUG and console is on

  • #0405 - Support analysis of data from Work Items in Analyze Manager

  • #0407 - Support Service Side Analysis through SSMT

  • #0447 - Set limitation for docker container’s access to memory

  • #0532 - Make updates to ExperimentManager/Platform to support tasks

  • #0540 - Create initial SSMT Plaform from COMPS Platform

  • #0596 - COMPSPlatform.get_files(item,..) not working for Experiment or Suite

  • #0635 - Update SSMT base image

  • #0639 - Add a way for the python_requirements_ac to use additional wheel file

  • #0676 - ssmt missing QueryCriteria support

  • #0677 - ssmt: refresh_status returns None

User Experience#
  • #0457 - Option to analyze failed simulations

1.0.1#

Analyzers#
  • #0778 - Add support for context platforms to analyzer manager

Bugs#
  • #0637 - pytest: ValueError: I/O operation on closed file, Printed at the end of tests.

  • #0663 - SSMT PlatformAnalysis can not put 2 analyzers in same file as main entry

  • #0696 - Rename num_retires to num_retries on COMPS Platform

  • #0702 - Can not analyze workitem

  • #0739 - Logging should load defaults with default config block is missing

  • #0741 - MAX_PATH issues with RequirementsToAssetCollection WI create_asset_collection

  • #0752 - type hint in analyzer_manager is wrong

  • #0758 - Workitem config should be validated on WorkItem for PythonAsset Collection

  • #0776 - Fix hook execution order for pre_creation

  • #0779 - Additional Sims is not being detected on TemplatedSimulations

  • #0788 - Correct requirements on core

  • #0791 - Missing asset file with RequirementsToAssetCollection

Core#
  • #0343 - Genericize experiment_factory to work for other items

  • #0611 - Consider excluding idmtools.log and COMPS_log.log on SSMT WI submission

  • #0737 - Remove standalone builder in favor of regular python

Developer/Test#
  • #0083 - Setup python linting for the Pull requests

  • #0671 - Python Linting

  • #0735 - Tag or remove local tests in idmtools-core tests

  • #0736 - Mark set of smoke tests to run in github actions

  • #0773 - Move model-emod to new repo

  • #0794 - build idmtools_platform_local fail with idmtools_webui error

Documentation#
  • #0015 - Add cookiecutter projects

  • #0423 - Create a clear document on what features are provided by what packages

  • #0473 - Create sweep without builder

  • #0476 - ARM builder

  • #0477 - CSV builder

  • #0478 - YAML builder

  • #0487 - Creation of an analyzer

  • #0488 - Base analyzer - Constructor

  • #0489 - Base analyzer - Filter function

  • #0490 - Base analyzer - Parsing

  • #0491 - Base analyzer - Working directory

  • #0492 - Base analyzer - Map function

  • #0493 - Base analyzer - Reduce function

  • #0494 - Base analyzer - per group function

  • #0495 - Base analyzer - Destroy function

  • #0496 - Features of AnalyzeManager - Overview

  • #0497 - Features of AnalyzeManager - Partial analysis

  • #0498 - Features of AnalyzeManager - Max items

  • #0499 - Features of AnalyzeManager - Working directory forcing

  • #0500 - Features of AnalyzeManager - Adding items

  • #0501 - Built-in analyzers - InsetChart analyzer

  • #0502 - Built-in analyzers - CSV Analyzer

  • #0503 - Built-in analyzers - Tags analyzer

  • #0504 - Built-in analyzers - Download analyzer

  • #0508 - Logging and Debugging

  • #0509 - Global parameters

  • #0511 - COMPS platform options

  • #0629 - Update docker endpoint on ssmt/local platform to use external endpoint for pull/running

  • #0630 - Investigate packaging idmtools as wheel file

  • #0714 - Document the Versioning details

  • #0717 - Sweep Simulation Builder

  • #0720 - Documentation on Analyzing Failed experiments

  • #0721 - AddAnalyer should have example in its self documentation

  • #0722 - CSVAnalyzer should have example in its self documentation

  • #0723 - DownloadAnalyzer should have example in its self documentation

  • #0724 - PlatformAnalysis should have explanation of its used documented

  • #0727 - SimulationBuilder Sweep builder documentation

  • #0734 - idmtools does not install dataclasses on python3.6

  • #0751 - Switch to apidoc generated RSTs for modules and remove from source control

Feature Request#
  • #0059 - Chaining of Analyzers

  • #0097 - Ability to batch simulations within simulation

  • #0704 - Tthere is no way to load custom wheel using the RequirementsToAssets utility

  • #0784 - Remove default node_group value ‘emod_abcd’ from platform

  • #0786 - Improve Suite support

Platforms#
  • #0277 - Need way to add tags to COMPSPlatform ACs after creation

  • #0638 - Change print statement to logger in python_requirements_ac utility

  • #0640 - Better error reporting when the python_requirements_ac fails

  • #0651 - A user should not need to specify the default SSMT image

  • #0688 - Load Custom Library Utility should support install packages from Artifactory

  • #0705 - Should have way to regenerate AssetCollection id from RequirementsToAssetCollection

  • #0757 - Set PYTHONPATH on Slurm

User Experience#
  • #0760 - Email for issues and feature requests

  • #0781 - Suites should support run on object

  • #0787 - idmtools should print experiment id by default in console

1.1.0#

Additional Changes#
  • #0845 - Sprint 1 Retrospective Results

Bugs#
  • #0430 - test_docker_operations.test_port_taken_has_coherent_error fails in Linux VM with no host machine

  • #0650 - analyzer_manager.py _run_and_wait_for_mapping fail frequently in bamboo

  • #0706 - Correct the number of simulations being submitted in the progress bar

  • #0846 - Checking for platform not installed

  • #0872 - python executable is not correct for slurm production

CLI#
  • #0342 - Add list of task to cli

  • #0543 - develop idm cookie cutter templates needs

  • #0820 - Add examples url to plugins specifications and then each plugin if they have examples

  • #0869 - CLI: idmtools gitrepo view - CommandTask points to /corvid-idmtools

Core#
  • #0273 - Add kwargs functionality to CacheEnabled

  • #0818 - Create Download Examples Core Functionality

  • #0828 - Add a master plugin registry

Developer/Test#
  • #0652 - Packing process should be fully automated

  • #0731 - Add basic testing to Github Actions to Pull Requests

  • #0785 - Add a miniconda agent to the bamboo testing of idmtools

  • #0833 - Add emodpy to idm and full extra installs in core

  • #0844 - For make setup-dev, we may want put login to artifactory first

Documentation#
  • #0729 - Move local platform worker container to Github Actions

  • #0814 - High Level Diagram of Packages/Repos for idmtools

  • #0858 - Fix doc publish to ghpages

  • #0861 - emodpy - add updated api diagram (API class specifications) to architecture doc

Platforms#
  • #0728 - Restructure local platform docker container build for Github Action

  • #0730 - Move SSMT Image build to github actions

  • #0826 - SSMT Build as part of GithubActions

User Experience#
  • #0010 - Configuration file creation command

  • #0684 - Create process for Changelog for future releases

  • #0819 - Create Download Examples CLI Command

  • #0821 - Provide plugin method to get Help URLs for plugin

1.2.0#

Bugs#
  • #0859 - After install idmtools, still can not find model ‘idmtools’

  • #0873 - Task Plugins all need a get_type

  • #0877 - Change RequirementsToAssetCollection to link AssetCollection and retrieve Id more reliability

  • #0881 - With CommandTask, experiment must have an asset to run

  • #0882 - CommandTask totally ignores common_assets

  • #0893 - CommandTask: with transient asset hook, it will ignore user’s transient_assets

Developer/Test#
  • #0885 - Platform to lightly execute tasks locally to enable better testing of Task life cycle

Documentation#
  • #0482 - Running experiments locally

  • #0768 - Update breadcrumbs for docs

  • #0860 - Create .puml files for UML doc examples within docs, add new files to existing .puml in diagrams directory, link to files

  • #0867 - Examples - document cli download experience for example scripts

  • #0870 - CLI - update documentation to reflect latest changes

  • #0875 - Enable JSON Documentation Builds on Help for future Help Features

  • #0889 - Parameter sweeps with EMOD

  • #0896 - Add version to docs build

  • #0903 - Add version to documentation

Feature Request#
  • #0832 - Implement underlying API needed for reload_from_simulation

  • #0876 - Add option to optionally rebuild tasks on reload

  • #0883 - Add new task type TemplateScriptTask to support Templated Scripts

Platforms#
  • #0692 - Get Docker Public Repo naming aligned with others

User Experience#
  • #0713 - Move all user output to customer logger

1.2.2#

Dependencies#
  • #0929 - Update psycopg2-binary requirement from ~=2.8.4 to ~=2.8.5

  • #0930 - Bump pandas from 0.24.2 to 1.0.5

  • #0931 - Bump docker from 4.0.1 to 4.2.1

  • #0932 - Update beautifulsoup4 requirement from ~=4.8.2 to ~=4.9.1

  • #0933 - Update pytest requirement from ~=5.4.1 to ~=5.4.3

  • #0942 - Update pyperclip requirement from ~=1.7 to ~=1.8

  • #0943 - Update packaging requirement from ~=20.3 to ~=20.4

1.3.0#

Bugs#
  • #0921 - PlatformAnalysis requires login before execution

  • #0937 - RequirementsToAssetCollection fail with Max length

  • #0946 - Upgrade pycomps to 2.3.7

  • #0972 - Template script wrapper task should proxy calls where possible

  • #0984 - Make idmtools_metadata.json default to off

Documentation#
  • #0481 - Overview of the local platform

  • #0483 - Monitoring local experiments

  • #0910 - Add documentation on plotting analysis output using matplotlib as an example

  • #0925 - Platform Local - add documentation (getting started, run example, etc)

  • #0965 - Add Analysis Output Format Support Table

  • #0969 - Create base documentation for creating a new platform plugin

Feature Request#
  • #0830 - Support for python 3.8

  • #0924 - YamlSimulationBuilder should accept a single function to be mapped to all values

Models#
  • #0834 - Add a COVASIM example with idmtools

Platforms#
  • #0852 - Add emodpy to SSMT image

User Experience#
  • #0682 - Support full query criteria on COMPS items

1.4.0#

Bugs#
  • #1012 - Asset.py length return wrong

  • #1034 - AssetCollections should not be mutable after save

  • #1035 - RequirementsToAssetCollection run return same ac_id between SLURM and COMPS

  • #1046 - print(ac) cause maximum recursion depth exceeded

  • #1047 - datetime type is missing from IDMJSONEncoder

  • #1048 - Refresh Status bug on additional columns

  • #1049 - The text should be generic not specific to asset collection in method from_id(…)

Dependencies#
  • #1007 - Update flask-sqlalchemy requirement from ~=2.4.3 to ~=2.4.4

  • #1009 - Update matplotlib requirement from ~=3.2.2 to ~=3.3.0

  • #1015 - Bump pandas from 1.0.5 to 1.1.0

  • #1024 - Update pytest requirement from ~=6.0.0 to ~=6.0.1

  • #1031 - Update yaspin requirement from ~=0.18.0 to ~=1.0.0

  • #1032 - Update tqdm requirement from ~=4.48.1 to ~=4.48.2

  • #1033 - Update pygithub requirement from ~=1.51 to ~=1.52

  • #1053 - Update sphinx requirement from ~=3.1.2 to ~=3.2.0

Documentation#
  • #0970 - idmtools.ini documentation - review current docs and possibly make changes

  • #1043 - Update build of doc to be more ReadTheDocs Friendly

Feature Request#
  • #1020 - Requirements to Asset Collection should first check what assets exist before uploading

1.5.0#

Bugs#
  • #0459 - There is no way to add simulations to existing experiment

  • #0840 - Experiment and Suite statuses not updated properly after success

  • #0841 - Reloaded experiments and simulations have incorrect status

  • #0842 - Reloaded simulations (likely all children) do not have their platform set

  • #0866 - Recursive simulation loading bug

  • #0898 - Update Experiment#add_new_simulations() to accept additions in any state

  • #1046 - print(ac) cause maximum recursion depth exceeded while calling a Python object

  • #1047 - datetime type is missing from IDMJSONEncoder

  • #1048 - typo/bug: cols.append(cols)

  • #1049 - The text should be generic not specific to asset collection in method from_id(…)

  • #1066 - User logging should still be initialized if missing_ok is supplied when loading configuration/platform

  • #1071 - Detect if an experiment needs commissioning

  • #1076 - wi_ac create ac with tag wrong for Calculon

  • #1094 - AssetCollection should check checksums when checking for duplicates

  • #1098 - Add experiment id to CSVAnalyzer and TagAnalyzer

Dependencies#
  • #1075 - Update sphinx requirement from ~=3.2.0 to ~=3.2.1

  • #1077 - Update sqlalchemy requirement from ~=1.3.18 to ~=1.3.19

  • #1078 - Update pygithub requirement from ~=1.52 to ~=1.53

  • #1080 - Bump docker from 4.3.0 to 4.3.1

  • #1087 - Update more-itertools requirement from ~=8.4.0 to ~=8.5.0

  • #1088 - Update paramiko requirement from ~=2.7.1 to ~=2.7.2

  • #1101 - Update psycopg2-binary requirement from ~=2.8.5 to ~=2.8.6

  • #1102 - Bump pandas from 1.1.1 to 1.1.2

  • #1103 - Bump diskcache from 5.0.2 to 5.0.3

  • #1107 - Update tqdm requirement from ~=4.48.2 to ~=4.49.0

  • #1108 - Update pytest requirement from ~=6.0.1 to ~=6.0.2

Documentation#
  • #1073 - Update example and tests to use platform context

Feature Request#
  • #1064 - Allow running without a idmtools.ini file

  • #1068 - COMPPlatform should allow commissioning as it goes

1.5.1#

Bugs#
  • #1166 - Properly remove/replace unsupported characters on the COMPS platform in experiment names

  • #1173 - Ensure assets are not directories on creation of Asset

Documentation#
  • #1191 - Remove idmtools.ini from examples to leverage configuration aliases. This change allows executing of examples with minimal local configuration

Feature Request#
  • #1127 - Remove emodpy from idmtools[full] and idmtools[idm] install options. This allows a more control of packages used in projects

  • #1179 - Supply multiple default templates for template script wrapper. See the examples in the cookbook.

  • #1180 - Support Configuration Aliases. This provides out of the box configurations for common platform configurations. For example, COMPS environments have predefined aliases such as Calculon, Belegost, etc

Known Issues#
  • PlatformAnalysis requires an idmtools.ini

Upcoming breaking changes in 1.6.0#
  • Assets will no longer support both absolute_path and content. That will be mutually exclusive going forward

  • The task API pre_creation method has a new parameter to pass the platform object. All tasks implementing the API will need to update the pre_creation method

  • Deprecation of the delete function from AssetCollection in favor or remove.

Upcoming features in the coming releases#
  • Ability to query the platform from task for items such as OS, supported workflows, etc

  • Utility to Asset-ize outputs within COMPS. This should make it into 1.6.0

  • HPC Container build and run utilities. Slated for next few releases

  • Better integration of errors with references to relevant documentation(ongoing)

  • Improves support for Mac OS

1.5.2#

Bugs#
  • #1271 - Fix default SSMT image detection for platform COMPS

1.6.0#

Bugs#
  • #0300 - Canceling simulations using cli’s Restful api throws Internal server error (Local Platform)

  • #0462 - Redis port configuration not working (Local Platform)

  • #0988 - Fix issues with multi-threading and requests on mac in python 3.7 or lower

  • #1104 - Running AnalyzeManager outputs the ini file used multiple times

  • #1111 - File path missing in logger messages when the level is set to INFO

  • #1154 - Add option for experiment run in COMPS to use the minimal execution path

  • #1156 - COMPS should dynamically add Windows and Linux requirements based on environments

  • #1195 - PlatformAnalysis should support aliases as well

  • #1198 - PlatformAnalysis should find the user’s idmtools.ini instead of searching the current directory

  • #1230 - Fix parsing of the executable on the command line

  • #1244 - Logging should fall back to console if the log file cannot be opened

CLI#
  • #1167 - idmtools config CLI command should have an option to use the global path

  • #1237 - Add ability to suppress outputs for CLI commands that might generate pipe-able output

  • #1234 - Add AssetizeOutputs as a COMPS CLI command

  • #1236 - Add COMPS Login command to CLI

Configuration#
  • #1242 - Enable loading configuration options from environment variables
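
A hypothetical sketch of what this enables; the option name below is an illustrative assumption, not a documented idmtools variable:

    import os

    # Hypothetical option name, set before the configuration/platform is loaded
    os.environ["IDMTOOLS_HIDE_PROGRESS_BAR"] = "1"

    from idmtools.core.platform_factory import Platform
    platform = Platform("Calculon")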

Core#
  • #0571 - Support multi-core (MPI) on COMPS through num_cores

  • #1220 - Workflow items should use name

  • #1221 - Workflow items should use Assets instead of asset_collection_id

  • #1222 - Workflow items should use transient assets vs user_files

  • #1223 - Commands from WorkflowItems should support Tasks

  • #1224 - Support creating AssetCollection from list of file paths

Dependencies#
  • #1136 - Remove marshmallow as a dependency

  • #1207 - Update pytest requirement from ~=6.1.0 to ~=6.1.1

  • #1209 - Update flake8 requirement from ~=3.8.3 to ~=3.8.4

  • #1211 - Bump pandas from 1.1.2 to 1.1.3

  • #1214 - Update bump2version requirement from ~=1.0.0 to ~=1.0.1

  • #1216 - Update tqdm requirement from ~=4.50.0 to ~=4.50.2

  • #1226 - Update pycomps requirement from ~=2.3.8 to ~=2.3.9

  • #1227 - Update sqlalchemy requirement from ~=1.3.19 to ~=1.3.20

  • #1228 - Update colorama requirement from ~=0.4.1 to ~=0.4.4

  • #1246 - Update yaspin requirement from ~=1.1.0 to ~=1.2.0

  • #1251 - Update junitparser requirement from ~=1.4.1 to ~=1.4.2

Documentation#
  • #1134 - Add a copy to clipboard option to source code and command line examples in documentation

Feature Request#
  • #1121 - Experiment should error if no simulations are defined

  • #1148 - Support a global configuration file for idmtools in the user home directory/local app directory, or specified using an environment variable

  • #1158 - Pass platform to pre_creation and post_creation methods to allow dynamic querying from platform

  • #1193 - Support Asset-izing Outputs through WorkItems

  • #1194 - Add support for post_creation hooks on Experiments/Simulations/WorkItems

  • #1231 - Allow setting the command from a string on Task (see the sketch after this list)

  • #1232 - Add a function to the platform to determine if the target is Windows

  • #1233 - Add property to grab the common asset path from a platform

  • #1247 - Add support for singularity to the local platform
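
A small sketch of the string-command feature from #1231; CommandTask comes from the standard idmtools task classes, while the script name and arguments are illustrative assumptions:

    from idmtools.entities.command_task import CommandTask

    # The command is supplied as a plain string rather than a CommandLine object
    task = CommandTask(command="python model.py --config config.json")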

Platforms#
  • #0230 - Entities should support created_on/modified_on fields on the Local Platform

  • #0324 - Detect changes to Local Platform config

User Experience#
  • #1127 - idmtools install should not include emodpy, emodapi, etc., when installing with idmtools[full]

  • #1141 - Add warning when user is using a development version of idmtools

  • #1160 - get_script_wrapper_unix_task should use a default template that adds assets to the Python path

  • #1200 - Log idmtools core version when in debug mode

  • #1240 - Give clear units for progress bars

  • #1241 - Support disabling progress bars with environment variable or config

Special Notes#
  • If you encounter an issue with matplotlib after install, you may need to run pip install matplotlib --force-reinstall

  • WorkItems will require a Task starting in 1.7.0

  • Container support on COMPS and early Singularity support are coming in 1.6.1

1.6.1#

Additional Changes#
  • #1165 - Support basic building of singularity images

  • #1315 - Assets should always return paths using posix style

  • #1321 - Comps CLI should have singularity build support

Bugs#
  • #1271 - COMPS SSMT version fetch should fetch the latest compatible idmtools image

  • #1303 - Fix platform object assignment on AssetCollection

  • #1312 - Update analyze_manager.py to remove iterkeys in support of diskcache 5.1.0

  • #1313 - Support tags in prefix on AssetizeOutputs

Dependencies#
  • #1281 - Update pytest requirement from ~=6.1.1 to ~=6.1.2

  • #1287 - Update allure-pytest requirement from ~=2.8.18 to ~=2.8.19

  • #1288 - Update junitparser requirement from ~=1.6.0 to ~=1.6.1

  • #1289 - Update sphinx-copybutton requirement from ~=0.3.0 to ~=0.3.1

  • #1290 - Bump pandas from 1.1.3 to 1.1.4

  • #1291 - Update more-itertools requirement from ~=8.5.0 to ~=8.6.0

  • #1293 - Latest numpy 1.19.4 (released 11/2/2020) breaks all idmtools tests on Windows

  • #1298 - Update junitparser requirement from ~=1.6.1 to ~=1.6.2

  • #1299 - Update pygit2 requirement from ~=1.3.0 to ~=1.4.0

  • #1300 - Bump diskcache from 5.0.3 to 5.1.0

  • #1307 - Update requests requirement from ~=2.24.0 to ~=2.25.0

  • #1308 - Update matplotlib requirement from ~=3.3.2 to ~=3.3.3

  • #1309 - Update sphinx requirement from ~=3.3.0 to ~=3.3.1

  • #1310 - Update pytest-html requirement from ~=2.1.1 to ~=3.0.0

  • #1311 - Update tqdm requirement from ~=4.51.0 to ~=4.52.0

  • #1327 - Update allure-pytest requirement from ~=2.8.19 to ~=2.8.20

Documentation#
  • #1279 - Add examples showing how to override config values

  • #1285 - Examples should use Calculon instead of the SLURM alias

  • #1302 - Cookbook link for modifying-asset-collection is wrong

Platforms#
  • #1264 - Comps CLI should have singularity build support

User Experience#
  • #1170 - Add progress bar to the upload of Assets through a new callback in pyCOMPS

  • #1320 - Add progress bar to workitems

1.6.2#

Bugs#
  • #1343 - Singularity Build CLI should write AssetCollection ID to file

  • #1345 - Loading a platform within a Snakefile throws an exception

  • #1348 - We should be able to download files from COMPS using glob patterns via the CLI

  • #1351 - Add support to detect whether the target platform on COMPS is Windows or Linux, taking into account whether it is an SSMT job

  • #1363 - Ensure the lookup for the latest version uses only PyPI, not the Artifactory API

  • #1368 - idmtools log rotation can crash in multi-process environments

Developer/Test#
  • #1367 - Support installing SSMT packages dynamically on workitems

Feature Request#
  • #1344 - Singularity Build CLI command should support writing the WorkItem to a file

  • #1349 - Add PathLike support for add_asset in Assets

  • #1350 - Set up a global exception handler

  • #1353 - Add “Assets” directory to the PYTHONPATH by default on idmtools SSMT image

  • #1366 - Support adding git commit, branch, and url to Experiments, Simulations, Workitems, or other taggable entities as tags

Platforms#
  • #0990 - Support creating and retrieving container images in AssetCollections

  • #1352 - Redirect calls to task.command to wrapped command in TemplatedScriptTask

  • #1354 - AssetizeOutputs CLI should support writing to id files

1.6.3#

Bugs#
  • #1396 - The requirements-to-AssetCollection (req2ac) utility should default to one core

  • #1403 - Progress bar displayed when expecting only JSON on AssetizeOutput

  • #1404 - Autocompletion of the CLI does not work due to a warning

  • #1408 - GitHub Actions fail for the local platform

  • #1416 - Default batch create_items method does not support kwargs

  • #1417 - ITask to_dict depends on platform_comps

  • #1436 - Package order is important in the req2ac utility

CLI#
  • #1430 - Update yaspin requirement from ~=1.2.0 to ~=1.3.0

Dependencies#
  • #1340 - Bump docker from 4.3.1 to 4.4.0

  • #1374 - Update humanfriendly requirement from ~=8.2 to ~=9.0

  • #1387 - Update coloredlogs requirement from ~=14.0 to ~=15.0

  • #1414 - Update dramatiq[redis,watch] requirement from ~=1.9.0 to ~=1.10.0

  • #1418 - Update docker requirement from <=4.4.0,>=4.3.1 to >=4.3.1,<4.5.0

  • #1435 - Update gevent requirement from ~=20.12.1 to ~=21.1.2

  • #1442 - Update pygit2 requirement from ~=1.4.0 to ~=1.5.0

  • #1444 - Update pyyaml requirement from <5.4,>=5.3.0 to >=5.3.0,<5.5

  • #1448 - Update matplotlib requirement from ~=3.3.3 to ~=3.3.4

  • #1449 - Update jinja2 requirement from ~=2.11.2 to ~=2.11.3

  • #1450 - Update sqlalchemy requirement from ~=1.3.22 to ~=1.3.23

  • #1457 - Update more-itertools requirement from ~=8.6.0 to ~=8.7.0

  • #1466 - Update tabulate requirement from ~=0.8.7 to ~=0.8.9

  • #1467 - Update yaspin requirement from <1.4.0,>=1.2.0 to >=1.2.0,<1.5.0

Developer/Test#
  • #1390 - Update pytest requirement from ~=6.1.2 to ~=6.2.0

  • #1391 - Update pytest-html requirement from ~=3.1.0 to ~=3.1.1

  • #1394 - Update pytest-xdist requirement from ~=2.1 to ~=2.2

  • #1398 - Update pytest requirement from ~=6.2.0 to ~=6.2.1

  • #1411 - Update build tools to 1.0.3

  • #1413 - Update idm-buildtools requirement from ~=1.0.1 to ~=1.0.3

  • #1424 - Update twine requirement from ~=3.2.0 to ~=3.3.0

  • #1428 - Update junitparser requirement from ~=1.6.3 to ~=2.0.0

  • #1434 - Update pytest-cov requirement from ~=2.10.1 to ~=2.11.1

  • #1443 - Update pytest requirement from ~=6.2.1 to ~=6.2.2

  • #1446 - Update coverage requirement from ~=5.3 to ~=5.4

  • #1458 - Update pytest-runner requirement from ~=5.2 to ~=5.3

  • #1463 - Update allure-pytest requirement from ~=2.8.33 to ~=2.8.34

  • #1468 - Update coverage requirement from <5.5,>=5.3 to >=5.3,<5.6

  • #1478 - Update flake8 requirement from ~=3.8.4 to ~=3.9.0

  • #1481 - Update twine requirement from ~=3.4.0 to ~=3.4.1

Documentation#
  • #1259 - Provide an examples container and development guide

  • #1347 - Read the Docs build broken, having issues with Artifactory/pip installation

  • #1423 - Update sphinx-rtd-theme requirement from ~=0.5.0 to ~=0.5.1

  • #1474 - Update sphinx requirement from ~=3.4.3 to ~=3.5.2

Feature Request#
  • #1384 - Add assets should be able to ignore common directories through an option

  • #1392 - RequirementsToAssetCollection should allow creating a user tag

  • #1437 - The req2ac utility should support getting a compatible version (~=) of a package

Platforms#
  • #0558 - Develop Test Harness for SSMT platform

1.6.4#

Additional Changes#
  • #1407 - Importing get_latest_package_version_from_pypi throws an exception

  • #1593 - Pandas items as defaults cause issues with Simulation Builder

Analyzers#
  • #1097 - Analyzer may get stuck on error

  • #1506 - DownloadAnalyzer should not stop if one simulation fails, but should try to download all simulations independently

  • #1540 - Convert AnalyzeManager to use futures and a future pool

  • #1594 - Disable log re-initialization in subthreads

  • #1596 - PlatformAnalysis should support passing extra_args to AnalyzeManager on the server

  • #1608 - CSVAnalyzer should not allow users to override parse value as it is required

Bugs#
  • #1452 - Make idmtools work with the new Slurm scheduling mechanism

  • #1518 - CommandLine add_argument should convert arguments to strings

  • #1522 - Load the command line from the work order when defined

Core#
  • #1586 - Fix the help in the top-level Makefile

Dependencies#
  • #1440 - Update diskcache requirement from ~=5.1.0 to ~=5.2.1

  • #1490 - Update flask-sqlalchemy requirement from ~=2.4.4 to ~=2.5.1

  • #1498 - Update yaspin requirement from <1.5.0,>=1.2.0 to >=1.2.0,<1.6.0

  • #1520 - Update docker requirement from <4.5.0,>=4.3.1 to >=4.3.1,<5.1.0

  • #1545 - Update pygithub requirement from ~=1.54 to ~=1.55

  • #1552 - Update matplotlib requirement from ~=3.4.1 to ~=3.4.2

  • #1555 - Update sqlalchemy requirement from ~=1.4.14 to ~=1.4.15

  • #1562 - Bump werkzeug from 1.0.1 to 2.0.1

  • #1563 - Update jinja2 requirement from ~=2.11.3 to ~=3.0.1

  • #1566 - Update cookiecutter requirement from ~=1.7.2 to ~=1.7.3

  • #1568 - Update more-itertools requirement from ~=8.7.0 to ~=8.8.0

  • #1570 - Update dramatiq[redis,watch] requirement from ~=1.10.0 to ~=1.11.0

  • #1585 - Update psycopg2-binary requirement from ~=2.8.6 to ~=2.9.1

Developer/Test#
  • #1511 - Add document linting to rules

  • #1549 - Update pytest requirement from ~=6.2.3 to ~=6.2.4

  • #1554 - Update flake8 requirement from ~=3.9.1 to ~=3.9.2

  • #1567 - Update allure-pytest requirement from <2.9,>=2.8.34 to >=2.8.34,<2.10

  • #1577 - Update junitparser requirement from ~=2.0.0 to ~=2.1.1

  • #1587 - Update the Docker image Python version

Documentation#
  • #0944 - Set up intersphinx to link emodpy and idmtools docs

  • #1445 - Enable intersphinx for idmtools

  • #1499 - Update sphinx requirement from ~=3.5.2 to ~=3.5.3

  • #1510 - Update sphinxcontrib-programoutput requirement from ~=0.16 to ~=0.17

  • #1516 - Update sphinx-rtd-theme requirement from ~=0.5.1 to ~=0.5.2

  • #1531 - Update sphinx requirement from ~=3.5.3 to ~=3.5.4

  • #1584 - Update sphinx-copybutton requirement from ~=0.3.1 to ~=0.4.0

Feature Request#
  • #0831 - Support for Python 3.9

Platforms#
  • #1604 - Running “make docker” in idmtools_platform_local fails

User Experience#
  • #1485 - Add files and libraries to an Asset Collection - new documentation

1.6.5#

Analyzers#
  • #1674 - Analyzers stalling with failed simulations

Bugs#
  • #1543 - Control output of pyCOMPS logs

  • #1551 - WorkItem references asset_file instead of user_file

  • #1579 - The [Logging] section in idmtools does not appear to work

  • #1600 - idmtools.log does not honor file_level

  • #1618 - A special case in the idmtools logging system with user_output = off

  • #1620 - idmtools logging throws random errors

  • #1633 - Two issues noticed in idmtools logging

  • #1634 - idmtools: logging should honor the level parameter

Dependencies#
  • #1569 - Update flask-restful requirement from ~=0.3.8 to ~=0.3.9

  • #1682 - Update click requirement from ~=7.1.2 to ~=8.1.2

  • #1688 - Update gevent requirement from <=21.2.0,>=20.12.1 to >=20.12.1,<21.13.0

Developer/Test#
  • #1689 - Update pytest-timeout requirement from ~=1.4.2 to ~=2.1.0

Platforms#
  • #0703 - Slurm simulation_operations needs to be refactored

  • #1615 - For the calibra repo, if console=on, the experiment URL is not printed

  • #1644 - Console COMPS client logging is too chatty

1.6.6#

Analyzers#
  • #1546 - idmtools AnalyzerManager takes much longer to start an analyzer than the dtk-tools AnalyzerManager with the same input data

Dependencies#
  • #1682 - Update pyComps requirement from ~=2.5.0 to ~=2.6.0

1.6.7#

Bugs#
  • #1762 - hash_obj causes a maximum recursion exception

Dependencies#
  • #1601 - Update packaging requirement from <21.0,>=20.4 to >=20.4,<22.0

  • #1694 - Update pygit2 requirement from <1.6.0,>=1.4.0 to >=1.4.0,<1.10.0

  • #1695 - Update psycopg2-binary requirement from ~=2.9.1 to ~=2.9.3

  • #1702 - Bump moment from 2.24.0 to 2.29.2 in /idmtools_platform_local/idmtools_webui

  • #1703 - Bump async from 2.6.3 to 2.6.4 in /idmtools_platform_local/idmtools_webui

  • #1734 - Update sqlalchemy requirement from ~=1.4.15 to ~=1.4.37

  • #1742 - Bump eventsource from 1.0.7 to 1.1.2 in /idmtools_platform_local/idmtools_webui

  • #1743 - Bump url-parse from 1.4.7 to 1.5.10 in /idmtools_platform_local/idmtools_webui

  • #1744 - Bump follow-redirects from 1.10.0 to 1.15.1 in /idmtools_platform_local/idmtools_webui

  • #1745 - Bump postcss from 7.0.26 to 7.0.39 in /idmtools_platform_local/idmtools_webui

  • #1746 - Bump markupsafe from 2.0.1 to 2.1.1

  • #1747 - Update yaspin requirement from <1.6.0,>=1.2.0 to >=1.2.0,<2.2.0

  • #1748 - Update pyyaml requirement from <5.5,>=5.3.0 to >=5.3.0,<6.1

  • #1777 - Update cookiecutter requirement from ~=1.7.3 to ~=2.1.1

  • #1778 - Update jinja2 requirement from ~=3.0.1 to ~=3.1.2

  • #1780 - Update sqlalchemy requirement from ~=1.4.37 to ~=1.4.39

  • #1781 - Update colorama requirement from ~=0.4.4 to ~=0.4.5

  • #1782 - Update pandas requirement from <1.2,>=1.1.4 to >=1.1.4,<1.5

  • #1783 - Update dramatiq[redis,watch] requirement from ~=1.11.0 to ~=1.13.0

  • #1784 - Update pygit2 requirement from <1.10.0,>=1.4.0 to >=1.4.0,<1.11.0

  • #1786 - Bump numpy from 1.18.1 to 1.22.0 in /idmtools_platform_comps/tests/inputs/simple_load_lib_example

  • #1787 - Bump moment from 2.29.2 to 2.29.4 in /idmtools_platform_local/idmtools_webui

  • #1788 - Update more-itertools requirement from ~=8.8.0 to ~=8.13.0

Developer/Test#
  • #1789 - Update coverage requirement from <5.6,>=5.3 to >=5.3,<6.5

  • #1792 - Update pytest-runner requirement from ~=5.3 to ~=6.0

  • #1793 - Update flake8 requirement from ~=3.9.2 to ~=4.0.1

1.7.0#

Additional Changes#
  • #1671 - Experiment post-creation hooks do not get invoked

Bugs#
  • #1581 - We should default console=on for logging when using an alias platform

  • #1614 - User logger should only be used for verbose or higher messages

  • #1806 - Batch load module uses the wrong variable

  • #1807 - get_children missing status refresh

  • #1811 - Suite metadata not written when an experiment is run directly on the Slurm platform

  • #1812 - Running a suite does not run its children (experiments)

  • #1820 - Handle empty status messages

CLI#
  • #1774 - Need a patch release to update the pandas requirement

Core#
  • #1757 - Suite to_dict method does not need to output experiment details

Dependencies#
  • #1749 - Update pluggy requirement from ~=0.13.1 to ~=1.0.0

  • #1794 - Bump pipreqs from 0.4.10 to 0.4.11

  • #1867 - Update sqlalchemy requirement from ~=1.4.39 to ~=1.4.41

  • #1870 - Update yaspin requirement from <2.2.0,>=1.2.0 to >=1.2.0,<2.3.0

  • #1873 - Update docker requirement from <5.1.0,>=4.3.1 to >=4.3.1,<6.1.0

  • #1878 - Update natsort requirement from ~=8.1.0 to ~=8.2.0

  • #1880 - Update diskcache requirement from ~=5.2.1 to ~=5.4.0

  • #1882 - Update flask requirement from ~=2.1.3 to ~=2.2.2

  • #1883 - Update backoff requirement from <1.11,>=1.10.0 to >=1.10.0,<2.2

  • #1885 - Bump async from 2.6.3 to 2.6.4 in /idmtools_platform_local/idmtools_webui

Developer/Test#
  • #1795 - Update twine requirement from ~=3.4.1 to ~=4.0.1

  • #1830 - Update pytest requirement from ~=6.2.4 to ~=7.1.3

  • #1831 - Update pytest-xdist requirement from ~=2.2 to ~=2.5

  • #1868 - Update flake8 requirement from ~=4.0.1 to ~=5.0.4

  • #1874 - Update allure-pytest requirement from <2.10,>=2.8.34 to >=2.8.34,<2.11

  • #1884 - Update junitparser requirement from ~=2.1.1 to ~=2.8.0

Documentation#
  • #1750 - Slurm Documentation skeleton

Feature Request#
  • #1691 - Feature request: add existing experiments to a suite

  • #1809 - Add cpus_per_task to SlurmPlatform

  • #1818 - Improve the output to the user after a job is executed

  • #1821 - Status improvement: make “checking slurm finish” configurable

Platforms#
  • #1038 - Slurm experiment operations need updating to the newest API

  • #1039 - Slurm needs to implement some basic asset operations

  • #1040 - Slurm simulation operations are out of date

  • #1041 - Implement suite operations on Slurm Platform

  • #1675 - File Operations: Link Operations

  • #1676 - Move metadata operation to its own class for future API

  • #1678 - Retry logic for slurm

  • #1693 - Abstract file operations in a way the underlying implementation can be changed and shared across platforms

  • #1697 - Create a new metadata operations API

  • #1717 - Formalize shell script for SLURM job submission

  • #1737 - Cleanup Metadata Operations

  • #1738 - Integrate Metadata, FileOperations, and Slurm Script into Slurm Platform

  • #1758 - Document how to cancel jobs on Slurm in the Slurm docs

  • #1764 - Update the sbatch script to dump the SARRAY job id

  • #1765 - Update the simulation script to dump the Job id into a file within each simulation directory

  • #1770 - Develop base singularity image

  • #1822 - COMPSPlatform suite operation: platform_create returns Tuple[COMPSSuite, UUID]

1.7.1#

Bugs#
  • #1907 - Make cache directory configurable

1.7.3#

Additional Changes#
  • #1835 - Do the release of 1.7.0.pre

  • #1837 - Release 1.7.0

  • #1855 - Generate Changelog for 1.7.0

  • #1857 - Test final singularity image

  • #1858 - Complete basic use of idmtools-slurm-bridge docs

  • #1863 - Presentation for Jaline

  • #1876 - Build new singularity image

  • #1947 - Utility code to support running on COMPS/Slurm

Bugs#
  • #1623 - We should not generate debug log for _detect_command_line_from_simulation in simulation_operations.py

  • #1661 - Script seems to require the pwd module, but it is not included in requirements.txt

  • #1666 - logging.set_file_logging should pass level to create_file_handler()

  • #1756 - Suite Operation run_item doesn’t pass kwargs to sub-calls

  • #1813 - Writing experiment parent id in experiment metadata records the wrong suite id

  • #1877 - Revert sphinx to 4 and pin in dependabot

  • #1907 - Make cache directory configurable

  • #1915 - run_simulation.sh should be copied over instead of linked

Core#
  • #1826 - Update to require at least Python 3.7

Dependencies#
  • #1906 - Update pygithub requirement from ~=1.55 to ~=1.56

  • #1910 - Update flask-sqlalchemy requirement from ~=2.5.1 to ~=3.0.2

  • #1911 - Update sqlalchemy requirement from ~=1.4.41 to ~=1.4.42

  • #1912 - Update gevent requirement from <21.13.0,>=20.12.1 to >=20.12.1,<22.11.0

  • #1914 - Update more-itertools requirement from ~=8.14.0 to ~=9.0.0

  • #1920 - Update psycopg2-binary requirement from ~=2.9.4 to ~=2.9.5

  • #1921 - Update pytest-html requirement from ~=3.1.1 to ~=3.2.0

  • #1922 - Update pycomps requirement from ~=2.8 to ~=2.9

  • #1923 - Update colorama requirement from ~=0.4.5 to ~=0.4.6

  • #1933 - Update pytest-xdist requirement from ~=2.5 to ~=3.0

  • #1934 - Update pytest requirement from ~=7.1.3 to ~=7.2.0

  • #1942 - Update sqlalchemy requirement from ~=1.4.42 to ~=1.4.43

  • #1943 - Update pygithub requirement from ~=1.56 to ~=1.57

Developer/Test#
  • #1649 - GitHub Actions test fails because it cannot retrieve the latest SSMT image

  • #1652 - Changelog not showing after 1.6.2 release

Documentation#
  • #1378 - Container Python Package development guide

  • #1453 - emodpy example for the local platform

Feature Request#
  • #1359 - PlatformFactory should save extra args to an object to be able to be serialized later

Platforms#
  • #1853 - Add utils to platform-comps Utils

  • #1854 - Add utils to platform-slurm utils

  • #1864 - Document user installed packages in Singularity images

  • #1963 - Slurm job count issue with add_multiple_parameter_sweep_definition

User Experience#
  • #1804 - Default root for run/job directories in slurm local platform is ‘.’

  • #1805 - Slurm local platform should create containing experiments/suites as needed

1.7.4#

Core#
  • #1977 - Disable simulations in Experiment metadata for now

Feature Request#
  • #1817 - Feature request: it would be better to have a utility to display simulation status

  • #2007 - Feature request: make batch_size and max_workers configurable through the run method (see the sketch after this list)

  • #2008 - Add new slurm parameter: constraint
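
A minimal sketch of the run-method options from #2007; the platform alias, task command, and option values here are illustrative assumptions:

    from idmtools.core.platform_factory import Platform
    from idmtools.entities.command_task import CommandTask
    from idmtools.entities.experiment import Experiment

    platform = Platform("SLURM_LOCAL")  # alias name is an assumption
    task = CommandTask(command="python --version")
    experiment = Experiment.from_task(task, name="run options example")

    # Pass the batching options directly to run()
    experiment.run(wait_until_done=True, platform=platform, batch_size=100, max_workers=8)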

Platforms#
  • #1829 - Performance issue: Slurm commissioning is too slow

  • #1996 - Hotfix for the Slurm commissioning performance issue

1.7.5#

Bugs#
  • #1395 - Time for Simulation Creation Increases with Python Requirements

Platforms#
  • #2006 - Hotfix for the SlurmPlatform memory space issue

  • #2000 - Slurm commissioning takes too much memory, which can exceed the head node’s maximum memory

1.7.6#

Additional Changes#
  • #1954 - idmtools object IDs should be unique strings, not UUIDs

  • #2022 - Add example for SSMT with extra packages

Core#
  • #1810 - Support alternate id generators

Dependencies#
  • #1943 - Update pygithub requirement from ~=1.56 to ~=1.57

  • #1946 - Update pygit2 requirement from <1.11.0,>=1.4.0 to >=1.4.0,<1.12.0

  • #1948 - Update sqlalchemy requirement from ~=1.4.43 to ~=1.4.44

  • #1949 - Bump sphinx-copybutton from 0.5.0 to 0.5.1

  • #1957 - Update pycomps requirement from ~=2.9 to ~=2.10

  • #1958 - Update allure-pytest requirement from <2.12,>=2.8.34 to >=2.8.34,<2.13

  • #1959 - Update flake8 requirement from ~=5.0.4 to ~=6.0.0

  • #1961 - Update twine requirement from ~=4.0.1 to ~=4.0.2

  • #1967 - Update pytest-xdist requirement from ~=3.0 to ~=3.1

  • #1976 - Bump qs from 6.5.2 to 6.5.3 in /idmtools_platform_local/idmtools_webui

  • #1978 - Update sqlalchemy requirement from ~=1.4.44 to ~=1.4.45

  • #1983 - Bump express from 4.17.1 to 4.18.2 in /idmtools_platform_local/idmtools_webui

Developer/Test#
  • #2020 - Add example for SSMT with extra packages based on Clinton’s example

  • #2023 - Add unittests for idmtools_platform_file and add/update GitHub Actions

  • #2026 - Fix test in File platform

  • #2039 - Add File platform CLI tests

  • #2045 - Add unittests and examples for file and process platforms

Feature Request#
  • #1817 - Feature request: it would be better to have a utility to display simulation status

  • #2004 - Implement SlurmPlatform Status utility

  • #2025 - File platform: implemented experiment execution (batch and status, etc.)

  • #1928 - Design: File Only Platform

  • #2029 - Add support for ProcessPlatform

  • #1938 - File platform: implementation of CLI utility

  • #2044 - Platform-General: implementation of ProcessPlatform

1.7.7#

Bugs#
  • #2084 - Potential issue with mismatched versions of pandas and matplotlib

Dependencies#
  • #2013 - Update yaspin requirement from <2.3.0,>=1.2.0 to >=1.2.0,<2.4.0

  • #2024 - Update coverage requirement from <6.6,>=5.3 to >=5.3,<7.3

Documentation#
  • #2000 - Slurm commissioning takes too much memory, which can exceed the head node’s maximum memory

  • #2042 - Write doc: run main script as SLURM job

Feature Request#
  • #1998 - Potential issue with the maximum count of simulations on the Slurm platform

  • #2043 - Write Python utility to run main script as SLURM job

  • #2041 - Write workaround steps: run main script as SLURM job

  • #2095 - Add Singularity bind for experiments by default on Slurm

  • #2096 - Add a few more COMPS server aliases

1.7.8#

Additional Changes#
  • #2100 - Setup.py does not conform to the newest pip’s python_requires

  • #2101 - Deprecate 3.6 references from idmtools

  • #2102 - Doc fix

Bugs#
  • #2083 - Python 3.11 issue with dataclass