idmtools.entities.experiment module

class idmtools.entities.experiment.Experiment(_uid: uuid.UUID = None, platform_id: uuid.UUID = None, _platform: IPlatform = None, parent_id: uuid.UUID = None, _parent: IEntity = None, status: idmtools.core.enums.EntityStatus = None, tags: Dict[str, Any] = <factory>, _platform_object: Any = None, name: str = None, assets: idmtools.assets.asset_collection.AssetCollection = <factory>, suite_id: uuid.UUID = None, task_type: str = 'idmtools.entities.command_task.CommandTask', platform_requirements: Set[idmtools.entities.platform_requirements.PlatformRequirements] = <factory>, simulations: dataclasses.InitVar = <property object>, _Experiment__simulations: Union[idmtools.core.interfaces.entity_container.EntityContainer, Generator[Simulation, None, None], idmtools.entities.templated_simulation.TemplatedSimulations, Iterator[Simulation]] = <factory>, gather_common_assets_from_task: bool = None)

Bases: idmtools.core.interfaces.iassets_enabled.IAssetsEnabled, idmtools.core.interfaces.inamed_entity.INamedEntity

Class that represents a generic experiment. This class needs to be implemented for each model type with its specifics.

Parameters
  • name – The experiment name.

  • assets – The asset collection for assets global to this experiment.

suite_id: uuid.UUID = None

Suite ID

item_type: idmtools.core.enums.ItemType = 2

Item type (always an experiment)

task_type: str = 'idmtools.entities.command_task.CommandTask'

Task type (defaults to CommandTask)

platform_requirements: Set[idmtools.entities.platform_requirements.PlatformRequirements]

List of requirements for the task that a platform must meet to be able to run this experiment
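A platform's compatibility check against `platform_requirements` amounts to a set-containment test. The sketch below is a hypothetical illustration of that semantics using plain strings rather than the real `PlatformRequirements` enum:

```python
# Hypothetical sketch of a platform-requirements check. Plain strings stand
# in for the real idmtools PlatformRequirements enum members.
def platform_supports(required: set, supported: set) -> bool:
    # Every requirement the experiment declares must be met by the platform.
    return required <= supported
```

For example, an experiment requiring `{"python"}` can run on a platform supporting `{"python", "docker"}`, but one requiring `{"gpu"}` cannot run on a platform that supports neither.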

frozen: bool = False

Is the Experiment Frozen

gather_common_assets_from_task: bool = None

Determines whether we should gather assets from the first task. Only use this when not using TemplatedSimulations

post_creation() → None

Post-creation hook for the object.

Returns:

property status
property suite
display()
pre_creation(gather_assets=True) → None

Experiment pre_creation callback

Parameters

gather_assets – Determines whether the experiment will try to gather the common assets or defer. In most cases you want this enabled, but when modifying existing experiments you may want to disable it if there are no new assets and the platform incurs a performance cost to determine those assets

Returns:

property done

Return whether the experiment has finished executing

Returns

True if all simulations have run, False otherwise

property succeeded

Return whether the experiment has succeeded. An experiment has succeeded when all of its simulations have succeeded

Returns

True if all simulations have succeeded, False otherwise

property any_failed

Return whether the experiment has any simulation in the failed state.

Returns

True if any simulation has failed, False otherwise
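The three status properties above are simple aggregations over the simulation statuses. The sketch below illustrates that semantics with a simplified stand-in enum; it is not the real idmtools implementation:

```python
# Simplified sketch of the done/succeeded/any_failed semantics. EntityStatus
# here is a stand-in for idmtools.core.enums.EntityStatus.
from enum import Enum


class EntityStatus(Enum):
    CREATED = "created"
    RUNNING = "running"
    SUCCEEDED = "succeeded"
    FAILED = "failed"


def done(statuses):
    # Done when every simulation has finished, whether it succeeded or failed.
    return all(s in (EntityStatus.SUCCEEDED, EntityStatus.FAILED) for s in statuses)


def succeeded(statuses):
    # Succeeded only when every simulation succeeded.
    return all(s is EntityStatus.SUCCEEDED for s in statuses)


def any_failed(statuses):
    # True as soon as a single simulation is in the failed state.
    return any(s is EntityStatus.FAILED for s in statuses)
```

Note that `done` can be True while `succeeded` is False: a mix of succeeded and failed simulations is finished but not successful.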

property simulations

Simulations in this experiment

property simulation_count

Return the total number of simulations.

Returns:

refresh_simulations() → NoReturn

Refresh the simulations from the platform

Returns:

refresh_simulations_status()
pre_getstate()

Return default values for pickle_ignore_fields(). Call before pickling.

gather_assets() → idmtools.assets.asset_collection.AssetCollection

Function called at runtime to gather all assets in the collection.

classmethod from_task(task, name: str = None, tags: Dict[str, Any] = None, assets: idmtools.assets.asset_collection.AssetCollection = None, gather_common_assets_from_task: bool = True) → idmtools.entities.experiment.Experiment

Creates an Experiment with one Simulation from a task

Parameters
  • task – Task to use

assets – Asset collection to use for common assets. Defaults to gathering assets from the task

  • name – Name of experiment

  • tags – Tags for the experiment

  • gather_common_assets_from_task – Whether we should attempt to gather assets from the Task object for the experiment. With large numbers of tasks this can be expensive, as we loop through all of them

Returns:

classmethod from_builder(builders: Union[idmtools.builders.simulation_builder.SimulationBuilder, List[idmtools.builders.simulation_builder.SimulationBuilder]], base_task: idmtools.entities.itask.ITask, name: str = None, assets: idmtools.assets.asset_collection.AssetCollection = None, tags: Dict[str, Any] = None) → idmtools.entities.experiment.Experiment

Creates an experiment from a SimulationBuilder object (or a list of builders)

Parameters
  • builders – List of builders to create the experiment from

  • base_task – Base task to use as template

  • name – Experiment name

  • assets – Experiment level assets

  • tags – Experiment tags

Returns

Experiment object from the builders
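A builder-driven sweep expands a base task into one simulation per combination of sweep values. The function below is an illustrative sketch of that expansion using plain dicts as tasks; it is not the real `SimulationBuilder` API:

```python
# Illustrative sketch of a builder-style sweep: each parameter combination
# (cartesian product) is applied to a copy of the base task. Plain dicts
# stand in for real task objects.
import itertools
from copy import deepcopy


def build_simulations(base_task, sweeps):
    """Yield one task per combination. sweeps maps parameter name -> values."""
    names = list(sweeps)
    for combo in itertools.product(*(sweeps[n] for n in names)):
        task = deepcopy(base_task)
        task.update(zip(names, combo))
        yield task
```

Sweeping `a` over two values and `b` over one value yields two tasks, each carrying the base task's settings plus its own parameter combination.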

classmethod from_template(template: idmtools.entities.templated_simulation.TemplatedSimulations, name: str = None, assets: idmtools.assets.asset_collection.AssetCollection = None, tags: Dict[str, Any] = None) → idmtools.entities.experiment.Experiment

Creates an Experiment from a TemplatedSimulations object

Parameters
  • template – TemplatedSimulations object

  • name – Experiment name

  • assets – Experiment level assets

  • tags – Tags

Returns

Experiment object from the TemplatedSimulations object

list_static_assets(children: bool = False, platform: IPlatform = None, **kwargs) → List[idmtools.assets.asset.Asset]

List assets that have already been uploaded to a server

Parameters
  • children – When set to True, simulation assets will be loaded as well

  • platform – Optional platform to load assets list from

  • **kwargs

Returns

List of assets

run(wait_until_done: bool = False, platform: IPlatform = None, regather_common_assets: bool = None, **run_opts) → NoReturn

Runs an experiment on a platform

Parameters
  • wait_until_done – Whether we should also wait for the experiment to finish running. Defaults to False

  • platform – Platform object to use. If not specified, we first check the object for a platform object, then the current context

  • regather_common_assets – Triggers gathering of assets for existing experiments. If not provided, we use the platform's default behaviour. See the platform details for the performance implications: for most platforms this is fine, but for others it can decrease performance when assets are not changing. When using this feature, ensure the previous simulations have finished provisioning; failure to do so can lead to unexpected behaviour

  • **run_opts – Options to pass to the platform

Returns

None

wait(timeout: int = None, refresh_interval=None, platform: IPlatform = None)

Wait for the experiment to finish running

Parameters
  • timeout – How long to wait before timing out

  • refresh_interval – How often to refresh the object's status

  • platform – Platform to use. If not specified, we try to determine it from context

Returns:
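Internally, waiting amounts to polling the experiment's status until it is done, with `refresh_interval` controlling the polling cadence and `timeout` bounding the total wait. The sketch below shows that polling shape; the real implementation lives in the platform layer:

```python
# Illustrative polling loop showing how timeout and refresh_interval
# interact; not the real idmtools wait implementation.
import time


def wait_until_done(is_done, timeout=None, refresh_interval=1.0):
    start = time.monotonic()
    while not is_done():
        # Stop waiting once the total elapsed time exceeds the timeout.
        if timeout is not None and time.monotonic() - start >= timeout:
            raise TimeoutError("experiment did not finish before the timeout")
        time.sleep(refresh_interval)
```

With `timeout=None` the loop waits indefinitely, matching the default behaviour described above.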

to_dict()
classmethod from_id(item_id: Union[str, uuid.UUID], platform: IPlatform = None, copy_assets: bool = False, **kwargs) → Experiment

Helper function to provide better intellisense to end users

Parameters
  • item_id – Item id to load

  • platform – Optional platform. Falls back to the current context

  • copy_assets – Allow copying assets on load. Makes modifying experiments easier when new assets are involved.

  • **kwargs – Optional arguments to be passed on to the platform

Returns:

print(verbose: bool = False)

Print a summary of the experiment

Parameters

verbose – Verbose printing

Returns:

class idmtools.entities.experiment.ExperimentSpecification

Bases: idmtools.registry.experiment_specification.ExperimentPluginSpecification

get_description() → str

Get a brief description of the plugin and its functionality.

Returns

The plugin description.

get(configuration: dict) → idmtools.entities.experiment.Experiment

Get an experiment using the given configuration.

get_type() → Type[idmtools.entities.experiment.Experiment]