Environments

This page lists all environments together with their environment-ids. In general, every environment-id is structured as follows:

ControlType-ControlTask-MotorType-v0

  • The ControlType is in {Finite / Cont}, denoting a finite (discrete) or continuous control set action space

  • The ControlTask is in {TC / SC / CC} (Torque / Speed / Current Control)

  • The MotorType is in {PermExDc / ExtExDc / SeriesDc / ShuntDc / PMSM / SynRM / EESM / DFIM / SCIM / SIXPMSM}
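
As a sketch of this naming scheme, the following helper assembles an environment-id from its three parts. The helper itself is illustrative and not part of the library; with gym-electric-motor installed, such an id string is what gets passed to gym_electric_motor.make() to create the environment.

```python
# Illustrative helper (not part of gym-electric-motor): compose an
# environment-id of the form ControlType-ControlTask-MotorType-v0.

def make_env_id(control_type: str, control_task: str, motor_type: str) -> str:
    """Build an environment-id following the documented naming scheme."""
    assert control_type in {"Finite", "Cont"}
    assert control_task in {"TC", "SC", "CC"}
    return f"{control_type}-{control_task}-{motor_type}-v0"

print(make_env_id("Cont", "SC", "PermExDc"))  # Cont-SC-PermExDc-v0
```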

Permanently Excited DC Motor Environments

  • Discrete Torque Control Permanently Excited DC Motor Environment: 'Finite-TC-PermExDc-v0'
  • Continuous Torque Control Permanently Excited DC Motor Environment: 'Cont-TC-PermExDc-v0'
  • Discrete Speed Control Permanently Excited DC Motor Environment: 'Finite-SC-PermExDc-v0'
  • Continuous Speed Control Permanently Excited DC Motor Environment: 'Cont-SC-PermExDc-v0'
  • Discrete Current Control Permanently Excited DC Motor Environment: 'Finite-CC-PermExDc-v0'
  • Continuous Current Control Permanently Excited DC Motor Environment: 'Cont-CC-PermExDc-v0'

Externally Excited DC Motor Environments

  • Discrete Torque Control Externally Excited DC Motor Environment: 'Finite-TC-ExtExDc-v0'
  • Continuous Torque Control Externally Excited DC Motor Environment: 'Cont-TC-ExtExDc-v0'
  • Discrete Speed Control Externally Excited DC Motor Environment: 'Finite-SC-ExtExDc-v0'
  • Continuous Speed Control Externally Excited DC Motor Environment: 'Cont-SC-ExtExDc-v0'
  • Discrete Current Control Externally Excited DC Motor Environment: 'Finite-CC-ExtExDc-v0'
  • Continuous Current Control Externally Excited DC Motor Environment: 'Cont-CC-ExtExDc-v0'

Series DC Motor Environments

  • Discrete Torque Control Series DC Motor Environment: 'Finite-TC-SeriesDc-v0'
  • Continuous Torque Control Series DC Motor Environment: 'Cont-TC-SeriesDc-v0'
  • Discrete Speed Control Series DC Motor Environment: 'Finite-SC-SeriesDc-v0'
  • Continuous Speed Control Series DC Motor Environment: 'Cont-SC-SeriesDc-v0'
  • Discrete Current Control Series DC Motor Environment: 'Finite-CC-SeriesDc-v0'
  • Continuous Current Control Series DC Motor Environment: 'Cont-CC-SeriesDc-v0'

Shunt DC Motor Environments

  • Discrete Torque Control Shunt DC Motor Environment: 'Finite-TC-ShuntDc-v0'
  • Continuous Torque Control Shunt DC Motor Environment: 'Cont-TC-ShuntDc-v0'
  • Discrete Speed Control Shunt DC Motor Environment: 'Finite-SC-ShuntDc-v0'
  • Continuous Speed Control Shunt DC Motor Environment: 'Cont-SC-ShuntDc-v0'
  • Discrete Current Control Shunt DC Motor Environment: 'Finite-CC-ShuntDc-v0'
  • Continuous Current Control Shunt DC Motor Environment: 'Cont-CC-ShuntDc-v0'

Permanent Magnet Synchronous Motor (PMSM) Environments

  • Finite Torque Control PMSM Environment: 'Finite-TC-PMSM-v0'
  • Torque Control PMSM Environment: 'Cont-TC-PMSM-v0'
  • Finite Speed Control PMSM Environment: 'Finite-SC-PMSM-v0'
  • Speed Control PMSM Environment: 'Cont-SC-PMSM-v0'
  • Finite Current Control PMSM Environment: 'Finite-CC-PMSM-v0'
  • Current Control PMSM Environment: 'Cont-CC-PMSM-v0'

Externally Excited Synchronous Motor (EESM) Environments

  • Finite Torque Control EESM Environment: 'Finite-TC-EESM-v0'
  • Torque Control EESM Environment: 'Cont-TC-EESM-v0'
  • Finite Speed Control EESM Environment: 'Finite-SC-EESM-v0'
  • Speed Control EESM Environment: 'Cont-SC-EESM-v0'
  • Finite Current Control EESM Environment: 'Finite-CC-EESM-v0'
  • Current Control EESM Environment: 'Cont-CC-EESM-v0'

Synchronous Reluctance Motor (SynRM) Environments

  • Finite Torque Control SynRM Environment: 'Finite-TC-SynRM-v0'
  • Torque Control SynRM Environment: 'Cont-TC-SynRM-v0'
  • Finite Speed Control SynRM Environment: 'Finite-SC-SynRM-v0'
  • Speed Control SynRM Environment: 'Cont-SC-SynRM-v0'
  • Finite Current Control SynRM Environment: 'Finite-CC-SynRM-v0'
  • Current Control SynRM Environment: 'Cont-CC-SynRM-v0'

Squirrel Cage Induction Motor (SCIM) Environments

  • Finite Torque Control SCIM Environment: 'Finite-TC-SCIM-v0'
  • Torque Control SCIM Environment: 'Cont-TC-SCIM-v0'
  • Finite Speed Control SCIM Environment: 'Finite-SC-SCIM-v0'
  • Speed Control SCIM Environment: 'Cont-SC-SCIM-v0'
  • Finite Current Control SCIM Environment: 'Finite-CC-SCIM-v0'
  • Current Control SCIM Environment: 'Cont-CC-SCIM-v0'

Doubly Fed Induction Motor (DFIM) Environments

  • Finite Torque Control DFIM Environment: 'Finite-TC-DFIM-v0'
  • Torque Control DFIM Environment: 'Cont-TC-DFIM-v0'
  • Finite Speed Control DFIM Environment: 'Finite-SC-DFIM-v0'
  • Speed Control DFIM Environment: 'Cont-SC-DFIM-v0'
  • Finite Current Control DFIM Environment: 'Finite-CC-DFIM-v0'
  • Current Control DFIM Environment: 'Cont-CC-DFIM-v0'

Six Phase PMSM (SIXPMSM) Environments

  • Finite Torque Control SIXPMSM Environment: 'Finite-TC-SIXPMSM-v0'
  • Torque Control SIXPMSM Environment: 'Cont-TC-SIXPMSM-v0'
  • Finite Speed Control SIXPMSM Environment: 'Finite-SC-SIXPMSM-v0'
  • Speed Control SIXPMSM Environment: 'Cont-SC-SIXPMSM-v0'
  • Finite Current Control SIXPMSM Environment: 'Finite-CC-SIXPMSM-v0'
  • Current Control SIXPMSM Environment: 'Cont-CC-SIXPMSM-v0'

Motor Environments:

Electric Motor Base Environment

On the core level, the electric motor environment and the interfaces to its submodules are defined. Using these interfaces, further reference generators, reward functions, visualizations or physical models can be implemented.

Each ElectricMotorEnvironment contains the five following modules:

  • PhysicalSystem
    • Specification and simulation of the physical model. Furthermore, it specifies limits and nominal values for all of its state_variables.

  • ReferenceGenerator
    • Calculation of reference trajectories for one or more states of the physical system's state_variables.

  • ConstraintMonitor
    • Observation of the PhysicalSystem's state to check compliance with a set of user-defined constraints.

  • RewardFunction
    • Calculation of the reward based on the physical system's state and the reference.

  • ElectricMotorVisualization
    • Visualization of the PhysicalSystem's state, reference and reward for the user.

class gym_electric_motor.core.Callback[source]

Bases: object

The abstract base class for Callbacks in GEM. Each of its functions gets called at one point in the ElectricMotorEnvironment.

_env

The GEM environment. Use it to have full control over the environment at runtime.

on_close()[source]

Gets called at the beginning of a close

on_reset_begin()[source]

Gets called at the beginning of each reset

on_reset_end(state, reference)[source]

Gets called at the end of each reset

on_step_begin(k, action)[source]

Gets called at the beginning of each step

on_step_end(k, state, reference, reward, terminated)[source]

Gets called at the end of each step

set_env(env)[source]

Sets the environment of the motor.
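
For illustration, a minimal callback that counts simulation steps could look like the following sketch. It implements only the hooks documented above; in practice one would subclass gym_electric_motor.core.Callback, which is omitted here to keep the sketch self-contained.

```python
# Sketch of a custom callback (hypothetical class, mirroring the documented
# Callback hooks; the real base class provides no-op defaults for all hooks).

class StepCounterCallback:
    """Counts environment steps via the on_step_end hook."""

    def __init__(self):
        self.steps = 0
        self._env = None

    def set_env(self, env):
        # Store the environment to allow full runtime access, as documented.
        self._env = env

    def on_step_end(self, k, state, reference, reward, terminated):
        # Called by the environment at the end of each step.
        self.steps += 1
```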

class gym_electric_motor.core.ConstraintMonitor(limit_constraints=(), additional_constraints=(), merge_violations='max')[source]

Bases: object

The ConstraintMonitor is used within the ElectricMotorEnvironment to monitor the states for illegal / undesired values (e.g. overcurrents).

It consists of a list of multiple independent constraints. Each constraint gets the current observation of the environment as input and returns a violation degree within \([0.0, 1.0]\). All these are merged together and the ConstraintMonitor returns a total violation degree.

Soft Constraints:

To enable a higher flexibility, the constraints return a violation degree (float) instead of a simple violation flag (bool). So, even before the limits are violated, the reward function can take the limit violation degree into account. If the violation degree is at 0.0, no states are in a dangerous region. For values between 0.0 and 1.0 the reward will be decreased gradually so that the agent will learn to avoid these state regions. If the violation degree reaches 1.0 the episode is terminated.

Hard Constraints:

With the above concept, also hard constraints that directly terminate an episode without any “danger”-region can be modeled. Then, the violation degree of the constraint directly changes from 0.0 to 1.0, if a violation occurs.
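
As an illustrative sketch (not library code), a soft constraint can be written as a plain callable that maps the state array to a violation degree. The state index 2 for the current and the 0.9 "danger" threshold below are assumptions for the example.

```python
# Hypothetical soft constraint: 0.0 below a danger threshold, ramping
# linearly up to 1.0 at the (normalized) limit of 1.0.

def soft_current_constraint(state, i_index=2, danger=0.9):
    """Return a violation degree in [0.0, 1.0] for the current state."""
    i = abs(state[i_index])          # normalized current, limit at 1.0
    if i <= danger:
        return 0.0                   # no state in a dangerous region
    return min((i - danger) / (1.0 - danger), 1.0)
```

Such a callable could be passed via the additional_constraints parameter described below.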

Parameters:
  • limit_constraints (list(str)/'all_states') –

    Shortcut parameter to select the states whose limits shall be observed.

    • list(str): Pass a list of state_names; all of these states will be observed to stay within their limits.

    • 'all_states': Shortcut to observe all states to stay within their limits.

  • additional_constraints (list(Constraint/callable)) – Further constraints that shall be monitored. These have to be initialized first and passed to the ConstraintMonitor. Alternatively, constraints can be defined as a function that takes the current state and returns a float within [0.0, 1.0].

  • merge_violations ('max'/'product'/callable(*violation_degrees) -> float) –

    Function to merge all single violation degrees into one total violation degree.

    • 'max': Take the maximal violation degree as the total violation degree.

    • 'product': Calculate the total violation degree as one minus the product of one minus each single violation degree.

    • callable(*violation_degrees) -> float: User-defined function to calculate the total violation.
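
The two built-in merge strategies can be sketched as follows (an illustrative reimplementation of the described formulas, not the library's own code):

```python
import math

def merge_max(*degrees):
    # 'max': the total violation degree is the largest single degree.
    return max(degrees)

def merge_product(*degrees):
    # 'product': one minus the product of (one minus each degree); any
    # single degree of 1.0 forces a total of 1.0.
    return 1.0 - math.prod(1.0 - d for d in degrees)
```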

check_constraints(state: ndarray)[source]

Function to check and merge all constraints.

Parameters:

state (ndarray(float)) – The current environment's state.

Returns:

The total violation degree in [0,1]

Return type:

float

set_modules(ps: PhysicalSystem)[source]

The PhysicalSystem of the environment is passed to save important parameters like the index of the states.

Parameters:

ps (PhysicalSystem) – The PhysicalSystem of the environment.

property constraints

Returns the list of all constraints the ConstraintMonitor observes.

class gym_electric_motor.core.ElectricMotorEnvironment(physical_system, reference_generator, reward_function, visualization=(), state_filter=None, callbacks=(), constraints=(), physical_system_wrappers=(), scale_plots=False, **kwargs)[source]

Bases: Env

Description:

The main class connecting all modules of the gym-electric-motor environments.

Modules:

Physical System:

Containing the physical structure and simulation of the drive system as well as information about the technical limits and nominal values. Needs to be a subclass of PhysicalSystem

Reference Generator:

Generation of the reference for the motor to follow. Needs to be a subclass of ReferenceGenerator

Reward Function:

Calculation of the reward based on the state of the physical system and the generated reference and observation if the motor state is within the limits. Needs to be a subclass of RewardFunction.

Visualization:

Visualization of the motors states. Needs to be a subclass of ElectricMotorVisualization

Limits:

Returns a list of limits of all states in the observation (called in state_filter) in the same order.

State Variables:

Each environment has a list of state variables that are defined by the physical system. These define the names and the order for all further state arrays in the modules. These states are announced to the other modules by announcing the physical system to them, which contains the property state_names.

Example:

['omega', 'torque', 'i', 'u', 'u_sup']

Observation:
Type: Tuple(State_Space, Reference_Space)

The observation is always a tuple of the State Space of the Physical System and the Reference Space of the Reference Generator. In all current Physical Systems and Reference Generators these Spaces are normalized, continuous, multidimensional boxes in [-1, 1] or [0, 1].

Actions:
Type: Discrete() / Box()

The action space of each environment is the action space of its physical system. In all current physical systems, the action space is specified by the PowerElectronicConverter and is either a continuous, multidimensional box or discrete.

Reward:

The reward and the reward range are specified by the RewardFunction. In general, the closer the motor state follows the reference trajectories, the higher the reward.

Starting State:

The physical system and the reference generator define the starting state.

Episode Termination:

Episode terminations can be initiated by the reference generator or the reward function. A reference generator might terminate an episode if the reference has ended. The reward function can terminate an episode if a physical limit of the motor has been violated.

Setting and initialization of all environments’ modules.

Parameters:
  • physical_system (PhysicalSystem) – The physical system of this environment.

  • reference_generator (ReferenceGenerator) – The reference generator of this environment.

  • reward_function (RewardFunction) – The reward function of this environment.

  • visualization (iterable(ElectricMotorVisualization)/None) – The visualization of this environment.

  • constraints (list(Constraint/str/callable) / ConstraintMonitor) –

    A list of constraints or an already initialized ConstraintMonitor object can be passed here.

    • list(Constraint/str/callable): Pass a list with initialized Constraints and/or state names. Then, a ConstraintMonitor object with the Constraints and additional LimitConstraints on the passed names is created. Furthermore, the string 'all' inside the list will create a ConstraintMonitor that observes the limit on each state.

    • ConstraintMonitor: Pass an initialized ConstraintMonitor object that will be used directly as the ConstraintMonitor in the environment.

  • state_filter (list(str)) – Selection of states that are shown in the observation.

  • physical_system_wrappers (iterable(PhysicalSystemWrapper)) – PhysicalSystemWrapper instances to be wrapped around the physical system.

  • callbacks (list(Callback)) – Callbacks being called in the environment

  • **kwargs – Arguments to be passed to the modules.

close()[source]

Called when the environment is deleted. Closes all its modules.

get_wrapper_attr(name: str) → Any

Gets the attribute name from the environment.

has_wrapper_attr(name: str) → bool

Checks if the attribute name exists in the environment.

make(*args, **kwargs)[source]
render(*_, **__)[source]

Update the visualization of the motor.

reset(seed=None, options=None, *_, **__)[source]

Reset of the environment and all its modules to an initial state.

Returns:

observation (Tuple(ndarray(float), ndarray(float))): The initial observation consisting of the initial state and initial reference.

info (dict): Auxiliary information (optional).

set_wrapper_attr(name: str, value: Any, *, force: bool = True) → bool

Sets the attribute name on the environment with value, see Wrapper.set_wrapper_attr for more info.

step(action)[source]

Perform one simulation step of the environment with an action of the action space.

Parameters:

action – Action to play on the environment.

Returns:

  • observation (Tuple(ndarray(float), ndarray(float))): Tuple of the new state and the next reference.

  • reward (float): Amount of reward received for the last step.

  • terminated (bool): Flag indicating whether a reset is required before new steps can be taken.

  • info (dict): Auxiliary information (optional).
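
The reset()/step() cycle can be sketched with a stand-in environment. DummyEnv below is hypothetical and only mimics the documented return signature; with gym-electric-motor installed, the same loop runs against a real environment.

```python
# Sketch of the agent-environment interaction loop implied by reset()/step().
# DummyEnv is a hypothetical stand-in: observation is a (state, reference)
# tuple, and step() returns (observation, reward, terminated, info).

class DummyEnv:
    def reset(self, seed=None, options=None):
        state, reference = [0.0], [0.0]
        return (state, reference), {}

    def step(self, action):
        state, reference = [action], [1.0]
        reward = -abs(reference[0] - state[0])   # tracking error as reward
        terminated = False
        return (state, reference), reward, terminated, {}

env = DummyEnv()
(state, reference), info = env.reset()
total_reward = 0.0
for _ in range(3):
    action = 0.5                                  # an agent/controller output
    (state, reference), reward, terminated, info = env.step(action)
    total_reward += reward
    if terminated:
        (state, reference), info = env.reset()
```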

action_space: spaces.Space[ActType]
property constraint_monitor

Returns: ConstraintMonitor: The ConstraintMonitor of the environment.

current_next_reference = None
current_reference = None
current_state = None
env_id = None
property limits

Returns a list of limits of all states in the observation (called in state_filter) in the same order

metadata: dict[str, Any] = {'render_modes': []}
property nominal_state

Returns a list of nominal values of all states in the observation (called in state_filter) in that order

property np_random: Generator

Returns the environment’s internal _np_random that if not set will initialise with a random seed.

Returns:

Instances of np.random.Generator

property np_random_seed: int

Returns the environment’s internal _np_random_seed that if not set will first initialise with a random int as seed.

If np_random_seed was set directly instead of through reset() or set_np_random_through_seed(), the seed will take the value -1.

Returns:

the seed of the current np_random or -1, if the seed of the rng is unknown

Return type:

int

observation_space: spaces.Space[ObsType]
property physical_system

Returns: PhysicalSystem: The Physical System of the Environment

property reference_generator

Returns: ReferenceGenerator: The ReferenceGenerator of the Environment

property reference_names

Returns a list of the names of all referenced states in the observation, in the same order

render_mode: str | None = None
property reward_function

Returns: RewardFunction: The RewardFunction of the environment

sim = SimulationEnvironment(tau=0.0, step=0)
spec: EnvSpec | None = None
property state_names

Returns a list of state names of all states in the observation (called in state_filter) in the same order

property unwrapped: Env[ObsType, ActType]

Returns the base non-wrapped environment.

Returns:

The base non-wrapped gymnasium.Env instance

Return type:

Env

property visualizations

Returns a list of all active motor visualizations.

workspace = Workspace()
class gym_electric_motor.core.ElectricMotorVisualization[source]

Bases: Callback

Base class for all visualizations in GEM. A visualization is essentially a Callback that is extended by a render() function to update the figure. Through the function calls inherited from the Callback superclass (e.g. on_step_end), the data is passed from the environment to the visualization. In the render() function, the passed data can be visualized in the desired way.

on_close()

Gets called at the beginning of a close

on_reset_begin()

Gets called at the beginning of each reset

on_reset_end(state, reference)

Gets called at the end of each reset

on_step_begin(k, action)

Gets called at the beginning of each step

on_step_end(k, state, reference, reward, terminated)

Gets called at the end of each step

render()[source]

Function to update the user interface.

set_env(env)

Sets the environment of the motor.

class gym_electric_motor.core.PhysicalSystem(action_space, state_space, state_names, tau)[source]

Bases: object

The Physical System module encapsulates the physical model of the system as well as the simulation from one step to the next.

Parameters:
  • action_space (gymnasium.Space) – The set of allowed actions on the system.

  • state_space (gymnasium.Space) – The set of possible system states.

  • state_names (ndarray(str)) – The names of the system's states.

  • tau (float) – The system's simulation time interval.

close()[source]

Called when the environment is closed. Closes the system and all of its submodules by closing files, saving logs, etc.

reset(initial_state=None)[source]

Reset the physical system to an initial state before a new episode starts.

Returns:

The initial system state

Return type:

element of state_space

simulate(action)[source]

Simulation of the Physical System for one time step with the input action. This method is called by the environment in every step to update the system's state.

Parameters:

action (element of action_space) – The action to play on the system for the next time step.

Returns:

The system's state after the action was applied.

Return type:

element of state_space

property action_space

Returns: gymnasium.Space: A Farama Gymnasium Space that describes the possible actions on the system.

property k

Returns: int: The system's current time step k.

property limits

Returns: ndarray(float): An array containing the maximum allowed physical values for each state variable.

property nominal_state

Returns: ndarray(float): An array containing the nominal values for each state variable.

property state_names

Returns: ndarray(str): Array containing the names of the system's states.

property state_positions

Returns: dict(int): Dictionary mapping the state names to their positions in the state arrays

property state_space

Returns: gymnasium.Space: A Farama Gymnasium Space that describes the possible states of the system.

property tau
property unwrapped

Returns this instance of the physical system.

If the system is wrapped into multiple PhysicalSystemWrappers, this property directly returns the innermost system.
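
A toy class implementing this interface for a first-order lag might look like the following sketch. It is not a real motor model; gymnasium spaces are omitted and the plant time constant is an illustrative assumption, chosen only to demonstrate the reset()/simulate() contract.

```python
# Toy PhysicalSystem-like class (hypothetical, dependency-free sketch of the
# documented interface) simulating d(omega)/dt = (action - omega) / T
# with one forward-Euler step per simulate() call.

class FirstOrderSystem:
    def __init__(self, tau=1e-4, time_constant=1e-3):
        self._tau = tau                 # simulation time interval
        self._T = time_constant         # plant time constant (assumption)
        self._k = 0
        self._state = [0.0]
        self._state_names = ["omega"]

    @property
    def tau(self):
        return self._tau

    @property
    def k(self):
        return self._k

    @property
    def state_names(self):
        return self._state_names

    def reset(self, initial_state=None):
        # Reset the system to an initial state before a new episode starts.
        self._k = 0
        self._state = [0.0] if initial_state is None else list(initial_state)
        return self._state

    def simulate(self, action):
        # One forward-Euler integration step; returns the new system state.
        omega = self._state[0]
        self._state = [omega + self._tau * (action - omega) / self._T]
        self._k += 1
        return self._state
```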

class gym_electric_motor.core.ReferenceGenerator[source]

Bases: object

The abstract base class for reference generators in gym electric motor environments.

reference_space:

Space of reference observations as defined in the Farama Gymnasium Toolbox.

The reference generator is called twice per step.

Call of get_reference():

Returns the reference array, which has the same shape as the state array and contains values for the currently referenced state variables and a default value (e.g. zero) for all non-referenced variables. This reference array is used to calculate rewards.

Example:

reference_array=np.array([0.8, 0.0, 0.0, 0.0, 0.0])

state_variables=['omega', 'torque', 'i', 'u', 'u_sup']

Here, the state consists of five quantities but only 'omega' is referenced during control.

Call of get_reference_observation():

Returns the reference observation, which is shown to the agent. Any shape and content is generally valid, however, values must be within the declared reference space. For example, the reference observation may contain future reference values of the next n steps.

Example:

reference_observation = np.array([0.8, 0.6, 0.4])

This reference observation may be valid for an omega-controlled environment that shows the agent not only the reference for the next time step omega_(t+1)=0.8 but also omega_(t+2)=0.6 and omega_(t+3)=0.4.

close()[source]

Called by the environment, when the environment is deleted to close files, store logs, etc.

get_reference(state, *_, **__)[source]

Returns the reference array of the current time step.

The reference array needs to have the same shape as the state array. For referenced states, the reference value is passed. For unreferenced states, a default value (e.g. zero) can be set in the reference array.

Parameters:

state (ndarray(float)) – Current state array of the environment.

Returns:

Current reference array.

Return type:

ndarray(float)

get_reference_observation(state, *_, **__)[source]

Returns the reference observation for the next time step. This observation needs to fit in the reference space.

Parameters:

state (ndarray(float)) – Current state array of the environment.

Returns:

Observation for the next reference time step.

Return type:

value in reference_space

reset(initial_state=None, initial_reference=None)[source]

Reset of references for a new episode.

Parameters:
  • initial_state (ndarray(float)) – The initial state array of the environment.

  • initial_reference (ndarray(float)) – If not None: A desired initial reference array.

Returns:

  • reference_array (ndarray(float)): The reference array at time step 0.

  • reference_observation (value in reference_space): The reference observation for the next time step.

  • trajectories (dict(list(float))): If available, generated trajectories for the Visualization can be passed here. Otherwise None.

set_modules(physical_system)[source]

Announcement of the PhysicalSystem to the ReferenceGenerator.

In subclasses, store all important information from the physical system to the ReferenceGenerator here. The environment announces the physical system to the ReferenceGenerator during its initialization.

Parameters:

physical_system (PhysicalSystem) – The physical system of the environment.

property reference_names

Returns: reference_names(list(str)): A list containing all names of the referenced states in the reference observation.

property referenced_states

Returns: ndarray(bool): Boolean-Array with the length of the state_variables indicating which states are referenced.
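
A minimal reference generator following this interface can be sketched as below: it keeps 'omega' at a constant reference. The class is hypothetical; the abstract base class, the reference_space declaration and numpy arrays are left out to keep the sketch self-contained.

```python
# Hypothetical minimal reference generator (sketch of the documented
# interface): a constant 'omega' reference, zeros for all other states.

class ConstOmegaReferenceGenerator:
    def __init__(self, omega_ref=0.8):
        self._omega_ref = omega_ref
        self._state_names = []

    def set_modules(self, physical_system):
        # Learn the state layout from the announced physical system.
        self._state_names = list(physical_system.state_names)

    def get_reference(self, state, *_, **__):
        # Same shape as the state array; zeros for non-referenced states.
        return [self._omega_ref if n == "omega" else 0.0
                for n in self._state_names]

    def get_reference_observation(self, state, *_, **__):
        # Shown to the agent: here only the scalar next reference value.
        return [self._omega_ref]
```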

class gym_electric_motor.core.RewardFunction[source]

Bases: object

The abstract base class for reward functions in gym electric motor environments.

The reward function is called once per step and returns reward for the current time step.

reward_range

Defining lowest and highest possible rewards.

Type:

Tuple(float, float)

close()[source]

Called, when the environment is closed to store logs, close files etc.

reset(initial_state=None, initial_reference=None)[source]

This function is called by the environment when it is reset.

Inner states of the reward function can be reset here, if necessary.

Parameters:
  • initial_state (ndarray(float)) – Initial state array of the Environment

  • initial_reference (ndarray(float)) – Initial reference array of the environment.

reward(state, reference, k=None, action=None, violation_degree=0.0)[source]

Reward calculation. If limits have been violated the reward is calculated with a separate function.

Parameters:
  • state (ndarray(float)) – Environments state array.

  • reference (ndarray(float)) – Environments reference array.

  • k (int) – The system's momentary time step.

  • action (element of action space) – The previously taken action.

  • violation_degree (float in [0.0, 1.0]) – Degree of violation of the constraints. 0.0 indicates that all constraints are complied with. 1.0 indicates that the constraints have been violated to such a degree that a reset is necessary.

Returns:

Reward for this state, reference, action tuple.

Return type:

float

set_modules(physical_system, reference_generator, constraint_monitor)[source]

Setting of the physical system, reference generator and constraint monitor, so that state arrays fitting the environment's states can be set up.

Parameters:
  • physical_system (PhysicalSystem) – The physical system of the environment

  • reference_generator (ReferenceGenerator) – The reference generator of the environment.

  • constraint_monitor (ConstraintMonitor) – The constraint monitor of the environment.

reward_range = (-inf, inf)

Lower and upper possible reward

Type:

Tuple(float, float)
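
A reward() in this interface can be sketched as a negative weighted absolute tracking error, blended with the violation degree so that full violation (1.0) yields the worst possible reward. The class name and the weighting scheme are illustrative assumptions, not the library's built-in reward function.

```python
# Hypothetical reward function sketch following the documented signature:
# reward(state, reference, k, action, violation_degree).

class WeightedAbsoluteErrorReward:
    reward_range = (-1.0, 0.0)

    def __init__(self, weights):
        total = sum(weights)
        self._weights = [w / total for w in weights]  # normalize weights

    def reward(self, state, reference, k=None, action=None,
               violation_degree=0.0):
        # Negative weighted absolute error between state and reference.
        tracking = -sum(w * abs(s - r)
                        for w, s, r in zip(self._weights, state, reference))
        # Blend with the violation degree: 1.0 forces the minimal reward.
        return (1.0 - violation_degree) * tracking \
            + violation_degree * self.reward_range[0]
```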

class gym_electric_motor.core.SimulationEnvironment(tau: float = 0.0, step: int = 0)[source]

Bases: object

step: int = 0
property t: float
tau: float = 0.0
class gym_electric_motor.core.Workspace[source]

Bases: object

test = []
(Figure: top-level structure of a GEM environment; source image: TopLevelStructure.svg)