Core Exporter Functionality¶
This section contains detailed API documentation for the core exporter functionality of exploy.
Actor¶
Abstract interface and utilities for exportable actors.
This module defines the ExportableActor base class for policy networks that can be
exported to ONNX, along with helpers like make_exportable_actor and add_actor_memory.
- class exploy.exporter.core.actor.ExportableActor[source]¶
Abstract interface for an actor that can be exported to ONNX.
- abstractmethod forward(obs)[source]¶
Given a batch of observations, compute the corresponding actions.
- exploy.exporter.core.actor.make_exportable_actor(actor)[source]¶
Convert a torch.nn.Module actor to an ExportableActor.
- Parameters:
  actor (Module) – The actor to convert.
- Return type:
  ExportableActor
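The adapter that make_exportable_actor performs can be sketched roughly as follows. This is a minimal pure-Python stand-in, not exploy's actual implementation: a plain callable takes the place of a torch.nn.Module, and the wrapper names are hypothetical.

```python
from abc import ABC, abstractmethod


class ExportableActor(ABC):
    """Stand-in for exploy.exporter.core.actor.ExportableActor."""

    @abstractmethod
    def forward(self, obs):
        """Given a batch of observations, compute the corresponding actions."""


def make_exportable_actor(actor):
    """Wrap any callable actor so it satisfies the ExportableActor interface."""

    class _Wrapped(ExportableActor):
        def forward(self, obs):
            # Delegate to the wrapped actor's call interface.
            return actor(obs)

    return _Wrapped()


# Usage: a toy "policy" that negates its observations.
wrapped = make_exportable_actor(lambda obs: [-x for x in obs])
actions = wrapped.forward([1.0, -2.0])
```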
- exploy.exporter.core.actor.add_actor_memory(context_manager, get_hidden_states_func)[source]¶
Add inputs for actor hidden states.
- Parameters:
  context_manager (ContextManager) – The context manager to add the inputs to.
  get_hidden_states_func (Callable[[], tuple[Tensor, ...]]) – A function that returns a tuple of hidden state tensors, used to get the hidden states to add as inputs.
Exportable Environment¶
Abstract base class for exportable environments.
This module defines the ExportableEnvironment base class that provides
the standardized interface required by the exporter to trace observation
computation, action processing, and simulation stepping.
- class exploy.exporter.core.exportable_environment.ExportableEnvironment[source]¶
  Bases: ABC
- abstractmethod compute_observations()[source]¶
Compute and return the observations of the environment.
- abstractmethod apply_actions()[source]¶
Apply processed actions (e.g., joint targets) to the environment.
- abstractmethod prepare_export()[source]¶
Prepare the environment for export. Called before each export.
- abstractmethod register_evaluation_hooks(update, reset, evaluate_substep)[source]¶
Register evaluation hooks for this environment.
- abstractmethod step(actions)[source]¶
Step the environment forward by one step. Returns the next observations and a boolean indicating if the environment was reset.
- abstractmethod get_observation_names()[source]¶
Get the names of the observations in the environment.
- abstractmethod observations_reset()[source]¶
Get the observations after an environment reset.
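A concrete environment must fill in these abstract methods. The sketch below shows the shape of a minimal subclass using plain Python lists in place of tensors; the class ToyEnv and its integrator behavior are invented for illustration, and only a subset of the abstract methods is shown.

```python
from abc import ABC, abstractmethod


class ExportableEnvironment(ABC):
    """Pure-Python stand-in mirroring part of the abstract interface above."""

    @abstractmethod
    def compute_observations(self): ...

    @abstractmethod
    def apply_actions(self): ...

    @abstractmethod
    def step(self, actions): ...

    @abstractmethod
    def get_observation_names(self): ...


class ToyEnv(ExportableEnvironment):
    """A 1-D integrator: the observation is the accumulated action sum."""

    def __init__(self):
        self._state = 0.0
        self._pending = 0.0

    def compute_observations(self):
        return [self._state]

    def apply_actions(self):
        self._state += self._pending

    def step(self, actions):
        self._pending = actions[0]
        self.apply_actions()
        # Return the next observations and whether the env was reset.
        return self.compute_observations(), False

    def get_observation_names(self):
        return ["state"]


env = ToyEnv()
obs, was_reset = env.step([2.5])
```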
Components¶
Core building blocks for environment export.
This module defines the component abstractions (Input, Output, Memory, Group, Connection)
used to structure and manage data flow during policy export.
- class exploy.exporter.core.components.Input(name, get_from_env_cb, metadata=None)[source]¶
  Bases: object
  Abstraction for controller inputs.
The input encapsulates a callback that retrieves data from the environment and provides it as a tensor for use in the generation of a computational graph.
- __init__(name, get_from_env_cb, metadata=None)[source]¶
Construct an Input.
- Parameters:
name (str) – Identifier for this input.
get_from_env_cb (Callable[[], torch.Tensor]) – Callback function that retrieves the input from the environment as a torch.Tensor.
metadata (Any) – Optional metadata associated with this input (e.g., shape, data type, semantic information).
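The callback-based design means an Input holds no data of its own until read; it defers to the environment each time. A toy stand-in (not exploy's implementation; the read() method name follows the usage described under ContextManager.read_inputs below) illustrates the pattern:

```python
class Input:
    """Minimal stand-in for exploy.exporter.core.components.Input."""

    def __init__(self, name, get_from_env_cb, metadata=None):
        self.name = name
        self._get_from_env_cb = get_from_env_cb
        self.metadata = metadata
        self.value = None

    def read(self):
        # Refresh the cached value from the environment via the callback.
        self.value = self._get_from_env_cb()
        return self.value


# A mutable "environment" whose state the callback closes over.
env_state = {"joint_pos": [0.1, 0.2]}
inp = Input("joint_pos", lambda: list(env_state["joint_pos"]))

first = inp.read()
env_state["joint_pos"] = [0.3, 0.4]   # environment advances
second = inp.read()                    # the callback sees the new state
```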
- class exploy.exporter.core.components.Output(name, get_from_env_cb, metadata=None)[source]¶
  Bases: object
  Abstract interface for outputs.
- class exploy.exporter.core.components.Memory(name, get_from_env_cb)[source]¶
Handle memory inputs and outputs.
This class abstracts how to get and set values used in an environment that has memory, for example actions and previous actions. Values are retrieved by passing callables.
- __init__(name, get_from_env_cb)[source]¶
Construct a Memory.
- Parameters:
  name (str) – Identifier for this memory component.
  get_from_env_cb (Callable[[], torch.Tensor]) – Callback function that retrieves the memory value from the environment as a torch.Tensor.
- class exploy.exporter.core.components.Group(name, items, metadata=None)[source]¶
  Bases: object
  Abstraction for grouping related inputs and outputs together.
Context Manager¶
Manages components for environment export.
The ContextManager class organizes and manages all inputs, outputs, memory components,
and connections needed during the ONNX export process.
- class exploy.exporter.core.context_manager.ContextManager[source]¶
  Bases: object
  Manages all components (inputs, outputs, memory) for an environment export.
- add_component(component)[source]¶
Add a component (Input, Output, or Connection) to the context manager.
- Parameters:
  component (Input | Output | Connection) – The component to add.
- Raises:
  AssertionError – If a component with the same name already exists.
- Return type:
  None
- add_components(components)[source]¶
Add multiple components to the context manager.
- Parameters:
  components (list[Input | Output | Connection]) – A list of components to add.
- Raises:
  AssertionError – If any component has a name that already exists.
- Return type:
  None
- add_group(group)[source]¶
Add a group of components to the context manager.
Recursively adds all items in the group, including nested groups.
- Parameters:
  group (Group) – The group to add, containing inputs, outputs, or nested groups.
- Raises:
  AssertionError – If a group or component with the same name already exists.
- Return type:
  None
- get_connection_components()[source]¶
Get all connection components.
- Return type:
  list[Connection]
- Returns:
A list of all Connection components.
- read_inputs()[source]¶
Read and update all input components from the environment.
Calls the read() method on each input component to refresh their data.
- Return type:
  None
- write_connections()[source]¶
Write all connection components to transfer data.
Calls the write() method on each connection component to transfer data from getters to setters.
- Return type:
  None
- assert_unique_name(name)[source]¶
Assert that the given name is unique across all components and groups.
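The registry behavior described above, name-keyed components, a uniqueness assertion, and a bulk read pass, can be sketched with a plain dict. This is a toy model, not exploy's implementation; FakeInput and the internal attribute names are invented for the example.

```python
class ContextManager:
    """Toy sketch of the name-keyed component registry described above."""

    def __init__(self):
        self._components = {}

    def assert_unique_name(self, name):
        assert name not in self._components, f"duplicate component name: {name}"

    def add_component(self, component):
        self.assert_unique_name(component.name)
        self._components[component.name] = component

    def add_components(self, components):
        for component in components:
            self.add_component(component)

    def read_inputs(self):
        # Refresh every registered component from the environment.
        for component in self._components.values():
            component.read()


class FakeInput:
    """Stand-in input with a callback, mirroring the Input component above."""

    def __init__(self, name, get_from_env_cb):
        self.name = name
        self._cb = get_from_env_cb
        self.value = None

    def read(self):
        self.value = self._cb()


cm = ContextManager()
cm.add_components([FakeInput("a", lambda: 1), FakeInput("b", lambda: 2)])
cm.read_inputs()

# Duplicate names are rejected with an AssertionError.
try:
    cm.add_component(FakeInput("a", lambda: 3))
    duplicate_rejected = False
except AssertionError:
    duplicate_rejected = True
```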
Evaluator¶
Validation and testing utilities for exported ONNX models.
This module provides the evaluate function to compare ONNX model outputs
against the original environment and PyTorch policy for correctness verification.
- exploy.exporter.core.evaluator.evaluate(env, context_manager, session_wrapper, num_steps, verbose=True, reset_from_onnx_counter_steps=50, atol=1e-05, rtol=1e-05, pause_on_failure=True)[source]¶
Evaluate an ONNX exported model against an ExportableEnvironment stepped through a SessionWrapper.
This function runs the simulation for a specified number of steps and compares the outputs of the ONNX model with the environment’s state and actor’s actions at each step. This is useful for verifying the correctness of the ONNX export.
- Parameters:
  env (ExportableEnvironment) – The environment to run the evaluation in.
  context_manager (ContextManager) – The context manager handling inputs and outputs.
  session_wrapper (SessionWrapper) – An ONNX session wrapper.
  num_steps (int) – The number of steps to run the evaluation for.
  verbose (bool) – Whether to print verbose output during evaluation. Defaults to True.
  reset_from_onnx_counter_steps (int) – After how many steps memory inputs are set from the ONNX outputs instead of from the environment's state. This avoids the numerical error accumulation that would occur if the ONNX inference outputs were only ever fed back as memory inputs while all other inputs are set directly from the environment's state. The default value is chosen arbitrarily.
  atol (float) – Absolute tolerance used to compare tensors.
  rtol (float) – Relative tolerance used to compare tensors.
- Return type:
  tuple[bool, Tensor]
- Returns:
  A tuple containing a boolean indicating whether the evaluation was successful and the final observations tensor.
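The atol/rtol pair follows the standard elementwise tolerance test used by torch.allclose and numpy.allclose: two values match when |a - b| <= atol + rtol * |b|. A plain-Python sketch of that comparison (the allclose helper here is a stand-in, not exploy's evaluator code):

```python
def allclose(a, b, atol=1e-05, rtol=1e-05):
    """Elementwise tolerance check mirroring torch.allclose semantics."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(a, b))


# Tiny discrepancies (e.g., ONNX vs. PyTorch float noise) pass the check...
onnx_out = [0.100001, 2.000010]
env_out = [0.100000, 2.000000]
matches = allclose(onnx_out, env_out)

# ...while a genuine divergence fails it.
mismatch = allclose([0.1, 2.1], [0.1, 2.0])
```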
Exporter Module¶
Core ONNX export functionality for RL policies.
This module provides the main entry point for exporting trained policies to ONNX format,
including the export_environment_as_onnx function and the OnnxEnvironmentExporter class.
- exploy.exporter.core.exporter.export_environment_as_onnx(env, actor, path, filename='policy.onnx', model_source=None, verbose=False, opset_version=20, ir_version=11)[source]¶
Export policy into a Torch ONNX file.
- Parameters:
  env (ExportableEnvironment) – The environment to be exported.
  actor (Module) – The actor torch module.
  path (str) – The path to the saving directory.
  filename (str) – The name of the exported ONNX file. Defaults to “policy.onnx”.
  model_source (dict | None) – Information about the policy’s origin (e.g., wandb, local file), added to the ONNX metadata.
  verbose (bool) – Whether to print the model summary. Defaults to False.
  opset_version (int) – Version of the ONNX operator set referenced by the graph. Must be compatible with the ONNX Runtime version in the deployment environment; see https://onnxruntime.ai/docs/reference/compatibility.html.
  ir_version (int) – Version of the ONNX intermediate representation. Must be compatible with the ONNX Runtime version in the deployment environment; see https://onnxruntime.ai/docs/reference/compatibility.html.
- class exploy.exporter.core.exporter.ExportMode(value)[source]¶
  Bases: Enum
- Default = 0¶
- ProcessActions = 1¶
- class exploy.exporter.core.exporter.OnnxEnvironmentExporter(env, actor, opset_version, ir_version, verbose=False)[source]¶
  Bases: Module
  Exporter of an actor-critic into an ONNX file using the environment’s managers.
- __init__(env, actor, opset_version, ir_version, verbose=False)[source]¶
Initialize internal Module state, shared by both nn.Module and ScriptModule.
- forward(input_data)[source]¶
Use the robot’s state to compute policy actions, joint position targets, policy observations, and outputs that support history.
- Parameters:
  input_data (dict[str, Tensor]) – A dictionary containing all input tensors required for the forward pass. The expected keys and shapes of the tensors depend on the environment’s context manager and the policy’s computational graph.
Notes
Dictionary inputs are flattened by the torch ONNX exporter implementation.
Only inputs that are part of the computational graph will be required when using the resulting ONNX file for inference. For example, if pos_base_in_w is not used by any of the observation functions, it will not be a required input. This can be verified by querying the ONNX input names when using the ONNX runtime framework.
- Returns:
Environment outputs. A tuple of desired joint positions (i.e., processed actions), actions (i.e., unprocessed actions), memory (containing the previous actions for example).
- Return type:
tuple[torch.Tensor, …]
- register_modules()[source]¶
Register all modules from the environment’s context manager.
This method iterates over all modules in the environment’s context manager and registers them using sequential names. This ensures that all relevant modules are included in the ONNX export, allowing the exported model to function correctly when loaded in an ONNX runtime environment.
Calling this method multiple times will not re-register already registered modules. The modules’ names are based on insertion order.
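The two properties stated above, sequential insertion-order names and idempotence across repeated calls, can be sketched with a plain dict. This toy registry is illustrative only; the class and attribute names are invented and do not mirror exploy's internals.

```python
class ModuleRegistry:
    """Toy sketch of idempotent, insertion-ordered module registration."""

    def __init__(self):
        self._registered = {}   # name -> module (dicts preserve insertion order)
        self._seen = set()      # identities of modules already registered

    def register_modules(self, modules):
        for module in modules:
            if id(module) in self._seen:
                continue  # calling again must not re-register
            # Sequential names based on insertion order.
            name = f"module_{len(self._registered)}"
            self._registered[name] = module
            self._seen.add(id(module))
        return list(self._registered)


obs_net, act_net = object(), object()
registry = ModuleRegistry()
names_first = registry.register_modules([obs_net, act_net])
names_second = registry.register_modules([obs_net, act_net])  # no-op
```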
Session Wrapper¶
ONNX Runtime inference session management.
The SessionWrapper class provides a convenient interface for loading and running
ONNX models with ONNX Runtime, managing input/output handling and session configuration.
- class exploy.exporter.core.session_wrapper.SessionWrapper(onnx_folder, onnx_file_name, actor=None, optimize=True)[source]¶
  Bases: object
  Manage a torch Module and its associated ONNX inference session.
- __init__(onnx_folder, onnx_file_name, actor=None, optimize=True)[source]¶
Construct a SessionWrapper to use it for policy inference.
- Parameters:
  onnx_folder (Path) – The folder containing the ONNX file to load.
  onnx_file_name (str) – The name of the ONNX file contained in onnx_folder.
  actor (ExportableActor | None) – An ExportableActor representing the actor.
  optimize (bool) – If true, optimize the ONNX graph, save it to file, and use it for inference.
- __call__(**kwargs)[source]¶
Run ONNX inference with the given inputs.
- Parameters:
**kwargs – Keyword arguments where keys are input names and values are input data.
- Returns:
List of output arrays from the ONNX model inference.
- get_actor()[source]¶
Get the original ExportableActor object used by this session wrapper.
- Return type:
  ExportableActor | None
- Returns:
The ExportableActor representing the actor, or None if not provided.
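The keyword-argument call style maps input names onto the session's feed dictionary before running inference. A toy stand-in (the stub "session" below is invented; a real SessionWrapper would delegate to an onnxruntime InferenceSession) shows the pattern, including the point from the exporter's notes that inputs absent from the graph are simply ignored:

```python
class SessionWrapper:
    """Toy sketch of the keyword-argument inference interface."""

    def __init__(self, run_fn, input_names):
        self._run_fn = run_fn
        self._input_names = input_names

    def __call__(self, **kwargs):
        # Only feed the inputs the graph actually requires.
        feed = {name: kwargs[name] for name in self._input_names}
        return self._run_fn(feed)


# Stub "model": actions = obs scaled by a gain input.
def fake_run(feed):
    return [[x * feed["gain"][0] for x in feed["obs"]]]


session = SessionWrapper(fake_run, input_names=["obs", "gain"])
# "unused" is not a graph input, so it is silently dropped.
outputs = session(obs=[1.0, 2.0], gain=[3.0], unused=[9.9])
```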
Tensor Proxy¶
Tensor list abstraction for improved ONNX export.
The TensorProxy class manages lists of tensors and exposes them as a single stacked tensor,
improving the structure of exported computational graphs.
- class exploy.exporter.core.tensor_proxy.TensorProxy(tensor, split_dim)[source]¶
  Bases: object
  Manage a list of tensors and expose them to the user as a stacked tensor.
This class takes a tensor, splits it along a specified dimension, and exposes it as if it were the original tensor. For example, this class allows the user to implement the following:
  # Make a tensor of all body positions.
  body_pos = torch.rand((batch_dim, num_bodies, 3))
  # Wrap the tensor in a TensorProxy object, splitting along dimension 1 (num_bodies).
  body_pos_proxy = TensorProxy(body_pos, split_dim=1)
Now, the user can index into body_pos_proxy as if it was the original body_pos. Indexing will index into one of the split tensors created from unbinding along the split dimension.
One use case for this implementation is to split the body_state_w tensor from ArticulationData into separate body tensors to improve exporting.
- __getitem__(idx)[source]¶
Index into a TensorProxy as if the user was indexing into the un-split list of tensors.
- __setitem__(idx, value)[source]¶
Set into a TensorProxy as if the user was indexing into the un-split list of tensors.
- classmethod __torch_function__(func, types, args=(), kwargs=None)[source]¶
Allow using this class with the torch API.
Torch provides a mechanism to treat any object as a torch.Tensor. This is enabled by implementing the __torch_function__ method for any python class.
- For more details on how to implement __torch_function__, see:
https://docs.pytorch.org/docs/stable/notes/extending.html#extending-torch-python-api
- For a discussion on a concrete implementation of __torch_function__, see:
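The split-and-index semantics described above can be sketched without torch by treating a nested list as a (batch, num_bodies) "tensor". This ListProxy is a pure-Python illustration of the idea, not TensorProxy's implementation, and it supports only split_dim=1:

```python
class ListProxy:
    """Pure-Python sketch of TensorProxy semantics: split a nested-list
    'tensor' along dimension 1 and index into the resulting slices."""

    def __init__(self, tensor, split_dim=1):
        assert split_dim == 1, "sketch only supports split_dim=1"
        # Unbind along dim 1: one slice per 'body', each spanning the batch.
        num_splits = len(tensor[0])
        self._splits = [[row[i] for row in tensor] for i in range(num_splits)]

    def __getitem__(self, idx):
        return self._splits[idx]

    def __setitem__(self, idx, value):
        self._splits[idx] = value


# A (batch=2, num_bodies=3)-shaped nested list.
body_pos = [[10, 11, 12],
            [20, 21, 22]]
proxy = ListProxy(body_pos, split_dim=1)

body_1 = proxy[1]        # the slice for body index 1 across the batch
proxy[2] = [99, 99]      # overwrite the slice for body index 2
```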