visualizers.viser_app.tasks.acrobot#

Module Contents#

visualizers.viser_app.tasks.acrobot.XML_PATH#
class visualizers.viser_app.tasks.acrobot.AcrobotConfig#

Bases: jacta.visualizers.viser_app.tasks.task.TaskConfig

Reward configuration for the acrobot task.

default_command: Optional[numpy.ndarray]#

w_vertical: float = 10.0#
w_velocity: float = 0.1#
w_control: float = 0.1#
p_vertical: float = 0.01#
cutoff_time: float = 0.15#
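As an illustration only, a minimal sketch of tuning this config, assuming `AcrobotConfig` behaves like a standard dataclass with the defaults listed above (the specific values chosen below are hypothetical, not recommendations):

```python
from visualizers.viser_app.tasks.acrobot import AcrobotConfig

# Start from the documented defaults, then override individual weights.
config = AcrobotConfig()
config.w_control = 0.5    # hypothetical: penalize actuation more heavily
config.w_velocity = 0.05  # hypothetical: tolerate faster swings
```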
class visualizers.viser_app.tasks.acrobot.Acrobot#

Bases: jacta.visualizers.viser_app.tasks.mujoco_task.MujocoTask[AcrobotConfig]

Defines the acrobot balancing task.

reward(states: numpy.ndarray, sensors: numpy.ndarray, controls: numpy.ndarray, config: AcrobotConfig, additional_info: dict[str, Any]) → numpy.ndarray#

Implements the acrobot reward from MJPC.

Maps batches of states and controls to a batch of rewards (summed over time), one reward per rollout.

The acrobot reward has three terms:

* `vertical_rew`, penalizing the distance between the pole angle and vertical.
* `velocity_rew`, penalizing squared linear and angular velocities.
* `control_rew`, penalizing any actuation.

Since these are rewards rather than costs, each penalty term is negated, so the maximum achievable reward is zero.
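As a rough illustration of a reward with this shape, here is a hedged sketch, not the actual jacta implementation: the array shapes, the state layout (two joint angles followed by two joint velocities), and the omission of `p_vertical` and `cutoff_time` (whose roles are not documented here) are all assumptions.

```python
import numpy as np

def acrobot_reward_sketch(
    states: np.ndarray,    # assumed shape: (num_rollouts, T, 4)
    controls: np.ndarray,  # assumed shape: (num_rollouts, T, nu)
    config,                # AcrobotConfig-like object with w_* weights
) -> np.ndarray:
    # Assumed layout: first two entries are joint angles, rest are velocities.
    angles = states[..., :2]
    velocities = states[..., 2:]

    # Distance of the tip angle from upright, wrapped into [-pi, pi).
    tip_angle = angles.sum(axis=-1)
    vertical_err = np.abs(np.mod(tip_angle + np.pi, 2 * np.pi) - np.pi)
    vertical_rew = -config.w_vertical * vertical_err

    # Squared-velocity penalty.
    velocity_rew = -config.w_velocity * np.square(velocities).sum(axis=-1)

    # Actuation penalty.
    control_rew = -config.w_control * np.square(controls).sum(axis=-1)

    # Sum each per-timestep penalty over time; the best possible reward is 0.
    return (vertical_rew + velocity_rew + control_rew).sum(axis=-1)
```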

reset() → None#

Resets the model to a default (random) state.

Return type:

None
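To make the task lifecycle concrete, a hedged end-to-end sketch: the `Acrobot()` constructor arguments, all array shapes, and the zero-filled placeholder batches (standing in for real MuJoCo rollout data) are assumptions, not documented API.

```python
import numpy as np

from visualizers.viser_app.tasks.acrobot import Acrobot, AcrobotConfig

task = Acrobot()    # constructor arguments, if any, are not documented here
config = AcrobotConfig()

task.reset()        # re-sample the model's default (random) state

# Placeholder rollout batches with assumed (num_rollouts, T, dim) layout.
num_rollouts, horizon = 8, 100
states = np.zeros((num_rollouts, horizon, 4))
sensors = np.zeros((num_rollouts, horizon, 0))
controls = np.zeros((num_rollouts, horizon, 1))

rewards = task.reward(states, sensors, controls, config, additional_info={})
print(rewards.shape)  # expected: (num_rollouts,), one summed reward per rollout
```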