planner.core.graph#

Module Contents#

Generates related goal states based on the provided parameters.

Returns:

Related goal states as a tensor.

Parameters:
  • params (jacta.planner.core.parameter_container.ParameterContainer) – The container holding the sub goal bound parameters.

  • goal_states (torch.FloatTensor) – The goal states to generate related goal states from.

  • start_states (torch.FloatTensor) – The start states to generate related goal states from.

  • size (int) – Number of goal states to generate. Defaults to 1.

Return type:

torch.FloatTensor

Note

This function assumes a diagonal covariance matrix; it relies on the entries being independent and identically distributed (i.i.d.).
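
As an illustration of the i.i.d. assumption above, the sketch below draws diagonal-covariance Gaussian perturbations around given goal states. The function name, the per-dimension standard deviation std, and the tensor shapes are assumptions for illustration, not the module's actual implementation.

    import torch

    def sample_related_goal_states_sketch(
        goal_states: torch.FloatTensor,  # (num_goals, state_dim)
        std: torch.FloatTensor,          # (state_dim,) per-dimension std, i.e. a diagonal covariance
        size: int = 1,
    ) -> torch.FloatTensor:
        # With a diagonal covariance, each entry can be perturbed independently:
        # i.i.d. standard-normal noise scaled per dimension is equivalent to a
        # multivariate normal draw with that diagonal covariance.
        idx = torch.randint(goal_states.shape[0], (size,))
        noise = torch.randn(size, goal_states.shape[1]) * std
        return goal_states[idx] + noise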

planner.core.graph.sample_feasible_states(plant: jacta.planner.dynamics.simulator_plant.SimulatorPlant, bound_lower: torch.FloatTensor, bound_upper: torch.FloatTensor, size: int = 1) torch.FloatTensor#
Parameters:
  • plant (jacta.planner.dynamics.simulator_plant.SimulatorPlant) –

  • bound_lower (torch.FloatTensor) –

  • bound_upper (torch.FloatTensor) –

  • size (int) –

Return type:

torch.FloatTensor

planner.core.graph.sample_random_states(bound_lower: torch.FloatTensor, bound_upper: torch.FloatTensor, size: int = 1) torch.FloatTensor#
Parameters:
  • bound_lower (torch.FloatTensor) –

  • bound_upper (torch.FloatTensor) –

  • size (int) –

Return type:

torch.FloatTensor
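
For reference, a bounded uniform sampler with this interface can be sketched as follows. This is an illustrative stand-in for sample_random_states, not its documented implementation, and assumes one-dimensional bound tensors of length state_dim; the preceding sample_feasible_states additionally takes the plant, presumably to enforce feasibility, which is omitted here.

    import torch

    def sample_random_states_sketch(
        bound_lower: torch.FloatTensor,  # (state_dim,)
        bound_upper: torch.FloatTensor,  # (state_dim,)
        size: int = 1,
    ) -> torch.FloatTensor:
        # Uniformly sample `size` states inside the axis-aligned box
        # [bound_lower, bound_upper]; each row is one state.
        u = torch.rand(size, bound_lower.shape[-1])
        return bound_lower + u * (bound_upper - bound_lower)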

planner.core.graph.sample_random_start_states(plant: jacta.planner.dynamics.simulator_plant.SimulatorPlant, params: jacta.planner.core.parameter_container.ParameterContainer, size: int = 1) torch.FloatTensor#
Parameters:
  • plant (jacta.planner.dynamics.simulator_plant.SimulatorPlant) –

  • params (jacta.planner.core.parameter_container.ParameterContainer) –

  • size (int) –

Return type:

torch.FloatTensor

planner.core.graph.sample_random_goal_states(plant: jacta.planner.dynamics.simulator_plant.SimulatorPlant, params: jacta.planner.core.parameter_container.ParameterContainer, size: int = 1) torch.FloatTensor#
Parameters:
  • plant (jacta.planner.dynamics.simulator_plant.SimulatorPlant) –

  • params (jacta.planner.core.parameter_container.ParameterContainer) –

  • size (int) –

Return type:

torch.FloatTensor

planner.core.graph.sample_random_sub_goal_states(plant: jacta.planner.dynamics.simulator_plant.SimulatorPlant, params: jacta.planner.core.parameter_container.ParameterContainer, size: int = 1) torch.FloatTensor#
Parameters:
  • plant (jacta.planner.dynamics.simulator_plant.SimulatorPlant) –

  • params (jacta.planner.core.parameter_container.ParameterContainer) –

  • size (int) –

Return type:

torch.FloatTensor
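
A hedged usage sketch of the sampler family follows. It assumes an already constructed SimulatorPlant and ParameterContainer (their construction is outside this module) and that the functions are importable under the module path documented here; the batch size and shape comments are assumptions.

    from planner.core.graph import (
        sample_random_start_states,
        sample_random_goal_states,
        sample_random_sub_goal_states,
    )

    # `plant` and `params` are assumed to exist already.
    start_states = sample_random_start_states(plant, params, size=8)      # presumably (8, state_dim)
    goal_states = sample_random_goal_states(plant, params, size=8)
    sub_goal_states = sample_random_sub_goal_states(plant, params, size=8)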

class planner.core.graph.Graph(plant: jacta.planner.dynamics.simulator_plant.SimulatorPlant, params: jacta.planner.core.parameter_container.ParameterContainer)#
Parameters:
  • plant (jacta.planner.dynamics.simulator_plant.SimulatorPlant) –

  • params (jacta.planner.core.parameter_container.ParameterContainer) –

property node_id_to_search_index_map: torch.IntTensor#
Return type:

torch.IntTensor

reset() None#

Fully resets the graph data for a new search.

Return type:

None

set_start_states(start_states: torch.FloatTensor) None#
Parameters:

start_states (torch.FloatTensor) –

Return type:

None

set_goal_state(goal_state: torch.FloatTensor) None#
Parameters:

goal_state (torch.FloatTensor) –

Return type:

None

calculate_distance_rewards(ids: torch.IntTensor) torch.FloatTensor#
Parameters:

ids (torch.IntTensor) –

Return type:

torch.FloatTensor

calculate_proximity_rewards(ids: torch.IntTensor) torch.FloatTensor#
Parameters:

ids (torch.IntTensor) –

Return type:

torch.FloatTensor

calculate_reachability_rewards(ids: torch.IntTensor, delta_states: torch.FloatTensor, minimum_distance: float = 0.001) torch.FloatTensor#
Parameters:
  • ids (torch.IntTensor) –

  • delta_states (torch.FloatTensor) –

  • minimum_distance (float) –

Return type:

torch.FloatTensor

add_total_rewards(ids: torch.IntTensor) torch.FloatTensor#
Parameters:

ids (torch.IntTensor) –

Return type:

torch.FloatTensor

reachability_cache(ids: torch.IntTensor) Tuple[torch.FloatTensor, torch.FloatTensor]#
Parameters:

ids (torch.IntTensor) –

Return type:

Tuple[torch.FloatTensor, torch.FloatTensor]

add_nodes(root_ids: torch.IntTensor, parent_ids: torch.IntTensor, states: torch.FloatTensor, start_actions: torch.FloatTensor, end_actions: torch.FloatTensor, relative_actions: torch.FloatTensor, is_main_node: bool = True) Tuple[int, bool]#

Adds new nodes to the graph and updates their rewards.

Each new node is evaluated based on its state and its distance from the goal before being inserted into the graph.

Parameters:
  • root_ids (torch.IntTensor) –

  • parent_ids (torch.IntTensor) – ids of the nodes to which the new nodes will be connected

  • states (torch.FloatTensor) – the states of the new nodes, used to determine their distance to the goal

  • start_actions (torch.FloatTensor) – together with end_actions and relative_actions, the actions used to reach the new nodes

  • end_actions (torch.FloatTensor) –

  • relative_actions (torch.FloatTensor) –

  • is_main_node (bool) –

Returns:

The ids of the new nodes and a flag indicating whether the graph is full.

Return type:

Tuple[int, bool]
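
The sketch below illustrates the batched calling convention only. It assumes an existing Graph instance named `graph`, placeholder state/action dimensions, and zero tensors in place of real simulation results; none of this reflects the planner's actual expansion loop.

    import torch

    # Illustrative only: `graph` is assumed to be an existing Graph.
    state_dim, action_dim = 6, 3                          # hypothetical dimensions
    parent_ids = graph.get_active_main_ids()              # candidate nodes to expand
    root_ids = graph.get_root_ids()                       # assumed to align with parent_ids here
    n = parent_ids.shape[0]
    states = torch.zeros(n, state_dim)                    # e.g. simulated next states
    start_actions = torch.zeros(n, action_dim)
    end_actions = torch.zeros(n, action_dim)
    relative_actions = end_actions - start_actions
    new_ids, graph_full = graph.add_nodes(
        root_ids, parent_ids, states, start_actions, end_actions, relative_actions
    )
    if graph_full:
        pass  # the graph has reached capacity; stop adding nodes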

reset_sub_goal_states() None#

Resets the sub goal states to the goal states.

Return type:

None

change_sub_goal_states(sub_goal_states: torch.FloatTensor) None#
Parameters:

sub_goal_states (torch.FloatTensor) –

Return type:

None

deactivate_nodes(ids: torch.IntTensor) None#
Parameters:

ids (torch.IntTensor) –

Return type:

None

activate_all_nodes() None#

Converts all sub nodes to main nodes and activates all inactive but used nodes.

Return type:

None

sorted_progress_ids(reward_based: bool, search_index: int = 0) torch.IntTensor#
Parameters:
  • reward_based (bool) –

  • search_index (int) –

Return type:

torch.IntTensor

get_best_id(reward_based: bool = True, search_indices: torch.IntTensor | None = None) int#
Parameters:
  • reward_based (bool) –

  • search_indices (Optional[torch.IntTensor]) –

Return type:

int

is_worse_than(ids: int | torch.IntTensor, comparison_ids: int) bool | torch.Tensor#
Parameters:
  • ids (Union[int, torch.IntTensor]) –

  • comparison_ids (int) –

Return type:

Union[bool, torch.Tensor]

is_better_than(ids: int | torch.IntTensor, comparison_ids: int) bool | torch.Tensor#
Parameters:
  • ids (Union[int, torch.IntTensor]) –

  • comparison_ids (int) –

Return type:

Union[bool, torch.Tensor]

number_of_nodes() int#
Return type:

int

get_active_main_ids(search_index: int | None = None) torch.IntTensor#
Parameters:

search_index (Optional[int]) –

Return type:

torch.IntTensor

get_root_ids() torch.IntTensor#
Return type:

torch.IntTensor

shortest_path_to(idx: int, start_id: int | None = None) torch.IntTensor#
Parameters:
  • idx (int) –

  • start_id (Optional[int]) –

Return type:

torch.IntTensor
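
shortest_path_to presumably reconstructs the parent chain from idx back toward a start or root node. A generic parent-pointer walk of that kind, with a hypothetical `parents` tensor, looks like this; it is a sketch of the pattern, not the method's implementation.

    import torch

    def shortest_path_to_sketch(parents: torch.IntTensor, idx: int, start_id: int = 0) -> torch.IntTensor:
        # `parents[i]` is assumed to hold the parent id of node i, with roots
        # pointing to themselves. Walk the chain from `idx` back to `start_id`,
        # then reverse it so the path reads start -> ... -> idx.
        path = [idx]
        while path[-1] != start_id and int(parents[path[-1]]) != path[-1]:
            path.append(int(parents[path[-1]]))
        return torch.tensor(list(reversed(path)), dtype=torch.int32)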

save(filename: str, mask: torch.IntTensor = slice(None)) None#
Parameters:
  • filename (str) –

  • mask (torch.IntTensor) –

Return type:

None

load(filename: str) None#
Parameters:

filename (str) –

Return type:

None
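
A minimal round trip, assuming `graph` already exists; the filename and extension are placeholders, and the on-disk format is not specified here.

    graph.save("graph_checkpoint.pt")   # persist node data (optionally restricted by `mask`)
    graph.load("graph_checkpoint.pt")   # restore it into a compatible Graph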

add_child_ids_to_node() None#
Return type:

None

destroy() None#

Destroys the graph and frees up GPU memory.

Return type:

None