{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tutorial: Informed Search Algorithms\n",
    "\n",
    "This tutorial shows how we can combine motion primitives of vehicles, i.e., short trajectory segments, with informed search algorithms to find feasible trajectories that connect the **initial state** and the **goal region** of a given planning problem."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 0. Preparation\n",
    "Before you proceed with this tutorial, make sure that\n",
    "\n",
    "* you have gone through the tutorials on **CommonRoad Input-Output** and **Uninformed Search Algorithms**.\n",
    "* you have installed all necessary modules for CommonRoad Search according to the installation manual.\n",
    "\n",
    "Let's start with importing relevant modules and classes for setting up the automaton and the CommonRoad (CR) scenario."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "%matplotlib inline\n",
    "%load_ext autoreload\n",
    "%autoreload 2\n",
    "\n",
    "import os\n",
    "import sys\n",
    "path_notebook = os.getcwd()\n",
    "\n",
    "# add the root folder to python path\n",
    "sys.path.append(os.path.join(path_notebook, \"../../\"))\n",
    "\n",
    "# add the 1_search_algorithms folder to python path\n",
    "sys.path.append(os.path.join(path_notebook, \"../\"))\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "from commonroad.common.file_reader import CommonRoadFileReader\n",
    "from commonroad.visualization.draw_dispatch_cr import draw_object\n",
    "\n",
    "from SMP.motion_planner.motion_planner import MotionPlanner\n",
    "from SMP.maneuver_automaton.maneuver_automaton import ManeuverAutomaton\n",
    "from SMP.motion_planner.utility import plot_primitives, display_steps"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. Loading CR Scenario and Planning Problem\n",
    "In the next step, we load a CommonRoad scenario and its planning problem, which should be solved with the search algorithms. The meaning of the symbols in a scenario are explained as follows:\n",
    "* **Dot**: initial state projected onto the position domain\n",
    "* **Red rectangle**: static obstacle\n",
    "* **Yellow rectangle**: goal region projected onto the position domain"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load scenario\n",
    "path_scenario = os.path.join(path_notebook, \"../../scenarios/tutorial/\")\n",
    "id_scenario = 'ZAM_Tutorial_Urban-3_2'\n",
    "\n",
    "# read in scenario and planning problem set\n",
    "scenario, planning_problem_set = CommonRoadFileReader(path_scenario + id_scenario + '.xml').open()\n",
    "# retrieve the first planning problem in the problem set\n",
    "planning_problem = list(planning_problem_set.planning_problem_dict.values())[0]\n",
    "    \n",
    "# plot the scenario and the planning problem set\n",
    "plt.figure(figsize=(15, 5))\n",
    "draw_object(scenario)\n",
    "draw_object(planning_problem_set)\n",
    "plt.gca().set_aspect('equal')\n",
    "plt.margins(0, 0)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Generating a Maneuver Automaton\n",
    "\n",
    "In the following, we load the pre-generated motion primitives from an XML-File, and generate a **Maneuver Automaton** out of them.\n",
    "The maneuver automaton used for this tutorial consists of 7 motion primitives; the connectivity within the motion primitives are also computed and stored. Some additional explanations on the motion primitives:\n",
    "* The motion primitives are generated for the *Kinematic Single Track Model* (see [Vehicle Model Documentation](https://gitlab.lrz.de/tum-cps/commonroad-vehicle-models/-/blob/master/vehicleModels_commonRoad.pdf)) with parameters taken from vehicle model *BMW320i* (id_type_vehicle=2). \n",
    "* The motion primitives have a constant driving velocity of 9 m/s, varying steering angles with constant steering angle velocity, and a duration of 0.5 seconds. We assume constant input during this period.\n",
    "* The motion primitives are generated for all combinations of the steering angles of values 0 rad and 0.2 rad at the initial state and the final state, thereby producing 2 (initial states) x 2 (final states) = 4 primitives. Except for the primive moving straight (with 0 rad steering angle at the initial and the final states), the remaining 3 left-turning primitives are mirrored with regard to the x-axis, resulting in a total number of 7 motion primitives.\n",
    "* Two motion primitives are considered connectable if the velocity and the steering angle of the final state of the preceding primitive are equal to those of the initial state of the following primitive."
   ]
  },
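  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell is a minimal, self-contained sketch of the connectability rule described above. The **PrimitiveState** class and the **is_connectable** function are hypothetical illustrations and not part of the CommonRoad Search API; the actual matching is performed internally by **ManeuverAutomaton** when the connectivity is computed."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative sketch of the connectability check (hypothetical classes and functions)\n",
    "from dataclasses import dataclass\n",
    "\n",
    "@dataclass\n",
    "class PrimitiveState:\n",
    "    velocity: float        # driving velocity [m/s]\n",
    "    steering_angle: float  # steering angle [rad]\n",
    "\n",
    "def is_connectable(state_final_predecessor: PrimitiveState,\n",
    "                   state_initial_successor: PrimitiveState,\n",
    "                   tolerance: float = 1e-6) -> bool:\n",
    "    # two primitives are connectable if velocity and steering angle match\n",
    "    return (abs(state_final_predecessor.velocity - state_initial_successor.velocity) < tolerance\n",
    "            and abs(state_final_predecessor.steering_angle - state_initial_successor.steering_angle) < tolerance)\n",
    "\n",
    "# example: straight primitive (final steering angle 0 rad) followed by another straight primitive\n",
    "print(is_connectable(PrimitiveState(9.0, 0.0), PrimitiveState(9.0, 0.0)))  # True\n",
    "# example: straight primitive followed by a primitive starting with 0.2 rad steering angle\n",
    "print(is_connectable(PrimitiveState(9.0, 0.0), PrimitiveState(9.0, 0.2)))  # False"
   ]
  },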
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# load the xml with stores the motion primitives\n",
    "name_file_motion_primitives = 'V_9.0_9.0_Vstep_0_SA_-0.2_0.2_SAstep_0.4_T_0.5_Model_BMW320i.xml'\n",
    "\n",
    "# generate automaton\n",
    "automaton = ManeuverAutomaton.generate_automaton(name_file_motion_primitives)\n",
    "\n",
    "# plot motion primitives\n",
    "plt.figure(figsize=(8,8))\n",
    "\n",
    "plot_primitives(automaton.list_primitives)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3. Search algorithms\n",
    "Next, we demonstrate the search results for the following algorithms:\n",
    "1. Greedy-Best-First-Search (GBFS)\n",
    "2. A* Search (A*)\n",
    "\n",
    "For each of the algorithms, we create a corresponding motion planner implemented in the **MotionPlanner** class, with the scenario, the planning problem, and the generated automaton as the input. The source codes are located at **SMP/motion_planner/search_algorithms/**\n",
    "\n",
    "After executing the code block for every algorithm, you will see a **\"visualize\"** button directly beneath the **\"iteration\"** slider. Click the **\"visualize\"** button and let the search algorithm run through; once it's completed, you can use the slider to examine all intermediate search steps. Meaning of colors and lines are explained as follows:\n",
    "* **Yellow solid:** frontier \n",
    "* **Yellow dashed:** collision\n",
    "* **Red solid:** currently exploring\n",
    "* **Gray solid:** explored\n",
    "* **Green solid:** final solution"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.1 Greedy-Best-First Search (GBFS)\n",
    "As mentioned in the tutorial on uninformed search algorithms, GBFS is based on the Best-First Search algorithm, and uses the heuristic cost h(n) as the evaluation function f(n), i.e. GBFS always opts for the node that has the least h(n).\n",
    "\n",
    "As an exemplary heuristic, we simply estimate the time to reach the goal from the current state. This is estimated through obtaining the Euclidean distance between the state and the goal region, divided by the velocity."
   ]
  },
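  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell sketches this time-to-goal heuristic, assuming the goal region is represented by its center position; the function name and arguments are illustrative and differ from the actual implementation in **SMP/motion_planner/search_algorithms/**."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative sketch of the GBFS heuristic: h(n) = Euclidean distance to goal / velocity\n",
    "import numpy as np\n",
    "\n",
    "def heuristic_time_to_goal(position, velocity, position_goal_center):\n",
    "    # Euclidean distance between the current position and the center of the goal region\n",
    "    distance = np.linalg.norm(np.array(position_goal_center) - np.array(position))\n",
    "    # a vehicle at standstill cannot make progress under this estimate\n",
    "    if np.isclose(velocity, 0.0):\n",
    "        return np.inf\n",
    "    return distance / velocity\n",
    "\n",
    "# example: 45 m away from the goal center while driving at 9 m/s -> 5.0 s estimated time-to-goal\n",
    "print(heuristic_time_to_goal(position=[0.0, 0.0], velocity=9.0, position_goal_center=[45.0, 0.0]))"
   ]
  },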
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# construct motion planner\n",
    "planner_GBFS = MotionPlanner.GreedyBestFirstSearch(scenario=scenario, \n",
    "                                                   planning_problem=planning_problem,\n",
    "                                                   automaton=automaton)\n",
    "# prepare input for visualization\n",
    "scenario_data = (scenario, planner_GBFS.state_initial, planner_GBFS.shape_ego, planning_problem)\n",
    "\n",
    "# display search steps\n",
    "display_steps(scenario_data=scenario_data, algorithm=planner_GBFS.execute_search, \n",
    "              config=planner_GBFS.config_plot)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.2 A* Search (A*)\n",
    "The evaluation function of A* is expressed by f(n)= g(n) + h(n), where g(n) is the accumulated path cost, and h(n) the heuristic cost to the goal region. Compared to GBFS, A* not only consideres the node with least h(n), but also takes the historical cost into consideration."
   ]
  },
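  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The following cell contrasts the node selection of GBFS and A* on a small, made-up frontier; the numbers are purely illustrative and do not stem from the tutorial scenario."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative sketch: GBFS ranks frontier nodes by h(n), A* by f(n) = g(n) + h(n)\n",
    "import heapq\n",
    "\n",
    "# hypothetical frontier: (label, accumulated path cost g(n), heuristic estimate h(n))\n",
    "frontier = [('node_a', 2.0, 4.0), ('node_b', 6.0, 1.0)]\n",
    "\n",
    "# GBFS expands the node with the lowest h(n)\n",
    "queue_gbfs = [(h, label) for label, g, h in frontier]\n",
    "heapq.heapify(queue_gbfs)\n",
    "print('GBFS expands:', heapq.heappop(queue_gbfs)[1])  # node_b (h = 1.0)\n",
    "\n",
    "# A* expands the node with the lowest f(n) = g(n) + h(n)\n",
    "queue_astar = [(g + h, label) for label, g, h in frontier]\n",
    "heapq.heapify(queue_astar)\n",
    "print('A* expands:', heapq.heappop(queue_astar)[1])  # node_a (f = 6.0 < 7.0)"
   ]
  },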
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# construct motion planner\n",
    "planner_ASTAR = MotionPlanner.AStarSearch(scenario=scenario, \n",
    "                                          planning_problem=planning_problem,\n",
    "                                          automaton=automaton)\n",
    "# prepare input for visualization\n",
    "scenario_data = (scenario, planner_ASTAR.state_initial, planner_ASTAR.shape_ego, planning_problem)\n",
    "\n",
    "# display search steps\n",
    "display_steps(scenario_data=scenario_data, algorithm=planner_ASTAR.execute_search,\n",
    "              config=planner_ASTAR.config_plot)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the given planning problem, A* took more steps to find the same solutio as GBFS. \n",
    "\n",
    "**Question:** Is this always the case? You can verify your thoughts with re-running the search algorithms on the goal-shifted scenario (refer to the tutorial on uninformed search algorithms).\n",
    "\n",
    "**Question:** Can you further improve the heuristic function to improve the search efficiency?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Congratulations!\n",
    "\n",
    "You have finished the tutorial on informed search algorithms! Let's move on to the next tutorial on CommonRoad Search."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}