
Lecture 2. Dynamics and Safe Control
Changliu Liu Assistant Professor Robotics Institute Carnegie Mellon University

How to Model a Car?
All models are wrong, but some are useful.

Getting Started:
How to Model a Car for Control?

How Will You Drive in These Situations?
High speed
Parking
Your control action depends on the current state of your vehicle
You are controlling your vehicle based on your prediction of the future

State Space

The internal state variables x are:
• the smallest possible subset of system variables
• that can represent the entire state of the system at any given time
• such that the future evolution of the system only depends on the current state, not the past (Markov).

Markov

What does Markov mean?
“Markov” generally means that given the present state, the future and the past are independent.
Choice of states is important.

State Space

• State space X is the collection of all possible values of the state variables.
• State values refer to the value of the state variables.
• State values can be continuous or discrete:
  • Example: grid world
  • Example: car

What is the State Space of Your Car?



Control Space

• Control space: the set of variables that the agent can choose to affect future states.
• Control signal: u; control space: U.
• The control signals can be continuous or discrete:
  • Example: lottery
  • Example: vehicle steering


Dynamic Equation

A function describing the evolution of states under control inputs.

Continuous time (ordinary differential equation):
• Deterministic model: ẋ = f(x, u)
• Stochastic model: ẋ = f(x, u, w), where w is a random variable

Discrete time (difference equation):
• Deterministic model: x_{k+1} = f(x_k, u_k)
• Stochastic model: x_{k+1} = f(x_k, u_k, w_k), where w_k is a random variable
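Most of the discrete-time models in this lecture (single integrator, Dubins car, unicycles) are Euler discretizations of their continuous-time ODEs: ẋ = f(x, u) becomes x_{k+1} = x_k + Δt f(x_k, u_k). A minimal MATLAB sketch of this idea; the dynamics f, initial state, step size, horizon, and control are placeholder assumptions:

% Euler discretization: x_{k+1} = x_k + dt*f(x_k, u_k)
f = @(x, u) u;            % placeholder continuous-time dynamics (single integrator)
x = [0; 0];               % assumed initial state
dt = 0.1; kmax = 100;     % assumed step size and horizon
for k = 1:kmax
    u = [1; 0];           % assumed constant control
    x = x + dt*f(x, u);   % one Euler step of the ODE
end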

Control Decisions

The control decision can be made based on:
• the current state (state-feedback control u = π(x))
• the measurement of the current state (measurement-feedback control)
• historical states or their measurements (control with memory)


Observation Space

In many cases, states are not directly measurable. They need to be inferred from measurements (y).
Measurement function:
• Deterministic: y = h(x, u)
• Stochastic: y = h(x, u, v)

State Space Dynamic Model

Continuous time:
• Deterministic model: ẋ = f(x, u), y = h(x, u)
• Stochastic model: ẋ = f(x, u, w), y = h(x, u, v)

Discrete time:
• Deterministic model: x_{k+1} = f(x_k, u_k), y_k = h(x_k, u_k)
• Stochastic model: x_{k+1} = f(x_k, u_k, w_k), y_k = h(x_k, u_k, v_k)

State Space Dynamic Model (Discrete Time)

x_{k+1} = f(x_k, u_k, w_k)
y_k = h(x_k, u_k, v_k)

[Diagram: controls u_{k-1}, u_k drive the state transitions x_{k-1} → x_k → x_{k+1}; each state x_k emits a measurement y_k; a measurement-feedback control law maps the measurements back to the controls.]
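As a sketch of how this diagram executes in code, the loop below simulates the stochastic discrete-time model under a measurement-feedback law. Everything here is a placeholder assumption: f, h, the feedback law pi_y, the noise scales, and the initial state; h also drops the direct dependence on u_k for simplicity.

f = @(x, u, w) x + 0.1*u + w;  % placeholder state transition f(xk, uk, wk)
h = @(x, v) x + v;             % placeholder measurement function (no u term)
pi_y = @(y) -0.5*y;            % placeholder measurement-feedback law
x = 1;                         % assumed initial state
for k = 1:100
    y = h(x, 0.01*randn);      % measurement yk with noise vk
    u = pi_y(y);               % control computed from the measurement
    x = f(x, u, 0.01*randn);   % next state with process noise wk
end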

Example: Autonomous Driving

Suppose a vehicle in 2D wants to move from point a to point b within a minimum amount of time. The driver can directly affect the steering angle and the longitudinal acceleration.
• What is the state and the control?
• How to define the vehicle dynamics?

Kinematic Models

• Single integrator
• Double integrator
• Dubins car
• Unicycle – 3 state
• Unicycle – 4 state

All models describing the evolution of a system in time are dynamic models (in the broad sense).
Kinematics vs dynamics (in the narrow sense):
• Kinematics does not consider forces
• Dynamics considers forces

Single Integrator

• State x ∈ R²: position
• Control u ∈ R²: velocity
• Dynamic equation: ẋ = u; in discrete time, x_{k+1} = x_k + Δt u_k

function xnew = single_integrator(x, u, dt)
    xnew = x + u*dt; % position advances at the commanded velocity
end

Single Integrator

Advantage: Simple
Disadvantage: Physically not achievable (inertia and constraints not considered)
* Can you move a vehicle at +∞ speed and then immediately move it at −∞ speed?
* Can you immediately change the heading of a vehicle?


Double Integrator

• State x ∈ R⁴: position and velocity
• Control u ∈ R²: acceleration
• Dynamic equation:
  ẋ = [0₂ I₂; 0₂ 0₂] x + [0₂; I₂] u
  x_{k+1} = [I₂ ΔtI₂; 0₂ I₂] x_k + [(Δt²/2)I₂; ΔtI₂] u_k
  (the off-diagonal ΔtI₂ block is where inertia enters: position changes according to velocity)

function xnew = double_integrator(x, u, dt)
    xnew(1:2,1) = x(1:2) + x(3:4)*dt + u*dt^2/2; % position update
    xnew(3:4,1) = x(3:4) + u*dt;                 % velocity update
end

Double Integrator

Advantages: Relatively simple; inertia considered
Disadvantage: Vehicle heading is not explicitly modeled

Dubins Car

• State x ∈ R³: position and heading, x = [x(1); x(2); x(3)]
• Control u ∈ R: turning rate
• Velocity v ∈ R is constant
• Dynamic equation:
  ẋ = [v cos x(3); v sin x(3); 0] + [0; 0; 1] u
  x_{k+1} = x_k + [v cos(x_k(3)) Δt; v sin(x_k(3)) Δt; 0] + [0; 0; Δt] u_k

function xnew = dubins(x, u, dt)
    v = 1; % velocity is constant
    xnew(1) = x(1) + v*cos(x(3))*dt; % x position
    xnew(2) = x(2) + v*sin(x(3))*dt; % y position
    xnew(3) = x(3) + u*dt;           % theta
end

Dubins Car

Advantage: Heading considered
Disadvantage: Velocity does not change

Unicycle – 3 State

• State x ∈ R³: position and heading
• Control u ∈ R²: longitudinal velocity and turning rate
• Dynamic equation:
  ẋ = [cos x(3), 0; sin x(3), 0; 0, 1] u
  x_{k+1} = x_k + [cos(x_k(3)) Δt, 0; sin(x_k(3)) Δt, 0; 0, Δt] u_k

function xnew = unicycle3(x, u, dt)
    xnew(1) = x(1) + u(1)*cos(x(3))*dt; % x position
    xnew(2) = x(2) + u(1)*sin(x(3))*dt; % y position
    xnew(3) = x(3) + u(2)*dt;           % theta
end

Unicycle – 3 State

Advantages: Heading considered; velocity changeable
Disadvantage: Inertia not fully considered
* Systems usually cannot tolerate big jumps in velocity


Unicycle – 4 State

• State x ∈ R⁴: position, longitudinal velocity, and heading, x = [x(1); x(2); x(3); x(4)]
• Control u ∈ R²: longitudinal acceleration and turning rate
• Dynamic equation:
  ẋ = [x(3) cos x(4); x(3) sin x(4); 0; 0] + [0, 0; 0, 0; 1, 0; 0, 1] u
  x_{k+1} = x_k + [x_k(3) cos(x_k(4)) Δt; x_k(3) sin(x_k(4)) Δt; 0; 0] + [0, 0; 0, 0; Δt, 0; 0, Δt] u_k

function xnew = unicycle4(x, u, dt)
    xnew(1,1) = x(1) + x(3)*cos(x(4))*dt; % x position
    xnew(2,1) = x(2) + x(3)*sin(x(4))*dt; % y position
    xnew(3,1) = x(3) + u(1)*dt;           % velocity
    xnew(4,1) = x(4) + u(2)*dt;           % theta
end

Unicycle – 4 State

Advantages: Inertia considered; heading considered
Disadvantage: More advanced dynamics not considered, e.g., drift

Kinematic Models

Model             | State                                | Control                             | Discrete Time Dynamics
Single Integrator | position (x ∈ R²)                    | velocity (u ∈ R²)                   | x_{k+1} = x_k + Δt u_k
Double Integrator | position, velocity (x ∈ R⁴)          | acceleration (u ∈ R²)               | x_{k+1} = [I₂ ΔtI₂; 0₂ I₂] x_k + [(Δt²/2)I₂; ΔtI₂] u_k
Dubins            | position, heading (x ∈ R³)           | turning rate (u ∈ R)                | x_{k+1} = x_k + [v cos(x_k(3))Δt; v sin(x_k(3))Δt; Δt u_k]
Unicycle3         | position, heading (x ∈ R³)           | velocity, turning rate (u ∈ R²)     | x_{k+1} = x_k + [u_k(1) cos(x_k(3))Δt; u_k(1) sin(x_k(3))Δt; Δt u_k(2)]
Unicycle4         | position, velocity, heading (x ∈ R⁴) | acceleration, turning rate (u ∈ R²) | x_{k+1} = x_k + [x_k(3) cos(x_k(4))Δt; x_k(3) sin(x_k(4))Δt; Δt u_k(1); Δt u_k(2)]
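One convenient property (an observation about the code above, not a claim from the slides): all five models share the same xnew = model(x, u, dt) interface, so a simulation loop can swap models by changing one line. A small usage sketch with assumed initial state, control, and horizon:

model = @unicycle3;       % or @single_integrator, @double_integrator, @dubins, @unicycle4
x = [0; 0; 0]; dt = 0.1;  % assumed initial state and step size
for k = 1:100
    u = [1; 0.2];         % assumed constant longitudinal velocity and turning rate
    x = model(x, u, dt);  % one discrete-time step
end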


Remarks

There are other more advanced models:
• Bicycle model: considers two wheels
• Four-wheel model: more like a passenger vehicle

When do we use these models?
• To control the ego vehicle
• To predict the motion of other vehicles

Remarks

What if the model is not accurate?
• If we are controlling a real hardware system, there will always be model mismatch.
• There are control techniques to compensate for the model mismatch.*
• The purpose of studying these simple models is to derive control strategies that can be applied to more complex models.

Remarks

How to choose among different models?
• We need to pick the model that is most suitable for the task.
• For highway driving, it is more important to consider inertia than constraints. Then we can use the double integrator.
• For low-speed driving (e.g., in a parking lot), it is more important to consider constraints than inertia. Then we can use the unicycle (3 state).

Remarks

One thing in common for all models: they are linear in control.
• Continuous time: ẋ = f(x) + g(x)u
• Discrete time: x_{k+1} = f(x_k) + g(x_k)u_k
These models are called control-affine dynamics.
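For instance, the discrete-time 3-state unicycle can be written explicitly in this control-affine form, with drift f(x_k) = x_k and the control entering linearly through g(x_k). A minimal sketch that reproduces unicycle3 above:

% Unicycle-3 as x_{k+1} = f(x_k) + g(x_k)*u_k
f = @(x) x;                           % drift term: the state carries over
g = @(x, dt) [cos(x(3))*dt, 0;        % g(x) maps u = [v; omega] into
              sin(x(3))*dt, 0;        % position and heading increments
              0,            dt];
step = @(x, u, dt) f(x) + g(x, dt)*u; % same update as unicycle3(x, u, dt)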


Vehicle Control

Fundamental question: how to move the vehicle from one location to another?

Two problems:
• Regulation problem: move to a certain location and stay there
• Tracking problem: track a reference trajectory

General Idea

Nonlinear control with energy function. Idea:
• Goal states are assigned low energy.
• If the system is energy dissipating, then we will eventually converge to the desired states.
Example: Lyapunov function, value function

Nonlinear Control (Continuous)

• Current state x₀ and goal state G
• Dynamics ẋ = f(x) + g(x)u
• Energy function V(x) = (1/2)∥x − G∥²
• V̇ = (x − G)ᵀẋ = (x − G)ᵀf(x) + (x − G)ᵀg(x)u
• We just need to choose a control law u = π(x) such that V̇ < 0 for all x ≠ G.

Nonlinear Control (Discrete)

• Current state x₀ and goal state G
• Dynamics x_{k+1} = f(x_k) + g(x_k)u_k
• Energy function V(x) = (1/2)∥x − G∥²
• V(x_{k+1}) − V(x_k) = (1/2)(∥x_{k+1}∥² − ∥x_k∥²) − Gᵀ(x_{k+1} − x_k)
• We just need to choose a control law u = π(x) such that V(x_{k+1}) − V(x_k) < 0 for all x_k ≠ G.

Remarks

Continuous time vs discrete time:
• Continuous time ODEs are easier to analyze.
• In the real world, most control strategies are implemented through digital systems, which results in discrete time control.

5 min Break

Single Integrator

• Dynamics x_{k+1} = x_k + Δt u_k
• Energy function V(x) = (1/2)∥x − G∥²
• V(x_{k+1}) − V(x_k) = (1/2)(∥x_{k+1} − G∥² − ∥x_k − G∥²) = (1/2)Δt²∥u_k∥² + Δt(x_k − G)ᵀu_k
• Set u_k = k_p(G − x_k) as a proportional control; k_p is a constant.
• Plugging the control law back into the energy function:
  V(x_{k+1}) − V(x_k) = (1/2)Δt²k_p²∥x_k − G∥² − Δt k_p∥x_k − G∥²
• As long as Δt k_p < 2, the energy will be dissipating and the vehicle will end up at its goal.

Single Integrator

* Note we can extend the proportional gain to a matrix.

% Set up controller
k = [3 0; 0 1];
u = @(x) k*(goal-x); % Proportional control (the gain matrix k is captured here,
                     % so reusing k as the loop index below does not affect u)
% Simulation
for k = 1:kmax
    x = single_integrator(x, u(x), dt);
    if norm(x-goal) < 0.1, break; end % stop near the goal (tolerance assumed)
end

[Plot: single integrator trajectory converging to the goal under proportional control.]

If φ(x_k, u_k^r) > 0, then we will set the final control as u_k = u_k^r + Δu_k, where

  Δu_k = c(p_k + Δt v_k − O),   c = φ(x_k, u_k^r) / (k_φ Δt ∥p_k + Δt v_k − O∥)

(p_k: vehicle position; v_k: vehicle velocity; O: obstacle location; φ: the safety index used in the simulation below, positive when the predicted next state would be unsafe.)
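The u_k^r in this correction is the nominal goal-reaching control. For the double integrator it is a PD-style law; a minimal regulation-only sketch, where the gains match the simulation slide below and goal, kmax, dt, the initial state, and the stopping tolerance are assumptions:

% Nominal PD regulation for the double integrator (no obstacle yet)
kp = 1; kd = 2;                              % gains as in the simulation slide
ur = @(x) kp*(goal - x(1:2)) - kd*x(3:4);    % pull toward goal, damp velocity
for k = 1:kmax
    x = double_integrator(x, ur(x), dt);
    if norm(x(1:2) - goal) < 0.1, break; end % assumed stopping tolerance
end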

Double Integrator

min ∥u_k − u_k^r∥  s.t.  φ(x_{k+1}) ≤ 0

u_k = u_k^r + max{ 0, φ(x_k, u_k^r) / (Δt k_φ ∥p_k + Δt v_k − O∥) } (p_k + Δt v_k − O)

Here u_k^r = k_p(G − p_k) + k_v v_k moves toward the goal; the correction term moves away from the obstacle (active only when close to the obstacle).

Simulation

u_k^r = k_p(G − p_k) + k_v v_k
u_k = u_k^r + max{ 0, φ(x_k, u_k^r) / (Δt k_φ ∥p_k + Δt v_k − O∥) } (p_k + Δt v_k − O)

% Set up controller (goal, obstacle, dmin, dt, kmax, and the initial
% state x are assumed to be defined in the setup)
kp = 1;
kd = 2;
ur = @(x) kp*(goal-x(1:2)) - kd*x(3:4); % Nominal control
dr = @(x,ur) x(1:2)+dt*x(3:4)-obstacle; % Predicted position relative to obstacle
ddotr = @(x,ur) (x(1)+dt*x(3)-obstacle(1))*(x(3)+dt*ur(1)) + ...
    (x(2)+dt*x(4)-obstacle(2))*(x(4)+dt*ur(2)); % Predicted approach rate
kphi = 0.5;
varphi = @(x,ur) dmin - norm(dr(x,ur)) - kphi*ddotr(x,ur); % Safety index
c = @(x,ur) max(0,varphi(x,ur))/norm(dr(x,ur))/dt/kphi;    % Correction gain
u = @(x) ur(x) + c(x,ur(x))*dr(x,ur(x));                   % Safe control
% Simulation
for k = 1:kmax
    x = double_integrator(x, u(x), dt);
    if norm(x(1:2)-goal) < 0.1, break; end % stop near the goal (tolerance assumed)
end

[Plot: double integrator with collision avoidance, steering around the obstacle on its way to the goal.]