Week 11 – Robotics
COMP30024 – Artificial Intelligence Chapter 26
Dr Wafa Johal University of Melbourne
Robots, Effectors, and Sensors
Localization and Mapping
Motion Planning
AIMA Slides © and ; Dr Wafa Johal 2
Mobile Robots
Manipulators
Configuration of robot specified by 6 numbers ⇒ 6 degrees of freedom (DOF)
6 is the minimum number required to position end-effector arbitrarily. For dynamical systems, add velocity for each DOF.
Non-holonomic robots
A car has more DOF (3) than controls (2), so it is non-holonomic;
it cannot in general transition directly between two infinitesimally close configurations
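The constraint can be sketched with a simple bicycle kinematic model (not from the slides; the wheelbase L and time step dt are assumed values):

```python
import math

def car_step(x, y, theta, v, phi, L=2.5, dt=0.1):
    """Advance a bicycle-model car one time step.
    State: (x, y, theta) -- 3 DOF; controls: speed v, steering angle phi -- only 2."""
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += (v / L) * math.tan(phi) * dt  # heading changes only while moving
    return x, y, theta

# With zero speed the car cannot change its configuration at all, no matter
# how the wheel is turned: reaching a nearby sideways pose needs a manoeuvre
# (e.g. parallel parking) -- hence "non-holonomic".
state = (0.0, 0.0, 0.0)
state = car_step(*state, v=0.0, phi=0.5)  # steering alone does nothing
```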
Sources of uncertainty in interaction
There are two major sources of uncertainty for any interacting mobile agent (human or robotic):
• Everything they perceive (percepts)
• Everything they do (actions)
We do not see the world as it is
Kant (1724-1804)
• Distinction between ’things-in-themselves’, and ’appearances’.
• We have sensory states, but how do we know how (or even if) these relate to the world?
We do not see the world as it is
What do you really see?
• Small foveal region in colour.
• Most of field of view is monochrome, at low resolution, and sensitive to motion.
• 2D image only (binocular vision seldom used in practice).

Occasionally, perceived reality breaks down:
• hallucinations
• optical illusions
Uncertainty in actions
Consider an agent action sequence as discussed in AIMA
• Move forward 1 metre;
• Turn right (90 degrees);
• Move forward 2 metres;
• Turn left;
• Turn left;
• Move forward 2 metres;
• Turn left;
• Move forward 1 metre.
Go into an open space, close your eyes (no sensory feedback – that would be cheating!), and attempt this sequence.
How close do you get?
Sources of uncertainty in motion actions
• slippage;
• inaccurate joint encoding;
• rough surfaces;
• obstacles;
• effector breakdown (injuries);
• …
All these errors accumulate without bound.
Thus, even starting out with perfect knowledge and moving using actions with only very small error, over a sufficiently long action sequence a system's error in its position estimate grows without bound.
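The unbounded growth can be sketched numerically: if each action adds independent noise of standard deviation σ, then after n actions the accumulated standard deviation is σ√n (a sketch assuming independent, zero-mean per-step errors; the value of σ is made up):

```python
# Each action adds independent noise with standard deviation sigma; after n
# actions the accumulated standard deviation is sigma * sqrt(n), which grows
# without bound as n increases.
sigma = 0.01  # assumed per-step error in metres
growth = {n: sigma * n ** 0.5 for n in (1, 100, 10_000, 1_000_000)}
# A million 1 cm-accurate steps still leave ~10 m of positional uncertainty.
```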
Dealing with uncertainty in motion actions
Use sensors to verify actions (not simple).
Sources of uncertainty in robotic perception
• Let’s take a very simple sensor – distance sensor (laser range finder).
• Returns a number corresponding to nearest obstacle in a straight-line path, up to some finite distance.
• Has some known error (non-precise location).
• Has some known false positive/false negative rates (whether an obstacle is there or not).
• Has small spatial extent – what if partially obstructed?
• Finite time means it cannot check every location – what does a reading at one point say about readings at neighbouring points?
Sensors

Range finders: sonar (land, underwater), laser range finder, radar (aircraft), tactile sensors, GPS
Imaging sensors: cameras (visual, infrared)
Proprioceptive sensors: shaft decoders (joints, wheels), inertial sensors,
force sensors, torque sensors
Implications of sensor uncertainty
Must make assumptions about the way the world behaves in order to interpret the readings at all. For example:
• Some finite resolution sampling is sufficient to detect obstacles (consider an obstacle that consists of hundreds of long pins, sparsely distributed, pointing towards the sensor).
• Must know something about the structure of the robot to decide what an obstacle is.
• Given some sensor reading, only have a finite probability that it is correct – must have some way of dealing with this.
Localization—Where Am I?
An action A_t at time t results in state X_{t+1} and observation Z_{t+1}
Compute current location and orientation (pose) given observations:
Localization contd.
Sensor model: use the observation h(x_t) of a landmark at (x_i, y_i) to estimate the state x_t of the robot.
Motion model: update the state using the robot's movements, a translation v_t ∆t and a rotation ω_t ∆t.
Assume Gaussian noise in motion prediction and in sensor range measurements.
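A minimal sketch of the two models in code, assuming a planar robot pose (x, y, θ) and zero-mean Gaussian noise (the function names are illustrative, not from the slides):

```python
import math
import random

def motion_update(x, y, theta, v, omega, dt, sigma=0.0):
    """Motion model: predict the next pose from commanded speed v and
    turn rate omega over dt, with Gaussian noise of std sigma."""
    d = v * dt + random.gauss(0.0, sigma)           # noisy distance travelled
    dtheta = omega * dt + random.gauss(0.0, sigma)  # noisy heading change
    return x + d * math.cos(theta), y + d * math.sin(theta), theta + dtheta

def range_bearing(x, y, theta, xi, yi):
    """Sensor model h(x_t): expected range and bearing to a landmark (xi, yi)."""
    dx, dy = xi - x, yi - y
    return math.hypot(dx, dy), math.atan2(dy, dx) - theta
```

Localization compares the predicted reading from `range_bearing` against the actual sensor measurement to correct the pose predicted by `motion_update`.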
Localization contd.
Can use particle filtering to produce approximate position estimate
Start with random samples from uniform prior distribution for robot position. Update likelihood of each sample using sensor measurements.
Resample according to updated likelihoods.
(Figure: successive particle sets converging on the true robot position.)
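The sample/weight/resample cycle above can be sketched as a minimal 1-D particle filter (a toy corridor-localization example with assumed noise parameters, not the slides' implementation):

```python
import math
import random

random.seed(0)
true_pos = 7.0  # the robot's actual position in a 10 m corridor

# Start with random samples from a uniform prior over the corridor.
particles = [random.uniform(0.0, 10.0) for _ in range(5000)]

def likelihood(p, z, sigma=0.5):
    """Gaussian sensor model: how well particle p explains measurement z."""
    return math.exp(-0.5 * ((p - z) / sigma) ** 2)

for _ in range(3):  # a few sense/weight/resample cycles
    z = true_pos + random.gauss(0.0, 0.5)            # noisy position reading
    weights = [likelihood(p, z) for p in particles]  # update likelihoods
    particles = random.choices(particles, weights=weights, k=len(particles))
    particles = [p + random.gauss(0.0, 0.1) for p in particles]  # motion jitter

estimate = sum(particles) / len(particles)  # converges towards true_pos
```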
Localization contd.
We need to continuously update our distribution for the current state using the latest measurements.
Uncertainty of the robot's state grows as it moves, until we find a landmark.
Assumes that landmarks are identifiable—otherwise, posterior is multimodal
Localization: given map and observed landmarks, update pose distribution
Mapping: given pose and observed landmarks, update map distribution
Simultaneous Localization and Mapping (SLAM):
given observed landmarks, update pose and map distribution
Probabilistic formulation of SLAM: add landmark locations L_1, …, L_k to the state vector and proceed as for localization
Mapping contd.
Consider space with 8 identical landmarks:
When the first landmark is detected again (loop closure):
uncertainty in both the robot's and the landmarks' estimated positions is greatly reduced
3D Mapping example
Mapping a coal mine:
Bayesian Inference on Sensors
Need some way to determine whether an obstacle is there, given multiple measurements from a sensor.
What is Bayesian inference? (revision)
• A method for determining the probability that a hypothesis is true, given a set of measurements.
• Probability ≈ belief.
Elements of Conditional Probability: revision
The probability that A is true, given the condition B:
P(A | B)
• P(I will get hit by a car tomorrow) = 0.0001
• P(I will get hit by a car tomorrow | I ride to Frankston) = 0.01
• P(I will get hit by a car tomorrow | I stay home all day) = 0.0000001
Bayes Law: revision
Bayes Law asks:
• Given measurement M;
• What is the probability of hypothesis H?

Bayes Law:

P(H | M) = P(M | H) P(H) / P(M)
Bayes Law: revision
Bayes Law:
P(H | M) = P(M | H) P(H) / P(M)

• P(H | M): posterior probability of H.
• P(H): prior probability of H.
• P(M | H): sensor model.
• P(M): normalisation factor.
Normalisation:
P(M) = P(M | H)P(H) + P(M | ¬H)P(¬H)
Obstacle detection:
• The odds of there being an obstacle present are 1 in 10.
• The detector has a 5% false positive rate and a 10% false negative rate.
• What is the probability that an obstacle is present if the detector returns positive?
• What is the probability that an obstacle is present if the detector returns negative?

Priors:

P(obstacle) = 0.1
P(¬obstacle) = 0.9
Example continued
Sensor model:

P(positive | obstacle) = 0.9
P(negative | obstacle) = 0.1
P(negative | ¬obstacle) = 0.95
P(positive | ¬obstacle) = 0.05

If the sensor returns positive:

P(obstacle | positive) = (0.9 × 0.1) / (0.9 × 0.1 + 0.05 × 0.9) = 0.667

If the sensor returns negative:

P(obstacle | negative) = (0.1 × 0.1) / (0.1 × 0.1 + 0.95 × 0.9) = 0.0116
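The worked example can be checked with a few lines of code (the helper name `posterior` is illustrative, not from the slides):

```python
def posterior(prior, p_pos_given_obs, p_pos_given_no_obs, positive=True):
    """Bayes Law for the obstacle detector: P(H | M) = P(M | H) P(H) / P(M)."""
    if positive:
        like_h, like_not = p_pos_given_obs, p_pos_given_no_obs
    else:  # a negative reading has the complementary likelihoods
        like_h, like_not = 1 - p_pos_given_obs, 1 - p_pos_given_no_obs
    evidence = like_h * prior + like_not * (1 - prior)  # P(M), normalisation
    return like_h * prior / evidence

p_obstacle_pos = posterior(0.1, 0.9, 0.05, positive=True)   # ≈ 0.667
p_obstacle_neg = posterior(0.1, 0.9, 0.05, positive=False)  # ≈ 0.0116
```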
Incremental form of Bayes Law
Bayes Law can be extended to handle multiple measurements.
• Given a set of independent measurements {M_j};
• What is the probability of the hypothesis H?
If measurements are independent, can use incremental form.
• Given the current probability distribution P(H);
• And a new measurement M
• What is the updated probability distribution P(H)?
Use Bayes Law in incremental form:
P(H) ← [P(M | H) / P(M)] · P(H)
Sometimes called Bayesian update rule.
Obstacle detection (again):
• The odds of there being an obstacle present are 1 in 10.
• The detector has 5% false positive rate and 10% false negative rate.
• What is the probability that an obstacle is present if the detector returns:
• One positive?
• Two positives?
• Two positives and a negative?
Time series:

Time  Measurement  P(obs)  P(¬obs)
0     –            0.10    0.90
1     positive     0.67    0.33
2     positive     0.97    0.03
3     negative     0.79    0.21
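The time series can be reproduced by applying the incremental Bayes update repeatedly (a sketch using the example's detector rates):

```python
def bayes_update(prior, measurement):
    """One incremental Bayes update for the obstacle detector
    (5% false positive rate, 10% false negative rate)."""
    if measurement == "positive":
        like_obs, like_no = 0.9, 0.05
    else:
        like_obs, like_no = 0.1, 0.95
    evidence = like_obs * prior + like_no * (1 - prior)  # P(M)
    return like_obs * prior / evidence

p = 0.1  # prior P(obstacle)
history = [p]
for m in ["positive", "positive", "negative"]:
    p = bayes_update(p, m)
    history.append(p)
# history ≈ [0.10, 0.67, 0.97, 0.79]
```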
Motion Planning
Now we know:
• where we are
• where the obstacles are
• where we want to go
Next challenge – How do we get there?
Motion Planning
Idea: plan in the configuration space defined by the robot's DOFs.
A solution is a point trajectory in free C-space.
Configuration space planning
Basic problem: ∞^d states! Convert to a finite state space.
Cell decomposition:
divide up space into simple cells,
each of which can be traversed “easily” (e.g., convex)
Skeletonization:
identify finite number of easily connected points/lines that form a graph such that any two points are connected by a path on the graph
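Cell decomposition reduces planning to graph search. A minimal sketch on a uniform occupancy grid (the grid itself is an assumed toy example), using breadth-first search over free cells:

```python
from collections import deque

# Uniform-grid cell decomposition: 0 = free cell, 1 = obstacle cell.
grid = [
    [0, 0, 0, 1, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 0, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]

def bfs(grid, start, goal):
    """Breadth-first search over the adjacency graph of free cells."""
    rows, cols = len(grid), len(grid[0])
    frontier, seen = deque([[start]]), {start}
    while frontier:
        path = frontier.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path  # shortest cell-to-cell path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                frontier.append(path + [(nr, nc)])
    return None  # goal unreachable through pure free-space cells

path = bfs(grid, (0, 0), (0, 4))
```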
Cell decomposition example
Problem: may be no path in pure freespace cells
Solution: recursive decomposition of mixed (free+obstacle) cells
Skeletonization: Voronoi diagram
Voronoi diagram: locus of points equidistant from obstacles
Skeletonization: Probabilistic Roadmap
A probabilistic roadmap is built by sampling random points in C-space and keeping those in free space; the graph is created by joining pairs of nearby points with straight lines that themselves lie in free space
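A minimal PRM construction sketch, assuming circular obstacles in a 2-D workspace (the dense-sampling collision check is a crude approximation, and all names and parameters are illustrative):

```python
import math
import random

random.seed(1)

def collision_free(p, q, obstacles):
    """Approximate check of the straight segment p-q against circular
    obstacles (ox, oy, radius) by sampling points along the segment."""
    for t in (i / 20 for i in range(21)):
        x = p[0] + t * (q[0] - p[0])
        y = p[1] + t * (q[1] - p[1])
        if any(math.hypot(x - ox, y - oy) < r for ox, oy, r in obstacles):
            return False
    return True

def build_prm(n, obstacles, connect_radius=3.0):
    """Sample n random free-space points; join nearby pairs whose
    connecting straight line is also collision-free."""
    nodes = []
    while len(nodes) < n:  # rejection-sample points in free space
        p = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        if all(math.hypot(p[0] - ox, p[1] - oy) >= r for ox, oy, r in obstacles):
            nodes.append(p)
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(nodes[i], nodes[j]) <= connect_radius
             and collision_free(nodes[i], nodes[j], obstacles)]
    return nodes, edges

nodes, edges = build_prm(50, obstacles=[(5.0, 5.0, 1.5)])
```

A query then connects the start and goal poses to the roadmap and runs an ordinary graph search over `edges`.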
Summary

• Percepts and actions are both subject to uncertainty.
• We cannot interpret our percepts without having a model of what they mean, and without (partially invalid) assumptions about how they perform.
• Uncertainty in robot perception.
• Incremental form of Bayes Law.
• Motion planning.
Implications for AI
If you can’t rely on your perceptions or your actions, does that mean that Agent methods we have discussed are of no use?
• Many problems don’t involve uncertainty in perception or action, e.g., scheduling, planning, game-playing, text-based machine translation.
• Can incorporate standard agent methods within a system that handles uncertainty, i.e., re-plan if something goes wrong.
• Can apply uncertainty handlers to whole system – e.g., Bayesian inference.
Certainly for autonomous robots and computer vision, interaction with an environment creates many problems that cannot be easily handled with conventional AI techniques.