
Intelligent Agents
Chapter 2

Outline
♦ Agents and environments
♦ Rationality
♦ PEAS (Performance measure, Environment, Actuators, Sensors)
♦ Environment types
♦ Agent types

Agents and environments

[Figure: agent and environment. Sensors deliver percepts from the environment to the agent; the agent acts on the environment through its actuators; the "?" inside the agent is the agent program.]

Agents include humans, robots, softbots, thermostats, etc.

The agent function maps from percept histories to actions:

    f : P∗ → A

The agent program runs on the physical architecture to produce f
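
To make the distinction concrete, here is a minimal sketch in the style of the Lisp examples later in the chapter (the table representation and the function name are assumptions, not from the text): an agent program that implements f directly by remembering the percept history and looking the whole history up in a table.

(defun make-table-driven-agent-program (table)
  ;; TABLE is an association list mapping percept histories to actions.
  ;; The closure accumulates the history seen so far, so the program
  ;; realises a function f : P* -> A from percept sequences to actions.
  (let ((percepts '()))
    #'(lambda (percept)
        (setf percepts (append percepts (list percept)))
        (cdr (assoc percepts table :test #'equal)))))

Such a table is astronomically large for any non-trivial environment, which is why the rest of the chapter looks for small programs that still produce the desired behaviour.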

Vacuum-cleaner world
[Figure: the two-square vacuum world, squares A and B.]

Percepts: location and contents, e.g., [A, Dirty]
Actions: Left, Right, Suck, NoOp

A vacuum-cleaner agent

Percept sequence                  Action
[A, Clean]                        Right
[A, Dirty]                        Suck
[B, Clean]                        Left
[B, Dirty]                        Suck
[A, Clean], [A, Clean]            Right
[A, Clean], [A, Dirty]            Suck
...                               ...

function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

What is the right function?
Can it be implemented in a small agent program?
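
To see why tabulating the function is hopeless, note that the vacuum world has 4 possible percepts ({A, B} × {Clean, Dirty}), so a table covering every percept sequence up to length T needs 4 + 4² + … + 4^T entries. A small sketch of the count (function name assumed, not from the text):

(defun vacuum-table-size (lifetime &optional (num-percepts 4))
  ;; Number of percept sequences of length 1..LIFETIME, i.e. table entries.
  (loop for len from 1 to lifetime
        sum (expt num-percepts len)))

;; e.g. (vacuum-table-size 10) => 1398100, versus the three rules of Reflex-Vacuum-Agent.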

Rationality

Fixed performance measure evaluates the environment sequence
– one point per square cleaned up in time T?
– one point per clean square per time step, minus one per move?
– penalize for > k dirty squares?

A rational agent chooses whichever action maximizes the expected value of the performance measure given the percept sequence to date

Rational ≠ omniscient
– percepts may not supply all relevant information
Rational ≠ clairvoyant
– action outcomes may not be as expected
Hence, rational ≠ successful

Rational ⇒ exploration, learning, autonomy
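
As a concrete reading of the second candidate measure above, here is a minimal sketch (the history representation is an assumption, not from the text): one point per clean square per time step, minus one per move.

(defun performance-measure (history)
  ;; HISTORY is a list of time steps of the form (clean-squares action),
  ;; e.g. (((A) Right) ((A) Suck) ((A B) NoOp)).
  (loop for (clean-squares action) in history
        sum (+ (length clean-squares)
               (if (member action '(Left Right)) -1 0))))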

PEAS

To design a rational agent, we must specify the task environment

Consider, e.g., the task of designing an automated taxi:

Performance measure??
Environment??
Actuators??
Sensors??

PEAS
To design a rational agent, we must specify the task environment
Consider, e.g., the task of designing an automated taxi:
Performance measure?? safety, destination, profits, legality, comfort, . . .
Environment?? US streets/freeways, traffic, pedestrians, weather, . . .
Actuators?? steering, accelerator, brake, horn, speaker/display, . . .
Sensors?? video, accelerometers, gauges, engine sensors, keyboard, GPS, . . .

Internet shopping agent

Performance measure??
Environment??
Actuators??
Sensors??

Internet shopping agent
Performance measure?? price, quality, appropriateness, efficiency
Environment?? current and future WWW sites, vendors, shippers
Actuators?? display to user, follow URL, fill in form
Sensors?? HTML pages (text, graphics, scripts)

Environment types

                   Solitaire   Backgammon   Internet shopping       Taxi
Observable??       Yes         Yes          No                      No
Deterministic??    Yes         No           Partly                  No
Episodic??         No          No           No                      No
Static??           Yes         Semi         Semi                    No
Discrete??         Yes         Yes          Yes                     No
Single-agent??     Yes         No           Yes (except auctions)   No

The environment type largely determines the agent design

The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent

Agent types

Four basic types in order of increasing generality:
– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents

All these can be turned into learning agents

Simple reflex agents

[Figure: simple reflex agent. Sensors report "what the world is like now"; condition-action rules map that directly to "what action I should do now", which the actuators carry out in the environment.]

Example

function Reflex-Vacuum-Agent([location, status]) returns an action
    if status = Dirty then return Suck
    else if location = A then return Right
    else if location = B then return Left

(defun make-reflex-vacuum-agent-program ()
  #'(lambda (percept)
      (let ((location (first percept)) (status (second percept)))
        (cond ((eq status 'dirty) 'Suck)
              ((eq location 'A) 'Right)
              ((eq location 'B) 'Left)))))

(setq joe (make-agent :name 'joe :body (make-agent-body)
                      :program (make-reflex-vacuum-agent-program)))
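
Called directly, the generated program behaves exactly like the pseudocode above:

(funcall (make-reflex-vacuum-agent-program) '(A Dirty))   ; => SUCK
(funcall (make-reflex-vacuum-agent-program) '(A Clean))   ; => RIGHT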

Reflex agents with state

[Figure: reflex agent with state. The current percept is combined with internal state, a model of how the world evolves, and a model of what my actions do to track "what the world is like now"; condition-action rules then pick "what action I should do now".]

Example

function Reflex-Vacuum-Agent([location, status]) returns an action
    static: last_A, last_B, numbers, initially ∞
    if status = Dirty then . . .

(defun make-reflex-vacuum-agent-with-state-program ()
  (let ((last-A infinity) (last-B infinity))
    #'(lambda (percept)
        (let ((location (first percept)) (status (second percept)))
          (incf last-A) (incf last-B)
          (cond
            ((eq status 'dirty)
             (if (eq location 'A) (setq last-A 0) (setq last-B 0))
             'Suck)
            ((eq location 'A) (if (> last-B 3) 'Right 'NoOp))
            ((eq location 'B) (if (> last-A 3) 'Left 'NoOp)))))))

Goal-based agents

[Figure: goal-based agent. Internal state plus models of how the world evolves and what my actions do let the agent predict "what it will be like if I do action A"; its goals then determine "what action I should do now".]
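
A minimal sketch of the decision step (all names are assumptions, not from the text): predict the state each action would lead to, using the agent's model of what its actions do, and pick an action whose predicted state satisfies the goal.

(defun choose-action-by-goal (state actions predict goal-p)
  ;; PREDICT maps a state and an action to the predicted successor state;
  ;; GOAL-P is a predicate on states. Return the first action that works.
  (find-if #'(lambda (action)
               (funcall goal-p (funcall predict state action)))
           actions))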

Utility-based agents

[Figure: utility-based agent. Like the goal-based agent it predicts "what it will be like if I do action A", but a utility function ("how happy I will be in such a state") ranks the outcomes to decide "what action I should do now".]
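
The corresponding sketch for a utility-based agent (again, names assumed): instead of a yes/no goal test, rank the predicted outcomes by a utility function and take the best.

(defun choose-action-by-utility (state actions predict utility)
  ;; Return the action whose predicted successor state has the highest utility.
  (let ((best nil) (best-value nil))
    (dolist (action actions best)
      (let ((value (funcall utility (funcall predict state action))))
        (when (or (null best-value) (> value best-value))
          (setf best action
                best-value value))))))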

Learning agents

[Figure: learning agent. A critic compares the agent's behaviour with an external performance standard and sends feedback to the learning element, which makes changes to the performance element's knowledge; a problem generator proposes learning goals and exploratory actions. Sensors and actuators connect the whole agent to the environment.]
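
A structural sketch of the cycle in the diagram (every name here is an assumption, not from the text): the critic turns a percept into feedback against the performance standard, the learning element uses that feedback to revise the performance element, and the problem generator may override it with an exploratory action.

(defstruct learning-agent
  performance-element   ; maps a percept to an action
  critic                ; maps a percept to feedback w.r.t. the standard
  learning-element      ; maps (old performance element, feedback) to a new one
  problem-generator)    ; maps a percept to an exploratory action, or NIL

(defun learning-agent-step (agent percept)
  ;; One percept-action cycle: learn from the critic's feedback, then act.
  (let ((feedback (funcall (learning-agent-critic agent) percept)))
    (setf (learning-agent-performance-element agent)
          (funcall (learning-agent-learning-element agent)
                   (learning-agent-performance-element agent)
                   feedback))
    (or (funcall (learning-agent-problem-generator agent) percept)
        (funcall (learning-agent-performance-element agent) percept))))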

Summary

Agents interact with environments through actuators and sensors

The agent function describes what the agent does in all circumstances

The performance measure evaluates the environment sequence

A perfectly rational agent maximizes expected performance

Agent programs implement (some) agent functions

PEAS descriptions define task environments

Environments are categorized along several dimensions:
    observable? deterministic? episodic? static? discrete? single-agent?

Several basic agent architectures exist:
    reflex, reflex with state, goal-based, utility-based