
COMP30024 Artificial Intelligence
Week 2 Problem Sheet
Welcome to the first tutorial for COMP30024! Active and regular participation in these weekly problem-solving classes is an essential part of this subject: they provide the opportunity to review the topics in smaller groups, to discuss problems, and to see some of them solved in great detail.
If anything is unclear during these sessions, please ask questions! Do not be afraid of stopping the instructor when something is not clear, since often your doubts are common to other students as well. Asking questions also helps guide us to select points to be reviewed in each tutorial.


You will be tackling the problems in each tutorial in small groups. We strongly encourage you to participate actively in your group discussions. Discussing problems in small groups is critical to forming an understanding of this subject, and developing independent thinking skills is far more valuable than having your instructor simply walk you through the solutions. With this in mind, a nontrivial fraction of the questions are designed to provoke discussion in tutorials, rather than having a simple closed-form answer.
Each tutorial will cover only a subset of the questions on the problem sheet. Questions to be discussed in each tutorial are marked [T]. Especially challenging questions are marked with an asterisk [∗]. Questions which require the use of a computer are marked with [Comp]. You should attempt the remaining questions on the worksheet after your weekly tute. Solutions to all problems on the sheet are released at the end of each week.
Weekly Topic Review
• An agent is any entity which perceives and acts in an environment. The percept sequence is defined as the complete history of content the agent has perceived. Agent behaviour is defined by an agent function mapping the given percept sequence to an action. An agent program is responsible for implementing the agent function within some physical system (a minimal code sketch of this distinction follows these review points).
• We evaluate agent behaviour through the performance measure. A rational agent acts to maximize the expectation of the performance measure, conditioned on the percept sequence to date, and whatever built-in knowledge the agent possesses.

• A self-learning agent may undertake actions to modify future percepts, and/or adjust the agent function as it accumulates experience.
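To make the distinction between an agent function and an agent program concrete, the following minimal Python sketch (our own illustration; all names are hypothetical) wraps an abstract agent function as a step-by-step program:

    # An agent function is an abstract mapping from percept sequences to
    # actions; an agent program realises it, one percept at a time.
    from typing import Callable, List, Sequence

    Percept = str
    Action = str
    AgentFunction = Callable[[Sequence[Percept]], Action]

    def make_agent_program(f: AgentFunction) -> Callable[[Percept], Action]:
        # The program receives one percept per step and keeps the history
        # itself, so it can evaluate f on the full percept sequence.
        history: List[Percept] = []

        def program(percept: Percept) -> Action:
            history.append(percept)
            return f(history)

        return program

    # Example agent function: ignore everything but the latest percept.
    program = make_agent_program(lambda seq: "Suck" if seq[-1] == "Dirty" else "Move")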
Environment Properties
1. Fully/partially observable – The environment is fully observable if the agent’s sensors capture the complete state of the environment at every point in time. Imperfect sensors or models, or a fundamental inability to capture the environmental state, lead to partial observability.
2. Single/multi agent – Self-explanatory, although care must be taken in deciding which entities in the environment must be considered fellow agents.
3. Deterministic/stochastic – If future states of the environment are fully determined by the current state and the agent’s action, the environment is deterministic. A stochastic environment is a special case of a nondeterministic environment which admits a probabilistic model of environmental phenomena.
4. Episodic/sequential – If the agent’s percept sequence can be divided into noninteracting episodes, in each of which the agent executes a single action based on the current percept, the environment is episodic. In a sequential environment, the current action may affect future environment states and hence future decisions.
5. Static/dynamic – If the environment can change while the agent is deliberating (executing the agent program), the environment is dynamic; otherwise it is static.
6. Discrete/continuous – If the environment has a finite number of distinct states, the agent only has to contend with a finite set of percepts, and the environment is discrete; otherwise it is continuous. Similarly, if an agent can choose between a finite number of actions, the action set is discrete.
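As a rough aid for classifying task environments (e.g. in Question 4 below), the six properties can be recorded as a checklist. The following Python sketch is our own illustration; the field names and the chess example are assumptions, not part of the sheet:

    # A checklist of the six environment properties listed above.
    from dataclasses import dataclass

    @dataclass
    class TaskEnvironment:
        fully_observable: bool
        multi_agent: bool
        deterministic: bool
        episodic: bool
        static: bool
        discrete: bool

    # Example classification: chess with a clock.
    chess = TaskEnvironment(
        fully_observable=True,   # both players see the entire board
        multi_agent=True,        # the opponent is a fellow agent
        deterministic=True,      # moves have fully determined effects
        episodic=False,          # each move constrains future positions
        static=False,            # the clock runs while the agent deliberates
        discrete=True,           # finitely many board states and moves
    )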
Agent Architectures
• Simple reflex agent – chooses actions based only on the current percept, ignoring all preceding information (contrasted with the model-based variant in the sketch after this list).
• Model-based reflex agent – maintains some internal state which depends on the percept history, useful when the current state of the environment cannot be fully described by the current percept alone.
• Goal-based agent – makes decisions in order to achieve a set of predefined goals, in addition to maintaining internal state. In practice usually superseded by…
• Utility-based agent – compares the desirability of different environment states via a utility function. This allows the comparison of different goal states and action sequences, and of tradeoffs between different goals.
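To contrast the first two architectures, here is a minimal sketch on the familiar two-square vacuum world (our own illustration; the percept format and action names are assumptions):

    # Simple reflex agent: decides from the current percept alone.
    def simple_reflex_agent(percept):
        location, status = percept
        if status == "Dirty":
            return "Suck"
        return "Right" if location == "A" else "Left"

    # Model-based reflex agent: keeps internal state (squares believed
    # clean), which the current percept alone cannot provide -- here the
    # state lets the agent stop once both squares are believed clean.
    class ModelBasedReflexAgent:
        def __init__(self):
            self.believed_clean = set()

        def __call__(self, percept):
            location, status = percept
            if status == "Dirty":
                self.believed_clean.discard(location)
                return "Suck"
            self.believed_clean.add(location)
            if self.believed_clean >= {"A", "B"}:
                return "NoOp"  # model says the whole world is clean
            return "Right" if location == "A" else "Left"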
Weekly reading: Russell and Norvig, Artificial Intelligence: A Modern Approach (3rd/4th Ed.). Chapter 2.

1. [T], [∗] Introduce yourselves to the other students in the group.
2. Agent behaviour/types [T] Decide whether each of the following statements is true or false. Support your answer with appropriate examples/counterexamples. Consider how the agent type influences your answer.
(a) An agent that senses only partial information about the state cannot be perfectly rational.
(b) There exist task environments in which no pure reflex agent can behave rationally.
(c) Suppose an agent selects its action uniformly at random from the set of possible actions. There exists a deterministic task environment in which this agent is rational.
(d) An agent which chooses the action with the highest expected value under the performance measure is superior to all other agents in the long run.
(e) Every agent is rational in an unobservable environment.
3. Tabulation [T] Assume the agent function is represented as a table – for every possible percept sequence, we explicitly specify the appropriate action to take (see the sketch following this question).
(a) Is tabulation of the agent function possible for continuous percepts?
(b) Let P be the set of possible percepts, and T be the maximum length of a percept sequence. Find an upper bound on the total number of entries in this table in terms of |P| and T.
(c) Do you think this is a good idea in general?
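For reference while discussing Question 3, a table-driven agent might look like the following Python sketch (our own illustration; the table contents are placeholders and no answer to (b) or (c) is implied):

    # Table-driven agent: one explicit table entry per percept sequence.
    class TableDrivenAgent:
        def __init__(self, table):
            self.table = table   # maps tuple-of-percepts -> action
            self.percepts = []

        def __call__(self, percept):
            self.percepts.append(percept)
            # Look up the action for the entire percept sequence to date.
            return self.table[tuple(self.percepts)]

    # Placeholder table for percepts "Clean"/"Dirty", sequences of length <= 2.
    agent = TableDrivenAgent({
        ("Dirty",): "Suck",
        ("Clean",): "Right",
        ("Dirty", "Dirty"): "Suck",
        ("Dirty", "Clean"): "Right",
        ("Clean", "Dirty"): "Suck",
        ("Clean", "Clean"): "Left",
    })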
4. Environment Properties [T] For each of the following activities, characterize the task environment in terms of its properties:
(a) Playing soccer.
(b) Shopping for used books on the internet.
(c) Bidding on an item at an auction.
5. Agent Architectures [T] For each of the environments in Figure 1, determine what type of agent architecture is most appropriate.
6. Agent programs/functions
(a) Do agent functions exist which cannot be implemented by any agent program? If so, give an example.
(b) Suppose we keep the agent program fixed but speed up the machine by a factor of two. Does this change the agent function? Will your answer change depending on the environment type?
Agent type: Medical diagnosis system
  Performance measure: Healthy patient, reduced costs
  Environment: Patient, hospital, staff
  Actuators: Display of questions, tests, diagnoses, treatments
  Sensors: Touchscreen/voice entry of symptoms and findings

Agent type: Satellite image analysis system
  Performance measure: Correct categorization of objects, terrain
  Environment: Orbiting satellite, downlink, weather
  Actuators: Display of scene categorization
  Sensors: High-resolution digital camera

Agent type: Part-picking robot
  Performance measure: Percentage of parts in correct bins
  Environment: Conveyor belt with parts; bins
  Actuators: Jointed arm and hand
  Sensors: Camera, tactile and joint angle sensors

Agent type: Refinery controller
  Performance measure: Purity, yield, safety
  Environment: Refinery, raw materials, operators
  Actuators: Valves, pumps, heaters, stirrers, displays
  Sensors: Temperature, pressure, flow, chemical sensors

Agent type: Interactive English tutor
  Performance measure: Student’s score on test
  Environment: Set of students, testing agency
  Actuators: Display of exercises, feedback, speech
  Sensors: Keyboard entry, voice

Figure 1: Examples of agent types and associated environments (PEAS descriptions, after Russell and Norvig, Figure 2.5).

7. Non-determinism [∗] The vacuum environments in the previous exercise were deterministic. Discuss possible agent programs for each of the following stochastic versions:
(a) Murphy’s law: 25% of the time, the Suck action fails to clean the floor if it is dirty and deposits dirt onto the floor if the floor is clean. How is your agent program affected if the dirt sensor gives the wrong answer 10% of the time?
(b) Small children: At each time step, each clean square has a small chance of becoming dirty. Can you come up with a rational agent design for this case?
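A small simulation harness can help when discussing Question 7(a). The following Python sketch is our own (class and method names are assumptions); it encodes only the dynamics stated in the question:

    import random

    class MurphyVacuumWorld:
        # Two-square vacuum world where Suck misfires 25% of the time and
        # the dirt sensor reports the wrong status 10% of the time.
        def __init__(self, seed=None):
            self.rng = random.Random(seed)
            self.dirty = {"A": True, "B": True}
            self.location = "A"

        def percept(self):
            status = self.dirty[self.location]
            if self.rng.random() < 0.10:   # noisy dirt sensor
                status = not status
            return self.location, ("Dirty" if status else "Clean")

        def step(self, action):
            if action == "Suck":
                if self.rng.random() < 0.25:
                    # Murphy's law: fails to clean, or deposits dirt.
                    self.dirty[self.location] = True
                else:
                    self.dirty[self.location] = False
            elif action == "Right":
                self.location = "B"
            elif action == "Left":
                self.location = "A"

    # Try a candidate agent program for a few steps.
    env = MurphyVacuumWorld(seed=0)
    agent = lambda p: "Suck" if p[1] == "Dirty" else ("Right" if p[0] == "A" else "Left")
    for _ in range(20):
        env.step(agent(env.percept()))
    print(sum(not d for d in env.dirty.values()), "squares clean")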
