COMP3620 / 6320 – S1 2022
This lecture will be recorded
Turn off your camera if you do not want to be in the recording
If you cannot see my whole slide, go to View Options -> Zoom Ratio and select Fit to Window
If you want to ask a question, either:
• Raise your hand and I will open your mic, or
• Type the question in the chat box
COMP3620/6320
Course Organisation and Introduction
Felipe Trevizan (Convenor)
https://cs.anu.edu.au/courses/comp3620/
Course Organization
• All the information about the course, assignments, labs, tutorials, and policies is on the website:
https://cs.anu.edu.au/courses/comp3620/
• You must read the policies and outline sections at least once
– Not knowing the policies in place is not a valid excuse
5-Minute Summary of the Course
• 3 Topics:
– Search
– Knowledge Representation and Reasoning (KRR)
– Planning
• 3 Assignments
– 100% penalty if late
– Plagiarism detection software, some automated testing, and manual checks
• 6 Tutorials
– 2 per topic
– 1 quiz per tutorial (0 marks if you miss the tutorial)
• 6 Labs
• Mid-semester exam
– Hurdle: mid-semester exam >= 40 (out of 100)
• Final exam
– Hurdle: final exam >= 30 (out of 100)
Tutorials, Labs, Assignments and Quizzes
• Tutorials:
– Goal is to help you understand the material and prepare for the exams
– We will discuss a list of questions; try answering them before the tutorial
• Quizzes:
– Goal is to provide a reality check
– Are you keeping up with the content or should you spend more time studying?
• Assignments:
– Goal is to put the course into practice by building AI programs
– Essential to build a deep understanding of the course
• Labs:
– Goal is to get help from the tutors with the assignments
– Unstructured and self-guided (that is, you need to bring questions)
– Get started well in advance to make the most of the opportunity
Contact & Information
• The course page is the main source of information
– It has priority in case of conflicting information
• Use Piazza for all communications
– See the communication policy
https://cs.anu.edu.au/courses/comp3620/policies/#communication
for more details of whom to contact in different situations
Course Representatives
Why become a class representative?
• Ensure students have a voice to their course convener, lecturer, tutors, and College.
• Develop skills sought by employers, including interpersonal, dispute resolution, leadership and communication skills.
• Become empowered. Play an active role in determining the direction of your education.
• Become more aware of issues influencing your University and current issues in higher education.
• Course design and delivery. Help shape the delivery of your current courses as well as future improvements for following years.
Note: Class representatives will need to be comfortable with their contact details being made available via Wattle to all students in the class.
For more information regarding roles and responsibilities, contact: ANUSA CECS representatives:
Want to be a class representative? Nominate today!
Please nominate yourself by sending a private message on Piazza by 2nd March 2022.
You are free to nominate yourself whether you are currently on-campus or studying remotely.
Introduction
• What is AI?
• Brief History
• Ethics
John McCarthy 1927-2011
Artificial Intelligence
“The science and engineering of making intelligent machines”
• Official birth: Dartmouth College Meeting, 1956
• Ambitious goals:
1. Understand “intelligence”
2. Build “intelligent” machines
But … what counts as an intelligent machine?
What is an Intelligent Machine?
• 4 different perspectives corresponding to two dimensions
1. thinking vs acting (thought/reasoning vs actions/behavior)
2. human vs rational (close to a human vs close to optimal)
Systems that think like humans
Systems that think rationally
Systems that act like humans
Systems that act rationally
• Rationale:
1. The ability to think is what distinguishes humans from animals. But a machine could behave intelligently without thinking.
2. Humans exhibit intelligence. But why not aim at super-human, optimal behavior, especially for specific tasks?
What is an Intelligent Machine?
Systems that think like humans:
“Making machines with minds in the full and literal sense” [Haugeland 85]
Systems that think rationally:
“Studying the computations that enable perception, reasoning, and actions” [Winston 92]
Systems that act like humans:
“Making computers do things at which, for now, people are better” [Kurzweil 90]
Systems that act rationally:
“Concerned with the automation of intelligent behavior” [Luger & Stubblefield 93]
“Whether machines can think is as relevant as whether submarines can swim” [Dijkstra 84]
Acting Humanly: The Turing Test
• “Computing Machinery and Intelligence” [Turing 50]
• Operational test for intelligent behavior
• Imitation game: computer passes the test if interrogator
cannot tell which is the machine (no physical presence)
• Focused on major AI components:
– Natural language processing
– Knowledge representation & reasoning
– Learning
• Prediction: by 2000, machines would have a 30% chance of fooling a human in a 5-minute conversation
Acting Humanly: The Turing Test
• Eliza (Weizenbaum 66): a rule-based program replicating a psychotherapist.
Primitive natural language processing.
• Loebner Prize ($100,000) since 1991
– A.L.I.C.E.: 3 bronze medals
– Mitsuku: bronze medals in 2013, 2016-2019
www.mitsuku.com
Patient: I feel sad.
Eliza: Do you often feel sad?
Patient: Not very often.
Eliza: Please, go on.
Human: Does god exist?
Mitsuku: The bible is the most significant piece of evidence that God exists, but it is not a scientific proof.
Human: Okay, but do you believe in god?
Mitsuku: Yeah, okay, I believe there is probably something out there but I doubt it is a man with a long beard sat on a throne.
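The rule-based pattern matching behind Eliza-style programs fits in a few lines. Below is a minimal Python sketch; the rules are invented for illustration and are not Weizenbaum's actual script:

```python
import re

# Toy Eliza-style rules: (pattern, response template) pairs, tried in order.
# These example rules are made up; Eliza's real script was far larger.
RULES = [
    (r"i feel (.*)", "Do you often feel {0}?"),
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r".*", "Please, go on."),  # default rule when nothing else matches
]

def respond(utterance):
    text = utterance.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I feel sad."))      # -> Do you often feel sad?
print(respond("Not very often."))  # -> Please, go on.
```

Reflecting the user's own words back is why such programs seem to understand while doing no reasoning at all.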
Thinking Humanly: Modelling Cognition
• Objective: develop scientific theories of the activities of the brain
• Two approaches:
1. Cognitive Science (top down): uses computer models and experimental psychology techniques to predict and test the behavior of human subjects
2. Cognitive Neuroscience (bottom up): uses computer imaging & other neurological data to observe the brain in action
– Project to simulate the brain: www.humanbrainproject.eu
– Related to the AI field of neural networks
• These days, both disciplines are distinct from AI
Thinking Rationally: Laws of Thought
• Objective: formalise and mechanise valid reasoning
• Direct line through maths and philosophy to modern AI
• Logic: notation and rules to derive valid conclusions
– Aristotle’s syllogism
– Mathematical development of classical logic
• Propositional & first-order logic (Boole, Frege, 1850s)
• Most of mathematics can be derived from axioms of set theory
– Non-classical logic to formalise common-sense reasoning
• Default logic (by default, birds fly)

Tweety is a bird    Birds fly
-----------------------------
Tweety flies

∀x P(x) → Q(x)    P(a)
----------------------
Q(a)
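The strict (exception-free) reading of such rules is easy to mechanise. A minimal forward-chaining sketch in Python, with made-up predicate strings, derives the Tweety conclusion; default logic is needed precisely because this naive loop cannot handle exceptions such as penguins:

```python
# Forward chaining: repeatedly apply rules (premises -> conclusion)
# until no new facts can be derived.
facts = {"bird(tweety)"}
# One ground instance of the rule  ∀x bird(x) → flies(x):
rules = [({"bird(tweety)"}, "flies(tweety)")]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # all premises hold, so add the conclusion
            changed = True

print(facts)  # {'bird(tweety)', 'flies(tweety)'}
```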
Thinking Rationally: Laws of Thought
• Limit 1: Undecidability
– Gödel’s Theorem: every axiomatisable consistent theory extending arithmetic has formulas that are true but not provable within the theory
• Limit 2: Complexity
– Non-trivial to formalise a real-world problem in logic
– Most problems are NP-complete or harder
• Limit 3: Scope
– Not all intelligent behavior requires reasoning (much doesn’t)
• Limit 4: Purpose
– Reasoning to prove what? Notion of “goal” is missing
Acting Rationally: Rational Agents
• An agent is an entity that perceives and acts in its environment (driverless car, electronic trading system, energy management system)
• Rationality is about doing the right thing:
– a decision which achieves the best (expected) outcome, given the information and time available (limited rationality)
• This course (and much of today’s AI) is about designing rational agents: for any given class of environment and task, we seek the agent with the best performance.
Artificial Intelligence
“The science and engineering of making intelligent machines”
• Ambitious goals:
1. Understand “intelligence”
➢ Accurate models of cognition are now the focus of cognitive science, neuroscience and psychology
2. Build “intelligent” machines
➢ Focus on developing methods that match or exceed human performance in certain domains, possibly by different means.
Applications
Financial markets
Perseverance landing (3 mins) https://youtu.be/4czjS9h4Fpg
Web mining and applications
Space, Energy
Brief History
1950: Turing test
1950s: Early programs including checkers, Logic Theorist, neural nets
1956: Dartmouth meeting, “Artificial Intelligence” adopted
1965: Robinson’s complete algorithm for logical reasoning
1966-74: AI discovers complexity, neural nets research disappears
1969-79: Early knowledge-based systems
1980-88: Expert systems industry booms
1988-93: Expert systems industry “busts”, AI Winter
1988-00: Greater technical depth, resurgence of probabilities
1985-95: Neural nets return, lead to, and are replaced by modern SML
2003-: Human-level AI back on the agenda
2010-: Deep learning: neural nets research is in favour again
2013-: Ethical issues make the headlines
[Timeline eras: Optimism, Realism, Expert Systems, Winter, Foundations, NN returns, Data & multicore, NN again, Maturity?]
AI Achievements – Predictions
• 1958: “within ten years a digital computer will be the world’s chess champion” [ ]
• 1965: “machines will be capable, within twenty years, of doing any work a man can do.” [Herb Simon]
• 1970: “In from three to eight years we will have a machine with the general intelligence of an average human being.” [Marvin Minsky]
Herb Simon 1916-2001, Marvin Minsky 1927-2016
AI Achievements – The Reality
• 1991: Proverb solves crosswords better than humans
• 1991: AI solves Gulf-war logistics planning problems
• 1997: IBM Deep Blue beats chess champion Kasparov
• 1999: AI agent controls NASA Deep Space 1 probe
• 2001: autonomous military drones unveiled
• 2005: Driverless vehicles complete the 212km DARPA Grand Challenge through the Mojave desert
• 2007: Checkers game completely solved
• 2009: Google autonomous car drives in traffic
• 2011: IBM Watson wins Jeopardy!
• 2016: Google AlphaGo beats Go champion
• Today: AI is everywhere, injects billions into economy
AI Achievements – More Predictions
• 2030: “an AI system with an ongoing existence at the level of a mouse” [ ]
• Not in his lifetime: “a robot that has any real idea about its own existence, or the existence of humans, in the way a six-year-old child would” [ ]
• 2050: “Germany will lose to a robot soccer team.” [ ]
Humanoid Robot Soccer – 1998
Humanoid Robot Soccer – 2018
Robocup 2018 SPL Finals – Nao-Team HTWK vs. B-Human
AI Ethics and Risks
• People might lose their jobs
– AI creates wealth and does dangerous and boring jobs for us
• Accountability loss: who is responsible, the AI, the owner, the creator?
– Similar issues exist elsewhere (medicine, software, plane crashes)
• AI reproducing our negative biases and attitudes (e.g. racism)
– AI should share our positive values
• Use of AI as a weapon (e.g. drones)
– Can it also save lives? Every beneficial invention can be misused
AI Ethics and Risks
• AI success might end the human era
– Kurzweil, Musk, Hawking!
– Once machines surpass human intelligence, they can design smarter machines.
– Intelligence explosion and singularity at which the human era ends
• Many counter-arguments:
– limits to intelligence
– nothing special about human intelligence
– computational complexity
– “intelligence to do a task” ≠ “ability to improve intelligence to do a task”
Robotics Laws
The Three Laws of Robotics [Asimov 1942]
1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Added later, the “Zeroth Law”:
• A robot may not injure humanity, or, through inaction, allow humanity to come to harm.
UK Principles of Robotics [EPSRC 2011]
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.
2. Humans, not robots, are responsible agents. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights & freedoms, including privacy.
3. Robots are products. They should be designed using processes which assure their safety and security.
4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.
5. The person with legal responsibility for a robot should be attributed.
Summary
• How to think or how to behave? Being like humans or being rational?
– This course is about acting rationally
• AI is related to many fields including philosophy, mathematics, economics, neuroscience, psychology, computer science and control theory
• 50+ years of progress along many different paradigms: logic, expert systems, neural nets, learning, probabilities
• Increasingly scientific: focus on experimental comparisons and theoretical foundations
• AI is a high-risk high-gain area with major ethical implications
Intelligent Agents
• Agents and environments
• Rationality
• PEAS (Performance measure, Environment, Actuators, Sensors)
• Environment types
• Agent types
Agents and Environments
[Diagram: the agent interacts with its environment, receiving percepts through sensors and producing actions through actuators]
• Agents include humans, robots, softbots, thermostats, etc.
• Percept refers to the agent’s perceptual input at any given instant
• The agent function maps from percept histories to actions:
f : P∗ → A
• The agent program implements f on the physical architecture.
Vacuum-cleaner World
• Percepts: current location and its content, e.g., (A, Dirty)
• Actions: Left, Right, Suck, NoOp
A Vacuum-cleaner Agent
Percept sequence              Action
(A, Clean)                    Right
(A, Dirty)                    Suck
(B, Clean)                    Left
(B, Dirty)                    Suck
(A, Clean), (A, Clean)        Right
(A, Clean), (A, Dirty)        Suck
…
• What is the right function f ?
• Can it be implemented in a small agent program?
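One literal answer is a table-driven agent: write f out explicitly as a lookup table over entire percept histories. A minimal Python sketch, assuming the (location, status) percept encoding above:

```python
# The agent function f : P* -> A written out explicitly as a table.
TABLE = {
    (("A", "Clean"),): "Right",
    (("A", "Dirty"),): "Suck",
    (("B", "Clean"),): "Left",
    (("B", "Dirty"),): "Suck",
    (("A", "Clean"), ("A", "Clean")): "Right",
    (("A", "Clean"), ("A", "Dirty")): "Suck",
    # ... one entry for every possible percept sequence
}

percept_history = []

def table_driven_agent(percept):
    """Record the percept, then look up the action for the whole history."""
    percept_history.append(percept)
    return TABLE.get(tuple(percept_history), "NoOp")

print(table_driven_agent(("A", "Clean")))  # -> Right
print(table_driven_agent(("A", "Dirty")))  # -> Suck
```

The table needs one entry per possible percept sequence, so it grows without bound; the agent programs later in the lecture compute the same behavior compactly.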
Rationality
The performance measure evaluates the environment sequence
– one point per room cleaned up within T time steps?
– one point per clean room per time step, minus half a point per action?
– penalize for > k dirty rooms?
[Diagram: candidate vacuum-agent actions and their values: suck, right, left = 1.5, do nothing = 2]
A rational agent chooses whichever action maximizes the expected value of
the performance measure given the percept sequence to date
• Rational ≠ omniscient
– percepts may not supply all relevant information
• Rational ≠ clairvoyant
– action outcomes may not be as expected
• Hence, rational ≠ successful
PEAS
To design a rational agent, we must specify the task environment.
Consider, e.g., the task of designing a driverless taxi:
• Performance measure:
– safety, destination, profits, legality, comfort, …
• Environment:
– streets/freeways, traffic, pedestrians, weather, …
• Actuators:
– steering, accelerator, brake, horn, blinkers, …
• Sensors:
– GPS, video, accelerometers, gauges, engine sensors, …
Internet shopping agent
Consider, e.g., the task of designing an internet shopping bot:
• Performance measure:
– price, quality, appropriateness, efficiency
• Environment:
– user, WWW sites, vendors, shippers
• Actuators:
– display to user, follow URL, fill in form
• Sensors:
– HTML pages (text, graphics, scripts), user input
Properties of Task Environments
1. Fully vs partially observable
2. Deterministic vs stochastic
3. Known vs unknown
4. Episodic vs sequential
5. Static vs dynamic
6. Discrete vs continuous
7. Single vs multi-agent
1. Fully vs partially observable
Do the agent’s sensors give access to all relevant information about the environment state?
2. Deterministic vs stochastic
Is the next state completely determined by the current state and executed action?
3. Known vs unknown
Does the agent know the environment’s rules/laws of physics?
4. Episodic vs sequential
Is the next decision independent of the previous ones?
– Example: classification in ML is episodic: each instance is classified independently of the previous ones
5. Static vs dynamic
Can the environment change whilst the agent is deliberating?
Semi-dynamic: only the performance score changes.
6. Discrete vs continuous
Can time, states, actions, percepts be represented in a discrete way?
7. Single vs Multi-agent
Is a single agent making decisions, or do multiple agents need to compete or cooperate to maximise interdependent performance measures?
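One way to make such a classification concrete is to record the seven properties in a small specification object. The class and field names below are made up for illustration; the values are the ones argued for the taxi:

```python
from dataclasses import dataclass

@dataclass
class TaskEnvironment:
    """The seven task-environment properties discussed above."""
    fully_observable: bool
    deterministic: bool
    known: bool
    episodic: bool
    static: bool
    discrete: bool
    single_agent: bool

taxi = TaskEnvironment(
    fully_observable=False,  # sensors miss much of the traffic state
    deterministic=False,     # other drivers, weather, mechanical failures
    known=True,              # the rules of driving are (mostly) known
    episodic=False,          # decisions depend on earlier ones
    static=False,            # the world changes while deliberating
    discrete=False,          # continuous time, positions, and controls
    single_agent=False,      # other drivers and pedestrians
)
print(taxi)
```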
Environment types
                Crossword   Poker   Part-picking robot   Taxi
Observable      Yes         No      Part.                Part.
Deterministic   Yes         No      No                   No
Known           Yes         Yes     Mostly               Mostly
Episodic        No          No      Yes                  No
Static          Yes         Yes     No                   No
Discrete        Yes         Yes     No                   No
Single-agent    Yes         No      Yes                  No
The environment type largely determines the agent design
The real world is (of course) partially observable, stochastic, sequential, dynamic, continuous, multi-agent.
Agent types
• Four basic types of agents in order of increasing generality:
– simple reflex agents
– reflex agents with state
– goal-based agents
– utility-based agents
• All these can be turned into learning agents
Simple reflex agents
[Diagram: Sensors capture “What the world is like now”; condition−action rules select “What action I should do now”; Actuators act on the Environment]
Decisions are made based on the current percept only. Raises issues for partially observable environments.
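For the vacuum world, the condition-action rules fit in a few lines. A sketch, assuming the (location, status) percepts from earlier:

```python
def simple_reflex_vacuum_agent(percept):
    """Condition-action rules applied to the current percept only."""
    location, status = percept
    if status == "Dirty":
        return "Suck"
    elif location == "A":
        return "Right"
    else:  # location == "B"
        return "Left"

print(simple_reflex_vacuum_agent(("A", "Dirty")))  # -> Suck
print(simple_reflex_vacuum_agent(("B", "Clean")))  # -> Left
```

Note the agent cannot remember whether the other room was clean, which is exactly the partial-observability issue mentioned above.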
Reflex agents with state
[Diagram: as above, but the agent also maintains internal state, updated using “How the world evolves” and “What my actions do”]
The internal state keeps track of relevant unobservable aspects of the environment. The environment model describes how the environment works (how the environment state is affected by actions).
Example (vacuum world): the time passed since a location was last visited is a proxy for the likelihood of its status changing from clean to dirty.
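A sketch of that idea: the internal state records when each room was last visited, and an arbitrary threshold plays the role of the environment model (how quickly rooms get dirty again):

```python
last_visit = {"A": 0, "B": 0}  # internal state: time each room was last seen
clock = 0                      # the agent's own step counter

def stateful_vacuum_agent(percept):
    """Reflex agent with state: uses a model of how dirt reappears."""
    global clock
    clock += 1
    location, status = percept
    last_visit[location] = clock          # update state from the percept
    if status == "Dirty":
        return "Suck"
    other = "B" if location == "A" else "A"
    # Model assumption: the longer a room goes unvisited, the more
    # likely it is dirty again. The threshold 3 is an arbitrary choice.
    if clock - last_visit[other] > 3:
        return "Right" if other == "B" else "Left"
    return "NoOp"
```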
Goal-Based agents
[Diagram: the agent uses its model (“How the world evolves”, “What my actions do”) to predict “What it will be like if I do action A”, and compares the predictions against its goals to choose “What action I should do now”]
• The goal describes desirable situations.
• The agent combines goal and environment model to choose actions.
• Planning and search are AI subfields devoted to building goal-based agents.
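A minimal goal-based sketch: given a model of the actions as a state-transition graph (the toy states below are hypothetical), a breadth-first search produces a plan that reaches the goal:

```python
from collections import deque

TRANSITIONS = {  # environment model: state -> {action: next state}
    "A_dirty": {"Suck": "A_clean"},
    "A_clean": {"Right": "B_dirty"},
    "B_dirty": {"Suck": "B_clean"},
    "B_clean": {"Left": "A_clean"},
}

def plan(start, goal):
    """Breadth-first search; returns a list of actions reaching the goal."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if state == goal:
            return actions
        for action, nxt in TRANSITIONS.get(state, {}).items():
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [action]))
    return None  # goal unreachable

print(plan("A_dirty", "B_clean"))  # -> ['Suck', 'Right', 'Suck']
```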
Utility-based agents
[Diagram: as for goal-based agents, but predicted states are scored by a utility function (“How happy I will be in such a state”) to choose “What action I should do now”]
• The utility function internalises the performance measure.
• Under uncertainty, the agent chooses actions that maximise the expected utility.
Utility-based agents
Rational agent: chooses the action that maximises expected utility.
[Diagram: Suck leads to a state with utility u1 = 2.5 with probability p1 = 0.7, or a state with utility u2 = 1.5 with probability p2 = 0.3]
Expected utility of Suck:
p1 × u1 + p2 × u2 = 0.7 × 2.5 + 0.3 × 1.5 = 2.2
• Suck has an expected utility of 2.2, higher than do nothing (2) and left (1.5), so it is the rational choice.
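The same computation in code: a small sketch that scores each action by expected utility and picks the best, using the outcome distributions assumed on this slide:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

actions = {
    "Suck": [(0.7, 2.5), (0.3, 1.5)],  # two possible resulting states
    "NoOp": [(1.0, 2.0)],
    "Left": [(1.0, 1.5)],
}

print({a: expected_utility(o) for a, o in actions.items()})
# {'Suck': 2.2, 'NoOp': 2.0, 'Left': 1.5}
print(max(actions, key=lambda a: expected_utility(actions[a])))  # Suck
```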