COMP3702 Artificial Intelligence – Module 0: Introduction
Welcome to COMP3702 Artificial Intelligence!
A little about Alina
Name: Alina Bialkowski
Email: alina. .au
Teaching: I have taught artificial intelligence, signal processing, electrical engineering &
computer science courses
Research: Machine Learning; Computer Vision; Data Science; Interpretable/Explainable AI
Applications: Medical imaging, semi-autonomous vehicles, sports analytics, intelligent
surveillance.
A little about Archie
Name: Archie Chapman
Email: archie. .au
Teaching: I have taught artificial intelligence, data analytics, machine learning, game theory
and microeconomics to classes large and small, at university and for industry.
Research: Using AI, optimisation and machine learning methods to solve problems in
power and energy systems, logistics and the circular economy.
Primary goals of this course
In COMP3702, we aim to introduce the foundational concepts and methods used in the field of
artificial intelligence, namely:
1. searching for solutions to problems,
2. reasoning and planning with certainty,
3. reasoning and planning under uncertainty,
4. learning to act, and
5. reasoning about other agents & the human-side of AI
and to develop the skills needed to apply these techniques to real–world problems.
Tentative weekly plan (see Learning Resources on BB, and watch for updates)
Some AI topics we do not cover in this course
• Natural language processing,
• Bayesian networks/graphical models, and most other machine learning topics (see
COMP4702/COMP7703),
• Pattern Recognition and Analysis (see COMP3710),
• Image Processing and Computer Vision (see ELEC4630),
• Data Mining (see INFS4203), and
• User experience and user interaction (UX/UI) design — see the BInfTech User
Experience Design Major, and/or the Master of Interaction Design for more.
You’ll learn the most by doing
To help, we will provide:
Active lectures: Provide an introduction to various concepts and techniques in AI combined
with a series of in-class activities.
Weekly tutorials: Provide opportunity to practice the techniques introduced in lectures.
Assignments: design and implement AI systems in three take-home assignments.
Learnersourcing and Adaptive learning: RiPPLE gives you an opportunity to create and
evaluate high-quality learning resources, and uses explainable AI algorithms to
provide personalised recommendation for learning resources to engage with.
Discussion forums: Ed Discussion provides an opportunity for you to ask questions and
engage with responding to questions posted by your peers, moderated by the
teaching team.
Weekly tutorials
Tutorial exercises are provided to help you understand the materials discussed in lectures, and
to improve your skills in solving AI problems, with support from tutors.
• Tutorial worksheets will be available on BB each week.
• Tutorials follow the lecture schedule, one week behind.
• Tutorial exercises are not graded, but you are highly encouraged to do them.
• The skills you acquire in completing the tutorials will help you complete the assignments.
• You’ll get the best learning outcome when you try to solve the tutorial exercises on your
own before your tutorial session, and then use your tutorial session to ask about the
difficulties you face when trying to solve the exercises.
Ed Discussion Board
• Ed Discussion is a Q&A web service.
• Peer-led discussions about the course content,
• Lecture Q&A during the lecture (i.e. post lecture questions there, not via Zoom chat), and
• General questions about course logistics (i.e. things everyone might want to know).
• Register for the COMP3702 Ed Discussion forum via Blackboard, under Learning
Resources → Course Tools
Assignments (50% of final mark)
Assignment 0 (0%): The purpose of Assignment 0 is to:
• refresh some of the core mathematical concepts and methods used in AI,
• introduce you to some of the programming tricks and tools needed for your assignments,
• give you a chance to learn how to use the Gradescope code autograder.
Three graded assignments:
Assignment 1 (15%): due 27 Aug @4pm (Week 5)
Assignment 2 (15%): due 24 Sept @4pm (Week 9)
Assignment 3 (20%): due 29 Oct @4pm (Week 13)
• Involves problem solving + code + report
• Coding language: Python
RiPPLE — Learnersourcing
• Students author learning resources.
• Resources are moderated by peers and tutors.
• Based on the moderations, a decision is made.
• Effective resources are added to a repository.
RiPPLE — Adaptive learning
• Students engage with a pool of resources.
• RiPPLE approximates each student’s mastery level across course topics.
• RiPPLE recommends learning resources to each student based on their learning needs.
• Students’ mastery levels are updated based on their creation, moderation and responses.
Final Exam
• Final exam during examination period
• Mock exam will be provided to help you effectively prepare.
• Internal and external exam will be invigilated (external via ProctorU)
• Final exam has a hurdle of >40%.
Assessment weightings and due dates
Task Weighting Due date
RiPPLE 10% Weeks 4, 7, 10 and 13
Assignments 50% Weeks 5, 9, and 13
Final Exam 40% Exam period
Get organised!
• Read the course profile
• Make sure that you can access the Blackboard site for the course
• Sign up for a tutorial session, and attend!
• Sign up for Ed Discussion
• Review the RiPPLE material on Blackboard, and start using it as soon as it is made
available.
• Watch out for announcements and updates through BB.
Additional learning resources: Textbooks
Russell & Norvig, 3e (R&N) (NB: 4e has recently been released)
Poole and Mackworth, 2e (P&M): https://artint.info/
Anonymous feedback
• You can submit anonymous feedback to me and the tutors via Ed Discussion, at any time.
• This provides you with the opportunity to express what you like or dislike about the course.
• Feedback of all kinds is welcome! We are always trying to improve COMP3702.
• Please submit feedback as private posts.
• If appropriate, I might paraphrase your comment on the discussion boards and respond to
it, or raise it in the lectures (anonymously, of course).
COMP3702
Artificial Intelligence
Module 0: Introduction
Dr Alina Bialkowski
Semester 2, 2021
The University of Queensland
School of Information Technology and Electrical Engineering
Overview of Module 0
1. What is AI?
2. History of Artificial Intelligence
3. Intelligent agents
4. Goals of artificial intelligence
5. Intelligent agents acting in an environment
6. Agent design problem
7. Dimensions of complexity
What is AI?
What do you think AI is? Live lecture wordcloud:
Enter your answers here:
https://apps.elearning.uq.edu.au/wordcloud/72971
What do you think AI is? Results in 2020:
What is AI? Whatever AI researchers do! AAAI conference alert cloud
Source: https://aitopics.org/search
What is AI?
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans?
• think rationally?
• act like humans?
• act rationally?
What is AI?
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans? Build something like a brain! But how does a human brain work?
• think rationally?
• act like humans?
• act rationally?
Maybe machines can reach intelligence a different way to the human brain?
What is AI?
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans?
• think rationally? Automated reasoning and logic are foundational topics in AI.
• act like humans?
• act rationally?
It is unclear if logic really captures the type of knowledge that people have or need help with.
Plus, it’s really hard to search through logical statements…
What is AI? The Turing Test
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans?
• think rationally?
• act like humans? The Turing test: can a human tell if a computer is a computer?
• act rationally?
The Turing Test — Alan Turing (1950)
Source: https://medium.com/@fatihbildiriciii/yapay-zeka-e%C4%9Fitim-serisi-b%C3%B6l%C3%BCm-3-69884059e2c1
In the Turing test, the computer is asked questions by a human interrogator. The computer
passes the test if the interrogator cannot tell whether the responses come from a human or a
computer. The Turing test simplifies the question “is the machine intelligent?” into “can the
machine imitate a human?”
Applications of Turing Test
• Turing’s idea to try to define “(artificial) intelligence” more concretely has yielded useful
results
• Chat bots: Eliza, A.L.I.C.E., automated online assistance, etc.
• CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans
Apart.
• Turing test, but the “interrogator” is a computer
What is AI?
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans?
• think rationally?
• act like humans? The Turing test: can a human tell if a computer is a computer?
• act rationally?
Stop and think: Do we really want computers to act like humans?
Do we really want computers to think and act like humans?
What is AI?
AI is an attempt to build “intelligent” computers, but what is “intelligence”?
• think like humans?
• think rationally?
• act like humans?
• act rationally? Aka intelligent agents (approach taken in R&N and P&M texts)
Not sure that this truly captures the variety of AI research going on right now, but it is a good
place to start…
What is AI? Definitions
• OECD1: An AI system is a machine-based system that is capable of influencing the
environment by producing an output (predictions, recommendations or decisions) for
a given set of objectives. It uses machine and/or human-based data and inputs to
• perceive real and/or virtual environments;
• abstract these perceptions into models through analysis in an automated manner (e.g., with
machine learning), or manually; and
• use model inference to formulate options for outcomes.
AI systems are designed to operate with varying levels of autonomy.
1https://www.oecd.ai/ai-principles
What is AI? Definitions
• Association for the Advancement of Artificial Intelligence (AAAI) offers this on its home
page: AI is “the scientific understanding of the mechanisms underlying thought
and intelligent behavior and their embodiment in machines.”
• Poole and Mackworth say artificial intelligence is “the synthesis and analysis of
computational agents that act intelligently.”
• We say: “AI is the study and development of algorithms for solving problems that we
typically associate with intelligence.”
• AI is a diverse collection of topics. We address core methods and models in this course,
which have found widespread application, and which serve as building blocks in more
sophisticated AI systems.
We won’t do all of AI in this course!
History of Artificial Intelligence
A brief history of Artificial Intelligence
Source: https://qbi.uq.edu.au/brain/intelligent-machines/history-artificial-intelligence
For more of the history and development of AI, read Chapter 1 of R&N or P&M.
Fear of AI
• As AI becomes more prevalent, fear increases
• Similar to industrial revolution
• Jobs replaced/obsolete → shifting job opportunities
• New types of jobs
Open Problems
• Handling uncertainty e.g. in self-driving cars
• Explainable AI
Intelligent agents
What is an intelligent computational agent?
• An agent is something that acts in an environment.
• An agent acts intelligently if…
What do you think it means for an agent to
act intelligently?
Enter your suggestions here:
https://apps.elearning.uq.edu.au/wordstream/72971
What is an intelligent computational agent?
• An agent is something that acts in an environment.
• An agent acts intelligently if:
• its actions are appropriate for its goals and circumstances
• it is flexible to changing environments and goals
• it learns from experience
• it makes appropriate choices given perceptual and computational limitations
Examples of agents
• Organisations: Microsoft, Facebook, Government of Australia, UQ, ITEE,…
• People: teacher, doctor, stock trader, engineer, researcher, travel agent, farmer, waiter,…
• Computers/devices: air-conditioner thermostat, airplane controller, network controller,
movie recommendation system, tutoring system, diagnostic assistant, robot, GPS
navigation app, Mars rover…
• Animals: dog, mouse, bird, insect, worm, bacterium, bacteria,…
• book(?), sentence(?), word(?), letter(?)
Can a book or article do things?
Convince? Argue? Inspire? Cause people to act differently? Learn from experience?
Goals of artificial intelligence
Goals of artificial intelligence
• Scientific goal: to understand the principles that make intelligent behavior possible in
natural or artificial systems.
• analyze natural and artificial agents
• formulate and test hypotheses about what it takes to construct intelligent agents
• design, build, and experiment with computational systems that perform tasks that require
intelligence
• Engineering goal: design useful, intelligent agents.
• Always make the “best” decision given the available resources (knowledge, time,
computational power and memory)
• Best: Maximize certain performance measure(s), usually represented as a utility function
More on this throughout the semester
In this class…
• We are interested in building software systems (called agents) that behave rationally
• i.e. systems that accomplish what they are supposed to do, well, given the available
resources
• Don’t worry about how close the systems resemble humans and about philosophical
questions on what “intelligence” is (not that we are not interested in this!)
• But we may use inspirations from humans or other “intelligent” beings and systems
Intelligent agents acting in an
environment
Recall our goal: To build a useful, intelligent agent
To start with:
• Computers perceive the world using sensors.
• Agents maintain models/representations of the world and use them for reasoning.
• Computers can learn from data.
So, to achieve our goal, we need to define our “agent” in a way that we can program it:
• The problem of constructing an agent is usually called the agent design problem
• Simply, it’s about defining the components of the agent, so that when the agent acts
rationally, it will accomplish the task it is supposed to perform, and do it well.
Some important things we are hoping to briefly introduce this year
• User interaction: Making agents interact comfortably with humans is a substantial
challenge for AI developers.
• Ethics of AI: AI applications can impact society in both positive and negative ways.
Agents acting in an environment: inputs and output
An agent performs an action in the environment; the environment generates a percept or
stimuli. The percept generated by the environment may depend on the sequence of actions the
agent has done.
Inputs to an agent
• Abilities — the set of possible actions it can perform
• Goals/Preferences — what it wants, its desires, its values…
• Prior Knowledge — what it knows and believes initially, what it doesn’t get from
experience…
• History of stimuli
• (current) stimuli — what it receives from environment now (observations, percepts)
• past experiences — what it has received in the past
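The inputs and outputs above can be sketched as a simple agent–environment loop. This is an illustrative toy (a one-dimensional world where the agent tries to reach position 3); the `Environment` and `Agent` classes and their names are assumptions for the sketch, not part of the course materials.

```python
# A minimal sketch of an agent acting in an environment.
# Toy problem: the agent starts at position 0 and wants to reach position 3.

class Environment:
    """A one-dimensional world; step() applies the world dynamics."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action          # dynamics: action changes the state
        return self.state             # the percept returned to the agent

class Agent:
    """Chooses among its abilities, given its goal and stimuli."""
    abilities = (-1, +1)              # set of possible actions
    goal = 3                          # what it wants

    def __init__(self):
        self.history = []             # past stimuli (experience)

    def act(self, percept):
        self.history.append(percept)  # record the current stimuli
        # appropriate choice given its goal and current percept
        return +1 if percept < self.goal else -1

env, agent = Environment(), Agent()
percept = env.state
for _ in range(5):
    action = agent.act(percept)
    percept = env.step(action)

print(percept, agent.history)
```

Note how the percept depends on the sequence of actions the agent has done, exactly as described above.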
Autonomous car
• abilities: steer, accelerate, brake
• goals: safety, get to destination, timeliness …
• prior knowledge: street maps, what signs mean, what to stop for …
• stimuli: vision, laser, GPS, voice commands …
• past experiences: how braking and steering affects direction and speed…
Air-conditioner thermostat and controller agent
• abilities: turn air-conditioner on or off
• goals: comfortable temperature, save energy, save money
• prior knowledge: 24 hour cycle, weekends
• stimuli: temperature, set temperature, who is home, outside temperature, rooftop PV
generation…
• past experiences: when people come and go, who likes what temperature, building
thermal dynamics…
Example agent
• abilities:
• goals:
• prior knowledge:
• stimuli:
• past experiences:
Agent design problem
Our goal: To build a useful, intelligent agent
To start with:
• Computers perceive the world using sensors.
• Agents maintain models/representations of the world and use them for reasoning.
• Computers can learn from data.
So, to achieve our goal, we need to define our “agent” in a way that we can program it:
• The problem of constructing an agent is usually called the agent design problem
• Simply, it’s about defining the components of the agent, so that when the agent acts
rationally, it will accomplish the task it is supposed to perform, and do it well.
Agent design components
The following components are required to solve an agent design problem:
• Action Space (A): The set of all possible actions the agent can perform.
• Percept Space (P): The set of all possible things the agent can perceive.
• State Space (S): The set of all possible configurations of the world the agent is operating
in.
• World Dynamics/Transition Function (T : S × A → S′): A function that specifies how
the configuration of the world changes when the agent performs actions in it.
• Perception Function (Z : S → P): A function that maps a state to a perception.
• Utility Function (U : S → R): A function that maps a state (or a sequence of states) to
a real number, indicating how desirable it is for the agent to occupy that state/sequence
of states.
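The six components above can be written down concretely. Below is a hedged sketch for a toy two-room vacuum world; the problem, variable names, and numbers are illustrative assumptions, not from the course materials.

```python
# Agent design components (A, P, S, T, Z, U) for a toy two-room
# vacuum world. This problem is an illustrative assumption.

from itertools import product

# State: (robot location, dirt in room 0?, dirt in room 1?)
S = list(product((0, 1), (False, True), (False, True)))  # state space
A = ("left", "right", "suck")                            # action space
P = S                                                    # fully observable: percepts are states

def T(s, a):
    """World dynamics / transition function: S x A -> S."""
    loc, d0, d1 = s
    if a == "left":
        return (0, d0, d1)
    if a == "right":
        return (1, d0, d1)
    dirt = [d0, d1]
    dirt[loc] = False                 # sucking removes dirt in the current room
    return (loc, dirt[0], dirt[1])

def Z(s):
    """Perception function: S -> P (the identity map here)."""
    return s

def U(s):
    """Utility function: S -> R; +1 when both rooms are clean."""
    return 1 if not s[1] and not s[2] else 0

s = (0, True, True)                   # robot in room 0, both rooms dirty
for a in ("suck", "right", "suck"):
    s = T(s, a)
print(s, U(s))                        # (1, False, False) 1
```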
The agent design components
Recall:
• Best action: the action that maximizes a given performance criteria
• A rational agent selects an action that it believes will maximize its performance criteria,
given the available knowledge, time, and computational resources.
Utility function, U : S → R:
• A function that maps a state (or a sequence of states) to a real number, indicating how
desirable it is for the agent to occupy that state/sequence of states.
• Crafting the utility function is a key step in the agent design process.
Example: 8-puzzle
[Figure: initial state → goal state]
A classic search problem
Example: 8-puzzle
• Action space (A)
• Move the empty cell left (L), right (R), up (U), down (D)
• Percept space (P)
• The sequence of numbers in left-right and up-down direction, where the empty cell is
marked with an underscore
• State space (S)
• Same as P
• World dynamics (T)
• The change from one state to another, given a particular movement of the empty cell
• Can be represented as a table
• Percept function (Z : S → P)
• Identity map
• Utility function (U)
• +1 for the goal state; 0 for all other states
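One possible encoding of these components in code is sketched below. The state is a tuple of nine entries read left-to-right, top-to-bottom, with 0 standing for the empty cell (the slide uses an underscore); the goal layout chosen here is an assumption for illustration.

```python
# A sketch of the 8-puzzle world dynamics and utility function.
# State: tuple of 9 values, 0 marks the empty cell.

def T(state, action):
    """World dynamics: move the empty cell L, R, U or D.
    Returns the new state, or None if the move is illegal."""
    i = state.index(0)                       # position of the empty cell
    row, col = divmod(i, 3)
    dr, dc = {"L": (0, -1), "R": (0, 1), "U": (-1, 0), "D": (1, 0)}[action]
    r, c = row + dr, col + dc
    if not (0 <= r < 3 and 0 <= c < 3):      # would slide off the board
        return None
    j = 3 * r + c
    s = list(state)
    s[i], s[j] = s[j], s[i]                  # swap empty cell with its neighbour
    return tuple(s)

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)           # an assumed goal layout

def U(state):
    """Utility: +1 for the goal state, 0 for all other states."""
    return 1 if state == GOAL else 0

s = (1, 2, 3, 4, 5, 6, 7, 0, 8)
s = T(s, "R")                                # move the empty cell right
print(s, U(s))
```

The percept function is the identity map, so no separate `Z` is needed here.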
Dimensions of complexity
Dimensions of complexity in agent design (P&M Ch 1.5)
• Research proceeds by making simplifying assumptions, and gradually reducing them.
• Each simplifying assumption gives a dimension of complexity
• multiple values in a dimension: from simple to complex
• simplifying assumptions can be relaxed in various combinations
• Much of the history of AI can be seen as starting from the simple and adding in
complexity in some of these dimensions.
Dimensions of complexity in agent design
From P&M Ch 1.5:
Dimension Values
Modularity: flat, modular, hierarchical
Planning horizon: non-planning, finite stage, indefinite stage, infinite stage
Representation: states, features, relations
Computational limits: perfect rationality, bounded rationality
Learning: knowledge is given, knowledge is learned
Sensing uncertainty: fully observable, partially observable
Effect uncertainty: deterministic, stochastic
Preference: goals, complex preferences
Number of agents: single agent, multiple agents
Interaction: offline, online
https://artint.info/2e/html/ArtInt2e.Ch1.S5.html
Modularity
• Model at one level of abstraction: flat
• Model with interacting modules that can be understood separately: modular
• Model with modules that are (recursively) decomposed into modules: hierarchical
• Flat representations are adequate for simple systems.
• Complex biological systems, computer systems, organizations are all hierarchical.
Is the environment continuous or discrete?
• A flat description is typically either continuous or discrete (exclusively one or the other);
• hierarchical reasoning is often a hybrid of continuous and discrete.
Planning horizon
…how far the agent looks into the future when deciding what to do.
• Static: world does not change
• Finite stage: agent reasons about a fixed finite number of time steps
• Indefinite stage: agent reasons about a finite, but not predetermined, number of time
steps
• Infinite stage: the agent plans for going on forever (i.e. process oriented)
Representation
Much of modern AI is about finding compact representations and exploiting the compactness
for computational gains.
An agent can reason in terms of:
• Explicit states — a state is one way the world could be
• Features or propositions.
• States can be described using features.
• 30 binary features can represent 2^30 = 1,073,741,824 states.
• Individuals and relations
• There is a feature for each relationship on each tuple of individuals.
• Often an agent can reason without knowing the individuals or when there are infinitely many
individuals.
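The compactness claim above is easy to check directly. The feature names below are made-up examples for the sketch:

```python
# n binary features compactly describe 2**n explicit states.
n_features = 30
n_states = 2 ** n_features
print(n_states)                      # 1073741824

# A feature-based state is just one assignment out of those 2**n
# possibilities, e.g. (feature names are illustrative):
features = ("door_open", "light_on", "robot_charged")
state = dict(zip(features, (True, False, True)))   # 1 of 2**3 = 8 states
```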
Computational limits
• Perfect rationality: the agent can determine the best course of action, without taking
into account its limited computational resources.
• Bounded rationality: the agent must make good decisions based on its perceptual,
computational and memory limitations.
Learning from experience
Whether the model is fully specified a priori:
• Knowledge is given.
• Knowledge is learned from data or past experience.
…always some mix of prior (innate, programmed) knowledge and learning (nature vs nurture).
Uncertainty
There are two dimensions for uncertainty:
• Sensing uncertainty or noisy perception
• Effect uncertainty
In this course, we restrict our focus to probabilistic models of uncertainty. Why?
• Agents need to act even if they are uncertain.
• Predictions are needed to decide what to do:
• definitive predictions: you will be run over tomorrow
• point probabilities: probability you will be run over tomorrow is 0.002 if you are careful and
0.05 if you are not careful
• probability ranges: you will be run over with probability in range [0.001,0.34]
• Acting is gambling: agents who don’t use probabilities will lose to those who do.
• Probabilities can be learned from data and prior knowledge.
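The "acting is gambling" point can be made concrete with the point probabilities quoted above. The utility values (-100 for being run over, -1 for the effort of being careful) are made-up numbers for this sketch:

```python
# Expected-utility comparison using the probabilities quoted above.
# The utility values are illustrative assumptions.

p_run_over = {"careful": 0.002, "not careful": 0.05}
effort = {"careful": -1, "not careful": 0}    # cost of taking care
u_run_over = -100                             # utility of being run over

def expected_utility(action):
    """p * U(run over) plus the (certain) cost of the action itself."""
    return p_run_over[action] * u_run_over + effort[action]

for a in ("careful", "not careful"):
    print(a, expected_utility(a))

# careful:     0.002 * -100 - 1 = -1.2
# not careful: 0.05  * -100     = -5.0
# so under these numbers, a rational agent chooses to be careful.
```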
Sensing uncertainty
Whether an agent can determine the state from its stimuli:
• Fully-observable: the agent can observe the state of the world.
• Partially-observable: there can be a number of states that are possible given the agent’s
stimuli.
Effect uncertainty
If an agent knew the initial state and its action, could it predict the resulting state?
The dynamics can be:
• Deterministic: the resulting state is determined from the action and the state
• Stochastic: there is uncertainty about the resulting state.
Preferences
What does the agent try to achieve?
• achievement goal: a goal to be achieved; this can be a complex logical formula.
• complex preferences: may involve tradeoffs between various desiderata, perhaps at
different times.
• ordinal: only the order matters
• cardinal: absolute values also matter
Examples: coffee delivery robot, medical doctor
Number of agents
Are there multiple reasoning agents that need to be taken into account?
• Single agent reasoning: any other agents are part of the environment.
• Multiple agent reasoning: an agent reasons strategically about the reasoning of other
agents.
Agents can have their own goals, which may be cooperative, competitive, or independent of
each other.
Interaction
When does the agent reason to determine what to do?
• reason offline: before acting
• reason online: while interacting with environment
Dimensions of complexity in agent design
From P&M Ch 1.5:
Dimension Values
Modularity: flat, modular, hierarchical
Planning horizon: non-planning, finite stage, indefinite stage, infinite stage
Representation: states, features, relations
Computational limits: perfect rationality, bounded rationality
Learning: knowledge is given, knowledge is learned
Sensing uncertainty: fully observable, partially observable
Effect uncertainty: deterministic, stochastic
Preference: goals, complex preferences
Number of agents: single agent, multiple agents
Interaction: offline, online
Example problem class: State-space search (Module 1)
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
Example problem class: Deterministic planning using CSP (Module 2)
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
Example problem class: Markov decision processes (MDPs, Module 3)
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
Example problem class: Reinforcement learning (Module 4)
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
Example problem class: Classical game theory (Module 5)
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
The real world: Humans
Dimension Values
Modularity flat, modular, hierarchical
Planning horizon non-planning, finite stage, indefinite stage, infinite stage
Representation states, features, relations
Computational limits perfect rationality, bounded rationality
Learning knowledge is given, knowledge is learned
Sensing uncertainty fully observable, partially observable
Effect uncertainty deterministic, stochastic
Preference goals, complex preferences
Number of agents single agent, multiple agents
Interaction offline, online
Attributions and References
Thanks to Dr Archie Chapman and Dr Hanna Kurniawati for their materials.
Many of the slides in Module 0 are adapted from David Poole and Alan Mackworth, Artificial Intelligence:
foundations of computational agents, 2E, CUP, 2017, https://artint.info/. These materials are
copyright © Poole and Mackworth, 2017, licensed under a Creative Commons
Attribution-NonCommercial-ShareAlike 4.0 International License.
Other materials derived from Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 3E,
Prentice Hall, 2009.