
KBAI Ebook: Knowledge-based Artificial Intelligence
KBAI: CS7637 course at Georgia Tech
Course Creators and Instructors: , . Click here for Course Details


Electronic Book (eBook) Designers: , , . Last updated: October 6, 2016


Copyright: and , Georgia Institute of Technology.
All rights reserved. No part of this document may be reproduced, stored in any retrieval system, or transmitted in any form or by any means without prior written permission.

YouTube Playlists:
Part 1 of 5 Part 2 of 5 Part 3 of 5 Part 4 of 5 Part 5 of 5
NOTE: Lessons 02, 04, 14 and 25 have a correspondence problem between YouTube links, transcripts, and slides. Additionally, the following videos have known incomplete transcripts:
Lesson 3 – Exercise: Constructing Semantic Nets I,
Lesson 5 – Exercise: Block Problem I,
Lesson 24 – Example: Goal-Based Autonomy, and
Lesson 25 – Raven’s Progressive Matrices.
These will get fixed when the YouTube playlist is fixed by Udacity and reloaded into this KBAI Ebook.

Lesson 01 – Introduction to Knowledge-Based AI
To understand the intelligence functions at a fundamental level, I believe, would be a scientific achievement on the scale of nuclear physics, relativity, and molecular genetics. – .
Education is not the piling on of learning, information, data, facts, skills, or abilities – that’s training or instruction – but is rather making visible what is hidden as a seed. – .
01 – Introductions
Click here to watch the video
Figure 1: Introductions
Hello, and welcome to CS 7637, Knowledge-Based Artificial Intelligence/Cognitive Systems. My name is . My name is . I'm a Professor of Computer Science and Cognitive Science at Georgia Tech. I've been teaching knowledge-based AI for about 25 years. I've been doing research in this area for about 30. My personal passion is for computational creativity: building AI agents that are human-like and creative in their own right. I'm a course developer with Udacity, and I'm also finishing up my own PhD dissertation here at Georgia Tech with Ashok as my advisor. My personal passion is for education, and especially for using modern technology to deliver individualized, personal educational experiences that would be very difficult in very large classrooms. As we'll see, AI is not just the subject of this course, but also a tool we're using to teach this course. We had a lot of fun putting this course together. We hope you enjoy it as well. We think of this course as an experiment, too: we want to understand how students learn in online classrooms. So if you have any feedback, please share it with us.
02 – Preview
Click here to watch the video
Figure 2: Preview
So welcome to CS 7637, Knowledge-Based AI. At the beginning of each lesson, we'll briefly
introduce a topic as shown in the graphics to the right. We'll also talk about how the topic fits into the overall curriculum for the course. Today, we'll be discussing AI in general, including some of the fundamental conundrums and characteristics of AI. We will describe four schools of AI, and discuss how knowledge-based AI fits into the rest of AI. Next, we'll visit the subtitle of the course, Cognitive Systems, and define an architecture for them. Finally, we'll look at the topics that we'll cover in this course in detail.
03 – Conundrums in AI
Click here to watch the video
Figure 3: Conundrums in AI
Let's get started. Today we're discussing some of the biggest problems in AI. We obviously are not going to solve all of them today, but it's good to start with the big picture. AI has several conundrums; I'm going to describe five of the main ones today. Conundrum number one: all intelligent agents have limited computational resources, processing speed, memory size, and so on, but most interesting AI problems are computationally intractable. How then can we get AI agents to give us near real-time performance on many interesting problems? Conundrum number two: all computation is local, but most AI problems have global constraints. How then can we get AI agents to address global problems using only local computation? Conundrum number three: computational logic is fundamentally deductive, but many AI problems are abductive or inductive in their nature. How can we get AI agents to address abductive or inductive problems? If you do not understand some of these terms, like abduction, don't worry about it; we'll discuss them later in the class. Conundrum number four: the world is dynamic and knowledge is limited, but an AI agent must always begin with what it already knows. How then can an AI agent ever address a new problem? Conundrum number five: problem solving, reasoning, and learning are complex enough, but explanation and justification add to the complexity. How then can we get an AI agent to ever explain or justify its decisions?
04 – Characteristics of AI Problems
Click here to watch the video
Figure 4: Characteristics of AI Problems
I hope our discussion of the big problems in AI didn't scare you off. Let's bring the discussion down closer to earth and talk about a few fundamental characteristics of AI problems. Number one: in many AI problems, data arrives incrementally; not all the data comes right at the beginning. Number two: problems often have a recurring pattern; the same kinds of problems occur again and again. Number three: problems occur at many different levels of abstraction. Number four: many interesting AI problems are computationally intractable. Number five: the world is dynamic, it's constantly changing, but knowledge of the world is relatively static. Number six: the world is open-ended, but knowledge of the world is relatively limited. So, the question then becomes: how can we design AI agents that can address AI problems with these characteristics? Those are the challenges we'll discuss in this course.
05 – Characteristics of AI Agents
Click here to watch the video

Figure 5: Characteristics of AI Agents
In addition to AI problems having several characteristics, AI agents too have several properties. Property number one: AI agents have only limited computing power, processing speed, memory size, and so on. Property number two: AI agents have limited sensors; they cannot perceive everything in the world. Property number three: AI agents have limited attention; they cannot focus on everything at the same time. Property number four: computational logic is fundamentally deductive. Property number five: the world is large, but AI agents' knowledge of the world is incomplete relative to the world. So, the question then becomes: how can AI agents with such bounded rationality address open-ended problems in the world?
06 – Exercise What are AI Problems
Click here to watch the video
Figure 6: Exercise What are AI Problems
Now that we have talked about the characteristics of AI agents and AI problems, let us talk a little about what kinds of problems you might build an AI agent for. On the right are several tasks. Which of these are AI problems? Or to put it differently, which of these problems would you build an AI agent to solve?
07 – Exercise What are AI Problems
Click here to watch the video
David, which of these do you think are AI problems? So I'd say that all of these are AI problems. All of these are things that we humans do on a fairly regular basis. And if the goal of artificial intelligence is to recreate human intelligence, then it seems like we need to be able to design agents that can do any of these things. I agree. In fact, during this class we'll design AI agents that can address each of these problems. For now, let us just focus on the first one: how to design an AI agent that can answer Jeopardy questions.
08 – Exercise AI in Practice Watson
Click here to watch the video
Let's start by looking at an example of an AI agent in action. Many of you are familiar with Watson, the IBM program that plays Jeopardy. Some of you may not be, and that's fine; we'll show an example in a minute. When you watch Watson in action, try to think: what are some of the things Watson must know about? What are some of the things that Watson must be able to reason about in order to play Jeopardy? Write them down. And anytime you feel the pain, hey, this guy, refrain. Don't carry the world upon your shoulders. Watson? Who is Jude? Yes. Olympic Oddities, for 200. Milorad Cavic almost upset this man's perfect 2008 Olympics, losing to him by one-hundredth of a second. Watson. Who is . Yes, go. Name the decade for 200. Disneyland opens and the peace symbol is created. Ken. What are the 50s? Yes. Final Frontiers for 1,000, Alex. Tickets aren't needed for this event, a black hole's boundary from which matter cannot escape. Watson. What is event horizon?
09 – Exercise AI in Practice Watson
Click here to watch the video
David, what did you write down? So I said that the four fundamental things Watson must be able to do to play Jeopardy are: first,
read the clue; then search through its knowledge base; then actually decide on its answer; and then phrase its answer in the form of a question. That's right. And during this course, we'll discuss each part of David's answer.
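
To make David's four steps concrete, here is a minimal sketch in Python. It is purely illustrative: the tiny knowledge base, the keyword scoring, and the function names are our own assumptions, and this is nothing like Watson's actual architecture.

# A toy sketch of the four steps: read the clue, search a knowledge base,
# decide on an answer, and phrase the answer as a question.
KNOWLEDGE_BASE = {
    "event horizon": {"black", "hole", "boundary", "matter", "escape"},
    "Jude": {"pain", "refrain", "shoulders"},
}

def read_clue(clue):
    # Step 1: "read" the clue by reducing it to a set of lowercase keywords.
    return set(clue.lower().replace(",", "").replace(".", "").split())

def decide_answer(keywords):
    # Steps 2 and 3: search the knowledge base, score each candidate answer
    # by keyword overlap with the clue, and decide on the highest-scoring one.
    return max(KNOWLEDGE_BASE, key=lambda answer: len(keywords & KNOWLEDGE_BASE[answer]))

def phrase_as_question(answer):
    # Step 4: Jeopardy requires the response to be phrased as a question.
    return "What is " + answer + "?"

clue = "A black hole's boundary from which matter cannot escape."
print(phrase_as_question(decide_answer(read_clue(clue))))  # What is event horizon?

Even this toy version hints at what Watson must know (facts linked to candidate answers) and reason about (which candidate best fits the clue, and how to phrase the response).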
10 – What is Knowledge-Based AI
Click here to watch the video
Figure 7: What is Knowledge-Based AI
Let us look at the processes that Watson may be using a little more closely. Clearly Watson is doing a large number of things. It is trying to understand natural language sentences. It is trying to generate some natural language sentences. It is making some decisions. I'll group all of these things broadly under reasoning. Reasoning is a fundamental process of knowledge-based AI. A second fundamental process of knowledge-based AI is learning. Watson surely is learning also: perhaps it gets a right answer to some question and stores that answer somewhere; if it gets a wrong answer, then once it learns the right answer, it stores the right answer somewhere too. Learning, too, is a fundamental process of knowledge-based AI. A third fundamental process of knowledge-based AI is memory. If you're going to learn something, the knowledge that you're learning has to be stored somewhere, in memory. If you're going to reason using knowledge, then that knowledge has to be accessed from somewhere, from memory. The memory process will store what we learn as well as provide access to the knowledge we need for reasoning. These three processes of learning, memory, and reasoning are intimately connected. We learn so that we can reason. The results of reasoning often result in additional learning. Once we learn, we can store it in memory. However, we need knowledge to learn; the more we know, the more we can learn. Reasoning requires knowledge that memory can provide access to, and the results of reasoning can also go into memory. So here are three processes that are closely related. A key aspect of this course on knowledge-based AI is that we will be talking about theories of knowledge-based AI that unify reasoning, learning, and memory, rather than discussing any one of the three separately, as sometimes happens in some schools of AI. We're going to try to build unified theories. These three processes put together, I will call deliberation. This deliberation process is one part of the overall architecture of a knowledge-based AI agent. This figure illustrates the overall architecture of an AI agent. Here we have input in the form of perceptions of the world, and output in the form of actions in the world. The agent may have a large number of processes that map these perceptions to actions. We are going to focus right now on deliberation, but the agent architecture also includes metacognition and reaction, which we'll discuss later.
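
As a rough illustration of how these three processes might fit together inside the deliberation component, here is a minimal Python sketch. The class and method names are hypothetical and the logic is deliberately trivial; it only shows memory providing knowledge to reasoning, and learning writing results back into memory.

class DeliberativeAgent:
    # Toy sketch of deliberation: reasoning, learning, and memory interacting.

    def __init__(self):
        # Memory: the knowledge store, here simply a mapping from perceptions to actions.
        self.memory = {}

    def reason(self, percept):
        # Reasoning: use knowledge accessed from memory to select an action.
        return self.memory.get(percept, "default-action")

    def learn(self, percept, action, outcome):
        # Learning: store the result of reasoning back into memory.
        if outcome == "success":
            self.memory[percept] = action

    def execute(self, action):
        # Placeholder for acting in the world and observing the outcome.
        return "success"

    def deliberate(self, percept):
        # Map a perception of the world to an action, then learn from the outcome.
        action = self.reason(percept)
        outcome = self.execute(action)
        self.learn(percept, action, outcome)
        return action

agent = DeliberativeAgent()
print(agent.deliberate("clue received"))   # falls back to the default action
print(agent.memory)                        # the successful action is now stored for next time

The point of the sketch is only the cycle itself: reasoning draws on memory, acting produces an outcome, and learning updates memory so that future reasoning improves.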
11 – Foundations The Four Schools of AI
Click here to watch the video
Figure 8: Foundations The Four Schools of AI

Figure 9: Foundations The Four Schools of AI
Another way of understanding what knowledge-based AI is, is to contrast it with the other schools of thought in AI. We can think in terms of a spectrum. On one end of the spectrum is acting; the other end of the spectrum is thinking. As an example, when you're driving a car, you're acting on the world. But when you are planning what route to take, you're thinking about the world. There is a second dimension for distinguishing between different schools of thought in AI. At one end of the spectrum, we can think of AI agents that are optimal. At the other end of the spectrum, we can think of AI agents that act and think like humans. Humans are multifunctional; they have very robust intelligence. That intelligence need not be optimal relative to any one task, but it's very general purpose; it works for a very large number of tasks. Whereas we can have, on the other side, agents that are optimal for a given task. Given these two axes, we get four quadrants. Starting from the top left and going counterclockwise, here are agents that think optimally, agents that act optimally, agents that act like humans, and agents that think like humans. In this particular course on knowledge-based AI, we're interested in agents that think like humans. Let us take a few examples to make sure that we understand this four-quadrant world. Here are some well-known computational techniques. Consider many machine learning algorithms. These algorithms analyze large amounts of data and determine patterns of regularity in that data. I might think of them as being in the top left quadrant: they are really doing thinking, and they often are optimal, but they're not necessarily human-like. Airplane autopilots would go under acting optimally. They're certainly acting in the world, and you want them to act optimally. Improvisational robots that can perhaps dance to the music that you play: they're acting, and they are behaving like humans, dancing to some music. The Semantic Web, a new generation of web technologies in which the web understands the various pages and information on it: I might put that under thinking like humans. It is thinking, not acting in the world, and it is much more human-like than, let's say, some of the other computational techniques here. If you're interested in reading some more about these projects, you can check out the course materials, where we've provided some recent papers on these different computational techniques. There's a lot of cutting-edge research going on here at Georgia Tech and elsewhere on these different technologies. And if you really are interested in this, this is something where we're always looking for contributors.
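
To keep the two axes straight, the placements just discussed can be written down as a simple lookup. This is only an illustrative sketch; the placements follow the discussion above and, as the later discussion of Siri shows, real systems can straddle quadrants.

# Illustrative mapping of the lesson's examples onto the two axes
# (thinking vs. acting, optimally vs. like humans). Not definitive.
QUADRANTS = {
    ("thinking", "optimally"):   ["machine learning algorithms"],
    ("acting",   "optimally"):   ["airplane autopilots"],
    ("acting",   "like humans"): ["improvisational robots"],
    ("thinking", "like humans"): ["the Semantic Web"],
}

def quadrant_of(system):
    # Look up which quadrant a given system was placed in.
    for quadrant, systems in QUADRANTS.items():
        if system in systems:
            return quadrant
    return None

print(quadrant_of("airplane autopilots"))   # ('acting', 'optimally')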
12 – Exercise What is KBAI
Click here to watch the video
So one thing that many students in this class are probably familiar with is ’s Robotics class on autonomous vehicles. David, where do you think an autonomous vehicle would fall on the spectrum? So it seems to me like an autonomous vehicle definitely moves around in the world so it certainly acts in the world. And driving is a very human-like behavior so I’d say that it acts like a human. What do you think? Do you agree with David?
13 – Exercise What is KBAI
Click here to watch the video
Figure 10: Exercise What is KBAI

David, do we really care whether or not the autonomous vehicle thinks and acts the way we do? I guess, now that you mention it, as long as the vehicle gets me to my destination and doesn't run over anything on the way, I really don't care if it thinks the way I do. And if you look at the way I drive, I really hope it doesn't act the way I do. So the autonomous vehicle may really belong to the acting rationally side of the spectrum. At the same time, looking at the way humans drive might help us design a robot, and looking at the robot design might help us reflect on human cognition. This is one of the patterns of knowledge-based AI.
14 – Exercise The Four Schools of AI
Click here to watch the video
Figure 11: Exercise The Four Schools of AI
Let us do an exercise together. Once again, we have the four quadrants shown here, and at the top left are four computational artifacts. I'm sure you're familiar with all four of them; C-3PO is a fictitious artifact from Star Wars. Can we put these four artifacts in the quadrants to which they best belong?
15 – Exercise The Four Schools of AI
Click here to watch the video
What do you think about this, David? So starting with Roomba, I would put Roomba in the bottom left. It definitely acts in the world, but it definitely doesn't act like I do. It crisscrosses the floor until it vacuums everything up. So we're going to say that's acting optimally. C-3PO is fluent in over six million forms of communication, and that means that it interacts with humans and other species very often. In order to do that, it has to understand natural sentences and put its own knowledge back into natural sentences. So, it has to act like humans. Apple's virtual assistant, Siri, doesn't act in the world, so she is more on the thinking end of the spectrum, but, like C-3PO, she has to interact with humans. She has to read human sentences and she has to put her own responses back into normal vernacular. So we're going to say that she thinks like humans. Google Maps plots your route from your origin to your destination, so it's definitely doing thinking; it's not doing any acting in the world. But we don't really care if it does the route planning like we would do it, so we would say it does its route planning optimally. It takes into consideration traffic, current construction, and different things like that, whereas we would probably think of the routes we have taken in the past. So Google Maps thinks optimally. That is a good answer, David; I agree with you. But note here that some aspects of Siri may well belong in some of the other quadrants. So, putting Siri under thinking like humans sounds plausible, but Siri might also be viewed as, perhaps, acting when it gives you a response. Some aspects of Siri might also be optimal, not necessarily like humans. So if you'd like to discuss where these technologies belong on these spectrums or, perhaps, discuss where some other AI technologies that you're familiar with belong on these spectrums, feel free to head on over to our forums, where you can bring up your own technologies and discuss the different ways in which they fit into the broader schools of AI.
16 – What are Cognitive Systems
Click here to watch the video
I'm sure you have noticed that this class has a subtitle: Cognitive Systems. Let's talk about this term and break it down into its components. Cognitive, in this context, means dealing with human-like intelligence; the ultimate goal is to develop human-level, human-like intelligence. Systems, in this context, means having multiple interacting components, such as learning, reasoning and memory. Cognitive systems, then, are systems that exhibit human-like intelligence through the interaction of components such as learning, reasoning and memory.
